• Functions of several variables
  • Definition: Function of two variables

    A real-valued function \(f\) defined on a subset \(\mathcal{D}\) of \(\mathbb{R}^2\) is a rule that assigns to each point \((x,y)\) in \(\mathcal{D}\) a real number \(f(x,y)\). The set \(\mathcal{D}\) is called the domain of \(f\), and the range \(\mathcal{R}\) of \(f\) is the set of all real numbers \(f(x,y)\) as \((x,y)\) varies over the domain \(\mathcal{D}\).

    Surface

    The graph of a function \(z = f(x,y) \) is called a surface.

    Level curves

    Given a function \(\displaystyle z = f(x,y)\) and a number \(c\) in the range of \(f\), the level curve of \(f\) for the value \(c\) is defined to be the set of points satisfying the equation \(f(x,y) = c\).
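
    As a concrete illustration (the function here is a hypothetical sample, not one from the text), the level curves of \(f(x,y)=x^2+y^2\) for \(c>0\) are circles of radius \(\sqrt{c}\); a minimal numerical check:

```python
import math

def f(x, y):
    # sample function: a paraboloid whose level curves are circles
    return x**2 + y**2

c = 4  # level value; the level curve is the circle of radius 2
points = [(2 * math.cos(t), 2 * math.sin(t))
          for t in (k * 2 * math.pi / 8 for k in range(8))]

# every sampled point of the circle satisfies f(x, y) = c
assert all(abs(f(x, y) - c) < 1e-12 for (x, y) in points)
```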

    Contour map

    The graph of various level curves is called a contour map.

    Vertical traces

    Given a function \(\displaystyle z = f(x,y)\) with domain \(\mathcal{D}\), a vertical trace is

    • either the set of points satisfying the equation \(z = f(a,y)\) for a given constant \(a\)

    • or the set of points satisfying the equation \(z = f(x,b)\) for a given constant \(b\)

    Definition: Function of several variables

    A real-valued function \(f\) defined on a subset \(\mathcal{D}\) of \(\mathbb{R}^n\) is a rule that assigns to each point \((x_1,\dots,x_n)\) in \(\mathcal{D}\) a real number \(f(x_1,\dots,x_n)\). The set \(\mathcal{D}\) is called the domain of \(f\), and the range \(\mathcal{R}\) of \(f\) is the set of all real numbers \(f(x_1,\dots,x_n)\) as \((x_1,\dots,x_n)\) varies over the domain \(\mathcal{D}\).

    Level surface

    Given a function \(\displaystyle f(x,y,z)\) and a number \(c\) in the range of \(f(x,y,z)\), a level surface of \(f(x,y,z)\) for the value \(c\) is the set of points satisfying the equation \(f(x,y,z) = c\).

    Definition: $\delta$ disk

    Let $(a,b)\in\mathbb{R}^2$ and let $\delta > 0$. The $\delta$-disk centered at $(a,b)$ is

    $$ \mathcal{B}_{(a,b)}\left(\delta\right) = \left\lbrace (x,y)\in\mathbb{R}^2\mid \sqrt{{\left(x-a\right)}^2 +{\left(y-b\right)}^2 }<\delta \right\rbrace $$

    Definition: $\delta$ disk

    Let $P = (a_1,\dots,a_n)\in\mathbb{R}^n$ and let $\delta > 0$. The $\delta$-disk centered at $P$ is

    $$ \mathcal{B}_{P}\left(\delta\right) = \left\lbrace (x_1,\dots,x_n)\in\mathbb{R}^n\mid \sqrt{\sum_{i=1}^{n}{\left(x_i-a_i\right)}^2 } <\delta \right\rbrace $$
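
    The defining inequality is strict, so the $\delta$-disk is open; a small membership check (the function name is illustrative, and it works in any dimension $n$):

```python
import math

def in_delta_disk(p, center, delta):
    """True when point p lies in the open delta-disk centered at `center`."""
    dist = math.sqrt(sum((pi - ci)**2 for pi, ci in zip(p, center)))
    return dist < delta

assert in_delta_disk((1.1, 2.1), (1.0, 2.0), 0.5)        # distance ~0.141
assert not in_delta_disk((1.5, 2.0), (1.0, 2.0), 0.5)    # distance exactly 0.5, excluded
assert in_delta_disk((0.0, 0.0, 0.1), (0.0, 0.0, 0.0), 0.2)  # n = 3 case
```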

    Definition: Interior and boundary

    A point $P$ in a set $S$ is called an interior point if there is a disk around $P$ that is contained in $S$.

    A point $P$ is called a boundary point of $S$ if every disk around $P$ contains both points in $S$ and points not in $S$.

    Definition: Open and closed sets

    A set $S$ is open if every point in $S$ is an interior point.

    A set $S$ is closed if $S$ contains all its boundary points.

    Definition: Connected sets

    A set $S$ is connected if there are no two disjoint open sets $A$ and $B$ such that $A\cap S \ne \emptyset$, $B\cap S \ne \emptyset$ and $S = (A\cap S)\cup(B\cap S)$.

    A region is a non-empty open connected set.

    Definition: Limit in $\mathbb{R}^2$

    Let $f:\mathbb{R}^2\to\mathbb{R}$ be defined around $(a,b)\in\mathbb{R}^2$, but not necessarily at $(a,b)$ itself. Then $$\lim_{(x,y)\to(a,b)} f(x,y) = L$$ if for every $\epsilon \in\mathbb{R}^+$ there is a $\delta(\epsilon)\in\mathbb{R}^+$ such that $$ 0<\sqrt{{\left(x-a\right)}^2 +{\left(y-b\right)}^2} <\delta $$ implies $$ \left|f(x,y)-L\right| < \epsilon $$

    Properties
    $$\begin{array}{rcl} \displaystyle \lim_{(x,y)\to(a,b)}c & =& c \\[2ex] \displaystyle\lim_{(x,y)\to(a,b)}x & =& a \\[2ex] \displaystyle\lim_{(x,y)\to(a,b)}y & =& b \end{array}$$
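
    Because the limit must agree along every path of approach, a standard way to show a limit fails is the two-path test. A sketch with the classic sample $f(x,y)=\frac{xy}{x^2+y^2}$ (not a function from the text):

```python
def f(x, y):
    # sample function with no limit at the origin
    return x * y / (x**2 + y**2)

# approach the origin along two different paths
along_y_eq_x = [f(t, t) for t in (0.1, 0.01, 0.001)]
along_y_eq_0 = [f(t, 0.0) for t in (0.1, 0.01, 0.001)]

assert all(abs(v - 0.5) < 1e-12 for v in along_y_eq_x)  # values 1/2 along y = x
assert all(v == 0.0 for v in along_y_eq_0)              # values 0 along y = 0
# the two paths disagree, so the limit at (0, 0) does not exist
```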

    Linearity and product laws

    Let $ \displaystyle\lim_{(x,y)\to(a,b)}f(x,y) = F $ and $ \displaystyle \lim_{(x,y)\to(a,b)}g(x,y) =G $ then

    $$\begin{array}{rcl} \displaystyle\lim_{(x,y)\to(a,b)}\left(\alpha f(x,y) \pm \beta g(x,y) \right) &=&\alpha F\pm \beta G \\[2ex] \displaystyle\lim_{(x,y)\to(a,b)}f(x,y)g(x,y) &=&FG \end{array} $$

    Quotient law

    Let $ \displaystyle\lim_{(x,y)\to(a,b)}f(x,y) = F $ and $ \displaystyle \lim_{(x,y)\to(a,b)}g(x,y) =G \ne 0 $ then

    $$\begin{array}{rcl} \displaystyle\lim_{(x,y)\to(a,b)}\frac{f(x,y)}{g(x,y)} & = & \displaystyle\frac{F}{G} \end{array} $$

    Power law

    If $\displaystyle \lim_{(x,y)\to(a,b)}f(x,y) = F $ and $n$ is a positive integer, then

    $$\lim_{(x,y)\to(a,b)}{\left(f(x,y)\right)}^{n} =F^n $$

    Root law

    Let $\displaystyle \lim_{(x,y)\to(a,b)}f(x,y) = F $ and let $n$ be a positive odd integer. Then

    $$\lim_{(x,y)\to(a,b)}\sqrt[n]{\left(f(x,y)\right)} =\sqrt[n]{F} $$

    If $n$ is even and $F\ge 0 $, the above equation also holds.

    Definition: Continuity

    A function $f(x,y)$ is continuous at $(a,b)$ if

    • $f$ is defined at $(a,b)$: $\displaystyle f(a,b)$ exists
    • the limit exists: $\displaystyle \lim_{(x,y)\to (a,b)}f(x,y)$ exists
    • the limit equals the function value: $\displaystyle \lim_{(x,y)\to (a,b)}f(x,y) = f(a,b)$

    Theorem: properties

    Linear combination of continuous functions is continuous.

    Product of continuous functions is continuous.

    Composition of continuous functions is continuous.

    Definition: Partial derivative with respect to $x$

    Let $f:{\mathbb{R}}^2\to{\mathbb{R}}$ be a function of two variables $z= f(x,y)$. The partial derivative of $f$ with respect to $x$ is $$\begin{array}{rcl} f_x = \frac{\partial}{\partial x}\, f =\frac{\partial\,f}{\partial x} &=&\displaystyle \lim_{h\to0} \frac{f(x+h,y)-f(x,y)}{h} \end{array}$$

    Definition: Partial derivative with respect to $y$

    Let $f:{\mathbb{R}}^2\to{\mathbb{R}}$ be a function of two variables $z= f(x,y)$. The partial derivative of $f$ with respect to $y$ is $$\begin{array}{rcl} f_y = \frac{\partial}{\partial y}\, f =\frac{\partial\,f}{\partial y} &=&\displaystyle \lim_{h\to0} \frac{f(x,y+h)-f(x,y)}{h} \end{array}$$

    Definition: Partial derivative with respect to $x_i$

    Let $f:{\mathbb{R}}^n\to{\mathbb{R}}$ be a function of $n$ variables $ f(x_1,\dots,x_n)$. The partial derivative of $f$ with respect to $x_i$ is $$\begin{array}{rcl} f_{x_i} = \frac{\partial}{\partial x_i}\, f =\frac{\partial\,f}{\partial x_i} &=&\displaystyle \lim_{h\to0} \frac{f(x_1,\dots,x_{i-1},x_i+h,x_{i+1},\dots,x_n)- f(x_1,\dots,x_{i-1},x_i,x_{i+1},\dots,x_n)}{h} \end{array}$$
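
    The limit definition suggests a numerical sketch: approximate a partial derivative by a small central difference in the $i$-th coordinate only. The helper and sample function below are illustrative, not from the text:

```python
def partial(f, point, i, h=1e-6):
    """Central-difference estimate of the partial derivative of f
    with respect to the i-th coordinate at `point`."""
    up = list(point); up[i] += h
    dn = list(point); dn[i] -= h
    return (f(*up) - f(*dn)) / (2 * h)

# sample function: f(x, y) = x**2 * y, so f_x = 2xy and f_y = x**2
f = lambda x, y: x**2 * y
assert abs(partial(f, (3.0, 2.0), 0) - 12.0) < 1e-6   # f_x(3, 2) = 12
assert abs(partial(f, (3.0, 2.0), 1) - 9.0) < 1e-6    # f_y(3, 2) = 9
```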

    Definition: Higher order partial derivatives

    $$\begin{array}{rcl} f_{xx}=\frac{\partial\,f_x}{{\partial x}} =\frac{\partial}{{\partial x}} \left[\frac{\partial\,f}{{\partial x}}\right] &=& \frac{\partial^2\,f}{{\partial x}^2} \\[2ex] f_{xy}=\frac{\partial\,f_x}{{\partial y}} =\frac{\partial}{{\partial y}} \left[\frac{\partial\,f}{{\partial x}}\right] &=& \frac{\partial^2\,f}{\partial y\partial x} \\[2ex] f_{yx}=\frac{\partial\,f_y}{{\partial x}} =\frac{\partial}{{\partial x}} \left[\frac{\partial\,f}{{\partial y}}\right] &=& \frac{\partial^2\,f}{\partial x\partial y} \\[2ex] f_{yy}=\frac{\partial\,f_y}{{\partial y}} =\frac{\partial}{{\partial y}} \left[\frac{\partial\,f}{{\partial y}}\right] &=& \frac{\partial^2\,f}{{\partial y}^2} \end{array}$$

    Theorem: Clairaut's theorem

    Suppose $f$ is defined on an open disk $\mathcal{B}_P$ that contains the point $P$. If the functions $\displaystyle f_{xy}$ and $\displaystyle f_{yx}$ are both continuous on $\mathcal{B}_P$ then $$f_{xy} = f_{yx}$$
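
    Clairaut's theorem can be checked numerically: estimate both mixed partials by finite differences in the two orders and compare. The helper and the smooth sample function $f(x,y)=\sin(x)\,e^{y}$ are illustrative assumptions:

```python
import math

def mixed_partial(f, x, y, order, h=1e-4):
    """Finite-difference estimate of a mixed second partial at (x, y):
    order='xy' differentiates in x first then y, 'yx' the other way."""
    if order == 'xy':
        fx = lambda xx, yy: (f(xx + h, yy) - f(xx - h, yy)) / (2 * h)
        return (fx(x, y + h) - fx(x, y - h)) / (2 * h)
    fy = lambda xx, yy: (f(xx, yy + h) - f(xx, yy - h)) / (2 * h)
    return (fy(x + h, y) - fy(x - h, y)) / (2 * h)

# sample smooth function: f(x, y) = sin(x) e^y, with f_xy = cos(x) e^y
f = lambda x, y: math.sin(x) * math.exp(y)
fxy = mixed_partial(f, 0.7, 0.3, 'xy')
fyx = mixed_partial(f, 0.7, 0.3, 'yx')

assert abs(fxy - fyx) < 1e-6                              # Clairaut
assert abs(fxy - math.cos(0.7) * math.exp(0.3)) < 1e-4    # matches f_xy
```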

    Definition: Tangent plane

    Let $P$ be a point on a surface $S$. Let $c(t)$ be any curve that passes through $P$ and lies entirely on $S$. If the tangents to all such curves at $P$ lie in the same plane, then this plane is called the tangent plane to $S$ at $P$.

    Definition: Tangent plane equation

    Let $S$ be a surface defined by a function $z=f(x,y)$ and let $\displaystyle P = \left[\begin{array}{r}P_x\\P_y\end{array}\right] $ be a point in the domain of $f$. Then the equation of the tangent plane to $S$ at $P$ is

    $$ z = f(P_x,P_y) + f_x(P_x,P_y)(x-P_x)+ f_y(P_x,P_y)(y-P_y) $$

    Definition: Linear approximation

    Let $z=f(x,y)$ have continuous partial derivatives at $\displaystyle P = \left[\begin{array}{r}P_x\\P_y\end{array}\right] $. The linear approximation of $f$ at $P$ is

    $$ L(x,y) = f(P_x,P_y) + f_x(P_x,P_y)(x-P_x)+ f_y(P_x,P_y)(y-P_y) $$
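
    A worked instance (the sample function and point are assumptions for illustration): for $f(x,y)=\sqrt{x^2+y^2}$ at $P=(3,4)$ we get $f(P)=5$, $f_x(P)=3/5$, $f_y(P)=4/5$, and $L$ tracks $f$ closely near $P$:

```python
import math

# sample function: f(x, y) = sqrt(x**2 + y**2); at P = (3, 4)
# f(P) = 5, f_x(P) = 3/5, f_y(P) = 4/5
def f(x, y):
    return math.hypot(x, y)

def L(x, y):
    """Linear approximation of f at P = (3, 4)."""
    return 5.0 + 0.6 * (x - 3.0) + 0.8 * (y - 4.0)

assert abs(L(3.0, 4.0) - f(3.0, 4.0)) < 1e-12       # exact at P
assert abs(L(3.01, 3.99) - f(3.01, 3.99)) < 1e-4    # very close near P
```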

    Definition: Differentiable function

    A function $f(x,y)$ is differentiable at $\displaystyle P = \left[\begin{array}{r}P_x\\P_y\end{array}\right] $ if for all pairs $(x,y)$ contained in a ball centered at $P$ we can write

    $$ f(x,y) = f(P_x,P_y) + f_x(P_x,P_y)(x-P_x) + f_y(P_x,P_y)(y-P_y)+ E(x,y) $$

    where the error term $E(x,y)$ satisfies

    $$ \lim_{(x,y)\to(P_x,P_y)} \frac{E(x,y)} {\sqrt{{\left(x-P_x\right)}^2+{\left(y-P_y\right)}^2}} =0 $$

    Theorem: Sufficient condition for differentiability

    Let $z=f(x,y)$ and $\displaystyle P = \left[\begin{array}{r}P_x\\P_y\end{array}\right] $ be in the domain of $f$. If $f$, $f_x$ and $f_y$ all exist and are continuous in a neighborhood of $P$, then $f(x,y)$ is differentiable at $P$.

    Theorem: Total differential

    Let $z=f(x,y)$ and $\displaystyle P = \left[\begin{array}{r}P_x\\P_y\end{array}\right] \in\mathcal{D}_f$. Let $\displaystyle \Delta x$ and $\displaystyle \Delta y$ be such that $\displaystyle \left[\begin{array}{r}P_x+\Delta x\\P_y+\Delta y\end{array}\right] \in\mathcal{D}_f$. If $f$ is differentiable then the differentials $\mathrm{d}\,{x}$ and $\mathrm{d}\,{y}$ are defined as

    $$\mathrm{d}\,{x} = \Delta x\qquad \mathrm{d}\,{y} = \Delta y$$

    the total differential $\mathrm{d}\,{z}$ is

    $$\mathrm{d}\,{z} = f_x(P)\mathrm{d}\,{x} + f_y(P)\mathrm{d}\,{y}$$

    Definition: Differentiable function

    A function $f(x_1,\dots,x_n)$ is differentiable at $\displaystyle P = \left[\begin{array}{c}P_{x_1}\\\vdots\\P_{x_n}\end{array}\right] $ if for all values $(\hat{x}_1,\dots,\hat{x}_n)$ contained in a ball centered at $P$ we can write

    $$ f(\hat{x}_1,\dots,\hat{x}_n) = f(P) + \sum_{i=1}^{n}f_{x_i}(P)(\hat{x}_i-P_{x_i}) + E(\hat{x}_1,\dots,\hat{x}_n) $$

    where the error term $E(\hat{x}_1,\dots,\hat{x}_n)$ satisfies

    $$ \lim_{(\hat{x}_1,\dots,\hat{x}_n)\to P} \frac{E(\hat{x}_1,\dots,\hat{x}_n)}{\sqrt{\displaystyle\sum_{i=1}^{n}{\left(\hat{x}_{i}-P_{x_i}\right)}^2}} =0 $$

    Theorem: One variable chain rule

    Let $z = f(x,y)$ be a differentiable function of $x$ and $y$, where $x=x(t)$ and $y=y(t)$ are differentiable functions of $t$. Then $z$ is a differentiable function of $t$ and

    $$ \dfrac{\mathrm{d}\,{z}}{\mathrm{d}\,{t}} = \dfrac{\partial z}{\partial {x}}\dfrac{\mathrm{d}\,{x}}{\mathrm{d}\,{t}} +\dfrac{\partial z}{\partial {y}}\dfrac{\mathrm{d}\,{y}}{\mathrm{d}\,{t}} $$
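
    A quick check of the one-variable chain rule (the curves and function are sample assumptions): with $x(t)=\cos t$, $y(t)=\sin t$ and $z=xy$, the chain rule must reproduce $\frac{\mathrm{d}z}{\mathrm{d}t}=\cos 2t$, which follows from $z(t)=\cos t\sin t=\frac12\sin 2t$:

```python
import math

# sample curves x(t) = cos(t), y(t) = sin(t) and z = f(x, y) = x*y,
# so z(t) = cos(t) sin(t) and dz/dt = cos(2t) by direct computation
def dz_dt_chain(t):
    x, y = math.cos(t), math.sin(t)
    fx, fy = y, x                        # partials of f(x, y) = x*y
    dxdt, dydt = -math.sin(t), math.cos(t)
    return fx * dxdt + fy * dydt         # the chain rule

t = 0.4
assert abs(dz_dt_chain(t) - math.cos(2 * t)) < 1e-12
```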

    Theorem: Two variable chain rule

    Let $z = f(x,y)$ be a differentiable function, where $x=x(s,t)$ and $y=y(s,t)$. Then $$ \dfrac{\partial {z}}{\partial{s}} = \dfrac{\partial z}{\partial {x}}\dfrac{\partial{x}}{\partial{s}} +\dfrac{\partial z}{\partial {y}}\dfrac{\partial{y}}{\partial{s}} \qquad \dfrac{\partial {z}}{\partial{t}} = \dfrac{\partial z}{\partial {x}}\dfrac{\partial{x}}{\partial{t}} +\dfrac{\partial z}{\partial {y}}\dfrac{\partial{y}}{\partial{t}} $$

    Theorem: The chain rule

    Let $z = f(x_1,x_2,\dots,x_n)$ be a differentiable function and suppose that for each $i\in\{1,\dots,n\}$ we have $x_i=x_i(t_1,t_2,\dots, t_m)$. Then

    $$ \dfrac{\partial {z}}{\partial{t_j}} = \dfrac{\partial z}{\partial{x_1}}\dfrac{\partial{x_1}}{\partial{t_j}} + \dfrac{\partial z}{\partial{x_2}}\dfrac{\partial{x_2}}{\partial{t_j}} + \cdots + \dfrac{\partial z}{\partial{x_n}}\dfrac{\partial{x_n}}{\partial{t_j}} $$

    Theorem: Implicit differentiation

    Suppose the equation $f(x,y) = 0$ defines $y$ implicitly as a differentiable function $y=g(x)$ of $x$. Then

    $$ \begin{array}{rcl} \displaystyle \dfrac{\mathrm{d}\,{y}}{\mathrm{d}\,{x}} &=& -\dfrac{\dfrac{\partial f}{\partial x}}{\dfrac{\partial f}{\partial y}} \end{array} $$ provided that $$ \begin{array}{rcl} \displaystyle \displaystyle {\dfrac{\partial f}{\partial y}} & \ne & 0 \end{array} $$
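
    A worked instance (the constraint is a sample assumption): for the circle $f(x,y)=x^2+y^2-25=0$ the formula gives $\frac{\mathrm{d}y}{\mathrm{d}x} = -\frac{2x}{2y} = -\frac{x}{y}$, which matches differentiating the explicit branch $y=\sqrt{25-x^2}$:

```python
import math

# sample constraint f(x, y) = x**2 + y**2 - 25 = 0 (a circle):
# dy/dx = -f_x / f_y = -x / y wherever f_y = 2y != 0
def dy_dx(x, y):
    fx, fy = 2.0 * x, 2.0 * y
    return -fx / fy

# compare with the explicit upper branch y = g(x) = sqrt(25 - x**2)
def g(x):
    return math.sqrt(25.0 - x**2)

h = 1e-6
numeric = (g(3.0 + h) - g(3.0 - h)) / (2 * h)
assert abs(dy_dx(3.0, g(3.0)) - numeric) < 1e-6
assert abs(dy_dx(3.0, 4.0) + 0.75) < 1e-12   # dy/dx = -3/4 at (3, 4)
```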

    Theorem: Implicit differentiation

    Suppose $\displaystyle 0 = f(x,y,z) $ defines $z$ implicitly as a differentiable function of $x$ and $y$. Then

    $$ \begin{array}{rcl} \displaystyle \dfrac{\partial{z}}{\partial {x}} = -\dfrac{\dfrac{\partial f}{\partial x}}{\dfrac{\partial f}{\partial z}} &\qquad& \dfrac{\partial{z}}{\partial {y}} = -\dfrac{\dfrac{\partial f}{\partial y}}{\dfrac{\partial f}{\partial z}} \end{array} $$ provided that $$ \begin{array}{rcl} \displaystyle {\dfrac{\partial f}{\partial z}} & \ne & 0 \end{array} $$

    Definition: Directional derivative

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ with domain ${\mathcal{D}_f}$ and let $(a,b)\in{\mathcal{D}_f}$. Let $\vec{u}$ be a unit vector. The directional derivative of ${f}$ at $(a,b)$ in the direction of $\vec{u}$ is

    $$\begin{array}{rcl} {\mathrm{D}}_{\vec{u}}\,f(a,b) &=& \lim_{h \to 0} \dfrac{f((a,b) + h\vec{u}) - f(a,b)}{h} \\\\ &=& \lim_{h \to 0} \dfrac{f(a+hu_x,b+hu_y) - f(a,b)}{h} \\\\ &=& \lim_{h \to 0} \dfrac{f(a+h\cos\theta,b+h\sin\theta) - f(a,b)}{h} \end{array}$$

    if the limit exists

    Theorem: Directional derivative

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ with domain ${\mathcal{D}_f}$ be differentiable, and let $\vec{u}=\cos\theta \vec{i}+\sin\theta \vec{j}$. Then

    $$\begin{array}{rcl} {\mathrm{D}}_{\vec{u}}\,f(x,y) &=& f_x(x,y)\cos\theta +f_y(x,y)\sin\theta \\\\ &=& \left(\begin{array}{c}f_x\\f_y\end{array}\right) \cdot \left(\begin{array}{c}\cos\theta\\\sin\theta\end{array}\right) \end{array}$$
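
    The theorem can be checked against the limit definition: the formula $f_x\cos\theta + f_y\sin\theta$ should agree with a small-$h$ difference quotient. The sample function below is an assumption for illustration:

```python
import math

# sample function f(x, y) = x**2 + 3y, so grad f = (2x, 3)
f = lambda x, y: x**2 + 3.0 * y
a, b = 1.0, 2.0
theta = math.pi / 3
u = (math.cos(theta), math.sin(theta))

# theorem: D_u f = f_x cos(theta) + f_y sin(theta)
formula = 2.0 * a * u[0] + 3.0 * u[1]

# definition: difference quotient with a small step h
h = 1e-6
quotient = (f(a + h * u[0], b + h * u[1]) - f(a, b)) / h

assert abs(formula - quotient) < 1e-5
```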

    Definition: Gradient

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ with domain ${\mathcal{D}_f}$ such that $f_x$ and $f_y$ exist. Then \begin{eqnarray*} \nabla f = \mathrm{grad}\,f &=& f_x(x,y)\vec{i} + f_y(x,y)\vec{j} =\left(\begin{array}{c}f_x\\f_y\end{array}\right) \end{eqnarray*} is called the gradient of $f$.

    Theorem: Properties of directional derivative

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ be differentiable at $(a,b)\in{\mathcal{D}_f}$. Then

    • if $\displaystyle \nabla f =\left(\begin{array}{c}0\\0\end{array}\right) $ then $\displaystyle {\mathrm{D}}_{\vec{u}}f = 0$
    • if $\displaystyle \nabla f \ne\left(\begin{array}{c}0\\0\end{array}\right) $ then $\displaystyle {\mathrm{D}}_{\vec{u}}f $ is maximized when $\vec{u}$ is in the direction of $\nabla f$ and has maximum value $\|\nabla f (a,b)\|$.
    • if $\displaystyle \nabla f \ne\left(\begin{array}{c}0\\0\end{array}\right) $ then $\displaystyle {\mathrm{D}}_{\vec{u}}f $ is minimized when $\vec{u}$ is in the direction opposite of $\nabla f$ and has minimum value $-\|\nabla f (a,b)\|$.
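
    These extremal properties can be verified directly by scanning directions (the gradient value below is an illustrative assumption): no direction exceeds $\|\nabla f\|$, the gradient direction attains it, and the opposite direction attains $-\|\nabla f\|$:

```python
import math

# sample gradient grad f(a, b) = (2, 3); ||grad f|| = sqrt(13)
grad = (2.0, 3.0)
grad_norm = math.hypot(*grad)

def D_u(theta):
    """Directional derivative in the unit direction (cos(theta), sin(theta))."""
    return grad[0] * math.cos(theta) + grad[1] * math.sin(theta)

# scan 360 directions: none exceeds ||grad f||
thetas = [k * 2.0 * math.pi / 360 for k in range(360)]
assert max(D_u(t) for t in thetas) <= grad_norm + 1e-12

best = math.atan2(grad[1], grad[0])          # direction of grad f
assert abs(D_u(best) - grad_norm) < 1e-12    # maximum value ||grad f||
assert abs(D_u(best + math.pi) + grad_norm) < 1e-12  # minimum -||grad f||
```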

    Definition: Critical point

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ on domain ${\mathcal{D}}_{f}$. A point $(a,b)\in{\mathcal{D}}_{f}$ is a critical point for $f(x,y)$ if one of the following holds

    • $f_x(a,b) = 0$ and $f_y(a,b) = 0$, equivalently $\nabla f(a,b) = \vec{0}$
    • $f_x$ or $f_y$ does not exist at $(a,b)$

    Definition: local maximum

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ on domain ${\mathcal{D}}_{f}$. A point $(a,b)\in{\mathcal{D}}_{f}$ is a local maximum for $f(x,y)$ if there is $\displaystyle \mathcal{B}_{(a,b)}\left(\delta\right)\subset {\mathcal{D}}_{f} $ such that $$ \forall (x,y)\in\mathcal{B}_{(a,b)}\left(\delta\right), \, f(x,y) \le f(a,b) $$

    Definition: local minimum

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ on domain ${\mathcal{D}}_{f}$. A point $(a,b)\in{\mathcal{D}}_{f}$ is a local minimum for $f(x,y)$ if there is $\displaystyle \mathcal{B}_{(a,b)}\left(\delta\right)\subset {\mathcal{D}}_{f} $ such that $$ \forall (x,y)\in\mathcal{B}_{(a,b)}\left(\delta\right), \, f(x,y) \ge f(a,b) $$

    Definition: global maximum

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ on domain ${\mathcal{D}}_{f}$. A point $(a,b)\in{\mathcal{D}}_{f}$ is a global maximum for $f(x,y)$ if $$ \forall (x,y)\in\mathcal{D}_{f}, \, f(x,y) \le f(a,b) $$

    Definition: global minimum

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ on domain ${\mathcal{D}}_{f}$. A point $(a,b)\in{\mathcal{D}}_{f}$ is a global minimum for $f(x,y)$ if $$ \forall (x,y)\in\mathcal{D}_{f}, \, f(x,y) \ge f(a,b) $$

    Theorem: Local extrema are critical points

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ on domain ${\mathcal{D}}$. If a point $(a,b)\in{\mathcal{D}}$ is a local extremum for $f(x,y)$, then $(a,b)$ is a critical point of $f$.

    Definition: saddle point

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ on domain ${\mathcal{D}}$. If for a point $(a,b)\in{\mathcal{D}}$, we have $\nabla f (a,b) = \vec{0}$ and $f$ has neither local maximum nor local minimum at $(a,b)$, then $(a,b)$ is called a saddle point.

    Theorem: Second derivative test

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$ on domain ${\mathcal{D}}$ and let $(a,b)\in{\mathcal{D}}$ be a critical point with $\nabla f(a,b) = \vec{0}$. If $f_x,f_y,f_{xx},f_{yy},f_{xy}$ and $f_{yx}$ are continuous on some ball containing $(a,b)$, let

    $$D =\det \left(\displaystyle \begin{array}{rr} f_{xx}(a,b)&f_{xy}(a,b) \\ f_{yx}(a,b)&f_{yy}(a,b) \end{array} \right) = f_{xx}(a,b)f_{yy}(a,b) - {\left(f_{xy}(a,b)\right)}^2 $$

    • if $\displaystyle D>0$ and $f_{xx}(a,b) > 0 $ then $f$ has a local minimum at $(a,b)$
    • if $\displaystyle D>0$ and $f_{xx}(a,b) < 0 $ then $f$ has a local maximum at $(a,b)$
    • if $\displaystyle D < 0$ then $f$ has a saddle point at $(a,b)$
    • otherwise the test is inconclusive
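
    A worked instance of the test (the sample function is an assumption): $f(x,y)=x^3-3x+y^2$ has $f_x=3x^2-3$ and $f_y=2y$, so its critical points are $(1,0)$ and $(-1,0)$, with $f_{xx}=6x$, $f_{yy}=2$, $f_{xy}=0$:

```python
# sample function f(x, y) = x**3 - 3x + y**2:
# critical points (1, 0) and (-1, 0); f_xx = 6x, f_yy = 2, f_xy = 0
def classify(a, b):
    fxx, fyy, fxy = 6.0 * a, 2.0, 0.0
    D = fxx * fyy - fxy**2
    if D > 0 and fxx > 0:
        return "local minimum"
    if D > 0 and fxx < 0:
        return "local maximum"
    if D < 0:
        return "saddle point"
    return "inconclusive"

assert classify(1.0, 0.0) == "local minimum"   # D = 12 > 0, f_xx = 6 > 0
assert classify(-1.0, 0.0) == "saddle point"   # D = -12 < 0
```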

    Theorem: Extreme value theorem

    Let $f(x,y):{\mathbb{R}}^2\to{\mathbb{R}}$. If $f$ is continuous on a closed and bounded set ${\mathcal{D}}\subset {\mathbb{R}}^2$, then $f$ attains its global maximum at a point $(a,b)\in {\mathcal{D}} $ and its global minimum at a point $(c,d)\in{\mathcal{D}}$.

    Algorithm: Finding global extrema

    • find the critical points of $f$ in ${\mathcal{D}}$
    • find the extreme values of $f$ on the boundary of ${\mathcal{D}}$
    • compare the values from the previous two steps to identify the global maximum and global minimum

    Constrained Optimization (one constraint)

    Maximize (or minimize)
    $f(x,y)$
    Subject to
    $g(x,y) = c$

    Theorem: Lagrange

    Let $f(x,y)$ and $g(x,y)$ be smooth functions, and suppose that $c$ is a scalar constant such that $\nabla g(x,y) \ne \vec{0}$ for all $(x,y)$ that satisfy the equation $g(x,y) = c$. Then to solve the constrained optimization problem

    Maximize (or minimize)
    $f(x,y)$
    Subject to
    $g(x,y) = c$

    find the points $(x,y)$ that solve the equation $\nabla f(x,y) = \lambda \nabla g(x,y)$ for some constant $\lambda$ (the number $\lambda$ is called the Lagrange multiplier). If there is a constrained maximum or minimum, then it must be such a point.
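
    A worked instance of the theorem (the sample problem is an assumption): maximize $f(x,y)=xy$ subject to $x+y=4$. Here $\nabla f=(y,x)$ and $\nabla g=(1,1)$, so $\nabla f=\lambda\nabla g$ forces $x=y=\lambda$, and the constraint gives $x=y=2$ with $\lambda=2$:

```python
# sample problem: maximize f(x, y) = x*y subject to g(x, y) = x + y = 4.
# grad f = (y, x) and grad g = (1, 1), so y = lambda and x = lambda;
# combined with the constraint this gives x = y = 2, lambda = 2.
x, y, lam = 2.0, 2.0, 2.0

assert (y, x) == (lam * 1.0, lam * 1.0)   # grad f = lambda * grad g
assert x + y == 4.0                        # constraint holds

# nearby feasible points give smaller f, consistent with a maximum
for t in (-0.5, -0.1, 0.1, 0.5):
    xt, yt = 2.0 + t, 2.0 - t              # stays on x + y = 4
    assert xt * yt <= x * y
```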

    Constrained Optimization (two constraints)

    Maximize (or minimize)
    $f(x,y,z)$
    Subject to
    $g(x,y,z) = 0$
    $h(x,y,z) = 0$
    A point $(a,b,c)$ satisfying both constraints is a candidate extremum if there exist constants $\lambda$ and $\mu$ (the Lagrange multipliers) such that $$\nabla f(a,b,c) = \lambda \nabla g(a,b,c) + \mu\nabla h(a,b,c)$$