In mathematics, calculus on Euclidean space is a generalization of calculus of functions in one or several variables to calculus of functions on Euclidean space ${\displaystyle \mathbb {R} ^{n}}$ as well as a finite-dimensional real vector space. This calculus is also known as advanced calculus, especially in the United States. It is similar to multivariable calculus but is somewhat more sophisticated in that it uses linear algebra (or some functional analysis) more extensively and covers some concepts from differential geometry such as differential forms and Stokes' formula in terms of differential forms. This extensive use of linear algebra also allows a natural generalization of multivariable calculus to calculus on Banach spaces or topological vector spaces.

Calculus on Euclidean space is also a local model of calculus on manifolds, a theory of functions on manifolds.

Basic notions

Functions in one real variable

This section is a brief review of function theory in one-variable calculus.

A real-valued function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ is continuous at ${\displaystyle a}$ if it is approximately constant near ${\displaystyle a}$; i.e.,

${\displaystyle \lim _{h\to 0}(f(a+h)-f(a))=0.}$

In contrast, the function ${\displaystyle f}$ is differentiable at ${\displaystyle a}$ if it is approximately linear near ${\displaystyle a}$; i.e., there is some real number ${\displaystyle \lambda }$ such that

${\displaystyle \lim _{h\to 0}{\frac {f(a+h)-f(a)-\lambda h}{h}}=0.}$[1]

(For simplicity, suppose ${\displaystyle f(a)=0}$. Then the above means that ${\displaystyle f(a+h)=\lambda h+g(a,h)}$ where ${\displaystyle g(a,h)}$ goes to 0 faster than ${\displaystyle h}$ does and, in that sense, ${\displaystyle f(a+h)}$ behaves like ${\displaystyle \lambda h}$.)

The number ${\displaystyle \lambda }$ depends on ${\displaystyle a}$ and thus is denoted as ${\displaystyle f'(a)}$. If ${\displaystyle f}$ is differentiable on an open interval ${\displaystyle U}$ and if ${\displaystyle f'}$ is a continuous function on ${\displaystyle U}$, then ${\displaystyle f}$ is called a C1 function. More generally, ${\displaystyle f}$ is called a Ck function if its derivative ${\displaystyle f'}$ is a Ck-1 function. Taylor's theorem states that a Ck function is precisely a function that can be approximated by a polynomial of degree k.

If ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ is a C1 function and ${\displaystyle f'(a)\neq 0}$ for some ${\displaystyle a}$, then either ${\displaystyle f'(a)>0}$ or ${\displaystyle f'(a)<0}$; i.e., either ${\displaystyle f}$ is strictly increasing or strictly decreasing in some open interval containing a. In particular, ${\displaystyle f:f^{-1}(U)\to U}$ is bijective for some open interval ${\displaystyle U}$ containing ${\displaystyle f(a)}$. The inverse function theorem then says that the inverse function ${\displaystyle f^{-1}}$ is differentiable on U with the derivative: for ${\displaystyle y\in U}$

${\displaystyle (f^{-1})'(y)={1 \over f'(f^{-1}(y))}.}$
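
The formula ${\displaystyle (f^{-1})'(y)=1/f'(f^{-1}(y))}$ can be checked numerically. The sketch below uses the illustrative choice ${\displaystyle f(x)=x^{3}+x}$ (not from the text), which is strictly increasing, and compares the formula against a difference quotient of a bisection-based inverse.

```python
import math

# Illustrative check of (f^{-1})'(y) = 1 / f'(f^{-1}(y))
# for f(x) = x^3 + x, which has f'(x) = 3x^2 + 1 > 0, hence is invertible.
def f(x):
    return x**3 + x

def f_prime(x):
    return 3 * x**2 + 1

def f_inverse(y, tol=1e-12):
    # Invert f by bisection (valid since f is strictly increasing).
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y = f(1.5)                                  # a point in the image of f
formula = 1 / f_prime(f_inverse(y))          # the inverse function theorem formula

# Compare with a symmetric difference quotient of f_inverse at y.
h = 1e-6
numeric = (f_inverse(y + h) - f_inverse(y - h)) / (2 * h)
print(abs(formula - numeric) < 1e-5)
```

The agreement of the two values (to within the quadrature error of the difference quotient) illustrates the theorem at a single point.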

Derivative of a map and chain rule

For functions ${\displaystyle f}$ defined in the plane or more generally on a Euclidean space ${\displaystyle \mathbb {R} ^{n}}$, it is necessary to consider functions that are vector-valued or matrix-valued. It is also conceptually helpful to do this in an invariant manner (i.e., a coordinate-free way). Derivatives of such maps at a point are then vectors or linear maps, not real numbers.

Let ${\displaystyle f:X\to Y}$ be a map from an open subset ${\displaystyle X}$ of ${\displaystyle \mathbb {R} ^{n}}$ to an open subset ${\displaystyle Y}$ of ${\displaystyle \mathbb {R} ^{m}}$. Then the map ${\displaystyle f}$ is said to be differentiable at a point ${\displaystyle x}$ in ${\displaystyle X}$ if there exists a (necessarily unique) linear transformation ${\displaystyle f'(x):\mathbb {R} ^{n}\to \mathbb {R} ^{m}}$, called the derivative of ${\displaystyle f}$ at ${\displaystyle x}$, such that

${\displaystyle \lim _{h\to 0}{\frac {1}{|h|}}|f(x+h)-f(x)-f'(x)h|=0}$

where ${\displaystyle f'(x)h}$ is the application of the linear transformation ${\displaystyle f'(x)}$ to ${\displaystyle h}$.[2] If ${\displaystyle f}$ is differentiable at ${\displaystyle x}$, then it is continuous at ${\displaystyle x}$ since

${\displaystyle |f(x+h)-f(x)|\leq (|h|^{-1}|f(x+h)-f(x)-f'(x)h|)|h|+|f'(x)h|\to 0}$ as ${\displaystyle h\to 0}$.

As in the one-variable case, there is

Chain rule — [3] Let ${\displaystyle f}$ be as above and ${\displaystyle g:Y\to Z}$ a map for some open subset ${\displaystyle Z}$ of ${\displaystyle \mathbb {R} ^{l}}$. If ${\displaystyle f}$ is differentiable at ${\displaystyle x}$ and ${\displaystyle g}$ is differentiable at ${\displaystyle y=f(x)}$, then the composition ${\displaystyle g\circ f}$ is differentiable at ${\displaystyle x}$ with the derivative

${\displaystyle (g\circ f)'(x)=g'(y)\circ f'(x).}$

This is proved exactly as for functions in one variable. Indeed, with the notation ${\displaystyle {\widetilde {h}}=f(x+h)-f(x)}$, we have:

{\displaystyle {\begin{aligned}&{\frac {1}{|h|}}|g(f(x+h))-g(y)-g'(y)f'(x)h|\\&\leq {\frac {1}{|h|}}|g(y+{\widetilde {h}})-g(y)-g'(y){\widetilde {h}}|+{\frac {1}{|h|}}|g'(y)(f(x+h)-f(x)-f'(x)h)|.\end{aligned}}}

Here, since ${\displaystyle f}$ is differentiable at ${\displaystyle x}$, the second term on the right goes to zero as ${\displaystyle h\to 0}$. As for the first term, it can be written as:

${\displaystyle {\begin{cases}{\frac {|{\widetilde {h}}|}{|h|}}|g(y+{\widetilde {h}})-g(y)-g'(y){\widetilde {h}}|/|{\widetilde {h}}|,&{\widetilde {h}}\neq 0,\\0,&{\widetilde {h}}=0.\end{cases}}}$

Now, by the argument showing the continuity of ${\displaystyle f}$ at ${\displaystyle x}$, we see that ${\displaystyle {\frac {|{\widetilde {h}}|}{|h|}}}$ is bounded. Also, ${\displaystyle {\widetilde {h}}\to 0}$ as ${\displaystyle h\to 0}$ since ${\displaystyle f}$ is continuous at ${\displaystyle x}$. Hence, the first term also goes to zero as ${\displaystyle h\to 0}$ by the differentiability of ${\displaystyle g}$ at ${\displaystyle y}$. ${\displaystyle \square }$

The map ${\displaystyle f}$ as above is called continuously differentiable or ${\displaystyle C^{1}}$ if it is differentiable on the domain and also the derivatives vary continuously; i.e., ${\displaystyle x\mapsto f'(x)}$ is continuous.

Corollary — If ${\displaystyle f,g}$ are continuously differentiable, then ${\displaystyle g\circ f}$ is continuously differentiable.

As a linear transformation, ${\displaystyle f'(x)}$ is represented by an ${\displaystyle m\times n}$-matrix, called the Jacobian matrix ${\displaystyle Jf(x)}$ of ${\displaystyle f}$ at ${\displaystyle x}$ and we write it as:

${\displaystyle (Jf)(x)={\begin{bmatrix}{\frac {\partial f_{1}}{\partial x_{1}}}(x)&\cdots &{\frac {\partial f_{1}}{\partial x_{n}}}(x)\\\vdots &\ddots &\vdots \\{\frac {\partial f_{m}}{\partial x_{1}}}(x)&\cdots &{\frac {\partial f_{m}}{\partial x_{n}}}(x)\end{bmatrix}}.}$

Taking ${\displaystyle h}$ to be ${\displaystyle he_{j}}$, ${\displaystyle h}$ a real number and ${\displaystyle e_{j}=(0,\cdots ,1,\cdots ,0)}$ the j-th standard basis element, we see that the differentiability of ${\displaystyle f}$ at ${\displaystyle x}$ implies:

${\displaystyle \lim _{h\to 0}{\frac {f_{i}(x+he_{j})-f_{i}(x)}{h}}={\frac {\partial f_{i}}{\partial x_{j}}}(x)}$

where ${\displaystyle f_{i}}$ denotes the i-th component of ${\displaystyle f}$. That is, each component of ${\displaystyle f}$ is differentiable at ${\displaystyle x}$ in each variable with the derivative ${\displaystyle {\frac {\partial f_{i}}{\partial x_{j}}}(x)}$. In terms of Jacobian matrices, the chain rule says ${\displaystyle J(g\circ f)(x)=Jg(y)Jf(x)}$; i.e., as ${\displaystyle (g\circ f)_{i}=g_{i}\circ f}$,

${\displaystyle {\frac {\partial (g_{i}\circ f)}{\partial x_{j}}}(x)={\frac {\partial g_{i}}{\partial y_{1}}}(y){\frac {\partial f_{1}}{\partial x_{j}}}(x)+\cdots +{\frac {\partial g_{i}}{\partial y_{m}}}(y){\frac {\partial f_{m}}{\partial x_{j}}}(x),}$

which is the form of the chain rule that is often stated.
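The matrix identity ${\displaystyle J(g\circ f)(x)=Jg(y)Jf(x)}$ can be sanity-checked with finite differences. The maps ${\displaystyle f,g}$ below are illustrative choices, not taken from the text.

```python
import math

# Numerical check of the chain rule J(g∘f)(p) = Jg(f(p)) · Jf(p)
# for two illustrative maps R^2 -> R^2.
def f(x, y):
    return (x * y, x + y * y)

def g(u, v):
    return (math.sin(u) + v, u * v)

def jacobian(func, p, h=1e-6):
    """Central-difference Jacobian of func: R^2 -> R^2 at the point p."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        plus, minus = list(p), list(p)
        plus[j] += h
        minus[j] -= h
        fp, fm = func(*plus), func(*minus)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

p = (0.7, -0.3)
lhs = jacobian(lambda x, y: g(*f(x, y)), p)        # J(g∘f)(p), computed directly
rhs = matmul(jacobian(g, f(*p)), jacobian(f, p))   # Jg(f(p)) · Jf(p)
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-6)
```

The two Jacobians agree entrywise up to the finite-difference error.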

A partial converse to the above holds. Namely, if the partial derivatives ${\displaystyle \partial f_{i}/\partial x_{j}}$ are all defined and continuous, then ${\displaystyle f}$ is continuously differentiable.[4] This is a consequence of the mean value inequality:

Mean value inequality — [5] Given the map ${\displaystyle f:X\to Y}$ as above and points ${\displaystyle x,x+y}$ in ${\displaystyle X}$ such that the line segment between ${\displaystyle x,x+y}$ lies in ${\displaystyle X}$, if ${\displaystyle t\mapsto f(x+ty)}$ is continuous on ${\displaystyle [0,1]}$ and is differentiable on the interior, then, for any vector ${\displaystyle v\in \mathbb {R} ^{m}}$,

${\displaystyle |\Delta _{y}f(x)-v|\leq \sup _{0<t<1}\left|{\frac {d}{dt}}f(x+ty)-v\right|}$

where ${\displaystyle \Delta _{y}f(x)=f(x+y)-f(x).}$

(This version of the mean value inequality follows from the mean value inequality in Mean value theorem § Mean value theorem for vector-valued functions applied to the function ${\displaystyle [0,1]\to \mathbb {R} ^{m},\,t\mapsto f(x+ty)-tv}$, where the proof of that inequality is given.)

Indeed, let ${\displaystyle g(x)=(Jf)(x)}$. We note that, if ${\displaystyle y=y_{i}e_{i}}$ for a fixed ${\displaystyle i}$, then

${\displaystyle {\frac {d}{dt}}f(x+ty)={\frac {\partial f}{\partial x_{i}}}(x+ty)y_{i}=g(x+ty)(y_{i}e_{i}).}$

For simplicity, assume ${\displaystyle n=2}$ (the argument for the general case is similar). Then, by mean value inequality, with the operator norm ${\displaystyle \|\cdot \|}$,

{\displaystyle {\begin{aligned}&|\Delta _{y}f(x)-g(x)y|\\&\leq |\Delta _{y_{1}e_{1}}f(x_{1},x_{2}+y_{2})-g(x)(y_{1}e_{1})|+|\Delta _{y_{2}e_{2}}f(x_{1},x_{2})-g(x)(y_{2}e_{2})|\\&\leq |y_{1}|\sup _{0<t<1}\|g(x_{1}+ty_{1},x_{2}+y_{2})-g(x)\|+|y_{2}|\sup _{0<t<1}\|g(x_{1},x_{2}+ty_{2})-g(x)\|,\end{aligned}}}

which implies ${\displaystyle |\Delta _{y}f(x)-g(x)y|/|y|\to 0}$ as required. ${\displaystyle \square }$

Example: Let ${\displaystyle U}$ be the set of all invertible real square matrices of size n. Note ${\displaystyle U}$ can be identified with an open subset of ${\displaystyle \mathbb {R} ^{n^{2}}}$ with coordinates ${\displaystyle x_{ij},1\leq i,j\leq n}$. Consider the function ${\displaystyle f(g)=g^{-1}}$ = the inverse matrix of ${\displaystyle g}$ defined on ${\displaystyle U}$. To guess its derivative, assume ${\displaystyle f}$ is differentiable and consider the curve ${\displaystyle c(t)=ge^{tg^{-1}h}}$ where ${\displaystyle e^{A}}$ means the matrix exponential of ${\displaystyle A}$. By the chain rule applied to ${\displaystyle f(c(t))=e^{-tg^{-1}h}g^{-1}}$, we have:

${\displaystyle f'(c(t))\circ c'(t)=-g^{-1}he^{-tg^{-1}h}g^{-1}}$.

Taking ${\displaystyle t=0}$, we get:

${\displaystyle f'(g)h=-g^{-1}hg^{-1}}$.

We then have:[6]

${\displaystyle \|(g+h)^{-1}-g^{-1}+g^{-1}hg^{-1}\|\leq \|(g+h)^{-1}\|\|h\|\|g^{-1}hg^{-1}\|.}$

Since the operator norm is equivalent to the Euclidean norm on ${\displaystyle \mathbb {R} ^{n^{2}}}$ (any two norms on a finite-dimensional vector space are equivalent), this implies ${\displaystyle f}$ is differentiable. Finally, from the formula for ${\displaystyle f'}$, we see that the partial derivatives of ${\displaystyle f}$ are smooth (infinitely differentiable); whence, ${\displaystyle f}$ is smooth too.
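The formula ${\displaystyle f'(g)h=-g^{-1}hg^{-1}}$ says that ${\displaystyle (g+h)^{-1}-g^{-1}+g^{-1}hg^{-1}}$ is of second order in ${\displaystyle h}$. A minimal numeric sketch for ${\displaystyle 2\times 2}$ matrices, with ${\displaystyle g}$ and a small ${\displaystyle h}$ chosen arbitrarily for illustration:

```python
# Check that (g+h)^{-1} ≈ g^{-1} - g^{-1} h g^{-1} to first order (2x2 case).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

g = [[2.0, 1.0], [1.0, 3.0]]
h = [[1e-5, -2e-5], [3e-5, 1e-5]]        # a small perturbation

g_inv = inv2(g)
exact = inv2([[g[i][j] + h[i][j] for j in range(2)] for i in range(2)])
correction = matmul(matmul(g_inv, h), g_inv)   # g^{-1} h g^{-1}
pred = [[g_inv[i][j] - correction[i][j] for j in range(2)] for i in range(2)]

# The residual is O(|h|^2), far below |h| ≈ 1e-5.
err = max(abs(exact[i][j] - pred[i][j]) for i in range(2) for j in range(2))
print(err < 1e-8)
```

The residual being quadratically small in ${\displaystyle h}$ is exactly the differentiability statement above.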

Higher derivatives and Taylor formula

If ${\displaystyle f:X\to \mathbb {R} ^{m}}$ is differentiable where ${\displaystyle X\subset \mathbb {R} ^{n}}$ is an open subset, then the derivatives determine the map ${\displaystyle f':X\to \operatorname {Hom} (\mathbb {R} ^{n},\mathbb {R} ^{m})}$, where ${\displaystyle \operatorname {Hom} }$ stands for homomorphisms between vector spaces; i.e., linear maps. If ${\displaystyle f'}$ is differentiable, then ${\displaystyle f'':X\to \operatorname {Hom} (\mathbb {R} ^{n},\operatorname {Hom} (\mathbb {R} ^{n},\mathbb {R} ^{m}))}$. Here, the codomain of ${\displaystyle f''}$ can be identified with the space of bilinear maps by:

${\displaystyle \operatorname {Hom} (\mathbb {R} ^{n},\operatorname {Hom} (\mathbb {R} ^{n},\mathbb {R} ^{m})){\overset {\varphi }{\underset {\sim }{\to }}}\{(\mathbb {R} ^{n})^{2}\to \mathbb {R} ^{m}{\text{ bilinear}}\}}$

where ${\displaystyle \varphi (g)(x,y)=g(x)y}$ and ${\displaystyle \varphi }$ is bijective with the inverse ${\displaystyle \psi }$ given by ${\displaystyle (\psi (g)x)y=g(x,y)}$.[a] In general, ${\displaystyle f^{(k)}=(f^{(k-1)})'}$ is a map from ${\displaystyle X}$ to the space of ${\displaystyle k}$-multilinear maps ${\displaystyle (\mathbb {R} ^{n})^{k}\to \mathbb {R} ^{m}}$.

Just as ${\displaystyle f'(x)}$ is represented by a matrix (the Jacobian matrix), when ${\displaystyle m=1}$ (a bilinear map is a bilinear form), the bilinear form ${\displaystyle f''(x)}$ is represented by a matrix called the Hessian matrix of ${\displaystyle f}$ at ${\displaystyle x}$; namely, the square matrix ${\displaystyle H}$ of size ${\displaystyle n}$ such that ${\displaystyle f''(x)(y,z)=(Hy,z)}$, where the pairing refers to an inner product of ${\displaystyle \mathbb {R} ^{n}}$, and ${\displaystyle H}$ is none other than the Jacobian matrix of ${\displaystyle f':X\to (\mathbb {R} ^{n})^{*}\simeq \mathbb {R} ^{n}}$. The ${\displaystyle (i,j)}$-th entry of ${\displaystyle H}$ is thus given explicitly as ${\displaystyle H_{ij}={\frac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}(x)}$.

Moreover, if ${\displaystyle f''}$ exists and is continuous, then the matrix ${\displaystyle H}$ is symmetric, a fact known as the symmetry of second derivatives.[7] This is seen using the mean value inequality. For vectors ${\displaystyle u,v}$ in ${\displaystyle \mathbb {R} ^{n}}$, using the mean value inequality twice, we have:

${\displaystyle |\Delta _{v}\Delta _{u}f(x)-f''(x)(u,v)|\leq \sup _{0<s,t<1}|f''(x+su+tv)(u,v)-f''(x)(u,v)|,}$

which says

${\displaystyle f''(x)(u,v)=\lim _{s,t\to 0}{\frac {\Delta _{tv}\Delta _{su}f(x)}{st}}.}$

Since the right-hand side is symmetric in ${\displaystyle u,v}$, so is the left-hand side: ${\displaystyle f''(x)(u,v)=f''(x)(v,u)}$. By induction, if ${\displaystyle f}$ is ${\displaystyle C^{k}}$, then the k-multilinear map ${\displaystyle f^{(k)}(x)}$ is symmetric; i.e., the order of taking partial derivatives does not matter.[7]
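The symmetry of the limit above can be observed numerically by evaluating the second difference ${\displaystyle \Delta _{tv}\Delta _{su}f(x)/(st)}$ for an illustrative ${\displaystyle C^{2}}$ function (chosen here for the example, not from the article) with the two directions swapped.

```python
import math

# The second difference Δ_{tv}Δ_{su} f(p)/(s·t) approximates f''(p)(u, v);
# swapping u and v should give (approximately) the same value.
def f(x, y):
    return math.exp(x * y) + math.sin(x) * y * y   # an illustrative C^2 function

def second_diff(p, u, v, s=1e-4, t=2e-4):
    """Δ_{tv}Δ_{su} f(p) / (s·t)."""
    x, y = p
    return (f(x + s * u[0] + t * v[0], y + s * u[1] + t * v[1])
            - f(x + t * v[0], y + t * v[1])
            - f(x + s * u[0], y + s * u[1])
            + f(x, y)) / (s * t)

p = (0.4, -0.8)
u, v = (1.0, 0.5), (-0.3, 1.0)     # two arbitrary directions
a = second_diff(p, u, v)           # approximates f''(p)(u, v)
b = second_diff(p, v, u)           # approximates f''(p)(v, u)
print(abs(a - b) < 1e-2)
```

The two quotients agree up to the discretization error, reflecting ${\displaystyle f''(x)(u,v)=f''(x)(v,u)}$.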

As in the case of one variable, the Taylor formula with remainder can then be proved by integration by parts:

${\displaystyle f(z+(h,k))=\sum _{a+b<n}\partial _{x}^{a}\partial _{y}^{b}f(z){\frac {h^{a}k^{b}}{a!b!}}+n\sum _{a+b=n}{\frac {h^{a}k^{b}}{a!b!}}\int _{0}^{1}(1-t)^{n-1}\partial _{x}^{a}\partial _{y}^{b}f(z+t(h,k))\,dt.}$

Taylor's formula has an effect of dividing a function by variables, which can be illustrated by the next typical theoretical use of the formula.

Example:[8] Let ${\displaystyle T:{\mathcal {S}}\to {\mathcal {S}}}$ be a linear map on the vector space ${\displaystyle {\mathcal {S}}}$ of smooth functions on ${\displaystyle \mathbb {R} ^{n}}$ with rapidly decreasing derivatives; i.e., ${\displaystyle \sup |x^{\beta }\partial ^{\alpha }\varphi |<\infty }$ for any multi-indices ${\displaystyle \alpha ,\beta }$. (The space ${\displaystyle {\mathcal {S}}}$ is called the Schwartz space.) For each ${\displaystyle \varphi }$ in ${\displaystyle {\mathcal {S}}}$, Taylor's formula implies we can write:

${\displaystyle \varphi -\varphi (y)\psi =\sum _{j=1}^{n}(x_{j}-y_{j})\varphi _{j}}$

with ${\displaystyle \varphi _{j}\in {\mathcal {S}}}$, where ${\displaystyle \psi }$ is a smooth function with compact support and ${\displaystyle \psi (y)=1}$. Now, assume ${\displaystyle T}$ commutes with multiplication by coordinates; i.e., ${\displaystyle T(x_{j}\varphi )=x_{j}T\varphi }$. Then

${\displaystyle T\varphi -\varphi (y)T\psi =\sum _{j=1}^{n}(x_{j}-y_{j})T\varphi _{j}}$.

Evaluating the above at ${\displaystyle y}$, we get ${\displaystyle T\varphi (y)=\varphi (y)T\psi (y).}$ In other words, ${\displaystyle T}$ is a multiplication by some function ${\displaystyle m}$; i.e., ${\displaystyle T\varphi =m\varphi }$. Now, assume further that ${\displaystyle T}$ commutes with partial differentiations. We then easily see that ${\displaystyle m}$ is a constant; ${\displaystyle T}$ is a multiplication by a constant.

(Aside: the above discussion almost proves the Fourier inversion formula. Indeed, let ${\displaystyle F,R:{\mathcal {S}}\to {\mathcal {S}}}$ be the Fourier transform and the reflection; i.e., ${\displaystyle (R\varphi )(x)=\varphi (-x)}$. Then, dealing directly with the integral that is involved, one can see that ${\displaystyle T=RF^{2}}$ commutes with coordinates and partial differentiations; hence, ${\displaystyle T}$ is a multiplication by a constant. This is almost a proof since one still has to compute this constant.)

A partial converse to the Taylor formula also holds; see Borel's lemma and Whitney extension theorem.

Inverse function theorem and submersion theorem

Inverse function theorem — Let ${\displaystyle f:X\to Y}$ be a map between open subsets ${\displaystyle X,Y}$ in ${\displaystyle \mathbb {R} ^{n},\mathbb {R} ^{m}}$. If ${\displaystyle f}$ is continuously differentiable (or more generally ${\displaystyle C^{k}}$) and ${\displaystyle f'(x)}$ is bijective, then there exist open neighborhoods ${\displaystyle U,V}$ of ${\displaystyle x,f(x)}$ such that ${\displaystyle f}$ restricts to a bijection ${\displaystyle U\to V}$ whose inverse ${\displaystyle f^{-1}:V\to U}$ is continuously differentiable (or respectively ${\displaystyle C^{k}}$).

A ${\displaystyle C^{k}}$-map with a ${\displaystyle C^{k}}$-inverse is called a ${\displaystyle C^{k}}$-diffeomorphism. Thus, the theorem says that, for a map ${\displaystyle f}$ satisfying the hypothesis at a point ${\displaystyle x}$, ${\displaystyle f}$ is a diffeomorphism near ${\displaystyle x,f(x).}$ For a proof, see Inverse function theorem § A proof using successive approximation.
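The successive-approximation idea behind the cited proof can be illustrated concretely. For the hypothetical example ${\displaystyle f(x,y)=(x+y^{2},y+x^{2})}$ (which has ${\displaystyle f'(0)=I}$), the iteration ${\displaystyle p\mapsto p+(q-f(p))}$ is a contraction near the origin and converges to ${\displaystyle f^{-1}(q)}$ for small ${\displaystyle q}$.

```python
# Inverting f(x, y) = (x + y^2, y + x^2) near 0 by successive approximation.
# Since f'(0) = I, the map p -> p + (q - f(p)) is a contraction for small q.
def f(x, y):
    return (x + y * y, y + x * x)

def f_inverse(q, steps=60):
    x, y = 0.0, 0.0
    for _ in range(steps):
        fx, fy = f(x, y)
        x += q[0] - fx
        y += q[1] - fy
    return (x, y)

q = (0.1, -0.05)
p = f_inverse(q)
fp = f(*p)
residual = max(abs(fp[0] - q[0]), abs(fp[1] - q[1]))
print(residual < 1e-12)   # f(p) reproduces q to machine precision
```

This is only a sketch of the local statement: for ${\displaystyle q}$ far from the origin the iteration need not converge.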

The implicit function theorem says:[9] given a map ${\displaystyle f:\mathbb {R} ^{n}\times \mathbb {R} ^{m}\to \mathbb {R} ^{m}}$, if ${\displaystyle f(a,b)=0}$, ${\displaystyle f}$ is ${\displaystyle C^{k}}$ in a neighborhood of ${\displaystyle (a,b)}$ and the derivative of ${\displaystyle y\mapsto f(a,y)}$ at ${\displaystyle b}$ is invertible, then there exists a differentiable map ${\displaystyle g:U\to V}$ for some neighborhoods ${\displaystyle U,V}$ of ${\displaystyle a,b}$ such that ${\displaystyle f(x,g(x))=0}$ for ${\displaystyle x\in U}$. The theorem follows from the inverse function theorem; see Inverse function theorem § Implicit function theorem.

Another consequence is the submersion theorem.

Integrable functions on Euclidean spaces

A partition of an interval ${\displaystyle [a,b]}$ is a finite sequence ${\displaystyle a=t_{0}\leq t_{1}\leq \cdots \leq t_{k}=b}$. A partition ${\displaystyle P}$ of a rectangle ${\displaystyle D}$ (product of intervals) in ${\displaystyle \mathbb {R} ^{n}}$ then consists of partitions of the sides of ${\displaystyle D}$; i.e., if ${\displaystyle D=\prod _{1}^{n}[a_{i},b_{i}]}$, then ${\displaystyle P}$ consists of ${\displaystyle P_{1},\dots ,P_{n}}$ such that ${\displaystyle P_{i}}$ is a partition of ${\displaystyle [a_{i},b_{i}]}$.[10]

Given a function ${\displaystyle f}$ on ${\displaystyle D}$, we then define the upper Riemann sum of it as:

${\displaystyle U(f,P)=\sum _{Q\in P}(\sup _{Q}f)\operatorname {vol} (Q)}$

where

• ${\displaystyle Q}$ is a partition element of ${\displaystyle P}$; i.e., ${\displaystyle Q=\prod _{i=1}^{n}[t_{i,j_{i}},t_{i,j_{i}+1}]}$ when ${\displaystyle P_{i}:a_{i}=t_{i,0}\leq \cdots \leq t_{i,k_{i}}=b_{i}}$ is a partition of ${\displaystyle [a_{i},b_{i}]}$.[11]
• The volume ${\displaystyle \operatorname {vol} (Q)}$ of ${\displaystyle Q}$ is the usual Euclidean volume; i.e., ${\displaystyle \operatorname {vol} (Q)=\prod _{1}^{n}(t_{i,j_{i}+1}-t_{i,j_{i}})}$.

The lower Riemann sum ${\displaystyle L(f,P)}$ of ${\displaystyle f}$ is then defined by replacing ${\displaystyle \sup }$ by ${\displaystyle \inf }$. Finally, the function ${\displaystyle f}$ is called integrable if it is bounded and ${\displaystyle \sup\{L(f,P)\mid P\}=\inf\{U(f,P)\mid P\}}$. In that case, the common value is denoted as ${\displaystyle \int _{D}f\,dx}$.[12]
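The squeeze between upper and lower sums can be watched numerically. The following sketch uses the illustrative integrand ${\displaystyle f(x,y)=xy}$ on ${\displaystyle D=[0,1]\times [0,1]}$ with uniform partitions; both sums approach ${\displaystyle \int _{D}f\,dx=1/4}$.

```python
# Upper and lower Riemann sums of f(x, y) = x·y over D = [0,1]^2,
# with k uniform subintervals per side.
def riemann_sums(k):
    h = 1.0 / k
    upper = lower = 0.0
    for i in range(k):
        for j in range(k):
            # On Q = [i·h, (i+1)h] × [j·h, (j+1)h], f is increasing in each
            # variable, so sup and inf sit at opposite corners of Q.
            upper += ((i + 1) * h) * ((j + 1) * h) * h * h
            lower += (i * h) * (j * h) * h * h
    return lower, upper

lo, up = riemann_sums(200)
print(lo < 0.25 < up and up - lo < 0.01)
```

As the partition refines, the gap ${\displaystyle U(f,P)-L(f,P)}$ shrinks to 0, which is exactly the integrability of this continuous ${\displaystyle f}$.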

A subset of ${\displaystyle \mathbb {R} ^{n))$ is said to have measure zero if for each ${\displaystyle \epsilon >0}$, there are some possibly infinitely many rectangles ${\displaystyle D_{1},D_{2},\dots ,}$ whose union contains the set and ${\displaystyle \sum _{i}\operatorname {vol} (D_{i})<\epsilon .}$[13]

A key theorem is

Theorem — [14] A bounded function ${\displaystyle f}$ on a closed rectangle is integrable if and only if the set ${\displaystyle \{x\mid f{\text{ is not continuous at }}x\}}$ has measure zero.

The next theorem allows us to compute the integral of a function as the iteration of the integrals of the function in one-variables:

Fubini's theorem — If ${\displaystyle f}$ is a continuous function on a closed rectangle ${\displaystyle D=\prod [a_{i},b_{i}]}$ (in fact, this assumption is too strong), then

${\displaystyle \int _{D}f\,dx=\int _{a_{n}}^{b_{n}}\cdots \left(\int _{a_{1}}^{b_{1}}f(x_{1},\dots ,x_{n})dx_{1}\right)dx_{2}\cdots dx_{n}.}$

In particular, the order of integrations can be changed.
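Fubini's theorem can be exercised numerically: iterating one-dimensional quadratures in either order gives the same double integral. The integrand below, ${\displaystyle f(x,y)=\sin(x)e^{y}}$ on ${\displaystyle [0,\pi ]\times [0,1]}$, is an illustrative choice with exact value ${\displaystyle 2(e-1)}$.

```python
import math

# Iterated midpoint-rule integration of f(x, y) = sin(x)·e^y over
# D = [0, π] × [0, 1], in both orders of integration.
def midpoint_1d(g, a, b, n=500):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

f = lambda x, y: math.sin(x) * math.exp(y)

dx_first = midpoint_1d(lambda y: midpoint_1d(lambda x: f(x, y), 0, math.pi), 0, 1)
dy_first = midpoint_1d(lambda x: midpoint_1d(lambda y: f(x, y), 0, 1), 0, math.pi)
exact = 2 * (math.e - 1)   # (∫ sin x dx)(∫ e^y dy) = 2(e − 1)
print(abs(dx_first - exact) < 1e-3 and abs(dy_first - dx_first) < 1e-9)
```

The two orders agree to rounding error because both quadratures sum the same grid values; the small remaining error against the exact value is the midpoint-rule discretization error.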

Finally, if ${\displaystyle M\subset \mathbb {R} ^{n}}$ is a bounded open subset and ${\displaystyle f}$ a function on ${\displaystyle M}$, then we define ${\displaystyle \int _{M}f\,dx:=\int _{D}\chi _{M}f\,dx}$ where ${\displaystyle D}$ is a closed rectangle containing ${\displaystyle M}$ and ${\displaystyle \chi _{M}}$ is the characteristic function of ${\displaystyle M}$; i.e., ${\displaystyle \chi _{M}(x)=1}$ if ${\displaystyle x\in M}$ and ${\displaystyle =0}$ if ${\displaystyle x\not \in M,}$ provided ${\displaystyle \chi _{M}f}$ is integrable.[15]

Surface integral

If a bounded surface ${\displaystyle M}$ in ${\displaystyle \mathbb {R} ^{3}}$ is parametrized by ${\displaystyle {\textbf {r}}={\textbf {r}}(u,v)}$ with domain ${\displaystyle D}$, then the surface integral of a measurable function ${\displaystyle F}$ on ${\displaystyle M}$ is defined and denoted as:

${\displaystyle \int _{M}F\,dS:=\int \int _{D}(F\circ {\textbf {r}})|{\textbf {r}}_{u}\times {\textbf {r}}_{v}|\,dudv}$

If ${\displaystyle F:M\to \mathbb {R} ^{3))$ is vector-valued, then we define

${\displaystyle \int _{M}F\cdot dS:=\int _{M}(F\cdot {\textbf {n}})\,dS}$

where ${\displaystyle {\textbf {n}}}$ is an outward unit normal vector to ${\displaystyle M}$. Since ${\displaystyle {\textbf {n}}={\frac {{\textbf {r}}_{u}\times {\textbf {r}}_{v}}{|{\textbf {r}}_{u}\times {\textbf {r}}_{v}|}}}$, we have:

${\displaystyle \int _{M}F\cdot dS=\int \int _{D}(F\circ {\textbf {r}})\cdot ({\textbf {r}}_{u}\times {\textbf {r}}_{v})\,dudv=\int \int _{D}\det(F\circ {\textbf {r}},{\textbf {r}}_{u},{\textbf {r}}_{v})\,dudv.}$
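The determinant form of the flux integral is easy to evaluate numerically. As an illustrative example (not from the text), take the upper unit hemisphere ${\displaystyle {\textbf {r}}(u,v)=(\sin u\cos v,\sin u\sin v,\cos u)}$ on ${\displaystyle D=[0,\pi /2]\times [0,2\pi ]}$ and the constant field ${\displaystyle F=(0,0,1)}$; the exact flux is ${\displaystyle \pi }$, the area of the projected unit disk.

```python
import math

# Flux of F = (0, 0, 1) through the upper unit hemisphere via
# ∫_M F·dS = ∬_D det(F∘r, r_u, r_v) du dv.
def integrand(u, v):
    ru = (math.cos(u) * math.cos(v), math.cos(u) * math.sin(v), -math.sin(u))
    rv = (-math.sin(u) * math.sin(v), math.sin(u) * math.cos(v), 0.0)
    # det((0,0,1), r_u, r_v) expanded along the first row: only F_z ≠ 0.
    return ru[0] * rv[1] - ru[1] * rv[0]     # = sin u · cos u

n = 400
du, dv = (math.pi / 2) / n, (2 * math.pi) / n
flux = sum(integrand((i + 0.5) * du, (j + 0.5) * dv)
           for i in range(n) for j in range(n)) * du * dv
print(abs(flux - math.pi) < 1e-3)
```

The midpoint-rule sum converges to ${\displaystyle \pi }$ as the grid refines.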

Vector analysis

Tangent vectors and vector fields

Let ${\displaystyle c:[0,1]\to \mathbb {R} ^{n}}$ be a differentiable curve. Then the tangent vector to the curve ${\displaystyle c}$ at ${\displaystyle t}$ is a vector ${\displaystyle v}$ at the point ${\displaystyle c(t)}$ whose components are given as:

${\displaystyle v=(c_{1}'(t),\dots ,c_{n}'(t))}$.[16]

For example, if ${\displaystyle c(t)=(a\cos(t),a\sin(t),bt),a>0,b>0}$ is a helix, then the tangent vector at t is:

${\displaystyle c'(t)=(-a\sin(t),a\cos(t),b).}$

It corresponds to the intuition that a point on the helix moves up at a constant speed.
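The constant-speed intuition can be confirmed directly: ${\displaystyle |c'(t)|={\sqrt {a^{2}+b^{2}}}}$ for every ${\displaystyle t}$. A quick check with illustrative values of ${\displaystyle a,b}$:

```python
import math

# Speed |c'(t)| of the helix c(t) = (a cos t, a sin t, b t);
# c'(t) = (−a sin t, a cos t, b) has constant length √(a² + b²).
a, b = 2.0, 0.5

def speed(t):
    cx, cy, cz = -a * math.sin(t), a * math.cos(t), b
    return math.sqrt(cx * cx + cy * cy + cz * cz)

speeds = [speed(0.1 * k) for k in range(100)]
print(max(speeds) - min(speeds) < 1e-12)   # the speed never varies
```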

If ${\displaystyle M\subset \mathbb {R} ^{n}}$ is a differentiable curve or surface, then the tangent space to ${\displaystyle M}$ at a point p is the set of all tangent vectors to the differentiable curves ${\displaystyle c:[0,1]\to M}$ with ${\displaystyle c(0)=p}$.

A vector field X is an assignment to each point p in M of a tangent vector ${\displaystyle X_{p}}$ to M at p such that the assignment varies smoothly.

Differential forms

The dual notion of a vector field is a differential form. Given an open subset ${\displaystyle M}$ in ${\displaystyle \mathbb {R} ^{n}}$, by definition, a differential 1-form (often just 1-form) ${\displaystyle \omega }$ is an assignment to each point ${\displaystyle p}$ in ${\displaystyle M}$ of a linear functional ${\displaystyle \omega _{p}}$ on the tangent space ${\displaystyle T_{p}M}$ to ${\displaystyle M}$ at ${\displaystyle p}$ such that the assignment varies smoothly. For a (real or complex-valued) smooth function ${\displaystyle f}$, define the 1-form ${\displaystyle df}$ by: for a tangent vector ${\displaystyle v}$ at ${\displaystyle p}$,

${\displaystyle df_{p}(v)=v(f)}$

where ${\displaystyle v(f)}$ denotes the directional derivative of ${\displaystyle f}$ in the direction ${\displaystyle v}$ at ${\displaystyle p}$.[17] For example, if ${\displaystyle x_{i}}$ is the ${\displaystyle i}$-th coordinate function, then ${\displaystyle dx_{i,p}(v)=v_{i}}$; i.e., the ${\displaystyle dx_{i,p}}$ form the dual basis to the standard basis on ${\displaystyle T_{p}M}$. Then every differential 1-form ${\displaystyle \omega }$ can be written uniquely as

${\displaystyle \omega =f_{1}\,dx_{1}+\cdots +f_{n}\,dx_{n}}$

for some smooth functions ${\displaystyle f_{1},\dots ,f_{n}}$ on ${\displaystyle M}$ (since, for every point ${\displaystyle p}$, the linear functional ${\displaystyle \omega _{p}}$ is a unique linear combination of ${\displaystyle dx_{i}}$ over real numbers). More generally, a differential k-form is an assignment to each point ${\displaystyle p}$ in ${\displaystyle M}$ of a vector ${\displaystyle \omega _{p}}$ in the ${\displaystyle k}$-th exterior power ${\displaystyle \bigwedge ^{k}T_{p}^{*}M}$ of the dual space ${\displaystyle T_{p}^{*}M}$ of ${\displaystyle T_{p}M}$ such that the assignment varies smoothly.[17] In particular, a 0-form is the same as a smooth function. Also, any ${\displaystyle k}$-form ${\displaystyle \omega }$ can be written uniquely as:

${\displaystyle \omega =\sum _{i_{1}<\cdots <i_{k}}f_{i_{1}\dots i_{k}}\,dx_{i_{1}}\wedge \cdots \wedge dx_{i_{k}}}$

for some smooth functions ${\displaystyle f_{i_{1}\dots i_{k}}}$.[17]

As with smooth functions, we can differentiate and integrate differential forms. If ${\displaystyle f}$ is a smooth function, then ${\displaystyle df}$ can be written as:[18]

${\displaystyle df=\sum _{i=1}^{n}{\frac {\partial f}{\partial x_{i}}}\,dx_{i}}$

since, for ${\displaystyle v=\partial /\partial x_{j}|_{p}}$, we have: ${\displaystyle df_{p}(v)={\frac {\partial f}{\partial x_{j}}}(p)=\sum _{i=1}^{n}{\frac {\partial f}{\partial x_{i}}}(p)\,dx_{i}(v)}$. Note that, in the above expression, the left-hand side (and hence the right-hand side) is independent of the coordinates ${\displaystyle x_{1},\dots ,x_{n}}$; this property is called the invariance of the differential.

The operation ${\displaystyle d}$ is called the exterior derivative and it extends to any differential forms inductively by the requirement (Leibniz rule)

${\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{p}\alpha \wedge d\beta .}$

where ${\displaystyle \alpha ,\beta }$ are a p-form and a q-form.

The exterior derivative has the important property that ${\displaystyle d\circ d=0}$; that is, ${\displaystyle d(d\omega )=0}$ for any differential form ${\displaystyle \omega }$. This property is a consequence of the symmetry of second derivatives (mixed partials are equal).

Boundary and orientation

A circle can be oriented clockwise or counterclockwise. Mathematically, we say that a subset ${\displaystyle M}$ of ${\displaystyle \mathbb {R} ^{n}}$ is oriented if there is a consistent choice of normal vectors to ${\displaystyle M}$ that varies continuously. For example, a circle or, more generally, an n-sphere can be oriented; i.e., it is orientable. On the other hand, a Möbius strip (a surface obtained by identifying two opposite sides of a rectangle in a twisted way) cannot be oriented: if we start with a normal vector and travel around the strip, the normal vector at the end will point in the opposite direction.

Proposition — A bounded differentiable region ${\displaystyle M}$ in ${\displaystyle \mathbb {R} ^{n}}$ of dimension ${\displaystyle k}$ is oriented if and only if there exists a nowhere-vanishing ${\displaystyle k}$-form on ${\displaystyle M}$ (called a volume form).

The proposition is useful because it allows us to give an orientation by giving a volume form.

Integration of differential forms

If ${\displaystyle \omega =f\,dx_{1}\wedge \cdots \wedge dx_{n}}$ is a differential n-form on an open subset M in ${\displaystyle \mathbb {R} ^{n}}$ (any n-form on M is of that form), then the integration of it over ${\displaystyle M}$ with the standard orientation is defined as:

${\displaystyle \int _{M}\omega =\int _{M}f\,dx_{1}\cdots dx_{n}.}$

If M is given the orientation opposite to the standard one, then ${\displaystyle \int _{M}\omega }$ is defined as the negative of the right-hand side.

Then we have the fundamental formula relating exterior derivative and integration:

Stokes' formula — For a bounded region ${\displaystyle M}$ in ${\displaystyle \mathbb {R} ^{n}}$ of dimension ${\displaystyle k}$ whose boundary is a union of finitely many ${\displaystyle C^{1}}$-subsets, if ${\displaystyle M}$ is oriented, then

${\displaystyle \int _{\partial M}\omega =\int _{M}d\omega }$

for any differential ${\displaystyle (k-1)}$-form ${\displaystyle \omega }$ on ${\displaystyle M}$, where the left-hand side integrates the restriction of ${\displaystyle \omega }$ to the boundary ${\displaystyle \partial M}$ of ${\displaystyle M}$.

Here is a sketch of the proof of the formula.[19] If ${\displaystyle f}$ is a smooth function on ${\displaystyle \mathbb {R} ^{n}}$ with compact support, then we have:

${\displaystyle \int d(f\omega )=0}$

(since, by the fundamental theorem of calculus, the above can be evaluated on boundaries of the set containing the support.) On the other hand,

${\displaystyle \int d(f\omega )=\int df\wedge \omega +\int f\,d\omega .}$

Let ${\displaystyle f}$ approach the characteristic function of ${\displaystyle M}$. Then the second term on the right goes to ${\displaystyle \int _{M}d\omega }$ while the first goes to ${\displaystyle -\int _{\partial M}\omega }$, by an argument similar to proving the fundamental theorem of calculus. ${\displaystyle \square }$

The formula generalizes the fundamental theorem of calculus as well as Stokes' theorem in multivariable calculus. Indeed, if ${\displaystyle M=[a,b]}$ is an interval and ${\displaystyle \omega =f}$, then ${\displaystyle d\omega =f'\,dx}$ and the formula says:

${\displaystyle \int _{M}f'\,dx=f(b)-f(a)}$.

Similarly, if ${\displaystyle M}$ is an oriented bounded surface in ${\displaystyle \mathbb {R} ^{3}}$ and ${\displaystyle \omega =f\,dx+g\,dy+h\,dz}$, then ${\displaystyle d(f\,dx)=df\wedge dx={\frac {\partial f}{\partial y}}\,dy\wedge dx+{\frac {\partial f}{\partial z}}\,dz\wedge dx}$ and similarly for ${\displaystyle d(g\,dy)}$ and ${\displaystyle d(h\,dz)}$. Collecting the terms, we thus get:

${\displaystyle d\omega =\left({\frac {\partial h}{\partial y}}-{\frac {\partial g}{\partial z}}\right)dy\wedge dz+\left({\frac {\partial f}{\partial z}}-{\frac {\partial h}{\partial x}}\right)dz\wedge dx+\left({\frac {\partial g}{\partial x}}-{\frac {\partial f}{\partial y}}\right)dx\wedge dy.}$

Then, from the definition of the integration of ${\displaystyle \omega }$, we have ${\displaystyle \int _{M}d\omega =\int _{M}(\nabla \times F)\cdot dS}$ where ${\displaystyle F=(f,g,h)}$ is the vector-valued function and ${\displaystyle \nabla =\left({\frac {\partial }{\partial x}},{\frac {\partial }{\partial y}},{\frac {\partial }{\partial z}}\right)}$. Hence, Stokes' formula becomes

${\displaystyle \int _{M}(\nabla \times F)\cdot dS=\int _{\partial M}(f\,dx+g\,dy+h\,dz),}$

which is the usual form of Stokes' theorem on surfaces. Green's theorem is also a special case of Stokes' formula.
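Green's theorem, as a special case, is easy to test numerically: for a 1-form ${\displaystyle \omega =f\,dx+g\,dy}$ on the unit disk, ${\displaystyle \int _{\partial M}\omega =\int _{M}\left({\frac {\partial g}{\partial x}}-{\frac {\partial f}{\partial y}}\right)dx\,dy}$. The choice ${\displaystyle f=-y^{3},g=x^{3}}$ below is illustrative; both sides equal ${\displaystyle 3\pi /2}$.

```python
import math

# Green's theorem on the unit disk M for ω = −y³ dx + x³ dy,
# where ∂g/∂x − ∂f/∂y = 3(x² + y²).
n = 2000

# Boundary integral over ∂M: x = cos t, y = sin t, t ∈ [0, 2π].
dt = 2 * math.pi / n
lhs = 0.0
for k in range(n):
    t = (k + 0.5) * dt
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t) * dt, math.cos(t) * dt
    lhs += (-y**3) * dx + (x**3) * dy

# Area integral over M in polar coordinates: ∬ 3r² · r dr dθ.
m = 1000
dr = 1.0 / m
rhs = 2 * math.pi * sum(3 * ((i + 0.5) * dr) ** 3 * dr for i in range(m))

print(abs(lhs - 1.5 * math.pi) < 1e-6 and abs(rhs - 1.5 * math.pi) < 1e-4)
```

Both quadratures converge to ${\displaystyle 3\pi /2}$, the common value of the two sides.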

Stokes' formula also yields a general version of Cauchy's integral formula. To state and prove it, for the complex variable ${\displaystyle z=x+iy}$ and the conjugate ${\displaystyle {\bar {z}}}$, let us introduce the operators

${\displaystyle {\frac {\partial }{\partial z}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}-i{\frac {\partial }{\partial y}}\right),\,{\frac {\partial }{\partial {\bar {z}}}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}+i{\frac {\partial }{\partial y}}\right).}$

In these notations, a function ${\displaystyle f}$ is holomorphic (complex-analytic) if and only if ${\displaystyle {\frac {\partial f}{\partial {\bar {z}}}}=0}$ (the Cauchy–Riemann equations). Also, we have:

${\displaystyle df={\frac {\partial f}{\partial z))dz+{\frac {\partial f}{\partial {\bar {z))))d{\bar {z)).}$
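As a quick sanity check on these operators, one can verify symbolically that the holomorphic sample f = z² satisfies ∂f/∂z̄ = 0 (the Cauchy–Riemann equations) while ∂f/∂z = 2z; a sketch using sympy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I*y
f = z**2  # hypothetical holomorphic sample

dfdx = sp.diff(f, x)
dfdy = sp.diff(f, y)
d    = sp.simplify((dfdx - sp.I*dfdy)/2)  # ∂f/∂z
dbar = sp.simplify((dfdx + sp.I*dfdy)/2)  # ∂f/∂z̄
# dbar vanishes (Cauchy–Riemann) and d equals 2z
```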

Let ${\displaystyle D_{\epsilon }=\{z\in \mathbb {C} \mid \epsilon <|z-z_{0}|<r\))$ be a punctured disk with center ${\displaystyle z_{0))$. Since ${\displaystyle 1/(z-z_{0})}$ is holomorphic on ${\displaystyle D_{\epsilon ))$, we have:

${\displaystyle d\left({\frac {f}{z-z_{0))}dz\right)={\frac {\partial f}{\partial {\bar {z)))){\frac {d{\bar {z))\wedge dz}{z-z_{0))))$.

By Stokes’ formula,

${\displaystyle \int _{D_{\epsilon )){\frac {\partial f}{\partial {\bar {z)))){\frac {d{\bar {z))\wedge dz}{z-z_{0))}=\left(\int _{|z-z_{0}|=r}-\int _{|z-z_{0}|=\epsilon }\right){\frac {f}{z-z_{0))}dz.}$

Letting ${\displaystyle \epsilon \to 0}$ we then get:[20][21]

${\displaystyle 2\pi i\,f(z_{0})=\int _{|z-z_{0}|=r}{\frac {f}{z-z_{0))}dz+\int _{|z-z_{0}|\leq r}{\frac {\partial f}{\partial {\bar {z)))){\frac {dz\wedge d{\bar {z))}{z-z_{0))}.}$
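When f is holomorphic, the area term vanishes and this reduces to the classical Cauchy integral formula, which can be checked numerically. A sketch, with e^z as a hypothetical sample integrand:

```python
import cmath
import math

def circle_integral(f, z0, r, n=2000):
    # ∮_{|z - z0| = r} f(z) dz via the trapezoidal rule on z = z0 + r e^{it}
    total = 0j
    for k in range(n):
        t = 2*math.pi*k/n
        z = z0 + r*cmath.exp(1j*t)
        dz = 1j*r*cmath.exp(1j*t)*(2*math.pi/n)
        total += f(z)*dz
    return total

z0 = 0.3 + 0.2j
# Cauchy's integral formula predicts (1/2πi) ∮ e^z/(z - z0) dz = e^{z0}
val = circle_integral(lambda z: cmath.exp(z)/(z - z0), z0, 1.0)
```

For a smooth periodic integrand the trapezoidal rule converges very fast, so a couple of thousand sample points already reproduce e^{z0} to near machine precision.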

Winding numbers and Poincaré lemma


A differential form ${\displaystyle \omega }$ is called closed if ${\displaystyle d\omega =0}$ and is called exact if ${\displaystyle \omega =d\eta }$ for some differential form ${\displaystyle \eta }$ (often called a potential). Since ${\displaystyle d\circ d=0}$, an exact form is closed. But the converse does not hold in general; there might be a non-exact closed form. A classic example of such a form is:[22]

${\displaystyle \omega ={\frac {-y}{x^{2}+y^{2))}\,dx+{\frac {x}{x^{2}+y^{2))}\,dy}$,

which is a differential form on ${\displaystyle \mathbb {R} ^{2}-0}$. Suppose we switch to polar coordinates: ${\displaystyle x=r\cos \theta ,y=r\sin \theta }$ where ${\displaystyle r={\sqrt {x^{2}+y^{2))))$. Then

${\displaystyle \omega =r^{-2}(-r\sin \theta \,dx+r\cos \theta \,dy)=d\theta .}$

This does not show that ${\displaystyle \omega }$ is exact: the trouble is that ${\displaystyle \theta }$ is not a well-defined continuous function on ${\displaystyle \mathbb {R} ^{2}-0}$. Since any function ${\displaystyle f}$ on ${\displaystyle \mathbb {R} ^{2}-0}$ with ${\displaystyle df=\omega }$ would differ from ${\displaystyle \theta }$ by a constant, this means that ${\displaystyle \omega }$ is not exact. The calculation, however, shows that ${\displaystyle \omega }$ is exact, for example, on ${\displaystyle \mathbb {R} ^{2}-\{x=0\))$ since we can take ${\displaystyle \theta =\arctan(y/x)}$ there.
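The non-exactness can also be seen numerically: by Stokes' formula, if ω were exact its integral around the unit circle would vanish, but the integral is 2π (the total change of the angle θ). A sketch of this check:

```python
import math

def integrate_omega(n=20000):
    # line integral of ω = (-y dx + x dy)/(x² + y²) around the unit circle,
    # approximated by the midpoint rule on a polygonal path
    total = 0.0
    for k in range(n):
        t0 = 2*math.pi*k/n
        t1 = 2*math.pi*(k + 1)/n
        x0, y0 = math.cos(t0), math.sin(t0)
        x1, y1 = math.cos(t1), math.sin(t1)
        xm, ym = (x0 + x1)/2, (y0 + y1)/2
        r2 = xm*xm + ym*ym
        total += (-ym*(x1 - x0) + xm*(y1 - y0))/r2
    return total
# integrate_omega() ≈ 2π, so ω cannot be exact on the punctured plane
```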

There is a result (Poincaré lemma) that gives a condition that guarantees closed forms are exact. To state it, we need some notions from topology. Given two continuous maps ${\displaystyle f,g:X\to Y}$ between subsets of ${\displaystyle \mathbb {R} ^{m},\mathbb {R} ^{n))$ (or more generally topological spaces), a homotopy from ${\displaystyle f}$ to ${\displaystyle g}$ is a continuous function ${\displaystyle H:X\times [0,1]\to Y}$ such that ${\displaystyle f(x)=H(x,0)}$ and ${\displaystyle g(x)=H(x,1)}$. Intuitively, a homotopy is a continuous variation of one function to another. A loop in a set ${\displaystyle X}$ is a curve whose starting point coincides with the end point; i.e., ${\displaystyle c:[0,1]\to X}$ such that ${\displaystyle c(0)=c(1)}$. Then a subset of ${\displaystyle \mathbb {R} ^{n))$ is called simply connected if every loop is homotopic to a constant function. A typical example of a simply connected set is a disk ${\displaystyle D=\{(x,y)\mid {\sqrt {x^{2}+y^{2))}\leq r\}\subset \mathbb {R} ^{2))$. Indeed, given a loop ${\displaystyle c:[0,1]\to D}$, we have the homotopy ${\displaystyle H:[0,1]^{2}\to D,\,H(x,t)=(1-t)c(x)+tc(0)}$ from ${\displaystyle c}$ to the constant function ${\displaystyle c(0)}$. A punctured disk, on the other hand, is not simply connected.

Poincaré lemma — If ${\displaystyle M}$ is a simply connected open subset of ${\displaystyle \mathbb {R} ^{n))$, then each closed 1-form on ${\displaystyle M}$ is exact.

Geometry of curves and surfaces


Moving frame

Vector fields ${\displaystyle E_{1},\dots ,E_{3))$ on ${\displaystyle \mathbb {R} ^{3))$ are called a frame field if they are orthonormal at each point; i.e., ${\displaystyle E_{i}\cdot E_{j}=\delta _{ij))$ at each point.[23] The basic example is the standard frame ${\displaystyle U_{i))$; i.e., ${\displaystyle U_{i}(x)}$ is a standard basis for each point ${\displaystyle x}$ in ${\displaystyle \mathbb {R} ^{3))$. Another example is the cylindrical frame

${\displaystyle E_{1}=\cos \theta U_{1}+\sin \theta U_{2},\,E_{2}=-\sin \theta U_{1}+\cos \theta U_{2},\,E_{3}=U_{3}.}$[24]

For the study of the geometry of a curve, the important frame to use is a Frenet frame ${\displaystyle T,N,B}$ on a unit-speed curve ${\displaystyle \beta :I\to \mathbb {R} ^{3))$ given as:

${\displaystyle T=\beta ',\,N={\frac {T'}{|T'|)),\,B=T\times N.}$

The Gauss–Bonnet theorem

The Gauss–Bonnet theorem relates the topology of a surface and its geometry.

The Gauss–Bonnet theorem — [25] For each bounded surface ${\displaystyle M}$ in ${\displaystyle \mathbb {R} ^{3))$, we have:

${\displaystyle 2\pi \,\chi (M)=\int _{M}K\,dS}$

where ${\displaystyle \chi (M)}$ is the Euler characteristic of ${\displaystyle M}$ and ${\displaystyle K}$ the Gaussian curvature of ${\displaystyle M}$.

Calculus of variations

Method of Lagrange multiplier

Lagrange multiplier — [26] Let ${\displaystyle g:U\to \mathbb {R} ^{r))$ be a differentiable function from an open subset of ${\displaystyle \mathbb {R} ^{n))$ such that ${\displaystyle g'}$ has rank ${\displaystyle r}$ at every point in ${\displaystyle g^{-1}(0)}$. For a differentiable function ${\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }$, if ${\displaystyle f}$ attains either a maximum or minimum at a point ${\displaystyle p}$ in ${\displaystyle g^{-1}(0)}$, then there exist real numbers ${\displaystyle \lambda _{1},\dots ,\lambda _{r))$ such that

${\displaystyle \nabla f(p)=\sum _{i=1}^{r}\lambda _{i}\nabla g_{i}(p)}$.

In other words, ${\displaystyle p}$ is a stationary point of ${\displaystyle f-\sum _{1}^{r}\lambda _{i}g_{i))$.

The set ${\displaystyle g^{-1}(0)}$ is usually called a constraint.

Example:[27] Suppose we want to find the minimum distance between the circle ${\displaystyle x^{2}+y^{2}=1}$ and the line ${\displaystyle x+y=4}$. That means that we want to minimize the function ${\displaystyle f(x,y,u,v)=(x-u)^{2}+(y-v)^{2))$, the squared distance between a point ${\displaystyle (x,y)}$ on the circle and a point ${\displaystyle (u,v)}$ on the line, under the constraint ${\displaystyle g=(x^{2}+y^{2}-1,u+v-4)=0}$. We have:

${\displaystyle \nabla f=(2(x-u),2(y-v),-2(x-u),-2(y-v)).}$
${\displaystyle \nabla g_{1}=(2x,2y,0,0),\nabla g_{2}=(0,0,1,1).}$

Since the Jacobian matrix of ${\displaystyle g}$ has rank 2 everywhere on ${\displaystyle g^{-1}(0)}$, the method of Lagrange multipliers gives:

${\displaystyle x-u=\lambda _{1}x,\,y-v=\lambda _{1}y,\,2(x-u)=-\lambda _{2},\,2(y-v)=-\lambda _{2}.}$

If ${\displaystyle \lambda _{1}=0}$, then ${\displaystyle x=u,y=v}$, which is impossible since the circle and the line are disjoint. Thus, ${\displaystyle \lambda _{1}\neq 0}$ and

${\displaystyle x={\frac {x-u}{\lambda _{1))},\,y={\frac {y-v}{\lambda _{1))}.}$

From this, it easily follows that ${\displaystyle x=y=1/{\sqrt {2))}$ and ${\displaystyle u=v=2}$. Hence, the minimum distance is ${\displaystyle 2{\sqrt {2))-1}$ (as a minimum distance clearly exists).
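The example can also be verified symbolically by solving the multiplier equations together with the constraints; a sketch using sympy (the symbols l1, l2 stand for λ₁, λ₂):

```python
import sympy as sp

x, y, u, v, l1, l2 = sp.symbols('x y u v l1 l2', real=True)

f = (x - u)**2 + (y - v)**2   # squared distance
g1 = x**2 + y**2 - 1          # (x, y) lies on the circle
g2 = u + v - 4                # (u, v) lies on the line

# stationary points of f - l1*g1 - l2*g2, together with the constraints
L = f - l1*g1 - l2*g2
eqs = [sp.diff(L, s) for s in (x, y, u, v)] + [g1, g2]
sols = sp.solve(eqs, [x, y, u, v, l1, l2], dict=True)

dists = [sp.sqrt(f.subs(s)) for s in sols]
min_dist = min(dists, key=lambda d: d.evalf())
# min_dist agrees with the value 2*sqrt(2) - 1 found above
```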

Here is an application to linear algebra.[28] Let ${\displaystyle V}$ be a finite-dimensional real vector space and ${\displaystyle T:V\to V}$ a self-adjoint operator. We shall show ${\displaystyle V}$ has a basis consisting of eigenvectors of ${\displaystyle T}$ (i.e., ${\displaystyle T}$ is diagonalizable) by induction on the dimension of ${\displaystyle V}$. Choosing a basis of ${\displaystyle V}$, we can identify ${\displaystyle V=\mathbb {R} ^{n))$, and ${\displaystyle T}$ is represented by the matrix ${\displaystyle [a_{ij}]}$. Consider the function ${\displaystyle f(x)=(Tx,x)}$, where the bracket means the inner product. Then ${\displaystyle \nabla f=2(\sum a_{1i}x_{i},\dots ,\sum a_{ni}x_{i})}$. On the other hand, for ${\displaystyle g=\sum x_{i}^{2}-1}$, since ${\displaystyle g^{-1}(0)}$ is compact, ${\displaystyle f}$ attains a maximum or minimum at a point ${\displaystyle u}$ in ${\displaystyle g^{-1}(0)}$. Since ${\displaystyle \nabla g=2(x_{1},\dots ,x_{n})}$, by the method of Lagrange multipliers, we find a real number ${\displaystyle \lambda }$ such that ${\displaystyle 2\sum _{i}a_{ji}u_{i}=2\lambda u_{j},1\leq j\leq n.}$ But that means ${\displaystyle Tu=\lambda u}$. By the inductive hypothesis, the self-adjoint operator ${\displaystyle T:W\to W}$, ${\displaystyle W}$ the orthogonal complement to ${\displaystyle u}$, has a basis consisting of eigenvectors. Hence, we are done. ${\displaystyle \square }$
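The argument suggests a numerical procedure: maximizing f(x) = (Tx, x) over the unit sphere by projected gradient ascent converges to an eigenvector for the largest eigenvalue, with the multiplier λ as that eigenvalue. A sketch, with a hypothetical sample symmetric matrix:

```python
import numpy as np

# hypothetical sample self-adjoint operator on R^4
A = np.array([[2., 1., 0., 0.],
              [1., 3., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 5.]])

# maximize f(x) = (Ax, x) on the unit sphere g^{-1}(0) by projected gradient ascent
x = np.ones(4)
x /= np.linalg.norm(x)
for _ in range(3000):
    x = x + 0.1*(A @ x)      # gradient step (∇f = 2Ax, constant absorbed into the step)
    x /= np.linalg.norm(x)   # project back onto the sphere

lam = x @ A @ x              # the multiplier λ; at the maximizer, A x = λ x
```

This is, in effect, a shifted power iteration, and `lam` matches the largest eigenvalue computed by `np.linalg.eigvalsh(A)`.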

Weak derivatives

Up to measure-zero sets, two functions can be determined to be equal or not by means of integration against other functions (called test functions). Namely, we have the following, sometimes called the fundamental lemma of the calculus of variations:

Lemma[29] — If ${\displaystyle f,g}$ are locally integrable functions on an open subset ${\displaystyle M\subset \mathbb {R} ^{n))$ such that

${\displaystyle \int (f-g)\varphi \,dx=0}$

for every ${\displaystyle \varphi \in C_{c}^{\infty }(M)}$ (called a test function), then ${\displaystyle f=g}$ almost everywhere. If, in addition, ${\displaystyle f,g}$ are continuous, then ${\displaystyle f=g}$ everywhere.

Given a continuous function ${\displaystyle f}$, by the lemma, a continuously differentiable function ${\displaystyle u}$ satisfies ${\displaystyle {\frac {\partial u}{\partial x_{i))}=f}$ if and only if

${\displaystyle \int {\frac {\partial u}{\partial x_{i))}\varphi \,dx=\int f\varphi \,dx}$

for every ${\displaystyle \varphi \in C_{c}^{\infty }(M)}$. But, by integration by parts, the partial derivative on the left-hand side can be moved from ${\displaystyle u}$ to ${\displaystyle \varphi }$; i.e.,

${\displaystyle -\int u{\frac {\partial \varphi }{\partial x_{i))}\,dx=\int f\varphi \,dx}$

where there is no boundary term since ${\displaystyle \varphi }$ has compact support. Now the key point is that this expression makes sense even if ${\displaystyle u}$ is not necessarily differentiable and thus can be used to give sense to a derivative of such a function.

Note that each locally integrable function ${\displaystyle u}$ defines the linear functional ${\displaystyle \varphi \mapsto \int u\varphi \,dx}$ on ${\displaystyle C_{c}^{\infty }(M)}$ and, moreover, each locally integrable function can be identified with such a linear functional, by the above lemma. Hence, quite generally, if ${\displaystyle u}$ is a linear functional on ${\displaystyle C_{c}^{\infty }(M)}$, then we define ${\displaystyle {\frac {\partial u}{\partial x_{i))))$ to be the linear functional ${\displaystyle \varphi \mapsto -\left\langle u,{\frac {\partial \varphi }{\partial x_{i))}\right\rangle }$ where the bracket means ${\displaystyle \langle \alpha ,\varphi \rangle =\alpha (\varphi )}$. It is then called the weak derivative of ${\displaystyle u}$ with respect to ${\displaystyle x_{i))$. If ${\displaystyle u}$ is continuously differentiable, then its weak derivative coincides with the usual one; i.e., the linear functional ${\displaystyle {\frac {\partial u}{\partial x_{i))))$ is the same as the linear functional determined by the usual partial derivative of ${\displaystyle u}$ with respect to ${\displaystyle x_{i))$. A usual derivative is thus often called a classical derivative. When a linear functional on ${\displaystyle C_{c}^{\infty }(M)}$ is continuous with respect to a certain topology on ${\displaystyle C_{c}^{\infty }(M)}$, it is called a distribution, an example of a generalized function.

A classic example of a weak derivative is that of the Heaviside function ${\displaystyle H}$, the characteristic function on the interval ${\displaystyle (0,\infty )}$.[30] For every test function ${\displaystyle \varphi }$, we have:

${\displaystyle \langle H',\varphi \rangle =-\int _{0}^{\infty }\varphi '\,dx=\varphi (0).}$

Let ${\displaystyle \delta _{a))$ denote the linear functional ${\displaystyle \varphi \mapsto \varphi (a)}$, called the Dirac delta function (although not exactly a function). Then the above can be written as:

${\displaystyle H'=\delta _{0}.}$
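This computation can be checked symbolically. A sketch using sympy, with φ(x) = e^{-x²} standing in for a test function (it is rapidly decaying rather than compactly supported, which is enough for the boundary term at infinity to vanish):

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.exp(-x**2)  # stand-in for a test function (rapidly decaying, not compactly supported)

# ⟨H', φ⟩ = -∫ H φ' dx = -∫₀^∞ φ' dx
lhs = -sp.integrate(sp.diff(phi, x), (x, 0, sp.oo))
# lhs equals φ(0) = 1, matching ⟨δ₀, φ⟩ = φ(0)
```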

Cauchy's integral formula has a similar interpretation in terms of weak derivatives. For the complex variable ${\displaystyle z=x+iy}$, let ${\displaystyle E_{z_{0))(z)={\frac {1}{\pi (z-z_{0})))}$. For a test function ${\displaystyle \varphi }$, if the disk ${\displaystyle |z-z_{0}|\leq r}$ contains the support of ${\displaystyle \varphi }$, by Cauchy's integral formula, we have:

${\displaystyle \varphi (z_{0})={1 \over 2\pi i}\int {\frac {\partial \varphi }{\partial {\bar {z)))){\frac {dz\wedge d{\bar {z))}{z-z_{0))}.}$

Since ${\displaystyle dz\wedge d{\bar {z))=-2idx\wedge dy}$, this means:

${\displaystyle \varphi (z_{0})=-\int E_{z_{0)){\frac {\partial \varphi }{\partial {\bar {z))))dxdy=\left\langle {\frac {\partial E_{z_{0))}{\partial {\bar {z)))),\varphi \right\rangle ,}$

or

${\displaystyle {\frac {\partial E_{z_{0))}{\partial {\bar {z))))=\delta _{z_{0)).}$[31]

In general, a generalized function is called a fundamental solution for a linear partial differential operator if the application of the operator to it is the Dirac delta. Hence, the above says ${\displaystyle E_{z_{0))}$ is a fundamental solution for the differential operator ${\displaystyle \partial /\partial {\bar {z))}$.

Hamilton–Jacobi theory

 Main article: Hamilton–Jacobi equation

Calculus on manifolds

Definition of a manifold

This section requires some background in general topology.

A manifold is a Hausdorff topological space that is locally modeled on Euclidean space. By definition, an atlas of a topological space ${\displaystyle M}$ is a set of maps ${\displaystyle \varphi _{i}:U_{i}\to \mathbb {R} ^{n))$, called charts, such that

• ${\displaystyle U_{i))$ form an open cover of ${\displaystyle M}$; i.e., each ${\displaystyle U_{i))$ is open and ${\displaystyle M=\cup _{i}U_{i))$,
• ${\displaystyle \varphi _{i}:U_{i}\to \varphi _{i}(U_{i})}$ is a homeomorphism and
• ${\displaystyle \varphi _{j}\circ \varphi _{i}^{-1}:\varphi _{i}(U_{i}\cap U_{j})\to \varphi _{j}(U_{i}\cap U_{j})}$ is smooth; thus a diffeomorphism.

By definition, a manifold is a second-countable Hausdorff topological space with a maximal atlas (called a differentiable structure); "maximal" means that it is not contained in a strictly larger atlas. The dimension of the manifold ${\displaystyle M}$ is the dimension of the model Euclidean space ${\displaystyle \mathbb {R} ^{n))$, namely ${\displaystyle n}$; a manifold is called an n-manifold when it has dimension n. A function ${\displaystyle f}$ on a manifold ${\displaystyle M}$ is said to be smooth if ${\displaystyle f|_{U}\circ \varphi ^{-1))$ is smooth on ${\displaystyle \varphi (U)}$ for each chart ${\displaystyle \varphi :U\to \mathbb {R} ^{n))$ in the differentiable structure.

A manifold is paracompact; this implies that it admits a partition of unity subordinate to a given open cover.

If ${\displaystyle \mathbb {R} ^{n))$ is replaced by an upper half-space ${\displaystyle \mathbb {H} ^{n))$, then we get the notion of a manifold-with-boundary. The set of points that map to the boundary of ${\displaystyle \mathbb {H} ^{n))$ under charts is denoted by ${\displaystyle \partial M}$ and is called the boundary of ${\displaystyle M}$. This boundary may not be the topological boundary of ${\displaystyle M}$. Since the interior of ${\displaystyle \mathbb {H} ^{n))$ is diffeomorphic to ${\displaystyle \mathbb {R} ^{n))$, a manifold is a manifold-with-boundary with empty boundary.

The next theorem furnishes many examples of manifolds.

Theorem — [32] Let ${\displaystyle g:U\to \mathbb {R} ^{r))$ be a differentiable map from an open subset ${\displaystyle U\subset \mathbb {R} ^{n))$ such that ${\displaystyle g'(p)}$ has rank ${\displaystyle r}$ for every point ${\displaystyle p}$ in ${\displaystyle g^{-1}(0)}$. Then the zero set ${\displaystyle g^{-1}(0)}$ is an ${\displaystyle (n-r)}$-manifold.

For example, for ${\displaystyle g(x)=x_{1}^{2}+\cdots +x_{n+1}^{2}-1}$, the derivative ${\displaystyle g'(x)={\begin{bmatrix}2x_{1}&2x_{2}&\cdots &2x_{n+1}\end{bmatrix))}$ has rank one at every point ${\displaystyle p}$ in ${\displaystyle g^{-1}(0)}$. Hence, the n-sphere ${\displaystyle g^{-1}(0)}$ is an n-manifold.
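The rank condition in this example can be checked symbolically; a small sketch using sympy for the 3-sphere in R⁴:

```python
import sympy as sp

# the 3-sphere in R^4 as the zero set of g(x) = x1² + x2² + x3² + x4² - 1
xs = sp.symbols('x1:5', real=True)
g = sum(v**2 for v in xs) - 1

# the 1×4 Jacobian [2x1  2x2  2x3  2x4]
J = sp.Matrix([[sp.diff(g, v) for v in xs]])

# on the sphere the coordinates cannot all vanish, so the rank is 1 everywhere;
# check at the sample point (1, 0, 0, 0)
p = {xs[0]: 1, xs[1]: 0, xs[2]: 0, xs[3]: 0}
rank_at_p = J.subs(p).rank()
```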

The theorem is proved as a corollary of the inverse function theorem.

Many familiar manifolds are subsets of ${\displaystyle \mathbb {R} ^{n))$. The next, theoretically important, result says that there are no other kinds of manifolds. An immersion is a smooth map whose differential is injective. An embedding is an immersion that is a homeomorphism (thus a diffeomorphism) onto its image.

Whitney's embedding theorem — Each ${\displaystyle k}$-manifold can be embedded into ${\displaystyle \mathbb {R} ^{2k))$.

The proof that a manifold can be embedded into ${\displaystyle \mathbb {R} ^{N))$ for some N is considerably easier and can be readily given here. It is known [citation needed] that a manifold has a finite atlas ${\displaystyle \{\varphi _{i}:U_{i}\to \mathbb {R} ^{k}\mid 1\leq i\leq r\))$. Let ${\displaystyle \lambda _{i))$ be smooth functions such that ${\displaystyle \operatorname {Supp} (\lambda _{i})\subset U_{i))$ and the sets ${\displaystyle \{\lambda _{i}=1\))$ cover ${\displaystyle M}$ (e.g., a partition of unity). Consider the map

${\displaystyle f=(\lambda _{1}\varphi _{1},\dots ,\lambda _{r}\varphi _{r},\lambda _{1},\dots ,\lambda _{r}):M\to \mathbb {R} ^{(k+1)r))$.

It is easy to see that ${\displaystyle f}$ is an injective immersion. It may not be an embedding. To fix that, we shall use:

${\displaystyle (f,g):M\to \mathbb {R} ^{(k+1)r+1))$

where ${\displaystyle g}$ is a smooth proper map. The existence of a smooth proper map is a consequence of a partition of unity. See [1] for the rest of the proof in the case of an immersion. ${\displaystyle \square }$

Nash's embedding theorem says that, if ${\displaystyle M}$ is equipped with a Riemannian metric, then the embedding can be taken to be isometric at the expense of increasing the dimension ${\displaystyle 2k}$; for this, see T. Tao's blog.

Tubular neighborhood and transversality


A technically important result is:

Tubular neighborhood theorem — Let M be a manifold and ${\displaystyle N\subset M}$ a compact closed submanifold. Then there exists a neighborhood ${\displaystyle U}$ of ${\displaystyle N}$ such that ${\displaystyle U}$ is diffeomorphic to the normal bundle ${\displaystyle \nu _{N}=TM|_{N}/TN}$ of ${\displaystyle i:N\hookrightarrow M}$ and ${\displaystyle N}$ corresponds to the zero section of ${\displaystyle \nu _{N))$ under the diffeomorphism.

This can be proved by putting a Riemannian metric on the manifold ${\displaystyle M}$. Indeed, the choice of metric makes the normal bundle ${\displaystyle \nu _{N))$ a complementary bundle to ${\displaystyle TN}$; i.e., ${\displaystyle TM|_{N))$ is the direct sum of ${\displaystyle TN}$ and ${\displaystyle \nu _{N))$. Then, using the metric, we have the exponential map ${\displaystyle \exp :U\to V}$ from some neighborhood ${\displaystyle U}$ of ${\displaystyle N}$ in the normal bundle ${\displaystyle \nu _{N))$ to some neighborhood ${\displaystyle V}$ of ${\displaystyle N}$ in ${\displaystyle M}$. The exponential map here may not be injective, but it is possible to make it injective (thus diffeomorphic) by shrinking ${\displaystyle U}$ (for now, see [2]).

Integration on manifolds and distribution densities


The starting point for the topic of integration on manifolds is that there is no invariant way to integrate functions on manifolds. This may become obvious if we ask: what is the integral of a function on a finite-dimensional real vector space? (In contrast, there is an invariant way to do differentiation since, by definition, a manifold comes with a differentiable structure). There are several ways to introduce integration theory on manifolds:

• Integrate differential forms.
• Do integration against some measure.
• Equip a manifold with a Riemannian metric and do integration against such a metric.

For example, if a manifold is embedded into a Euclidean space ${\displaystyle \mathbb {R} ^{n))$, then it acquires the Lebesgue measure by restriction from the ambient Euclidean space, and then the second approach works. The first approach is fine in many situations but requires the manifold to be oriented (and there are non-orientable manifolds that are not pathological). The third approach generalizes, and that generalization gives rise to the notion of a density.

Generalizations

Extensions to infinite-dimensional normed spaces

Notions like differentiability extend to normed spaces.

Notes

1. ^ This is just the tensor-hom adjunction.

Citations

1. ^ Spivak 1965, Ch 2. Basic definitions.
2. ^ Hörmander 2015, Definition 1.1.4.
3. ^ Hörmander 2015, (1.1.3.)
4. ^ Hörmander 2015, Theorem 1.1.6.
5. ^ Hörmander 2015, (1.1.2)'
6. ^ Hörmander 2015, p. 8
7. ^ a b Hörmander 2015, Theorem 1.1.8.
8. ^ Hörmander 2015, Lemma 7.1.4.
9. ^ Spivak 1965, Theorem 2-12.
10. ^ Spivak 1965, p. 46
11. ^ Spivak 1965, p. 47
12. ^ Spivak 1965, p. 48
13. ^ Spivak 1965, p. 50
14. ^ Spivak 1965, Theorem 3-8.
15. ^ Spivak 1965, p. 55
16. ^ Spivak 1965, Exercise 4.14.
17. ^ a b c Spivak 1965, p. 89
18. ^ Spivak 1965, Theorem 4-7.
19. ^ Hörmander 2015, p. 151
20. ^ Theorem 1.2.1. in Hörmander, Lars (1990). An Introduction to Complex Analysis in Several Variables (Third ed.). North Holland..
21. ^ Spivak 1965, Exercise 4-33.
22. ^ Spivak 1965, p. 93
23. ^ O'Neill 2006, Definition 6.1.
24. ^ O'Neill 2006, Example 6.2. (1)
25. ^ O'Neill 2006, Theorem 6.10.
26. ^ Spivak 1965, Exercise 5-16.
27. ^ Edwards 1994, Ch. II, § 5. Example 9.
28. ^ Spivak 1965, Exercise 5-17.
29. ^ Hörmander 2015, Theorem 1.2.5.
30. ^ Hörmander 2015, Example 3.1.2.
31. ^ Hörmander 2015, p. 63
32. ^ Spivak 1965, Theorem 5-1.