History
The Helmholtz decomposition in three dimensions was first described in 1849[9] by George Gabriel Stokes for a theory of diffraction. Hermann von Helmholtz published his paper on certain basic equations of hydrodynamics in 1858,[10][11] as part of his research on Helmholtz's theorems describing the motion of fluid in the vicinity of vortex lines.[11] Their derivation required the vector fields to decay sufficiently fast at infinity. Later, this condition was relaxed, and the Helmholtz decomposition was extended to higher dimensions.[8][12][13] For Riemannian manifolds, the Helmholtz–Hodge decomposition was derived using differential geometry and tensor calculus.[8][11][14][15]
The decomposition has become an important tool for many problems in theoretical physics,[11][14] but has also found applications in animation, computer vision, and robotics.[15]
Three-dimensional space
Many physics textbooks restrict the Helmholtz decomposition to three-dimensional space and limit its application to vector fields that decay sufficiently fast at infinity or to bump functions that are defined on a bounded domain. Then, a vector potential $\mathbf{A}$ can be defined such that the rotation field is given by $\mathbf{R} = \nabla \times \mathbf{A}$, using the curl of a vector field.[16]
Let $\mathbf{F}$ be a vector field on a bounded domain $V \subseteq \mathbb{R}^3$, which is twice continuously differentiable inside $V$, and let $S$ be the surface that encloses the domain $V$. Then $\mathbf{F}$ can be decomposed into a curl-free component and a divergence-free component as follows:[17]
$$\mathbf{F} = -\nabla\Phi + \nabla\times\mathbf{A},$$
where
$$\begin{aligned}
\Phi(\mathbf{r}) &= \frac{1}{4\pi}\int_V \frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' - \frac{1}{4\pi}\oint_S \hat{\mathbf{n}}'\cdot\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}S'\\[4pt]
\mathbf{A}(\mathbf{r}) &= \frac{1}{4\pi}\int_V \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' - \frac{1}{4\pi}\oint_S \hat{\mathbf{n}}'\times\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}S'
\end{aligned}$$
and $\nabla'$ is the nabla operator with respect to $\mathbf{r}'$, not $\mathbf{r}$.
If $V = \mathbb{R}^3$, and is therefore unbounded, and $\mathbf{F}$ vanishes faster than $1/r$ as $r \to \infty$, then one has[18]
$$\begin{aligned}
\Phi(\mathbf{r}) &= \frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\\[4pt]
\mathbf{A}(\mathbf{r}) &= \frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'
\end{aligned}$$
This holds in particular if $\mathbf{F}$ is twice continuously differentiable in $\mathbb{R}^3$ and of bounded support.
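As a quick symbolic sanity check of the form of this decomposition (an illustration added here, not part of the source; Python with sympy), the gradient part is always curl-free and the curl part always divergence-free, for arbitrary smooth potentials:

```python
# Symbolic check (sympy): for any smooth Phi and A, the component -grad(Phi)
# is curl-free and the component curl(A) is divergence-free, so
# F = -grad(Phi) + curl(A) has exactly the structure stated above.
import sympy as sp

x, y, z = sp.symbols('x y z')
Phi = sp.Function('Phi')(x, y, z)                       # arbitrary scalar potential
A = [sp.Function(f'A{i}')(x, y, z) for i in range(3)]   # arbitrary vector potential

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

# curl of a gradient vanishes identically (equality of mixed partials)
assert all(sp.simplify(c) == 0 for c in curl(grad(Phi)))
# divergence of a curl vanishes identically
assert sp.simplify(div(curl(A))) == 0
```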
Derivation
Proof
Suppose we have a vector function $\mathbf{F}(\mathbf{r})$ of which we know the curl, $\nabla\times\mathbf{F}$, and the divergence, $\nabla\cdot\mathbf{F}$, in the domain, as well as the fields on the boundary. Writing the function using the delta function in the form
$$\delta^3(\mathbf{r}-\mathbf{r}') = -\frac{1}{4\pi}\nabla^2\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,,$$
where $\nabla^2 := \nabla\cdot\nabla$ is the Laplace operator, we have
$$\begin{aligned}
\mathbf{F}(\mathbf{r}) &= \int_V \mathbf{F}(\mathbf{r}')\,\delta^3(\mathbf{r}-\mathbf{r}')\,\mathrm{d}V'\\
&= \int_V \mathbf{F}(\mathbf{r}')\left(-\frac{1}{4\pi}\nabla^2\frac{1}{|\mathbf{r}-\mathbf{r}'|}\right)\mathrm{d}V'\\
&= -\frac{1}{4\pi}\nabla^2\int_V \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\\
&= -\frac{1}{4\pi}\left[\nabla\left(\nabla\cdot\int_V \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\right) - \nabla\times\left(\nabla\times\int_V \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\right)\right]\\
&= -\frac{1}{4\pi}\left[\nabla\left(\int_V \mathbf{F}(\mathbf{r}')\cdot\nabla\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\right) + \nabla\times\left(\int_V \mathbf{F}(\mathbf{r}')\times\nabla\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\right)\right]\\
&= -\frac{1}{4\pi}\left[-\nabla\left(\int_V \mathbf{F}(\mathbf{r}')\cdot\nabla'\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\right) - \nabla\times\left(\int_V \mathbf{F}(\mathbf{r}')\times\nabla'\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\right)\right]
\end{aligned}$$
where we have used the definition of the vector Laplacian:
$$\nabla^2\mathbf{a} = \nabla(\nabla\cdot\mathbf{a}) - \nabla\times(\nabla\times\mathbf{a})\,,$$
differentiation/integration with respect to $\mathbf{r}'$ denoted by $\nabla'$, and, in the last line, the antisymmetry of the kernel's gradient:
$$\nabla\frac{1}{|\mathbf{r}-\mathbf{r}'|} = -\nabla'\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,.$$
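Both identities just used can be verified symbolically; the following sympy sketch (illustrative, not from the source) checks the vector-Laplacian identity component by component, and the sign flip between $\nabla$ and $\nabla'$ on the kernel (primed coordinates written as `xp, yp, zp`):

```python
# Symbolic check (sympy) of the two identities used in the proof above.
import sympy as sp

x, y, z = sp.symbols('x y z')
a = [sp.Function(f'a{i}')(x, y, z) for i in range(3)]

def grad(f): return [sp.diff(f, v) for v in (x, y, z)]
def div(F): return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]
def vec_laplacian(F):
    return [sum(sp.diff(F[i], v, 2) for v in (x, y, z)) for i in range(3)]

# laplacian(a) = grad(div(a)) - curl(curl(a)), componentwise
lhs = vec_laplacian(a)
rhs = [g - c for g, c in zip(grad(div(a)), curl(curl(a)))]
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))

# grad of 1/|r - r'| w.r.t. r is minus its grad w.r.t. r'
xp, yp, zp = sp.symbols('xp yp zp')
inv = 1 / sp.sqrt((x - xp)**2 + (y - yp)**2 + (z - zp)**2)
assert all(sp.simplify(sp.diff(inv, v) + sp.diff(inv, vp)) == 0
           for v, vp in zip((x, y, z), (xp, yp, zp)))
```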
Then using the vectorial identities
$$\begin{aligned}
\mathbf{a}\cdot\nabla\psi &= -\psi(\nabla\cdot\mathbf{a}) + \nabla\cdot(\psi\mathbf{a})\\
\mathbf{a}\times\nabla\psi &= \psi(\nabla\times\mathbf{a}) - \nabla\times(\psi\mathbf{a})
\end{aligned}$$
we get
$$\begin{aligned}
\mathbf{F}(\mathbf{r}) = -\frac{1}{4\pi}\bigg[&-\nabla\left(-\int_V \frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' + \int_V \nabla'\cdot\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\right)\\
&-\nabla\times\left(\int_V \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' - \int_V \nabla'\times\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'\right)\bigg].
\end{aligned}$$
Thanks to the divergence theorem, the equation can be rewritten as
$$\begin{aligned}
\mathbf{F}(\mathbf{r}) &= -\frac{1}{4\pi}\bigg[-\nabla\left(-\int_V \frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' + \oint_S \hat{\mathbf{n}}'\cdot\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}S'\right)\\
&\qquad\qquad - \nabla\times\left(\int_V \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' - \oint_S \hat{\mathbf{n}}'\times\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}S'\right)\bigg]\\
&= -\nabla\left[\frac{1}{4\pi}\int_V \frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' - \frac{1}{4\pi}\oint_S \hat{\mathbf{n}}'\cdot\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}S'\right]\\
&\quad + \nabla\times\left[\frac{1}{4\pi}\int_V \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' - \frac{1}{4\pi}\oint_S \hat{\mathbf{n}}'\times\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}S'\right]
\end{aligned}$$
with outward surface normal $\hat{\mathbf{n}}'$.
Defining
$$\Phi(\mathbf{r}) \equiv \frac{1}{4\pi}\int_V \frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' - \frac{1}{4\pi}\oint_S \hat{\mathbf{n}}'\cdot\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}S'$$
$$\mathbf{A}(\mathbf{r}) \equiv \frac{1}{4\pi}\int_V \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V' - \frac{1}{4\pi}\oint_S \hat{\mathbf{n}}'\times\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}S'$$
we finally obtain
$$\mathbf{F} = -\nabla\Phi + \nabla\times\mathbf{A}.$$
Solution space
If $(\Phi_1, \mathbf{A}_1)$ is a Helmholtz decomposition of $\mathbf{F}$, then $(\Phi_2, \mathbf{A}_2)$ is another decomposition if, and only if,
$$\Phi_1 - \Phi_2 = \lambda \quad\text{and}\quad \mathbf{A}_1 - \mathbf{A}_2 = \mathbf{A}_\lambda + \nabla\varphi,$$
where $\lambda$ is a harmonic scalar field, $\mathbf{A}_\lambda$ is a vector field which fulfills $\nabla\times\mathbf{A}_\lambda = \nabla\lambda$, and $\varphi$ is a scalar field.
Proof:
Set $\lambda = \Phi_1 - \Phi_2$ and $\mathbf{B} = \mathbf{A}_1 - \mathbf{A}_2$. According to the definition of the Helmholtz decomposition, the condition is equivalent to
$$-\nabla\lambda + \nabla\times\mathbf{B} = \mathbf{0}.$$
Taking the divergence of each member of this equation yields $\nabla^2\lambda = 0$, hence $\lambda$ is harmonic.

Conversely, given any harmonic function $\lambda$, $\nabla\lambda$ is solenoidal since
$$\nabla\cdot(\nabla\lambda) = \nabla^2\lambda = 0.$$
Thus, according to the above section, there exists a vector field $\mathbf{A}_\lambda$ such that $\nabla\times\mathbf{A}_\lambda = \nabla\lambda$. If $\mathbf{A}'_\lambda$ is another such vector field, then $\mathbf{C} = \mathbf{A}_\lambda - \mathbf{A}'_\lambda$ fulfills $\nabla\times\mathbf{C} = \mathbf{0}$, hence $\mathbf{C} = \nabla\varphi$ for some scalar field $\varphi$.
Fields with prescribed divergence and curl
The term "Helmholtz theorem" can also refer to the following. Let $\mathbf{C}$ be a solenoidal vector field and $d$ a scalar field on $\mathbb{R}^3$ which are sufficiently smooth and which vanish faster than $1/r^2$ at infinity. Then there exists a vector field $\mathbf{F}$ such that
$$\nabla\cdot\mathbf{F} = d \quad\text{and}\quad \nabla\times\mathbf{F} = \mathbf{C};$$
if additionally the vector field $\mathbf{F}$ vanishes as $r \to \infty$, then $\mathbf{F}$ is unique.[18]
In other words, a vector field can be constructed with both a specified divergence and a specified curl, and if it also vanishes at infinity, it is uniquely specified by its divergence and curl. This theorem is of great importance in electrostatics, since Maxwell's equations for the electric and magnetic fields in the static case are of exactly this type.[18] The proof is by a construction generalizing the one given above: we set
$$\mathbf{F} = -\nabla(\mathcal{G}(d)) + \nabla\times(\mathcal{G}(\mathbf{C})),$$
where $\mathcal{G}$ represents the Newtonian potential operator. (When acting on a vector field, such as $\nabla\times\mathbf{F}$, it is defined to act on each component.)
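A minimal numerical sketch of this construction, assuming a periodic box in place of the decay-at-infinity setting (so the Newtonian potential becomes division by $\|\mathbf{k}\|^2$ in Fourier space; grid size and test fields below are arbitrary choices for illustration):

```python
# Construct F with prescribed divergence d and curl C via FFTs on a periodic
# box: F_hat = (-i k d_hat + i k x C_hat) / |k|^2. Then verify the result.
import numpy as np

n, L = 64, 2 * np.pi
k1 = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing='ij')
iK = 1j * np.array([KX, KY, KZ])
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0  # avoid division by zero; the zero mode is set separately

x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

# Prescribed divergence (zero mean) and a solenoidal prescribed curl
d = np.sin(X) * np.cos(Y)
C = np.array([np.sin(Y), np.sin(Z), np.sin(X)])  # div C = 0

dh = np.fft.fftn(d)
Ch = np.array([np.fft.fftn(c) for c in C])
Fh = (-iK * dh + np.cross(iK, Ch, axis=0)) / K2
Fh[:, 0, 0, 0] = 0.0
F = np.array([np.real(np.fft.ifftn(f)) for f in Fh])

# Check div F = d and curl F = C using spectral derivatives
Fh2 = np.array([np.fft.fftn(f) for f in F])
divF = np.real(np.fft.ifftn(np.sum(iK * Fh2, axis=0)))
curlF = np.array([np.real(np.fft.ifftn(c)) for c in np.cross(iK, Fh2, axis=0)])
assert np.allclose(divF, d, atol=1e-8)
assert np.allclose(curlF, C, atol=1e-8)
```

The divergence check works for any zero-mean `d`; the curl check additionally relies on `C` being solenoidal, exactly as the theorem requires.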
Weak formulation
The Helmholtz decomposition can be generalized by reducing the regularity assumptions (the need for the existence of strong derivatives). Suppose Ω is a bounded, simply-connected, Lipschitz domain. Every square-integrable vector field u ∈ (L2(Ω))3 has an orthogonal decomposition:[19][20][21]
$$\mathbf{u} = \nabla\varphi + \nabla\times\mathbf{A}$$
where φ is in the Sobolev space H1(Ω) of square-integrable functions on Ω whose partial derivatives defined in the distribution sense are square integrable, and A ∈ H(curl, Ω), the Sobolev space of vector fields consisting of square integrable vector fields with square integrable curl.
For a slightly smoother vector field u ∈ H(curl, Ω), a similar decomposition holds:
$$\mathbf{u} = \nabla\varphi + \mathbf{v}$$
where φ ∈ H1(Ω), v ∈ (H1(Ω))d.
Derivation from the Fourier transform
Note that in the theorem stated here, we have imposed the condition that if $\mathbf{F}$ is not defined on a bounded domain, then $\mathbf{F}$ shall decay faster than $1/r$. Thus, the Fourier transform of $\mathbf{F}$, denoted as $\mathbf{G}$, is guaranteed to exist. We apply the convention
$$\mathbf{F}(\mathbf{r}) = \iiint \mathbf{G}(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{r}}\,\mathrm{d}V_k$$
The Fourier transform of a scalar field is a scalar field, and the Fourier transform of a vector field is a vector field of the same dimension.
Now consider the following scalar and vector fields:
$$\begin{aligned}
G_\Phi(\mathbf{k}) &= i\,\frac{\mathbf{k}\cdot\mathbf{G}(\mathbf{k})}{\|\mathbf{k}\|^2}\\
\mathbf{G}_{\mathbf{A}}(\mathbf{k}) &= i\,\frac{\mathbf{k}\times\mathbf{G}(\mathbf{k})}{\|\mathbf{k}\|^2}\\[4pt]
\Phi(\mathbf{r}) &= \iiint G_\Phi(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{r}}\,\mathrm{d}V_k\\
\mathbf{A}(\mathbf{r}) &= \iiint \mathbf{G}_{\mathbf{A}}(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{r}}\,\mathrm{d}V_k
\end{aligned}$$
Hence
$$\begin{aligned}
\mathbf{G}(\mathbf{k}) &= -i\mathbf{k}\,G_\Phi(\mathbf{k}) + i\mathbf{k}\times\mathbf{G}_{\mathbf{A}}(\mathbf{k})\\[4pt]
\mathbf{F}(\mathbf{r}) &= -\iiint i\mathbf{k}\,G_\Phi(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{r}}\,\mathrm{d}V_k + \iiint i\mathbf{k}\times\mathbf{G}_{\mathbf{A}}(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{r}}\,\mathrm{d}V_k\\
&= -\nabla\Phi(\mathbf{r}) + \nabla\times\mathbf{A}(\mathbf{r})
\end{aligned}$$
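These Fourier-space formulas translate directly into an FFT sketch. Here a periodic grid is assumed in place of the decay condition, and the smooth test field is an arbitrary choice for illustration:

```python
# FFT sketch of the Fourier-space potentials: G_Phi = i k.G/|k|^2 and
# G_A = i k x G/|k|^2, followed by a check that F = -grad(Phi) + curl(A).
import numpy as np

n, L = 48, 2 * np.pi
k1 = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
K = np.array(np.meshgrid(k1, k1, k1, indexing='ij'))
K2 = np.sum(K**2, axis=0)
K2[0, 0, 0] = 1.0  # guard the zero mode

x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
# A smooth periodic test field with both gradient and rotational parts
F = np.array([np.sin(X) * np.cos(Y) + np.sin(Z),
              -np.cos(X) * np.sin(Y),
              np.cos(Z) * np.sin(Y)])

G = np.array([np.fft.fftn(f) for f in F])       # G(k), transform of F
G_Phi = 1j * np.sum(K * G, axis=0) / K2         # i k . G / |k|^2
G_A = 1j * np.cross(K, G, axis=0) / K2          # i k x G / |k|^2
G_Phi[0, 0, 0] = 0.0
G_A[:, 0, 0, 0] = 0.0

# Spectral derivatives: grad(Phi) <-> i k G_Phi, curl(A) <-> i k x G_A
gradPhi = np.array([np.real(np.fft.ifftn(1j * K[i] * G_Phi)) for i in range(3)])
curlA = np.array([np.real(np.fft.ifftn(c))
                  for c in np.cross(1j * K, G_A, axis=0)])
assert np.allclose(F, -gradPhi + curlA, atol=1e-8)
```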
Longitudinal and transverse fields
A terminology often used in physics refers to the curl-free component of a vector field as the longitudinal component and the divergence-free component as the transverse component.[22] This terminology comes from the following construction: Compute the three-dimensional Fourier transform $\hat{\mathbf{F}}(\mathbf{k})$ of the vector field $\mathbf{F}(\mathbf{r})$. Then decompose this field, at each point $\mathbf{k}$, into two components, one of which points longitudinally, i.e. parallel to $\mathbf{k}$, the other of which points in the transverse direction, i.e. perpendicular to $\mathbf{k}$. So far, we have
$$\hat{\mathbf{F}}(\mathbf{k}) = \hat{\mathbf{F}}_t(\mathbf{k}) + \hat{\mathbf{F}}_l(\mathbf{k})$$
$$\mathbf{k}\cdot\hat{\mathbf{F}}_t(\mathbf{k}) = 0$$
$$\mathbf{k}\times\hat{\mathbf{F}}_l(\mathbf{k}) = \mathbf{0}.$$
Now we apply an inverse Fourier transform to each of these components. Using properties of Fourier transforms, we derive:
$$\mathbf{F}(\mathbf{r}) = \mathbf{F}_t(\mathbf{r}) + \mathbf{F}_l(\mathbf{r})$$
$$\nabla\cdot\mathbf{F}_t(\mathbf{r}) = 0$$
$$\nabla\times\mathbf{F}_l(\mathbf{r}) = \mathbf{0}$$
Since $\nabla\times(\nabla\Phi) = \mathbf{0}$ and $\nabla\cdot(\nabla\times\mathbf{A}) = 0$, we obtain
$$\mathbf{F}_t = \nabla\times\mathbf{A} = \frac{1}{4\pi}\nabla\times\int_V \frac{\nabla'\times\mathbf{F}}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'$$
$$\mathbf{F}_l = -\nabla\Phi = -\frac{1}{4\pi}\nabla\int_V \frac{\nabla'\cdot\mathbf{F}}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}V'$$
so this is indeed the Helmholtz decomposition.[23]
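On a periodic grid (an assumed setting standing in for decay at infinity), the longitudinal/transverse split is a pointwise projection in $\mathbf{k}$-space; a sketch with an arbitrary test field:

```python
# Longitudinal/transverse split in Fourier space: the longitudinal part is
# the projection of F_hat onto k, the transverse part is the remainder.
import numpy as np

n, L = 32, 2 * np.pi
k1 = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
K = np.array(np.meshgrid(k1, k1, k1, indexing='ij'))
K2 = np.sum(K**2, axis=0)
K2[0, 0, 0] = 1.0

x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
F = np.array([np.sin(X + Y), np.cos(Y) * np.sin(Z), np.sin(X) * np.cos(Z)])

Fh = np.array([np.fft.fftn(f) for f in F])
Fh_l = K * np.sum(K * Fh, axis=0) / K2   # longitudinal: parallel to k
Fh_t = Fh - Fh_l                         # transverse: perpendicular to k

# k . F_t = 0 and k x F_l = 0 hold mode by mode
assert np.allclose(np.sum(K * Fh_t, axis=0), 0, atol=1e-6)
assert np.allclose(np.cross(K, Fh_l, axis=0), 0, atol=1e-6)

F_t = np.array([np.real(np.fft.ifftn(f)) for f in Fh_t])
F_l = np.array([np.real(np.fft.ifftn(f)) for f in Fh_l])
assert np.allclose(F, F_t + F_l, atol=1e-8)
```

Inverse-transforming the two parts then yields the divergence-free field $\mathbf{F}_t$ and the curl-free field $\mathbf{F}_l$ described above.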
Generalization to higher dimensions
Matrix approach
The generalization to $d$ dimensions cannot be done with a vector potential, since the rotation operator and the cross product are defined (as vectors) only in three dimensions.
Let $\mathbf{F}$ be a vector field on a bounded domain $V \subseteq \mathbb{R}^d$ which decays faster than $|\mathbf{r}|^{-\delta}$ for $|\mathbf{r}| \to \infty$ and $\delta > 1$.
The scalar potential is defined similarly to the three-dimensional case, as:
$$\Phi(\mathbf{r}) = -\int_{\mathbb{R}^d}\operatorname{div}(\mathbf{F}(\mathbf{r}'))\,K(\mathbf{r},\mathbf{r}')\,\mathrm{d}V' = -\int_{\mathbb{R}^d}\sum_i \frac{\partial F_i}{\partial r_i}(\mathbf{r}')\,K(\mathbf{r},\mathbf{r}')\,\mathrm{d}V',$$
where the integration kernel $K(\mathbf{r},\mathbf{r}')$ is again the fundamental solution of Laplace's equation, but in $d$-dimensional space:
$$K(\mathbf{r},\mathbf{r}') = \begin{cases}\dfrac{1}{2\pi}\log|\mathbf{r}-\mathbf{r}'| & d = 2,\\[6pt] \dfrac{1}{d(2-d)V_d}\,|\mathbf{r}-\mathbf{r}'|^{2-d} & \text{otherwise},\end{cases}$$
with $V_d = \frac{\pi^{d/2}}{\Gamma\left(\frac{d}{2}+1\right)}$ the volume of the $d$-dimensional unit ball and $\Gamma$ the gamma function. For $d = 3$, $V_d$ is just equal to $V_3 = \frac{4\pi}{3}$, yielding the same prefactor as above.
The rotational potential is an antisymmetric matrix with the elements:
$$A_{ij}(\mathbf{r}) = \int_{\mathbb{R}^d}\left(\frac{\partial F_i}{\partial x_j}(\mathbf{r}') - \frac{\partial F_j}{\partial x_i}(\mathbf{r}')\right)K(\mathbf{r},\mathbf{r}')\,\mathrm{d}V'.$$
Above the diagonal are $\binom{d}{2}$ entries, which occur again mirrored at the diagonal, but with a negative sign. In the three-dimensional case, the matrix elements just correspond to the components of the vector potential $\mathbf{A} = [A_1, A_2, A_3] = [A_{23}, A_{31}, A_{12}]$. However, such a matrix potential can be written as a vector only in the three-dimensional case, because $\binom{d}{2} = d$ is valid only for $d = 3$.
As in the three-dimensional case, the gradient field is defined as
$$\mathbf{G}(\mathbf{r}) = -\nabla\Phi(\mathbf{r}).$$
The rotational field, on the other hand, is defined in the general case as the row divergence of the matrix:
$$\mathbf{R}(\mathbf{r}) = \left[\sum\nolimits_k \partial_{r_k} A_{ik}(\mathbf{r});\ 1 \leq i \leq d\right].$$
In three-dimensional space, this is equivalent to the rotation of the vector potential.[8][24]
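That three-dimensional equivalence can be checked symbolically; the following sympy sketch (illustrative, not from the source) fills the antisymmetric matrix from a vector potential via $A_{ik} = \varepsilon_{ikm}a_m$ and compares its row divergence with the curl:

```python
# Symbolic check (sympy): in 3D, the row divergence of the antisymmetric
# matrix potential reproduces the curl of the vector potential
# a = [A_23, A_31, A_12].
import sympy as sp

x = sp.symbols('x1 x2 x3')
a = [sp.Function(f'a{i}')(*x) for i in range(3)]   # arbitrary vector potential

# Antisymmetric matrix with A_23 = a1, A_31 = a2, A_12 = a3
A = sp.Matrix([[0,     a[2], -a[1]],
               [-a[2], 0,     a[0]],
               [a[1], -a[0],  0]])

row_div = [sum(sp.diff(A[i, k], x[k]) for k in range(3)) for i in range(3)]
curl_a = [sp.diff(a[2], x[1]) - sp.diff(a[1], x[2]),
          sp.diff(a[0], x[2]) - sp.diff(a[2], x[0]),
          sp.diff(a[1], x[0]) - sp.diff(a[0], x[1])]
assert all(sp.simplify(r - c) == 0 for r, c in zip(row_div, curl_a))
```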
Tensor approach
In a $d$-dimensional vector space with $d \neq 3$, $-\frac{1}{4\pi|\mathbf{r}-\mathbf{r}'|}$ can be replaced by the appropriate Green's function for the Laplacian, defined by
$$\nabla^2 G(\mathbf{r},\mathbf{r}') = \frac{\partial}{\partial r_\mu}\frac{\partial}{\partial r_\mu} G(\mathbf{r},\mathbf{r}') = \delta^d(\mathbf{r}-\mathbf{r}')$$
where the Einstein summation convention is used for the index $\mu$. For example, $G(\mathbf{r},\mathbf{r}') = \frac{1}{2\pi}\ln|\mathbf{r}-\mathbf{r}'|$ in 2D.
Following the same steps as above, we can write
$$F_\mu(\mathbf{r}) = \int_V F_\mu(\mathbf{r}')\,\frac{\partial}{\partial r_\mu}\frac{\partial}{\partial r_\mu}G(\mathbf{r},\mathbf{r}')\,\mathrm{d}^d\mathbf{r}' = \delta_{\mu\nu}\delta_{\rho\sigma}\int_V F_\nu(\mathbf{r}')\,\frac{\partial}{\partial r_\rho}\frac{\partial}{\partial r_\sigma}G(\mathbf{r},\mathbf{r}')\,\mathrm{d}^d\mathbf{r}'$$
where $\delta_{\mu\nu}$ is the Kronecker delta (and the summation convention is again used). In place of the definition of the vector Laplacian used above, we now make use of an identity for the Levi-Civita symbol $\varepsilon$,
$$\varepsilon_{\alpha\mu\rho}\varepsilon_{\alpha\nu\sigma} = (d-2)!\,(\delta_{\mu\nu}\delta_{\rho\sigma} - \delta_{\mu\sigma}\delta_{\nu\rho})$$
which is valid in $d \geq 2$ dimensions, where $\alpha$ is a $(d-2)$-component multi-index. This gives
$$F_\mu(\mathbf{r}) = \delta_{\mu\sigma}\delta_{\nu\rho}\int_V F_\nu(\mathbf{r}')\,\frac{\partial}{\partial r_\rho}\frac{\partial}{\partial r_\sigma}G(\mathbf{r},\mathbf{r}')\,\mathrm{d}^d\mathbf{r}' + \frac{1}{(d-2)!}\varepsilon_{\alpha\mu\rho}\varepsilon_{\alpha\nu\sigma}\int_V F_\nu(\mathbf{r}')\,\frac{\partial}{\partial r_\rho}\frac{\partial}{\partial r_\sigma}G(\mathbf{r},\mathbf{r}')\,\mathrm{d}^d\mathbf{r}'$$
We can therefore write
$$F_\mu(\mathbf{r}) = -\frac{\partial}{\partial r_\mu}\Phi(\mathbf{r}) + \varepsilon_{\mu\rho\alpha}\frac{\partial}{\partial r_\rho}A_\alpha(\mathbf{r})$$
where
$$\begin{aligned}
\Phi(\mathbf{r}) &= -\int_V F_\nu(\mathbf{r}')\,\frac{\partial}{\partial r_\nu}G(\mathbf{r},\mathbf{r}')\,\mathrm{d}^d\mathbf{r}'\\
A_\alpha &= \frac{1}{(d-2)!}\varepsilon_{\alpha\nu\sigma}\int_V F_\nu(\mathbf{r}')\,\frac{\partial}{\partial r_\sigma}G(\mathbf{r},\mathbf{r}')\,\mathrm{d}^d\mathbf{r}'
\end{aligned}$$
Note that the vector potential is replaced by a rank-$(d-2)$ tensor in $d$ dimensions.
Because $G(\mathbf{r},\mathbf{r}')$ is a function only of $\mathbf{r}-\mathbf{r}'$, one can replace $\frac{\partial}{\partial r_\mu} \to -\frac{\partial}{\partial r'_\mu}$, giving
$$\begin{aligned}
\Phi(\mathbf{r}) &= \int_V F_\nu(\mathbf{r}')\,\frac{\partial}{\partial r'_\nu}G(\mathbf{r},\mathbf{r}')\,\mathrm{d}^d\mathbf{r}'\\
A_\alpha &= -\frac{1}{(d-2)!}\varepsilon_{\alpha\nu\sigma}\int_V F_\nu(\mathbf{r}')\,\frac{\partial}{\partial r'_\sigma}G(\mathbf{r},\mathbf{r}')\,\mathrm{d}^d\mathbf{r}'
\end{aligned}$$
Integration by parts can then be used to give
$$\begin{aligned}
\Phi(\mathbf{r}) &= -\int_V G(\mathbf{r},\mathbf{r}')\,\frac{\partial}{\partial r'_\nu}F_\nu(\mathbf{r}')\,\mathrm{d}^d\mathbf{r}' + \oint_S G(\mathbf{r},\mathbf{r}')\,F_\nu(\mathbf{r}')\,\hat{n}'_\nu\,\mathrm{d}^{d-1}\mathbf{r}'\\
A_\alpha &= \frac{1}{(d-2)!}\varepsilon_{\alpha\nu\sigma}\int_V G(\mathbf{r},\mathbf{r}')\,\frac{\partial}{\partial r'_\sigma}F_\nu(\mathbf{r}')\,\mathrm{d}^d\mathbf{r}' - \frac{1}{(d-2)!}\varepsilon_{\alpha\nu\sigma}\oint_S G(\mathbf{r},\mathbf{r}')\,F_\nu(\mathbf{r}')\,\hat{n}'_\sigma\,\mathrm{d}^{d-1}\mathbf{r}'
\end{aligned}$$
where $S$ is the boundary of $V$. These expressions are analogous to those given above for three-dimensional space.
For a further generalization to manifolds, see the discussion of Hodge decomposition below.
Extensions to fields not decaying at infinity
Most textbooks only deal with vector fields decaying faster than $|\mathbf{r}|^{-\delta}$ with $\delta > 1$ at infinity.[16][13][27] However, Otto Blumenthal showed in 1905 that an adapted integration kernel can be used to integrate fields decaying faster than $|\mathbf{r}|^{-\delta}$ with $\delta > 0$, which is substantially less strict. To achieve this, the kernel $K(\mathbf{r},\mathbf{r}')$ in the convolution integrals has to be replaced by a suitably adapted kernel $K'(\mathbf{r},\mathbf{r}')$.[28]
With even more complex integration kernels, solutions can be found even for divergent functions that grow no faster than polynomially.[12][13][24][29]

For analytic vector fields that need not even go to zero at infinity, methods based on partial integration and the Cauchy formula for repeated integration[30] can be used to compute closed-form solutions for the rotation and scalar potentials, as in the case of multivariate polynomial, sine, cosine, and exponential functions.[8]
Uniqueness of the solution
In general, the Helmholtz decomposition is not uniquely defined.
A harmonic function $H(\mathbf{r})$ is a function that satisfies $\Delta H(\mathbf{r}) = 0$. By adding $H(\mathbf{r})$ to the scalar potential $\Phi(\mathbf{r})$, a different Helmholtz decomposition can be obtained:
$$\begin{aligned}
\mathbf{G}'(\mathbf{r}) &= -\nabla(\Phi(\mathbf{r}) + H(\mathbf{r})) = \mathbf{G}(\mathbf{r}) - \nabla H(\mathbf{r}),\\
\mathbf{R}'(\mathbf{r}) &= \mathbf{R}(\mathbf{r}) + \nabla H(\mathbf{r}).
\end{aligned}$$
For vector fields $\mathbf{F}$ decaying at infinity, it is a plausible choice that the scalar and rotation potentials also decay at infinity. Because $H(\mathbf{r}) = 0$ is the only harmonic function with this property, which follows from Liouville's theorem, this guarantees the uniqueness of the gradient and rotation fields.[31]
This uniqueness does not apply to the potentials: In the three-dimensional case, the scalar and vector potential jointly have four components, whereas the vector field has only three. The vector field is invariant to gauge transformations and the choice of appropriate potentials known as gauge fixing is the subject of gauge theory. Important examples from physics are the Lorenz gauge condition and the Coulomb gauge. An alternative is to use the poloidal–toroidal decomposition.
Applications
Electrodynamics
The Helmholtz theorem is of particular interest in electrodynamics, since it can be used to write Maxwell's equations in terms of potentials and solve them more easily. The Helmholtz decomposition can be used to prove that, given the electric current density and charge density, the electric field and the magnetic flux density can be determined. They are unique if the densities vanish at infinity and one assumes the same for the potentials.[16]
Fluid dynamics
In fluid dynamics, the Helmholtz projection plays an important role, especially for the solvability theory of the Navier-Stokes equations. If the Helmholtz projection $P$ is applied to the linearized incompressible Navier-Stokes equations, the Stokes equation is obtained. This depends only on the velocity of the particles in the flow, but no longer on the static pressure, allowing the equation to be reduced to one unknown. Both equations, the Stokes and the linearized equations, are nevertheless equivalent. The operator $P\Delta$ is called the Stokes operator.[32]
Dynamical systems theory
In the theory of dynamical systems, the Helmholtz decomposition can be used to determine "quasipotentials" as well as to compute Lyapunov functions in some cases.[33][34][35]
For some dynamical systems such as the Lorenz system (Edward N. Lorenz, 1963[36]), a simplified model for atmospheric convection, a closed-form expression of the Helmholtz decomposition can be obtained:
$$\dot{\mathbf{r}} = \mathbf{F}(\mathbf{r}) = \big[a(r_2 - r_1),\; r_1(b - r_3) - r_2,\; r_1 r_2 - c r_3\big].$$
The Helmholtz decomposition of $\mathbf{F}(\mathbf{r})$, with the scalar potential $\Phi(\mathbf{r}) = \frac{a}{2}r_1^2 + \frac{1}{2}r_2^2 + \frac{c}{2}r_3^2$, is given as:
$$\mathbf{G}(\mathbf{r}) = \big[-a r_1,\; -r_2,\; -c r_3\big],$$
$$\mathbf{R}(\mathbf{r}) = \big[a r_2,\; b r_1 - r_1 r_3,\; r_1 r_2\big].$$
The quadratic scalar potential drives motion toward the coordinate origin, which is responsible for the stable fixed point in some parameter range. For other parameters, the rotation field ensures that a strange attractor is created, causing the model to exhibit a butterfly effect.[8][37]
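This closed-form decomposition can be verified symbolically; a short sympy sketch (illustrative) checks that $\mathbf{F} = \mathbf{G} + \mathbf{R}$ with $\Phi = \frac{a}{2}r_1^2 + \frac{1}{2}r_2^2 + \frac{c}{2}r_3^2$, that $\mathbf{G} = -\nabla\Phi$, and that $\mathbf{R}$ is divergence-free:

```python
# Symbolic verification (sympy) of the Helmholtz decomposition of the
# Lorenz system: F = G + R, G = -grad(Phi), div(R) = 0.
import sympy as sp

r1, r2, r3, a, b, c = sp.symbols('r1 r2 r3 a b c')
F = sp.Matrix([a * (r2 - r1), r1 * (b - r3) - r2, r1 * r2 - c * r3])

Phi = (a * r1**2 + r2**2 + c * r3**2) / 2
G = -sp.Matrix([sp.diff(Phi, v) for v in (r1, r2, r3)])   # gradient field
R = sp.Matrix([a * r2, b * r1 - r1 * r3, r1 * r2])        # rotation field

# The two fields sum back to the Lorenz vector field
assert sp.simplify(F - (G + R)) == sp.zeros(3, 1)
# The rotation field is divergence-free
divR = sum(sp.diff(R[i], v) for i, v in enumerate((r1, r2, r3)))
assert sp.simplify(divR) == 0
```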
Medical imaging
In magnetic resonance elastography, a variant of MR imaging where mechanical waves are used to probe the viscoelasticity of organs, the Helmholtz decomposition is sometimes used to separate the measured displacement fields into their shear component (divergence-free) and their compression component (curl-free).[38] In this way, the complex shear modulus can be calculated without contributions from compression waves.
Computer animation and robotics
The Helmholtz decomposition is also used in the field of computer engineering. This includes robotics and image reconstruction, but also computer animation, where the decomposition is used for realistic visualization of fluids or vector fields.[15][39]