Integral expressing the amount of overlap of one function as it is shifted over another
In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions ($f$ and $g$) that produces a third function ($f*g$). The term convolution refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result (see commutativity). Graphically, it expresses how the 'shape' of one function is modified by the other.
Some features of convolution are similar to cross-correlation: for real-valued functions of a continuous or discrete variable, convolution ($f*g$) differs from cross-correlation ($f\star g$) only in that either $f(x)$ or $g(x)$ is reflected about the y-axis in convolution; thus it is a cross-correlation of $g(-x)$ and $f(x)$, or of $f(-x)$ and $g(x)$.^{[A]} For complex-valued functions, the cross-correlation operator is the adjoint of the convolution operator.
The convolution of $f$ and $g$ is written $f*g$, denoting the operator with the symbol $*$.^{[B]} It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind of integral transform:
While the symbol $t$ is used above, it need not represent the time domain. At each $t$, the convolution formula can be described as the area under the function $f(\tau )$ weighted by the function $g(-\tau )$ shifted by the amount $t$. As $t$ changes, the weighting function $g(t-\tau )$ emphasizes different parts of the input function $f(\tau )$: if $t$ is positive, then $g(t-\tau )$ equals $g(-\tau )$ slid along the $\tau$-axis toward the right (toward $+\infty$) by the amount $t$, while if $t$ is negative, then $g(t-\tau )$ equals $g(-\tau )$ slid toward the left (toward $-\infty$) by the amount $|t|$.
For functions $f$, $g$ supported on only $[0,\infty )$ (i.e., zero for negative arguments), the integration limits can be truncated, resulting in:
which has to be interpreted carefully to avoid confusion. For instance, $f(t)*g(t-t_{0})$ is equivalent to $(f*g)(t-t_{0})$, but $f(t-t_{0})*g(t-t_{0})$ is in fact equivalent to $(f*g)(t-2t_{0})$.^{[3]}
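The shift behaviour just described can be checked numerically with a minimal pure-Python sketch (the helper names `conv` and `shift` are illustrative, not from any cited source): delaying one input delays the output by the same amount, while delaying both inputs delays it by twice that amount.

```python
def conv(f, g):
    """Full discrete convolution of two finite sequences (zero-extended)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def shift(f, k):
    """Delay a causal sequence by k samples (prepend k zeros)."""
    return [0] * k + list(f)

f = [1, 2, 3]
g = [4, 5]
k = 2
# Shifting ONE input delays the output by k samples...
lhs_one = conv(shift(f, k), g)
# ...while shifting BOTH inputs delays it by 2k, the discrete
# analogue of f(t - t0) * g(t - t0) = (f * g)(t - 2 t0).
lhs_both = conv(shift(f, k), shift(g, k))
```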
respectively, the convolution operation $(f*g)(t)$ can be defined as the inverse Laplace transform of the product of $F(s)$ and $G(s)$.^{[4]}^{[5]} More precisely,
Note that $F(s)\cdot G(s)$ is the bilateral Laplace transform of $(f*g)(t)$. A similar derivation can be done using the unilateral Laplace transform (one-sided Laplace transform).
The convolution operation also describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms.
Express each function in terms of a dummy variable $\tau .$
Reflect one of the functions: $g(\tau )$ → $g(-\tau ).$
Add a time-offset $t$, which allows $g(-\tau )$ to slide along the $\tau$-axis. If $t$ is positive, then $g(t-\tau )$ equals $g(-\tau )$ slid along the $\tau$-axis toward the right (toward $+\infty$) by the amount $t$. If $t$ is negative, then $g(t-\tau )$ equals $g(-\tau )$ slid toward the left (toward $-\infty$) by the amount $|t|$.
Start $t$ at $-\infty$ and slide it all the way to $+\infty$. Wherever the two functions intersect, find the integral of their product. In other words, at time $t$, compute the area under the function $f(\tau )$ weighted by the weighting function $g(t-\tau ).$
The resulting waveform (not shown here) is the convolution of functions $f$ and $g$.
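The steps above translate directly into a discrete sketch (pure Python; `conv_at` is a hypothetical helper name): at each $t$, form the finite-sum analogue of the area under $f(\tau )$ weighted by the reflected-and-shifted $g(t-\tau )$.

```python
def conv_at(f, g, t):
    """Sum of f[tau] * g[t - tau]: g reflected about tau = 0, then slid by t."""
    return sum(f[tau] * g[t - tau]
               for tau in range(len(f))
               if 0 <= t - tau < len(g))

f = [1, 2, 3]          # input function samples
g = [4, 5]             # kernel; g(t - tau) is g reflected, then slid by t
# Slide t across all positions where the supports intersect.
result = [conv_at(f, g, t) for t in range(len(f) + len(g) - 1)]

delta = [1]            # unit impulse: convolving with it reproduces g
```

Convolving the unit impulse `delta` with `g` returns `g` itself, matching the statement about the unit impulse below.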
If $f(t)$ is a unit impulse, the result of this process is simply $g(t)$. Formally:
In this example, the red-colored "pulse", $\ g(\tau ),$ is an even function $(\ g(-\tau )=g(\tau )\ ),$ so convolution is equivalent to correlation. A snapshot of this "movie" shows functions $g(t-\tau )$ and $f(\tau )$ (in blue) for some value of parameter $t,$ which is arbitrarily defined as the distance along the $\tau$ axis from the point $\tau =0$ to the center of the red pulse. The amount of yellow is the area of the product $f(\tau )\cdot g(t-\tau ),$ computed by the convolution/correlation integral. The movie is created by continuously changing $t$ and recomputing the integral. The result (shown in black) is a function of $t,$ but is plotted on the same axis as $\tau ,$ for convenience and comparison.
In this depiction, $f(\tau )$ could represent the response of a resistor-capacitor circuit to a narrow pulse that occurs at $\tau =0.$ In other words, if $g(\tau )=\delta (\tau ),$ the result of convolution is just $f(t).$ But when $g(\tau )$ is the wider pulse (in red), the response is a "smeared" version of $f(t).$ It begins at $t=-0.5,$ because we defined $t$ as the distance from the $\tau =0$ axis to the center of the wide pulse (instead of the leading edge).
One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754.^{[6]}
Also, an expression of the type:
$\int f(u)\cdot g(x-u)\,du$
is used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on differences and series, which is the last of 3 volumes of the encyclopedic series: Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800.^{[7]} Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral.^{[8]} Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses.^{[9]}^{[10]}
When a function $g_{T}$ is periodic, with period $T$, then for functions, $f$, such that $f*g_{T}$ exists, the convolution is also periodic and identical to:
The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences.
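The Cauchy-product statement can be made concrete with a short pure-Python sketch (illustrative helper name): convolving the coefficient sequences of two polynomials yields the coefficients of their product.

```python
def conv(f, g):
    """Discrete convolution == Cauchy product of coefficient sequences."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

# (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3,
# so the product's coefficients are the convolution of [1, 2, 3] and [4, 5].
coeffs = conv([1, 2, 3], [4, 5])
```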
Thus when $g$ has finite support in the set $\{-M,-M+1,\dots ,M-1,M\}$ (representing, for instance, a finite impulse response), a finite summation may be used:^{[13]}
When a function $g_{N}$ is periodic, with period $N,$ then for functions, $f,$ such that $f*g_{N}$ exists, the convolution is also periodic and identical to:
In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques (Knuth 1997, §4.3.3.C; von zur Gathen & Gerhard 2003, §8.2).
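The claim that digit-sequence convolution is the kernel of multi-digit multiplication can be demonstrated directly; a minimal sketch (illustrative names, direct convolution rather than a fast transform) convolves the digit sequences and then propagates carries.

```python
def conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def multiply(x, y):
    """Multiply positive integers by convolving digit sequences, then carrying."""
    fx = [int(d) for d in str(x)][::-1]   # little-endian digits
    fy = [int(d) for d in str(y)][::-1]
    raw = conv(fx, fy)                    # digit products before carries
    digits, carry = [], 0
    for v in raw:
        carry, r = divmod(v + carry, 10)
        digits.append(r)
    while carry:                          # flush any remaining carry digits
        carry, r = divmod(carry, 10)
        digits.append(r)
    return int("".join(map(str, digits[::-1])))
```

A fast implementation would compute `raw` with a transform instead of the double loop; the carry step is unchanged.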
Eq. 1 requires N arithmetic operations per output value and N^{2} operations for N outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce the cost of the convolution to O(N log N) complexity.
The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform,^{[14]} use fast Fourier transforms in other rings. The Winograd method is used as an alternative to the FFT.^{[15]} It significantly speeds up 1D,^{[16]} 2D,^{[17]} and 3D^{[18]} convolution.
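The FFT route can be sketched end to end in pure Python; this is a textbook radix-2 Cooley–Tukey FFT (one standard choice, not taken from the sources above) used to implement linear convolution via zero-extension and the circular convolution theorem.

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; length of x must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def ifft(x):
    """Inverse FFT via the conjugation trick."""
    n = len(x)
    return [v.conjugate() / n for v in fft([v.conjugate() for v in x])]

def fast_conv(f, g):
    """Linear convolution by zero-extending to a power-of-two circular length."""
    m = len(f) + len(g) - 1
    n = 1
    while n < m:
        n *= 2
    F = fft(list(f) + [0] * (n - len(f)))
    G = fft(list(g) + [0] * (n - len(g)))
    y = ifft([a * b for a, b in zip(F, G)])  # pointwise product, then invert
    return [round(v.real) for v in y[:m]]    # integer inputs: round FP noise
```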
If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available.^{[19]} Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method.^{[20]} A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations.^{[21]}
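The block-decomposition idea behind overlap–add can be sketched in a few lines (illustrative names; each block is convolved directly here, where a real implementation would use FFTs): convolve each block of the long signal with the short response and add the overlapping tails.

```python
def conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def overlap_add(x, h, block=4):
    """Convolve a long signal x with a short FIR h, one block at a time."""
    out = [0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        piece = conv(x[start:start + block], h)
        for i, v in enumerate(piece):       # tails of adjacent blocks overlap
            out[start + i] += v
    return out

x = list(range(1, 11))   # a "long" input
h = [1, -1, 2]           # short impulse response
```

The result is identical to direct convolution for any block length, by linearity of the convolution.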
and is well-defined only if f and g decay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up in g at infinity can be easily offset by sufficiently rapid decay in f. The question of existence thus may involve different conditions on f and g:
If f and g are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous (Hörmander 1983, Chapter 1). More generally, if either function (say f) is compactly supported and the other is locally integrable, then the convolution f∗g is well-defined and continuous.
Convolution of f and g is also well defined when both functions are locally square integrable on R and supported on an interval of the form [a, +∞) (or both supported on (−∞, a]).
Likewise, if f ∈ L^{1}(R^{d}) and g ∈ L^{p}(R^{d}) where 1 ≤ p ≤ ∞, then f*g ∈ L^{p}(R^{d}), and
$\|{f}*g\|_{p}\leq \|f\|_{1}\|g\|_{p}.$
In the particular case p = 1, this shows that L^{1} is a Banach algebra under the convolution (and equality of the two sides holds if f and g are non-negative almost everywhere).
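The discrete analogue of the p = 1 case (sums in place of integrals) can be checked numerically; a pure-Python sketch with illustrative names:

```python
def conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def l1(seq):
    """Discrete L1 norm: sum of absolute values."""
    return sum(abs(v) for v in seq)

f, g = [1, -2, 4], [3, 1]
mixed_signs = l1(conv(f, g))      # strict inequality: cancellation occurs
f2, g2 = [1, 2, 4], [3, 1]
nonneg = l1(conv(f2, g2))         # equality for nonnegative sequences
```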
More generally, Young's inequality implies that the convolution is a continuous bilinear map between suitable L^{p} spaces. Specifically, if 1 ≤ p, q, r ≤ ∞ satisfy:
so that the convolution is a continuous bilinear mapping from L^{p}×L^{q} to L^{r}.
The Young inequality for convolution is also true in other contexts (circle group, convolution on Z). The preceding inequality is not sharp on the real line: when 1 < p, q, r < ∞, there exists a constant B_{p,q} < 1 such that:
The optimal value of B_{p,q} was discovered in 1975^{[22]} and independently in 1976,^{[23]} see Brascamp–Lieb inequality.
A stronger estimate is true provided 1 < p, q, r < ∞:
$\|f*g\|_{r}\leq C_{p,q}\|f\|_{p}\|g\|_{q,w}$
where $\|g\|_{q,w}$ is the weak L^{q} norm. Convolution also defines a bilinear continuous map $L^{p,w}\times L^{q,w}\to L^{r,w}$ for $1<p,q,r<\infty$, owing to the weak Young inequality:^{[24]}
In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that if f and g both decay rapidly, then f∗g also decays rapidly. In particular, if f and g are rapidly decreasing functions, then so is the convolution f∗g. Combined with the fact that convolution commutes with differentiation (see #Properties), it follows that the class of Schwartz functions is closed under convolution (Stein & Weiss 1971, Theorem 3.3).
More generally, it is possible to extend the definition of the convolution in a unique way with $\varphi$ the same as f above, so that the associative law
$f*(g*\varphi )=(f*g)*\varphi$
remains valid in the case where f is a distribution, and g a compactly supported distribution (Hörmander 1983, §4.2).
where $A\subset \mathbf {R} ^{d}$ is a measurable set and $1_{A}$ is the indicator function of $A$.
This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as the convolution of L^{1} functions when μ and ν are absolutely continuous with respect to the Lebesgue measure.
The convolution of measures also satisfies the following version of Young's inequality:
$\|\mu *\nu \|\leq \|\mu \|\|\nu \|$
where the norm is the total variation of a measure. Because the space of measures of bounded variation is a Banach space, convolution of measures can be treated with standard methods of functional analysis that may not apply for the convolution of distributions.
The convolution defines a product on the linear space of integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutative associative algebra without identity (Strichartz 1994, §3.3). Other linear spaces of functions, such as the space of continuous functions of compact support, are closed under the convolution, and so also form commutative associative algebras.
$f*g=g*f$ Proof: By definition: $(f*g)(t)=\int _{-\infty }^{\infty }f(\tau )g(t-\tau )\,d\tau$ Changing the variable of integration to $u=t-\tau$, the result follows.
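A quick discrete check of commutativity (pure Python, illustrative helper name): the double sum over index pairs $(i, j)$ with $i + j$ fixed is symmetric in the two sequences.

```python
def conv(f, g):
    """Discrete convolution; out[i + j] accumulates f[i] * g[j]."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

f, g = [1, 2, 3], [4, 5, 6, 7]
```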
No algebra of functions possesses an identity for the convolution. The lack of identity is typically not a major inconvenience, since most collections of functions on which the convolution is performed can be convolved with a delta distribution (a unitary impulse, centered at zero) or, at the very least (as is the case of L^{1}) admit approximations to the identity. The linear space of compactly supported distributions does, however, admit an identity under the convolution. Specifically, $f*\delta =f$ where δ is the delta distribution.
Inverse element
Some distributions S have an inverse element S^{−1} for the convolution, which then must satisfy $S^{-1}*S=\delta$ and from which an explicit formula for S^{−1} may be obtained. The set of invertible distributions forms an abelian group under the convolution.
If f and g are integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals:^{[25]}
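The identity just stated (the integral of a convolution equals the product of the integrals) has an exact discrete analogue with sums; a short pure-Python check (illustrative helper name):

```python
def conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

f, g = [1, 2, 3], [4, 5]
# Every product f[i] * g[j] appears exactly once in the convolution,
# so summing the output regroups the full double sum: sum(f) * sum(g).
total = sum(conv(f, g))
```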
where ${\frac {d}{dx}}$ is the derivative. More generally, in the case of functions of several variables, an analogous formula holds with the partial derivative:
A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution of f and g is differentiable as many times as f and g are in total.
These identities hold for example under the condition that f and g are absolutely integrable and at least one of them has an absolutely integrable (L^{1}) weak derivative, as a consequence of Young's convolution inequality. For instance, when f is continuously differentiable with compact support, and g is an arbitrary locally integrable function,
${\frac {d}{dx}}(f*g)={\frac {df}{dx}}*g.$
These identities also hold much more broadly in the sense of tempered distributions if one of f or g is a rapidly decreasing tempered distribution, a compactly supported tempered distribution, or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution.
In the discrete case, the difference operator Df(n) = f(n + 1) − f(n) satisfies an analogous relationship:
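The discrete relationship D(f∗g) = (Df)∗g can be verified directly; a pure-Python sketch (illustrative names) represents each finite sequence on a re-indexed axis so the difference of the zero-extended sequence stays at nonnegative indices, which makes it exactly convolution with the kernel [1, −1].

```python
def conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def diff(a):
    """Difference of the zero-extended sequence; equals conv(a, [1, -1])."""
    b = [0] + list(a) + [0]
    return [b[i + 1] - b[i] for i in range(len(a) + 1)]

f, g = [1, 2], [3, 1]
```

Since D acts as convolution with a fixed kernel, the identity is a special case of associativity.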
The convolution commutes with translations, meaning that
$\tau _{x}(f*g)=(\tau _{x}f)*g=f*(\tau _{x}g)$
where τ_{x}f is the translation of the function f by x defined by
$(\tau _{x}f)(y)=f(y-x).$
If f is a Schwartz function, then τ_{x}f is the convolution with a translated Dirac delta function τ_{x}f = f ∗ τ_{x}δ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution.
Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds
Suppose that S is a bounded linear operator acting on functions which commutes with translations: S(τ_{x}f) = τ_{x}(Sf) for all x. Then S is given as convolution with a function (or distribution) g_{S}; that is Sf = g_{S} ∗ f.
Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory. The representing function g_{S} is the impulse response of the transformation S.
A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L^{1} is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on L^{p} for 1 ≤ p < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers.
If G is a suitable group endowed with a measure λ, and if f and g are real or complex valued integrable functions on G, then we can define their convolution by
It is not commutative in general. In typical cases of interest G is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless G is unimodular, the convolution defined in this way is not the same as ${\textstyle \int f\left(xy^{-1}\right)g(y)\,d\lambda (y)}$. The preference of one over the other is made so that convolution with a fixed function g commutes with left translation in the group:
$L_{h}(f*g)=(L_{h}f)*g.$
Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former.
On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L^{1}(T), we have the following familiar operator acting on the Hilbert space L^{2}(T):
The operator T is compact. A direct calculation shows that its adjoint T* is convolution with
${\bar {g))(-y).$
By the commutativity property cited above, T is normal: T* T = TT* . Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. Then S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {h_{k}} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have
$h_{k}(x)=e^{ikx},\quad k\in \mathbb {Z} ,\;$
which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above.
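A finite analogue makes the diagonalization concrete: on the cyclic group Z/N (circular convolution), the characters are the discrete exponentials, and each is an eigenvector of convolution with any fixed g, with eigenvalue the corresponding DFT coefficient. A pure-Python sketch (illustrative names):

```python
import cmath

def cconv(a, b):
    """Circular convolution of two length-N sequences."""
    N = len(a)
    return [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]

N, k = 8, 3
g = [1, 2, 0, 1, 0, 0, 0, 0]
# Character e_k(n) = e^{2*pi*i*k*n/N}, the finite analogue of e^{ikx}.
e_k = [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]
# Its eigenvalue under convolution with g is the k-th DFT coefficient of g.
G_k = sum(g[m] * cmath.exp(-2j * cmath.pi * k * m / N) for m in range(N))
out = cconv(g, e_k)
```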
A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in L^{2} by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform.
Let G be a (multiplicatively written) topological group.
If μ and ν are finite Borel measures on G, then their convolution μ∗ν is defined as the pushforward measure of the group action and can be written as
In convex analysis, the infimal convolution of proper (not identically $+\infty$) convex functions $f_{1},\dots ,f_{m}$ on $\mathbb {R} ^{n}$ is defined by:^{[33]} $(f_{1}*\cdots *f_{m})(x)=\inf\{f_{1}(x_{1})+\cdots +f_{m}(x_{m})\mid x_{1}+\cdots +x_{m}=x\}.$
It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform played instead by the Legendre transform:
$\varphi ^{*}(x)=\sup _{y}(x\cdot y-\varphi (y)).$
We have:
$(f_{1}*\cdots *f_{m})^{*}(x)=f_{1}^{*}(x)+\cdots +f_{m}^{*}(x).$
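Both facts can be checked numerically for quadratics, for which everything is closed-form: with f1(x) = x²/2 and f2(x) = x²/4, the infimal convolution is x²/6, and the conjugates y²/2 and y² indeed add to the conjugate 3y²/2 of x²/6. A grid-search sketch (illustrative names, not a production method):

```python
def inf_conv(f1, f2, x, step=0.01, span=10.0):
    """Grid approximation of the infimal convolution at x:
    inf over x1 of f1(x1) + f2(x - x1)."""
    n = int(2 * span / step)
    best = float("inf")
    for i in range(n + 1):
        x1 = -span + i * step
        best = min(best, f1(x1) + f2(x - x1))
    return best

f1 = lambda x: x * x / 2     # Legendre conjugate: y^2 / 2
f2 = lambda x: x * x / 4     # Legendre conjugate: y^2
# Closed form: (f1 [inf-conv] f2)(x) = x^2 / 6, so at x = 3 the value is 1.5,
# attained at x1 = 1.  Its conjugate 3y^2/2 equals f1* + f2*.
val = inf_conv(f1, f2, 3.0)
```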
Let (X, Δ, ∇, ε, η) be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit ε. The convolution is a product defined on the endomorphism algebra End(X) as follows. Let φ, ψ ∈ End(X), that is, φ, ψ: X → X are functions that respect all algebraic structure of X, then the convolution φ∗ψ is defined as the composition
The convolution appears notably in the definition of Hopf algebras (Kassel 1995, §III.3). A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism S such that
In acoustics, reverberation is the convolution of the original sound with echoes from objects surrounding the sound source.
In digital signal processing, convolution is used to map the impulse response of a real room on a digital audio signal.
In electronic music convolution is the imposition of a spectral or rhythmic structure on a sound. Often this envelope or structure is taken from another sound. The convolution of two signals is the filtering of one through the other.^{[37]}
In electrical engineering, the convolution of one function (the input signal) with a second function (the impulse response) gives the output of a linear time-invariant system (LTI). At any given moment, the output is an accumulated effect of all the prior values of the input function, with the most recent values typically having the most influence (expressed as a multiplicative factor). The impulse response function provides that factor as a function of the elapsed time since each input value occurred.
In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is a sum of exponential decays from each delta pulse.
In kernel density estimation, a distribution is estimated from sample points by convolution with a kernel, such as an isotropic Gaussian.^{[38]}
In radiotherapy treatment planning systems, most modern calculation codes apply a convolution-superposition algorithm.
In structural reliability, the reliability index can be defined based on the convolution theorem.
The definition of reliability index for limit state functions with nonnormal distributions can be established corresponding to the joint distribution function. In fact, the joint distribution function can be obtained using the convolution theory.^{[39]}
In Smoothed-particle hydrodynamics, simulations of fluid dynamics are calculated using particles, each with surrounding kernels. For any given particle $i$, some physical quantity $A_{i}$ is calculated as a convolution of $A_{j}$ with a weighting function, where $j$ denotes the neighbors of particle $i$: those that are located within its kernel. The convolution is approximated as a summation over each neighbor.^{[40]}
In fractional calculus, convolution is instrumental in various definitions of fractional integral and fractional derivative.
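The electrical-engineering description above (output as an accumulated, recency-weighted effect of past inputs) can be illustrated with a hypothetical first-order system whose impulse response decays geometrically (the response and input here are made up for illustration):

```python
def conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

# Hypothetical LTI system: each input sample keeps a decaying influence.
h = [0.5 ** n for n in range(4)]   # impulse response: 1, 0.5, 0.25, 0.125
x = [1.0, 1.0, 1.0]                # step input, three samples long
y = conv(x, h)                     # output: recency-weighted accumulation
```

The output rises toward a steady value while the step is applied, then decays once the input ends, exactly as the impulse response prescribes.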
^ The symbol U+2217 ∗ ASTERISK OPERATOR is different from U+002A * ASTERISK, which is often used to denote complex conjugation. See Asterisk § Mathematical typography.
^ Smith, Stephen W (1997). "13. Convolution". The Scientist and Engineer's Guide to Digital Signal Processing (1st ed.). California Technical Publishing. ISBN 0-9660176-3-3. Retrieved 22 April 2016.
^ According to [Lothar von Wolfersdorf (2000), "Einige Klassen quadratischer Integralgleichungen", Sitzungsberichte der Sächsischen Akademie der Wissenschaften zu Leipzig, Mathematisch-naturwissenschaftliche Klasse, volume 128, number 2, 6–7], the source is Volterra, Vito (1913), Leçons sur les fonctions de lignes. Gauthier-Villars, Paris 1913.
^ Selesnick, Ivan W.; Burrus, C. Sidney (1999). "Fast Convolution and Filtering". In Madisetti, Vijay K. (ed.). Digital Signal Processing Handbook. CRC Press. Section 8. ISBN 978-1-4200-4563-5.
^ Juang, B.H. "Lecture 21: Block Convolution" (PDF). EECS at the Georgia Institute of Technology. Archived (PDF) from the original on 2004-07-29. Retrieved 17 May 2013.
^ Ninh, Pham; Pagh, Rasmus (2013). Fast and scalable polynomial kernels via explicit feature maps. SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery. doi:10.1145/2487575.2487591.
Bracewell, R. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw–Hill, ISBN 0-07-116043-4.
Damelin, S.; Miller, W. (2011), The Mathematics of Signal Processing, Cambridge University Press, ISBN 978-1107601048
Diggle, P. J. (1985), "A kernel method for smoothing point process data", Journal of the Royal Statistical Society, Series C, 34 (2): 138–147, doi:10.2307/2347366, JSTOR 2347366, S2CID 116746157
Ghasemi, S. Hooman; Nowak, Andrzej S. (2017), "Reliability Index for Non-normal Distributions of Limit State Functions", Structural Engineering and Mechanics, 62 (3): 365–372, doi:10.12989/sem.2017.62.3.365
Grinshpan, A. Z. (2017), "An inequality for multiple convolutions with respect to Dirichlet probability measure", Advances in Applied Mathematics, 82 (1): 102–119, doi:10.1016/j.aam.2016.08.001
Hewitt, Edwin; Ross, Kenneth A. (1979), Abstract harmonic analysis. Vol. I, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 115 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-09434-0, MR 0551496.
Hewitt, Edwin; Ross, Kenneth A. (1970), Abstract harmonic analysis. Vol. II: Structure and analysis for compact groups. Analysis on locally compact Abelian groups, Die Grundlehren der mathematischen Wissenschaften, Band 152, Berlin, New York: Springer-Verlag, MR 0262773.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and Applied Mathematics (2nd ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Reed, Michael; Simon, Barry (1975), Methods of modern mathematical physics. II. Fourier analysis, self-adjointness, New York–London: Academic Press Harcourt Brace Jovanovich, Publishers, pp. xv+361, ISBN 0-12-585002-6, MR 0493420
Rudin, Walter (1962), Fourier analysis on groups, Interscience Tracts in Pure and Applied Mathematics, vol. 12, New York–London: Interscience Publishers, ISBN 0-471-52364-X, MR 0152834.
Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 0-8493-8273-4.
Titchmarsh, E (1948), Introduction to the theory of Fourier integrals (2nd ed.), New York, N.Y.: Chelsea Pub. Co. (published 1986), ISBN 978-0-8284-0324-5.
Uludag, A. M. (1998), "On possible deterioration of smoothness under the operation of convolution", J. Math. Anal. Appl., 227 (2): 335–358, doi:10.1006/jmaa.1998.6091
von zur Gathen, J.; Gerhard, J. (2003), Modern Computer Algebra, Cambridge University Press, ISBN 0-521-82646-2.