Fourier transforms 

In mathematics, Fourier analysis (/ˈfʊrieɪ, -iər/)^{[1]} is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then resynthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.
The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.
To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis.^{[2]}^{[3]} Fourier analysis, the most widely used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.^{[4]}
Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas.
This wide applicability stems from many useful properties of the transforms: they are linear operators and, with proper normalization, unitary as well (Parseval's theorem); they are usually invertible; the complex exponentials are eigenfunctions of differentiation, so the transforms convert linear differential equations with constant coefficients into ordinary algebraic ones; and, by the convolution theorem, they turn the complicated convolution operation into simple multiplication.
In forensics, laboratory infrared spectrophotometers use Fourier transform analysis to measure the wavelengths of light at which a material absorbs in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. With a computer, these Fourier calculations are rapidly carried out, so that in a matter of seconds a computer-operated FTIR instrument can produce an infrared absorption pattern comparable to that of a prism instrument.^{[9]}
Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
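The round trip behind this scheme can be sketched in a few lines of NumPy. The 8×8 block size matches JPEG, but the orthonormal DCT-II basis construction and the keep-the-largest-coefficients rule below are illustrative assumptions, not the actual JPEG quantization tables:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: entry [k, m] is the k-th cosine at sample m."""
    C = np.array([[np.cos(np.pi * (m + 0.5) * k / n) for m in range(n)]
                  for k in range(n)])
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def compress_block(block, keep=10):
    """2-D DCT, zero all but the `keep` largest-magnitude coefficients, invert."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                 # forward 2-D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0    # discard weak components
    return C.T @ coeffs @ C                  # inverse 2-D DCT

rng = np.random.default_rng(0)
block = rng.normal(size=(8, 8)) + np.linspace(0, 4, 8)  # smooth ramp + noise
approx = compress_block(block, keep=16)      # approximation from 16 of 64 coefficients
```

Because the basis is orthonormal, keeping all 64 coefficients reproduces the block exactly; dropping weak ones trades a small reconstruction error for a much more compact representation.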
In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.
When a function is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function S(f) at frequency f represents the amplitude of a frequency component whose initial phase is given by the angle of S(f) (polar coordinates).
Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control.
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consists of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.^{[10]}
Some examples include equalization of audio recordings with a series of bandpass filters, removal of periodic noise from digital images, and bandpass filtering of radio signals.
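One such operation, removing a narrowband hum, can be sketched in NumPy as transform, zero the unwanted bins, invert; the 60 Hz hum frequency and 1 kHz sampling rate below are invented for illustration:

```python
import numpy as np

# Sample a compound waveform: a 5 Hz "signal" plus a 60 Hz "hum".
fs = 1000                                  # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * np.sin(2 * np.pi * 60 * t)

# Transform, zero the narrowband component, and reverse the transformation.
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(len(noisy), d=1 / fs)
spectrum[np.abs(freqs - 60) < 2] = 0       # remove bins near 60 Hz
filtered = np.fft.irfft(spectrum, n=len(noisy))
```

Because the hum sits in a narrow band, zeroing a few bins removes it while leaving the 5 Hz component essentially untouched.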
Main article: Fourier transform 
Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a frequency distribution. One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time (t), and the domain of the output (final) function is ordinary frequency, the transform of function s(t) at frequency f is given by the complex number:

  S(f) = ∫_{−∞}^{∞} s(t)·e^{−i2πft} dt.
Evaluating this quantity for all values of f produces the frequency-domain function. Then s(t) can be represented as a recombination of complex exponentials of all possible frequencies:

  s(t) = ∫_{−∞}^{∞} S(f)·e^{i2πft} df,
which is the inverse transform formula. The complex number, S(f), conveys both amplitude and phase of frequency f.
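The forward integral S(f) = ∫ s(t)·e^{−i2πft} dt can be approximated numerically when no closed form is at hand. The sketch below uses a plain Riemann sum and checks it on the Gaussian e^{−πt²}, which is its own Fourier transform; the grid limits and step are arbitrary choices:

```python
import numpy as np

def fourier_transform(s, f, t):
    """Riemann-sum approximation of S(f) = ∫ s(t)·e^{-i2πft} dt on grid t."""
    dt = t[1] - t[0]
    return np.sum(s * np.exp(-2j * np.pi * f * t)) * dt

t = np.linspace(-10, 10, 4001)             # wide enough that s(t) ≈ 0 at the ends
s = np.exp(-np.pi * t**2)                  # Gaussian: its own Fourier transform
S = np.array([fourier_transform(s, f, t) for f in [0.0, 0.5, 1.0]])
```

For this rapidly decaying smooth function the sum is extremely accurate, so S should closely match e^{−πf²} at the chosen frequencies.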
See Fourier transform for much more information, including:
Main article: Fourier series 
The Fourier transform of a periodic function, s_{P}(t), with period P, becomes a Dirac comb function, modulated by a sequence of complex coefficients:

  S(f) = Σ_{k=−∞}^{∞} S[k]·δ(f − k/P).
The inverse transform, known as Fourier series, is a representation of s_{P}(t) in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients:

  s_{P}(t) = Σ_{k=−∞}^{∞} S[k]·e^{i2πkt/P}.
Any s_{P}(t) can be expressed as a periodic summation of another function, s(t):

  s_{P}(t) = Σ_{m=−∞}^{∞} s(t − mP),
and the coefficients are proportional to samples of S(f) at discrete intervals of 1/P:

  S[k] = (1/P)·S(k/P).
Note that any s(t) whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recovering s(t) (and therefore S(f)) from just these samples (i.e. from the Fourier series) is that the nonzero portion of s(t) be confined to a known interval of duration P, which is the frequency domain dual of the Nyquist–Shannon sampling theorem.
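As a numerical illustration, the coefficient integral S[k] = (1/P) ∫_{P} s_{P}(t)·e^{−i2πkt/P} dt can be evaluated for a square wave, whose odd harmonics are known to have magnitude 2/(π|k|) while the even ones vanish; the period and grid size below are arbitrary:

```python
import numpy as np

P = 2.0                                    # period (an arbitrary choice)
t = np.linspace(0, P, 20000, endpoint=False)
s = np.sign(np.sin(2 * np.pi * t / P))     # square wave of period P

def coefficient(k):
    """S[k] = (1/P) ∫_P s_P(t)·e^{-i2πkt/P} dt, by a Riemann sum over one period."""
    return np.mean(s * np.exp(-2j * np.pi * k * t / P))

c1, c2, c3 = coefficient(1), coefficient(2), coefficient(3)
# Expect |c1| ≈ 2/π, c2 ≈ 0, |c3| ≈ 2/(3π).
```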
See Fourier series for more information, including the historical development.
Main article: Discrete-time Fourier transform 
The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function:

  S_{1/T}(f) ≜ Σ_{k=−∞}^{∞} S(f − k/T) = Σ_{n=−∞}^{∞} s[n]·e^{−i2πfnT},
which is known as the DTFT. Thus the DTFT of the s[n] sequence is also the Fourier transform of the modulated Dirac comb function.^{[B]}
The Fourier series coefficients (and inverse transform) are defined by:

  s[n] ≜ T ∫_{1/T} S_{1/T}(f)·e^{i2πfnT} df = T·s(nT).
Parameter T corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence, s[n], is proportional to samples of an underlying continuous function, s(t), one can observe a periodic summation of the continuous Fourier transform, S(f). Note that any s(t) with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recover S(f) and s(t) exactly. A sufficient condition for perfect recovery is that the nonzero portion of S(f) be confined to a known frequency interval of width 1/T. When that interval is [−1/2T, 1/2T], the applicable reconstruction formula is the Whittaker–Shannon interpolation formula. This is a cornerstone in the foundation of digital signal processing.
Another reason to be interested in S_{1/T}(f) is that it often provides insight into the amount of aliasing caused by the sampling process.
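A short NumPy sketch makes the aliasing point concrete: sampled at a rate of 1/T = 8 Hz (an arbitrary choice here), an 11 Hz tone produces exactly the same samples, and hence exactly the same DTFT, as a 3 Hz tone:

```python
import numpy as np

def dtft(samples, f, T):
    """S_{1/T}(f) = Σ_n s[n]·e^{-i2πfnT}, evaluated at an arbitrary frequency f."""
    n = np.arange(len(samples))
    return np.sum(samples * np.exp(-2j * np.pi * f * n * T))

T = 1 / 8                                  # sampling interval: 8 samples per second
n = np.arange(64)
s3 = np.cos(2 * np.pi * 3 * n * T)         # 3 Hz tone, below the 4 Hz Nyquist limit
s11 = np.cos(2 * np.pi * 11 * n * T)       # 11 Hz tone, above the Nyquist limit
# The two sequences are identical: 11 Hz aliases to 11 - 8 = 3 Hz.
```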
Applications of the DTFT are not limited to sampled functions. See Discretetime Fourier transform for more information on this and other topics, including:
Main article: Discrete Fourier transform 
Similar to a Fourier series, the DTFT of a periodic sequence, s_{N}[n], with period N, becomes a Dirac comb function, modulated by a sequence of complex coefficients (see DTFT § Periodic data):

  S[k] = Σ_{n} s_{N}[n]·e^{−i2πkn/N},   (sum over any n-interval of length N).
The S[k] sequence is what is customarily known as the DFT of one cycle of s_{N}. It is also N-periodic, so it is never necessary to compute more than N coefficients. The inverse transform, also known as a discrete Fourier series, is given by:

  s_{N}[n] = (1/N) Σ_{k} S[k]·e^{i2πkn/N},   (sum over any k-interval of length N).
When s_{N}[n] is expressed as a periodic summation of another function:

  s_{N}[n] ≜ Σ_{m=−∞}^{∞} s[n − mN],
the coefficients are samples of S_{1/T}(f) at discrete intervals of 1/P = 1/NT:

  S[k] = S_{1/T}(k/(NT)).
Conversely, when one wants to compute an arbitrary number (N) of discrete samples of one cycle of a continuous DTFT, S_{1/T}(f), it can be done by computing the relatively simple DFT of s_{N}[n], as defined above. In most cases, N is chosen equal to the length of the nonzero portion of s[n]. Increasing N, known as zero-padding or interpolation, results in more closely spaced samples of one cycle of S_{1/T}(f). Decreasing N causes overlap (adding) in the time domain (analogous to aliasing), which corresponds to decimation in the frequency domain (see Discrete-time Fourier transform § L=N×I). In most cases of practical interest, the s[n] sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array.
The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
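Both the equivalence of the direct O(N²) summation with the FFT and the effect of zero-padding can be checked directly; the sequence length and padding factor below are arbitrary:

```python
import numpy as np

def dft(s):
    """Direct O(N^2) evaluation of S[k] = Σ_n s[n]·e^{-i2πkn/N}."""
    N = len(s)
    n = np.arange(N)
    k = n.reshape(-1, 1)                     # column of output indices
    return (s * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

rng = np.random.default_rng(1)
s = rng.normal(size=32)

# Zero-padding to 4N yields samples of the same DTFT four times as densely,
# so every 4th sample of the padded spectrum is a sample of the original DFT.
padded = np.fft.fft(s, n=128)
```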
See Discrete Fourier transform for much more information, including:
For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.
It is common in practice for the duration of s(•) to be limited to the period, P or N. But these formulas do not require that condition.
Continuous time:

  Transform, continuous frequency (Fourier transform):
    S(f) = ∫_{−∞}^{∞} s(t)·e^{−i2πft} dt
  Transform, discrete frequencies (Fourier series coefficients):
    S[k] = (1/P) ∫_{P} s_{P}(t)·e^{−i2πkt/P} dt
  Inverse, continuous frequency:
    s(t) = ∫_{−∞}^{∞} S(f)·e^{i2πft} df
  Inverse, discrete frequencies (Fourier series):
    s_{P}(t) = Σ_{k=−∞}^{∞} S[k]·e^{i2πkt/P}

Discrete time:

  Transform, continuous frequency (DTFT):
    S_{1/T}(f) = Σ_{n=−∞}^{∞} s[n]·e^{−i2πfnT}
  Transform, discrete frequencies (DFT):
    S[k] = Σ_{n} s_{N}[n]·e^{−i2πkn/N}
  Inverse, continuous frequency:
    s[n] = T ∫_{1/T} S_{1/T}(f)·e^{i2πfnT} df
  Inverse, discrete frequencies (inverse DFT):
    s_{N}[n] = (1/N) Σ_{k} S[k]·e^{i2πkn/N}
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:^{[11]}

  s(t) = s_{RE}(t) + s_{RO}(t) + i·s_{IE}(t) + i·s_{IO}(t)
  S(f) = S_{RE}(f) + S_{RO}(f) + i·S_{IE}(f) + i·S_{IO}(f)

where s_{RE} pairs with S_{RE}, s_{RO} with i·S_{IO}, i·s_{IE} with i·S_{IE}, and i·s_{IO} with S_{RO}.
From this, various relationships are apparent; for example, the transform of a real-valued function (s_{RE} + s_{RO}) is the conjugate-symmetric function S_{RE} + i·S_{IO}, and conversely a conjugate-symmetric transform implies a real-valued time function.
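These symmetry relations are easy to verify numerically in the DFT setting; the even/odd split below uses periodic (mod-N) indexing, which is the appropriate notion of reflection for finite sequences:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
s = rng.normal(size=N)                     # a real-valued test sequence

def even_odd(x):
    """Split x[n] into even and odd parts about n = 0, indexing mod N."""
    rev = np.roll(x[::-1], 1)              # rev[n] = x[-n mod N]
    return (x + rev) / 2, (x - rev) / 2

re, ro = even_odd(s)
S_re = np.fft.fft(re)                      # real-even input -> real transform
S_ro = np.fft.fft(ro)                      # real-odd input  -> imaginary transform
```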
See also: Fourier series § Historical development 
An early form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).^{[12]}^{[13]}^{[14]}^{[15]}
The Classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series (see Deferent and epicycle § Mathematical formalism).
In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit,^{[16]} which has been described as the first formula for the DFT,^{[17]} and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string.^{[17]} Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits.^{[18]} Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.^{[17]}
An early modern development toward Fourier analysis was the 1770 paper Réflexions sur la résolution algébrique des équations by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic:^{[19]} Lagrange transformed the roots x_{1}, x_{2}, x_{3} into the resolvents:

  r_{k} = x_{1} + ζ^{k}x_{2} + ζ^{2k}x_{3},   k = 0, 1, 2,
where ζ is a cubic root of unity, which is the DFT of order 3.
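In modern notation the resolvents are literally a 3-point DFT of the roots. The cubic below and the sign convention for ζ (chosen as e^{−i2π/3} to match the usual DFT kernel) are illustrative choices:

```python
import numpy as np

# Roots of (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6, for illustration.
roots = np.array([1.0, 2.0, 3.0])
zeta = np.exp(-2j * np.pi / 3)             # cube root of unity, DFT sign convention

# Resolvents r_k = x_1 + ζ^k·x_2 + ζ^{2k}·x_3 for k = 0, 1, 2.
resolvents = np.array([sum(roots[m] * zeta**(k * m) for m in range(3))
                       for k in range(3)])
```

The k = 0 resolvent is the sum of the roots (here 6, the negated x² coefficient), and the whole triple coincides with NumPy's 3-point DFT of the roots.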
A number of authors, notably Jean le Rond d'Alembert and Carl Friedrich Gauss, used trigonometric series to study the heat equation,^{[20]} but the breakthrough development was the 1807 paper Mémoire sur la propagation de la chaleur dans les corps solides by Joseph Fourier, whose crucial insight was to model all functions by trigonometric series, introducing the Fourier series.
Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.^{[17]}
The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory.
The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbits of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.^{[18]}^{[16]}
Further information: Time–frequency analysis 
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
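A minimal short-time Fourier transform can be sketched in NumPy as windowed FFTs of overlapping frames; the Hann window, frame length, hop size, and two-tone test signal below are arbitrary illustrative choices, not a canonical parameterization:

```python
import numpy as np

def stft(x, win_len=64, hop=32):
    """Short-time Fourier transform: one FFT per overlapping windowed frame."""
    window = np.hanning(win_len)
    starts = range(0, len(x) - win_len + 1, hop)
    return np.array([np.fft.rfft(window * x[i:i + win_len]) for i in starts])

# A two-tone test signal: 5 Hz for the first second, 20 Hz for the next.
fs = 256
t = np.arange(0, 2, 1 / fs)
x = np.where(t < 1, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))
frames = stft(x)   # each row: spectrum of one time slice, bin spacing fs/64 = 4 Hz
```

Unlike a single Fourier transform of the whole record, the frame-by-frame spectra show the dominant frequency moving from the 5 Hz tone to the 20 Hz tone over time.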
The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact Abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also the Pontryagin duality for the generalized underpinnings of the Fourier transform.
More specifically, Fourier analysis can be done on cosets,^{[21]} even discrete cosets.