Spectrogram of the spoken words "nineteenth century". Frequencies are shown increasing up the vertical axis, and time on the horizontal axis. The legend to the right shows that the color intensity increases with the density.
A 3D spectrogram: The RF spectrum of a battery charger is shown over time

A spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. When applied to an audio signal, spectrograms are sometimes called sonographs, voiceprints, or voicegrams. When the data are represented in a 3D plot they may be called waterfall displays.

Spectrograms are used extensively in the fields of music, linguistics, sonar, radar, speech processing,[1] seismology, and others. Spectrograms of audio can be used to identify spoken words phonetically, and to analyse the various calls of animals.

A spectrogram can be generated by an optical spectrometer, a bank of band-pass filters, a Fourier transform, or a wavelet transform (in which case it is also known as a scaleogram or scalogram).[2]

Scaleograms from the DWT and CWT for an audio sample

A spectrogram is usually depicted as a heat map, i.e., as an image with the intensity shown by varying the colour or brightness.


A common format is a graph with two geometric dimensions: one axis represents time, and the other axis represents frequency; a third dimension indicating the amplitude of a particular frequency at a particular time is represented by the intensity or color of each point in the image.

There are many variations of format: sometimes the vertical and horizontal axes are switched, so time runs up and down; sometimes as a waterfall plot where the amplitude is represented by height of a 3D surface instead of color or intensity. The frequency and amplitude axes can be either linear or logarithmic, depending on what the graph is being used for. Audio would usually be represented with a logarithmic amplitude axis (probably in decibels, or dB), and frequency would be linear to emphasize harmonic relationships, or logarithmic to emphasize musical, tonal relationships.
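The decibel conversion mentioned above can be illustrated with a short sketch (using NumPy, with hypothetical power values chosen for the example):

```python
import numpy as np

# Hypothetical power (squared-magnitude) values for a few spectrogram cells.
power = np.array([1.0, 0.1, 0.001, 1e-6])

# Convert power to decibels relative to the maximum, with a small floor
# to avoid log(0). 10*log10 is used because these are power values;
# amplitude values would use 20*log10 instead.
power_db = 10.0 * np.log10(np.maximum(power, 1e-12) / power.max())
print(power_db)  # [  0. -10. -30. -60.]
```

A display with this axis compresses a wide dynamic range into a readable scale: each factor of 10 in power is a fixed 10 dB step.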

Sound spectrography of infrasound recording 30301


Spectrograms of light may be created directly with an optical spectrometer that records the spectrum as it varies over time.

Spectrograms may be created from a time-domain signal in one of two ways: approximated as a filterbank that results from a series of band-pass filters (this was the only way before the advent of modern digital signal processing), or calculated from the time signal using the Fourier transform. These two methods actually form two different time–frequency representations, but are equivalent under some conditions.

The bandpass filters method usually uses analog processing to divide the input signal into frequency bands; the magnitude of each filter's output controls a transducer that records the spectrogram as an image on paper.[3]
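The historical method used analog filters and paper, but its principle can be sketched digitally (a crude approximation, assuming NumPy and brick-wall FFT masks in place of real band-pass filters; the sample rate and band edges are chosen for illustration):

```python
import numpy as np

def filterbank_spectrogram(x, fs, bands, frame=256):
    """Sketch of the band-pass-filter method: isolate each band with an
    FFT brick-wall mask, then measure its energy frame by frame."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    rows = []
    for lo, hi in bands:
        Xb = np.where((freqs >= lo) & (freqs < hi), X, 0)
        xb = np.fft.irfft(Xb, len(x))                # band-limited signal
        n = len(xb) // frame
        energy = (xb[: n * frame].reshape(n, frame) ** 2).sum(axis=1)
        rows.append(energy)
    return np.array(rows)                            # one row per band

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)                      # a 440 Hz tone
S = filterbank_spectrogram(x, fs, bands=[(0, 300), (300, 600), (600, 1200)])
print(S.mean(axis=1).argmax())  # 1: the 300-600 Hz band dominates
```

Each row of the result plays the role of one filter's output magnitude over time; stacking the rows gives the spectrogram image.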

Creating a spectrogram using the FFT is a digital process. Digitally sampled data, in the time domain, is broken up into chunks, which usually overlap, and Fourier transformed to calculate the magnitude of the frequency spectrum for each chunk. Each chunk then corresponds to a vertical line in the image: a measurement of magnitude versus frequency for a specific moment in time (the midpoint of the chunk). These spectra are then "laid side by side" to form the image or a three-dimensional surface,[4] or slightly overlapped in various ways, i.e. windowing. This process essentially corresponds to computing the squared magnitude of the short-time Fourier transform (STFT) of the signal; that is, for a window width ω, spectrogram(t, ω) = |STFT(t, ω)|².[5]
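The chunk-and-transform process can be sketched in a few lines (using NumPy, with an arbitrary Hann window length, hop size, and test tone):

```python
import numpy as np

def spectrogram(x, n_window=256, hop=128):
    """Squared-magnitude STFT of a 1-D signal (a minimal sketch).

    Each row is the power spectrum of one Hann-windowed, overlapping
    chunk of the input; columns correspond to frequency bins.
    """
    window = np.hanning(n_window)
    n_frames = 1 + (len(x) - n_window) // hop
    frames = np.stack([x[i * hop : i * hop + n_window] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies of a real signal.
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

# Example: a 1 kHz tone sampled at 8 kHz should concentrate its energy
# in the frequency bin nearest 1000 Hz.
fs = 8000
t = np.arange(fs) / fs
S = spectrogram(np.sin(2 * np.pi * 1000 * t))
peak_bin = S.mean(axis=0).argmax()
print(peak_bin * fs / 256)  # 1000.0
```

Plotting the (usually log-scaled) rows side by side, with time along one axis and bin frequency along the other, yields the familiar heat-map image.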

Limitations and resynthesis

From the formula above, it appears that a spectrogram contains no information about the exact, or even approximate, phase of the signal that it represents. For this reason, it is not possible to reverse the process and generate a copy of the original signal from a spectrogram, though in situations where the exact initial phase is unimportant it may be possible to generate a useful approximation of the original signal. The Analysis & Resynthesis Sound Spectrograph[6] is an example of a computer program that attempts to do this. The Pattern Playback was an early speech synthesizer, designed at Haskins Laboratories in the late 1940s, that converted pictures of the acoustic patterns of speech (spectrograms) back into sound.

In fact, there is some phase information in the spectrogram, but it appears in another form, as time delay (or group delay) which is the dual of the instantaneous frequency.[7]

The size and shape of the analysis window can be varied. A smaller (shorter) window will produce more accurate results in timing, at the expense of precision of frequency representation. A larger (longer) window will provide a more precise frequency representation, at the expense of precision in timing representation. This is an instance of the Heisenberg uncertainty principle: the product of the precision in two conjugate variables is greater than or equal to a constant (B·T ≥ 1 in the usual notation).[8]
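The trade-off is easy to quantify for the FFT method (a sketch assuming a hypothetical 8 kHz sample rate and two example window lengths):

```python
fs = 8000  # assumed sample rate, in Hz

for n_window in (64, 1024):
    delta_t = n_window / fs   # duration of one analysis chunk, in seconds
    delta_f = fs / n_window   # spacing between frequency bins, in Hz
    print(f"{n_window}-sample window: dt = {delta_t:.4f} s, df = {delta_f:.2f} Hz")
# 64-sample window:   dt = 0.0080 s, df = 125.00 Hz
# 1024-sample window: dt = 0.1280 s, df = 7.81 Hz
```

For a rectangular window the product Δt·Δf is exactly 1 regardless of window length: lengthening the window refines the frequency grid but blurs events in time, and vice versa.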


References


  1. ^ Flanagan, J. L. (1972). Speech Analysis, Synthesis and Perception. New York: Springer-Verlag.
  2. ^ Sejdic, E.; Djurovic, I.; Stankovic, L. (August 2008). "Quantitative Performance Analysis of Scalogram as Instantaneous Frequency Estimator". IEEE Transactions on Signal Processing. 56 (8): 3837–3845. Bibcode:2008ITSP...56.3837S. doi:10.1109/TSP.2008.924856. ISSN 1053-587X. S2CID 16396084.
  3. ^ "Spectrograph". Retrieved 7 April 2018.
  4. ^ "Spectrograms". Retrieved 7 April 2018.
  5. ^ "STFT Spectrograms VI – NI LabVIEW 8.6 Help". Retrieved 7 April 2018.
  6. ^ "The Analysis & Resynthesis Sound Spectrograph". Retrieved 7 April 2018.
  7. ^ Boashash, B. (1992). "Estimating and interpreting the instantaneous frequency of a signal. I. Fundamentals". Proceedings of the IEEE. Institute of Electrical and Electronics Engineers (IEEE). 80 (4): 520–538. doi:10.1109/5.135376. ISSN 0018-9219.
  8. ^
  9. ^ "Bird Songs and Calls with Spectrograms (Sonograms) of Southern Tuscany (Toscana, Italy)". Retrieved 7 April 2018.
  10. ^ Saunders, Frank A.; Hill, William A.; Franklin, Barbara (1 December 1981). "A wearable tactile sensory aid for profoundly deaf children". Journal of Medical Systems. 5 (4): 265–270. doi:10.1007/BF02222144. PMID 7320662. S2CID 26620843.
  11. ^ "Spectrogram Reading". Archived from the original on 27 April 1999. Retrieved 7 April 2018.
  12. ^ "Praat: doing Phonetics by Computer". Retrieved 7 April 2018.
  13. ^ "The Aphex Face – bastwood". Retrieved 7 April 2018.
  14. ^ "SRC Comparisons". Retrieved 7 April 2018.
  15. ^ "constantwave Resources and Information". Retrieved 7 April 2018.
  16. ^ "Spectrograms for vector network analyzers". Archived from the original on 2012-08-10.
  17. ^ "Real-time Spectrogram Displays". Retrieved 7 April 2018.
  18. ^ "IRIS: MUSTANG: Noise-Spectrogram: Docs: v. 1: Help".
  19. ^ Geitgey, Adam (2016-12-24). "Machine Learning is Fun Part 6: How to do Speech Recognition with Deep Learning". Medium. Retrieved 2018-03-21.