Kenneth Noble Stevens
Born: March 24, 1924
Died: August 19, 2013 (aged 89)
Alma mater: MIT, University of Toronto
Awards: National Medal of Science (1999)
Scientific career
Fields: Electrical engineering, acoustic phonetics
Doctoral advisor: Leo Beranek
Other academic advisors: J. C. R. Licklider, Walter A. Rosenblith
Doctoral students: James L. Flanagan, Carol Espy-Wilson, Lawrence R. Rabiner, Victor Zue, Abeer Alwan

Kenneth Noble Stevens (March 24, 1924[1] – August 19, 2013) was the Clarence J. LeBel Professor of Electrical Engineering and Computer Science, and professor of health sciences and technology, at MIT. Stevens headed the Speech Communication Group[2] in MIT's Research Laboratory of Electronics (RLE), and was one of the world's leading scientists in acoustic phonetics.

He was awarded the National Medal of Science by President Bill Clinton in 1999, and the IEEE James L. Flanagan Speech and Audio Processing Award in 2004.

He died in 2013 from complications of Alzheimer's disease.[3]


Early education

Ken Stevens was born in Toronto on March 23, 1924.[4] His older brother, Pete, was born in England; Ken was born four years later, shortly after the family immigrated to Canada. His childhood ambition was to become a doctor, because he admired an uncle who was one.[5] He attended a high school attached to the department of education at the University of Toronto.

Stevens attended college in the school of engineering at the University of Toronto on a full scholarship. He lived at home throughout his undergraduate years. Though Stevens himself could not fight in World War II because of his visual impairment, his brother was away for the entire war; his parents tuned in nightly to the BBC for updates.[5] Stevens majored in engineering physics at the university,[6] covering topics from the design of motorized machines through to basic physics, which was taught by the physics department. During summers he worked in the defense industry, including one summer at a company that was developing radar. He received both his S.B. and S.M. degrees in 1945.[7]

Stevens had been a teacher since his undergraduate years, when he lectured sections of home economics that involved some aspect of physics.[5] After receiving his master's degree, he stayed at the University of Toronto as an instructor, teaching courses to young men returning from the war, including his own older brother.[5] He was a fellow of the Ontario Foundation from 1945 to 1946, then worked as an instructor at the University of Toronto until 1948.[7]

During his master's research Stevens became interested in control theory and took courses from the applied mathematics department, where one of his professors recommended that he apply to MIT for doctoral studies.

Doctoral studies

Shortly after Stevens was admitted to MIT, a new professor named Leo Beranek noticed that Stevens had taken acoustics. Beranek contacted Stevens in Toronto, to ask if he would be a teaching assistant for Beranek's new acoustics course, and Stevens agreed. Shortly after that, Beranek contacted Stevens again to offer him a research position on a new speech project, which Stevens also accepted. The Radiation Laboratory at MIT (building 20) was converted, after the war, into the Research Laboratory of Electronics (RLE); among other labs, RLE hosted Beranek's new Acoustics Lab.

In November 1949,[8] the office next to Stevens' was given to a visiting doctoral student from Sweden named Gunnar Fant, with whom he formed a friendship and collaboration that would last more than half a century. Stevens focused on the study of vowels during his doctoral research; in 1950 he published a short paper arguing that autocorrelation analysis could be used to discriminate vowels,[9] while his 1952 doctoral thesis reported perceptual results for vowels synthesized using a set of electronic resonators.[10] Fant convinced Stevens that a transmission-line model of the vocal tract was more flexible than a resonator model, and the two published this work together in 1953.[11]

Stevens credited Fant with initiating the association between the linguistics department and the Research Laboratory of Electronics at MIT.[5] Roman Jakobson, a phonologist at Harvard, had an office at MIT by 1957, while Morris Halle joined the MIT linguistics department and moved to RLE in 1951. Stevens' collaborations with Halle began with acoustics,[12] but grew to focus on the way in which acoustics and articulation organize the sound systems of language.[13][14][15]

Stevens defended his doctoral thesis in 1952; his doctoral committee included his adviser Leo Beranek, as well as J. C. R. Licklider and Walter A. Rosenblith.[5] After receiving his doctorate, Stevens went to work at Bolt, Beranek and Newman (now BBN Technologies) in Harvard Square.[5] In the early 1950s, Beranek decided to retire from the MIT faculty in order to work full-time at BBN. He knew that Stevens loved to teach, so he encouraged Stevens to apply for a position on the MIT faculty. Stevens did so, and joined the faculty in 1954.

Research, teaching and service

Scientific contributions

Stevens is best known for his contributions to the fields of phonology, speech perception, and speech production. His best-known book, Acoustic Phonetics,[16] is organized according to the distinctive features of his phonological system.

Contributions to phonology

Stevens is perhaps best known for his proposal of a theory that answers the question: Why are the sounds of the world's languages (their phonemes or segments) so similar to one another? On first learning a foreign language, one is struck by the remarkable differences that can exist between one language's sound system and that of any other. Stevens turned the student's perception on its head: rather than asking why languages are different, he asked, if the sound system of each language is completely arbitrary, why are languages so similar? His answer is the quantal theory of speech.[17] Quantal theory is supported by a theory of language change, developed in collaboration with Samuel Jay Keyser, which postulates the existence of redundant or enhancement features.[18]

Stevens' methodology in the investigation of speech sounds is organized into three steps. The first step is to use physics (mainly tube models) to model the configuration of the articulators (e.g. the shapes of the front and back cavities, rounding or non-rounding of the lips). From these articulatory tube models, the resonant frequencies, which are the formant frequencies, can be calculated.

The second step is mainly experimental: speech data are collected and analyzed for comparison with the theoretical calculations. Tokens of interest are usually recorded either in isolation or embedded in a controlled carrier phrase, typically spoken by several female and male native speakers of the language. The key to data collection is controlling for as many factors as possible, so that the acoustic evidence of interest can be examined with a minimum of artifacts.

The last step is to compare the measured data with the theoretical predictions and to account for any differences. Differences can sometimes be explained by the simplifications of the tube models, which usually neglect losses due to the softness of the vocal-tract walls (though resistors can be added to the theoretical model to represent such losses). The subglottal system may also affect the vocal tract when the glottal opening is large (see research on the effects of subglottal resonances on speech). Theoretical predictions indicate what one can expect to find in real speech, while evidence from real speech helps refine the original model, giving better insight into the production of speech sounds.

Quantal theory aims to describe (using physics) and organize the acoustic features of all possible speech sounds into a matrix (see chapter five of Acoustic Phonetics). The ultimate constraint on all speech sounds is the physical articulatory system itself, supporting the claim that there can be only a finite set of sounds among languages. The set is finite because, while the movement of the articulators is continuous, only certain configurations tend to be articulatorily and/or acoustically stable; these give rise to stable formant frequencies, forming sounds (vowels and consonants) that are relatively universal across languages. Each speech sound can thus be described by a handful of defining features, usually binary. For example, lip rounding (either on or off) is a feature; tongue height (either high or low) is another. In addition to these defining features, which serve as the essential description of a sound, there are enhancing features that help make the sound more recognizable. For each feature, one can apply Stevens' methodology: first use a tube model of the articulators to predict the resonant frequencies, then collect data to examine the acoustic properties of that feature, and finally reconcile the data with the theoretical model and summarize the acoustic properties of the feature.
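The idea that each segment is a bundle of binary features can be sketched as a tiny feature matrix. This is an illustration only: the three vowels and three features below are standard textbook examples, not Stevens' full feature system.

```python
# Each vowel as a bundle of binary distinctive features (1 = +, 0 = -).
# Deliberately tiny feature set, for illustration only.
FEATURES = {
    "i": {"high": 1, "back": 0, "round": 0},
    "u": {"high": 1, "back": 1, "round": 1},
    "a": {"high": 0, "back": 1, "round": 0},
}

def differing_features(a, b):
    """List the features on which two segments disagree."""
    return [f for f in FEATURES[a] if FEATURES[a][f] != FEATURES[b][f]]
```

In this representation /i/ and /u/ differ only in backness and rounding, while identical segments differ in nothing, which is the sense in which a small set of binary features suffices to distinguish an inventory.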

An accessible introduction to speech science is The Speech Chain by Peter Denes and Elliot Pinson, which gives a broad overview of the production and transmission of speech and introduces spectrograms and formant frequencies, the main acoustic descriptions of sound segments.

The glottis

As the vocal folds vibrate, puffs of air pass through the glottis and are filtered by the vocal tract, producing sound. This source is modeled as a current source in a circuit model of sound production; changes in the vocal tract change the sound that is produced. The frequency of vibration of female vocal folds tends to be higher than that of males, giving female voices a higher pitch than male voices.
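The source-filter view described above can be sketched digitally: a glottal source drives a resonant filter. This is a minimal illustration, not Stevens' circuit model; the impulse-train source, the single second-order resonator, and all numeric values are simplifying assumptions.

```python
import math

def impulse_train(f0, fs, n):
    """Idealized glottal source: one unit impulse per pitch period."""
    period = int(fs / f0)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

def resonator(x, freq, bw, fs):
    """Apply a second-order digital resonator (a single formant) to x."""
    r = math.exp(-math.pi * bw / fs)          # pole radius from bandwidth
    theta = 2 * math.pi * freq / fs           # pole angle from center frequency
    b1, b2 = 2 * r * math.cos(theta), -r * r
    y, y1, y2 = [], 0.0, 0.0
    for s in x:
        out = s + b1 * y1 + b2 * y2
        y.append(out)
        y1, y2 = out, y1
    return y

fs = 8000
# Same "vocal tract" filter, different source rates: the 200 Hz source
# yields twice as many glottal pulses per second as the 100 Hz one.
low_pitch = resonator(impulse_train(100, fs, 800), 500, 80, fs)
high_pitch = resonator(impulse_train(200, fs, 800), 500, 80, fs)
```

Changing the filter (the vocal tract shape) changes the spectral envelope of the output, while changing the source rate changes the pitch, which is the separation the source-filter model expresses.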

Research (Hanson, H.M. 1997) has shown that there is a difference in how females and males vibrate their vocal folds: the glottal opening tends to be larger in females, which gives female voices a more breathy quality than male voices.

The subglottal system

The subglottal system is the part of the airway below the glottis; it includes the trachea, the bronchi, and the lungs. It is essentially fixed for a given speaker: unlike the vocal tract, its shape cannot be changed during speech. Research has shown that during the open phase of the glottal cycle (when the glottis is open), acoustic coupling to the subglottal system is introduced, manifesting as pole/zero pairs in the frequency domain. These pole/zero pairs are hypothesized to mark prohibited or unstable regions of the spectrum, serving as natural boundaries for vowel features such as [+front] or [+back].

For adult males, the resonant frequencies of the subglottal system have been measured (using invasive methods) to be about 600, 1550, and 2200 Hz (Acoustic Phonetics, p. 197; Ishizaka et al.; Cranen & Boves). The subglottal resonant frequencies of females are slightly higher because of their smaller dimensions. One non-invasive way of measuring these resonances is to place an accelerometer above the sternal notch (Henke) to record the acceleration of the skin during phonation; the recorded vibration captures the resonant frequencies of the subglottal system.
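The spectral effect of a single subglottal pole/zero pair can be illustrated with its frequency response. The pole below is placed at the 1550 Hz subglottal resonance cited above; the zero frequency and both bandwidths are illustrative assumptions, not measured values.

```python
import math

def pole_zero_mag(f, fz, bz, fp, bp):
    """|H(j2πf)| for one pole/zero pair:
    H(s) = (s^2 + 2π·bz·s + (2πfz)^2) / (s^2 + 2π·bp·s + (2πfp)^2)."""
    s = complex(0.0, 2 * math.pi * f)
    num = s * s + 2 * math.pi * bz * s + (2 * math.pi * fz) ** 2
    den = s * s + 2 * math.pi * bp * s + (2 * math.pi * fp) ** 2
    return abs(num / den)

# A dip near the zero and a peak near the pole: the pair perturbs the
# spectrum in a narrow region around the subglottal resonance, the kind
# of "unstable region" the coupling hypothesis appeals to.
dip = pole_zero_mag(1450, 1450, 100, 1550, 100)   # at the zero frequency
peak = pole_zero_mag(1550, 1450, 100, 1550, 100)  # at the pole frequency
far = pole_zero_mag(3000, 1450, 100, 1550, 100)   # well away from the pair
```

Far from the pair the response returns toward unity, so the perturbation is local, consistent with the idea of a narrow boundary region between vowel classes.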

The vocal tract

The vocal tract is the passageway from the glottis to the opening of the lips. A two-tube model is usually used to represent it: one tube captures the dimensions (cross-sectional area and length) of the back cavity, the other those of the front cavity. The resonant frequencies calculated from the tube model are the formant frequencies. To produce the schwa vowel /ə/, the vocal tract is relatively open and uniform all the way from the glottis to the mouth, so it can be modeled as a uniform open tube, whose resonant frequencies (formants) are evenly spaced. Radiation at the mouth lowers these resonant frequencies by about five percent (Acoustic Phonetics, p. 139). Female vocal tracts (averaging 14.1 cm) are shorter than male vocal tracts (averaging 17.7 cm), giving females higher formant frequencies than males.
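The evenly spaced formants of the uniform-tube (schwa) model follow the quarter-wavelength relation F_n = (2n - 1)·c / (4L) for a tube closed at the glottis and open at the lips. A minimal numerical sketch, assuming a round value for the speed of sound:

```python
C = 35400.0  # speed of sound in cm/s (assumed round value for warm, moist air)

def uniform_tube_formants(length_cm, n_formants=3, radiation_drop=0.05):
    """Formant estimates for a uniform tube closed at one end.

    radiation_drop models the roughly five percent lowering attributed
    to radiation at the mouth in the text.
    """
    raw = [(2 * n - 1) * C / (4 * length_cm) for n in range(1, n_formants + 1)]
    return [f * (1 - radiation_drop) for f in raw]

male = uniform_tube_formants(17.7, radiation_drop=0.0)    # ≈ 500, 1500, 2500 Hz
female = uniform_tube_formants(14.1, radiation_drop=0.0)  # shorter tract, higher formants
```

With the 17.7 cm average male tract the model reproduces the familiar neutral-vowel formants near 500, 1500, and 2500 Hz, and the shorter 14.1 cm tract shifts every formant upward, matching the male/female difference described above.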

Since the vocal tract walls are soft, energy is lost in the vocal tract, which increases the bandwidth of the formants.

The nasal cavity

When the velopharyngeal port opens during the production of certain sounds, such as /n/ and /m/, coupling to the nasal cavity is introduced, which gives the output a nasal quality.

Contributions to speech perception

The quantal theory suggests that the phonological inventory of a language is defined primarily by the acoustic characteristics of each segment, with boundaries specified by the acoustic-articulatory mapping. The implication is that phonological segments must have some type of acoustic invariance.[19] Blumstein and Stevens[20] demonstrated what appeared to be an invariant relationship between the acoustic spectrum and the perceived sound: by adding energy to the burst spectrum of "pa" at a particular frequency, it is possible to turn it into "ta" or "ka", depending on the frequency. The presence of the extra energy causes perception of a lingual consonant; its absence causes perception of the labial.

Stevens' later work restructured the theory of acoustic invariance into a shallow hierarchical perceptual model: the model of acoustic landmarks and distinctive features.

Contributions to speech production

While on sabbatical at KTH in Sweden in 1962, Stevens volunteered as a participant in cineradiography experiments being conducted by Sven Öhman. Stevens' cineradiographic films are among the most widely distributed; copies exist on laserdisc, and some are available online.[21]

After returning to MIT, Stevens agreed to supervise the research of a dentistry student named Joseph S. Perkell. Perkell's knowledge of oral anatomy permitted him to trace Stevens' X-ray films onto paper, and to publish the results.[22]

Other contributions to the study of speech production include a model by which one can predict the spectral shape of turbulent speech excitation (depending on the dimensions of the turbulent jet), and work related to the vocal fold configurations that lead to different modes of phonation.[23]

In fact, the spectral properties (formants, formant bandwidths, and glottal characteristics) of the phonemes of all languages can, in principle, be modeled and predicted with physics-based resonator models. Basic tube resonators give a general prediction of vowel formants. The basic model can be refined by adding resistors and/or capacitors to represent energy losses at the vocal tract walls. Acoustic coupling to the subglottal system can be modeled by adding further tubes to the vocal tract model, introducing pole/zero pairs in the spectrum that represent the effects of subglottal coupling (the locations of these pole/zero pairs are the resonant frequencies of the subglottal system). Glottal characteristics such as vocal pitch (F0), open quotient (H1-H2), and degree of breathiness (H1-A3) can also be modeled and measured from the spectrum (Hanson & Stevens).

Stevens as a mentor

Stevens joined MIT as an assistant professor in 1954.[24] He became an associate professor in 1957, a full professor in 1963, and was appointed the Clarence J. LeBel Chaired Professor in 1977.[7] One of his long-time collaborators, Dennis Klatt (who wrote DECtalk while working in Stevens' lab), said that "As a leader, Ken is known for his devotion to students and his miraculous ability to run a busy laboratory while appearing to manage by a principle of benevolent anarchy."[4]

The first doctoral thesis Stevens signed at MIT was that of his fellow student, James L. Flanagan, in 1955. Flanagan started graduate school at MIT in the same year as Stevens, but without a prior master's degree; he earned his M.S. in 1950 under Beranek's supervision, then finished his doctoral thesis under Stevens' supervision in 1955.[25]

Stevens estimated in 2001 that he had supervised approximately forty Ph.D. candidates.[5]

On the occasion of his receipt of the Gold Medal of the Acoustical Society of America, in 1995, colleagues wrote of Stevens' Speech Group that "during its existence of almost four decades" it "has been outstanding in the support that it has provided to women researchers, many of whom have gone on to populate the upper echelons of research labs throughout the world."[4] Stevens' laboratory has been referred to by colleagues as a "national treasure".[6]

Professional service

Stevens was active in the Acoustical Society of America from his time as a graduate student. He was a member of its executive council from 1963 to 1966,[26] vice president from 1971 to 1972, and president of the society from 1976 to 1977.[27] He was a Fellow of the ASA; in 1983 he received its Silver Medal in Speech Communication, and in 1995 its Gold Medal.[4]

Stevens was also active in the IEEE, where he held the rank of Life Fellow. In 2004, Stevens and Gunnar Fant were the joint first winners of the IEEE James L. Flanagan Speech and Audio Processing Award.[28]

Stevens was a Fellow of the American Academy of Arts and Sciences, a member of the National Academy of Engineering,[29] a member of the National Academy of Sciences,[30] and a 1999 recipient of the United States National Medal of Science.[6]


  1. ^ According to his naturalization papers and his own account, he was born March 23, 1924.
  2. ^ "MIT Speech Communication Group".
  3. ^ "Kenneth Stevens, professor emeritus in EECS, dies at 89". 23 August 2013.
  4. ^ a b c d "Acoustical Society of America Gold Medal Award, 1995: Kenneth N. Stevens". Archived from the original on 2007-06-27. Retrieved 2013-07-02.
  5. ^ a b c d e f g h "AIP Oral History Transcript — Dr. Kenneth Stevens". Archived from the original on 2013-08-28. Retrieved 2013-07-02.
  6. ^ a b c D. Halber, "RLE Professor Kenneth Stevens wins National Medal of Science". January 2000.
  7. ^ a b c "Sensimetrics Consulting Resume, Kenneth N. Stevens".
  8. ^ "Gunnar Fant, "Phonetics and Phonology in the Last 50 Years," Presented at "From Sound to Sense: 50+ Years of Discoveries in Speech Communication," June, 2004" (PDF).
  9. ^ K.N. Stevens (1950). "Autocorrelation analysis of speech sounds". J. Acoust. Soc. Am. 22: 769–771. Archived from the original on 2013-07-02.
  10. ^ K.N. Stevens (1952). "The perception of sounds shaped by resonant circuits". OCLC 15508683.
  11. ^ K.N. Stevens; S. Kasowski; G. Fant (1953). "An electrical analog of the vocal tract". J. Acoust. Soc. Am. 25: 734–742. Archived from the original on 2013-07-02.
  12. ^ Halle, Morris; Kenneth N. Stevens (1959). "Analysis by synthesis." Proc. Seminar on Speech Compression and Processing. Vol. 2.
  13. ^ Stevens, Kenneth N.; Morris Halle (1967). "Remarks on analysis by synthesis and distinctive features." Models for the perception of speech and visual form, pp. 88–102. M.I.T. Press. ISBN 9780262230261.
  14. ^ Halle, Morris; Kenneth N. Stevens (2002) [First printed in 1971]. "A note on laryngeal features," pp. 45–61. Mouton de Gruyter. ISBN 9783110171433.
  15. ^ Halle, Morris; Kenneth N. Stevens (1979). "Some reflections on the theoretical bases of phonetics." Frontiers of speech communication research, pp. 335–349. Academic Press. ISBN 9780124498501.
  16. ^ K.N. Stevens (2000). Acoustic Phonetics. Current Studies in Linguistics. MIT Press. ISBN 9780262194044.
  17. ^ K.N. Stevens (1968). The quantal nature of speech: evidence from articulatory-acoustic data.
  18. ^ K.N. Stevens; S.J. Keyser (1989). "Primary features and their enhancement in consonants". Language. 65 (1): 81–106. doi:10.2307/414843. JSTOR 414843.
  19. ^ S.E. Blumstein; K.N. Stevens (1979). "Acoustic invariance in speech production: Evidence from measurements of the spectral characteristics of stop consonants". J. Acoust. Soc. Am. 66 (4): 1001–1017. Archived from the original on 2013-07-02.
  20. ^ S.E. Blumstein; K.N. Stevens (1980). "Perceptual invariance and onset spectra for stop consonants in different vowel environments". J. Acoust. Soc. Am. 67 (2): 648–662. Archived from the original on 2013-07-02.
  21. ^ "Ken Stevens x-ray film on youtube". YouTube. Archived from the original on 2021-12-14.
  22. ^ Joseph S. Perkell (1969). Physiology of Speech Production: Results and Implications of a Quantitative Cineradiographic Study (Research Monograph). MIT Press. ISBN 978-0262661706.
  23. ^ Minoru Hirano; Kenneth N. Stevens, eds. (1981). Vocal Fold Physiology. University of Tokyo Press. ISBN 978-0860082811.
  24. ^ "From Sound to Sense: 50+ Years of Discoveries in Speech Communication". May 11, 2004.
  25. ^ Frederik Nebeker (8 April 1997). "IEEE Oral-History:James L. Flanagan".
  26. ^ "Past and Present Officers and Members of the Executive Council, Acoustical Society of America".
  27. ^ "Emilio Segre Visual Archives, Gallery of Member Society Presidents".
  28. ^ "IEEE Global History Network, Kenneth N. Stevens". 2 February 2016.
  29. ^ "NAE Members: Dr. Kenneth N. Stevens".
  30. ^ "Alberts Issues Challenge to New NAS Members". The Scientist. June 8, 1998.