Decades of Discovery - 1970s

Haskins Laboratories completes the move to New Haven, CT, begun in 1969, and enters into affiliation agreements with Yale University and the University of Connecticut.

Recognizing the Laboratories’ unique facilities for analysis and synthesis of speech, the National Institutes of Health defray the costs of sharing the facilities with investigators from other institutions; this support continues for nearly twenty years.

Katherine Harris, working with Fredericka Bell-Berti, Gloria Borden, and others, demonstrates electromyographically how the precise phasing and layering of articulatory actions give rise to segmental overlap, and thus to the acoustic phenomena of coarticulation.

Isabelle Liberman, Donald Shankweiler, and Alvin Liberman team up with Ignatius Mattingly to study the relation between speech perception and reading, a topic implicit in the Laboratories’ research program since the 1940s. They develop the concept of “phonemic awareness,” the knowledge that would-be readers must have of the phonemic structure of their language if they are to learn to read. Under the broad rubric of the “Alphabetic Principle,” this concept is the core of the Laboratories’ program of reading pedagogy today.

Patrick Nye joins the Laboratories to lead a team including Franklin Cooper, Jane Gaitenby, George Sholes, and Gary Kuhn in work on the reading machine. The project culminates when the addition of an optical typescript reader enables investigators to assemble the first automatic text-to-speech reading machine. By the end of the decade the technology has advanced to the point where commercial concerns assume the task of designing and manufacturing reading machines for the blind.

Working with Bruno Repp, Virginia Mann, Joanne Miller, Douglas Whalen, and others over the next decade or so, Alvin Liberman conducts a series of innovative experiments to clarify and deepen the concept of a speech mode of perception. These experiments move away from the cue as a static property of the acoustic signal toward the cue as a dynamic index of articulatory action.

Experiments by Peter Bailey, James Cutting, Michael Dorman, Quentin Summerfield, and others cast doubt on the validity of the “acoustic cue” as a unit of perceptual function. Building on these experiments, Philip Rubin develops the sinewave synthesis program used by Robert Remez, Rubin, David Pisoni, and colleagues. These researchers show that listeners can perceive continuous speech, without traditional speech cues, from a pattern of three sinewaves that track the changing resonances of the vocal tract. Their work paves the way for a view of speech as a dynamic pattern of trajectories through articulatory-acoustic space.
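To make the idea concrete, the following is a minimal sketch in Python (with NumPy) of the sinewave technique: three sinusoids whose frequencies follow the first three resonance (formant) tracks of an utterance, with none of the harmonics, noise, or traditional acoustic cues of natural speech. It is not Rubin's original synthesis program, and the formant and amplitude values are invented for illustration.

    # Sketch of sinewave synthesis: replace speech with three sinusoids
    # whose frequencies follow the vocal tract's changing resonances.
    # All numeric values below are hypothetical, for illustration only.
    import numpy as np

    def sinewave_speech(formant_tracks, amp_tracks, sr=11025):
        """Sum sinusoids whose instantaneous frequencies follow the tracks."""
        n = formant_tracks.shape[1]
        out = np.zeros(n)
        dt = 1.0 / sr
        for freqs, amps in zip(formant_tracks, amp_tracks):
            # Integrate frequency over time to get phase, so the
            # frequency of each sinusoid can vary smoothly.
            phase = 2 * np.pi * np.cumsum(freqs) * dt
            out += amps * np.sin(phase)
        return out / np.max(np.abs(out))  # normalize to [-1, 1]

    # Hypothetical formant tracks for a 0.5-second utterance: F1, F2, F3
    # in Hz, interpolated between invented start and end values.
    sr = 11025
    n = int(0.5 * sr)
    f_start, f_end = [300, 1200, 2500], [700, 1700, 2600]
    tracks = np.array([np.linspace(a, b, n) for a, b in zip(f_start, f_end)])
    amps = np.ones_like(tracks) * np.array([[1.0], [0.7], [0.4]])
    signal = sinewave_speech(tracks, amps, sr)

In the actual experiments, the tracks were measured from natural utterances rather than interpolated, which is what allows listeners to recover continuous speech from so spectrally sparse a signal.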

Rubin, Thomas Baer, Paul Mermelstein, and colleagues develop Mermelstein’s anatomically simplified vocal tract model into the first articulatory synthesizer that can be controlled in a physically meaningful way and used for interactive experiments.
