Decades of Discovery - 1970s
Haskins Laboratories completes the move to New Haven, CT, begun in 1969, and enters into affiliation agreements with Yale University and the University of Connecticut.
Recognizing the Laboratories’ unique facilities for analysis and synthesis of speech, the National Institutes of Health defray the costs of sharing the facilities with investigators from other institutions, support that continues for nearly twenty years.
Katherine Harris, working with Fredericka Bell-Berti, Gloria Borden, and others, demonstrates electromyographically how the precise phasing and layering of articulatory actions give rise to segmental overlap, and thus to the acoustic phenomena of coarticulation.
Isabelle Liberman, Donald Shankweiler, and Alvin Liberman team up with Ignatius Mattingly to study the relation between speech perception and reading, a topic implicit in the Laboratories’ research program since the 1940s. They develop the concept of “phonemic awareness,” the knowledge that would-be readers must have of the phonemic structure of their language if they are to learn to read. Under the broad rubric of the “Alphabetic Principle,” this concept is the core of the Laboratories’ program of reading pedagogy today.
Patrick Nye joins the Laboratories to lead a team including Franklin Cooper, Jane Gaitenby, George Sholes, and Gary Kuhn in work on the reading machine. The project culminates when the addition of an optical typescript reader enables investigators to assemble the first automatic text-to-speech reading machine. By the end of the decade the technology has advanced to the point where commercial concerns assume the task of designing and manufacturing reading machines for the blind.
Working with Bruno Repp, Virginia Mann, Joanne Miller, Douglas Whalen, and others over the next decade or so, Alvin Liberman conducts a series of innovative experiments to clarify and deepen the concept of a speech mode of perception. These experiments move away from the cue as a static property of the acoustic signal toward the cue as a dynamic index of articulatory action.
Experiments by Peter Bailey, James Cutting, Michael Dorman, Quentin Summerfield, and others cast doubt on the validity of the “acoustic cue” as a unit of perceptual function. Building on these experiments, Philip Rubin develops the sinewave synthesis program used by Robert Remez, Rubin, David Pisoni, and colleagues. These researchers show that listeners can perceive continuous speech, without traditional speech cues, from a pattern of three sinewaves that track the changing resonances of the vocal tract. Their work paves the way for a view of speech as a dynamic pattern of trajectories through articulatory-acoustic space.
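The core idea of sinewave synthesis, three sinusoids whose frequencies and amplitudes follow the first three vocal-tract resonances over time, can be illustrated with a short sketch. This is not the Haskins/Rubin program itself, just a minimal illustration; the input format (frame-rate frequency and amplitude tracks, assumed here to come from some prior formant analysis) and the function name are hypothetical.

```python
import numpy as np

def sinewave_synthesis(formant_tracks, amp_tracks, frame_rate, sr=16000):
    """Render three time-varying sinusoids following formant tracks.

    formant_tracks: (n_frames, 3) array of frequencies in Hz
    amp_tracks:     (n_frames, 3) array of linear amplitudes
    frame_rate:     frames per second of the analysis tracks
    (Hypothetical input format; a real system would derive the
    tracks from formant analysis of recorded speech.)
    """
    n_frames = formant_tracks.shape[0]
    n_samples = int(n_frames * sr / frame_rate)
    t_frames = np.arange(n_frames) / frame_rate   # frame timestamps
    t = np.arange(n_samples) / sr                 # sample timestamps
    out = np.zeros(n_samples)
    for k in range(3):
        # Interpolate the frame-rate tracks up to audio rate
        freq = np.interp(t, t_frames, formant_tracks[:, k])
        amp = np.interp(t, t_frames, amp_tracks[:, k])
        # Integrate instantaneous frequency to obtain phase
        phase = 2 * np.pi * np.cumsum(freq) / sr
        out += amp * np.sin(phase)
    return out
```

Fed with tracks that trace the changing resonances of an utterance, the output contains none of the traditional acoustic cues (no harmonics, no broadband noise), yet listeners can often still recover the sentence, which is the phenomenon Remez, Rubin, and colleagues reported.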
Rubin, Thomas Baer, Paul Mermelstein, and colleagues develop Mermelstein’s anatomically simplified vocal tract model into the first articulatory synthesizer that can be controlled in a physically meaningful way and used for interactive experiments.