Neural Dynamics of Speech Perception and Word Recognition – Stephen Grossberg (Department of Cognitive and Neural Systems, Boston University)
What is the neural representation of a speech code as it evolves in time? How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? How does the brain extract invariant properties of variable-rate speech? This talk will describe an emerging neural model that suggests answers to these questions, while quantitatively simulating challenging data about speech and word recognition.

In this model, rate-dependent category boundaries emerge from feedback interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech and word recognition code is suggested to be a resonant wave. Such a wave emerges when sequential activation and storage of phonemic items in working memory provides bottom-up input to unitized representations, or list chunks, that group together sequences of items of variable length. The list chunks compete with each other as they dynamically integrate this bottom-up information. The winning groupings feed back to provide top-down support to their phonemic items. These top-down expectations amplify and focus attention on consistent working memory items, while suppressing inconsistent working memory items. Feedback establishes a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept.

Because the resonance evolves more slowly than working memory activation, it can be influenced by information presented after relatively long intervening silence intervals. Variations in the durations of speech sounds and silent pauses can thereby produce different perceived groupings of words, and future sounds can influence how we hear past sounds.
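The item-chunk resonance loop described above can be illustrated with a minimal toy simulation. Everything here is an assumption for illustration: the item and chunk names, the normalized templates, and the simple leaky-integrator and softmax-competition updates are stand-ins, not the model's actual shunting equations.

```python
import numpy as np

# Hypothetical list chunks: binary templates over four phonemic item slots.
# These names and templates are illustrative assumptions, not model data.
chunks = {
    "AB":  np.array([1, 1, 0, 0], dtype=float),
    "ABC": np.array([1, 1, 1, 0], dtype=float),
    "CD":  np.array([0, 0, 1, 1], dtype=float),
}
chunk_names = list(chunks)
templates = np.stack([chunks[c] for c in chunk_names])

# Bottom-up working-memory activations after items A, B, C arrive.
items = np.array([0.6, 0.8, 0.5, 0.0])

# Length-normalized bottom-up weights, so long chunks are not favored
# merely by having more active slots.
W = templates / templates.sum(axis=1, keepdims=True)

chunk_act = np.zeros(len(chunk_names))
for _ in range(50):                      # slow resonance loop
    drive = W @ items                    # bottom-up input to each chunk
    chunk_act += 0.2 * (drive - chunk_act)
    comp = np.exp(4.0 * chunk_act)       # chunks compete via normalization
    comp /= comp.sum()
    expect = comp @ templates            # top-down expectation of winners
    # Items matched by the expectation are amplified; mismatched items
    # are suppressed, closing the resonant feedback loop.
    items += 0.1 * items * (expect - 0.5)
    items = np.clip(items, 0.0, 1.0)

winner = chunk_names[int(np.argmax(chunk_act))]
```

Because the chunk activations evolve more slowly than the working-memory items (the 0.2 integration rate), the grouping that wins depends on the whole stored sequence rather than on any single item, which is the qualitative point of the resonance account.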
Preprocessing of acoustic signals into parallel auditory streams that respond preferentially to transient and sustained properties of the acoustic signal, before storage in parallel working memories, together with cross-stream automatic gain control, can help to explain how an invariant speech representation can emerge from variable-rate speech.

References:

Boardman, I., Grossberg, S., Myers, C., and Cohen, M. (1998). Neural dynamics of perceptual order and context effects for variable-rate speech syllables. Perception & Psychophysics, in press.

Grossberg, S., Boardman, I., and Cohen, M. (1997). Neural dynamics of variable-rate speech categorization. Journal of Experimental Psychology: Human Perception and Performance, 23, 481-503.

Grossberg, S. and Myers, C. (1999). The resonant dynamics of speech perception: Interword integration and duration-dependent backward effects. Psychological Review, in press.
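The transient/sustained preprocessing idea can also be sketched in toy form. The filter choices below (a leaky integrator for the sustained stream, a rectified residual for the transient stream, and shared divisive normalization as the cross-stream gain control) are illustrative assumptions, not the model's actual circuit.

```python
import numpy as np

def streams(envelope, dt=1.0):
    """Split an amplitude envelope into toy sustained and transient
    streams, then apply cross-stream automatic gain control."""
    sustained = np.zeros_like(envelope)
    transient = np.zeros_like(envelope)
    s = 0.0
    for t, x in enumerate(envelope):
        s += (dt / 5.0) * (x - s)        # leaky integrator: sustained response
        sustained[t] = s
        transient[t] = max(x - s, 0.0)   # rectified change: transient response
    # Cross-stream gain control: each stream is divided by the summed
    # activity of both, so overall level changes affect the two streams
    # together and their relative balance is preserved.
    total = 1e-6 + sustained + transient
    return sustained / total, transient / total
```

Run on a step-shaped envelope, the transient stream dominates at the onset and the sustained stream dominates at steady state; the gain-controlled ratio of the two, rather than either raw level, is the kind of quantity that can stay more stable across speech-rate changes.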