The Neural Control of Speech – Frank Guenther (Boston University)
Speech production involves coordinated processing in many regions of the brain. To better understand these processes, our laboratory has designed, tested, and refined a neural network model whose components correspond to brain regions involved in speech. Babbling and imitation phases are used to train neural mappings between phonological, articulatory, auditory, and somatosensory representations. After learning, the model can produce syllables and words it has learned by commanding movements of an articulatory synthesizer. Because the model’s components correspond to neurons and are given precise anatomical locations, activity in the model’s cells can be compared to neuroimaging data. Computer simulations of the model account for a wide range of experimental findings, including data on acquisition of speaking skills, articulatory kinematics, and brain activity during speech. “Impaired” versions of the model are being used to investigate several communication disorders, and the model has been used to guide development of a neural prosthesis aimed at restoring speech output to profoundly paralyzed individuals.
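The babbling-and-imitation training scheme described above can be illustrated with a deliberately simplified sketch. This is not the laboratory's model: the `synthesize` function below is a hypothetical linear stand-in for a real articulatory synthesizer, and the learned forward mapping is plain least squares rather than a neural network. The sketch only shows the two-phase idea: random motor babbling yields paired motor and auditory data from which a forward model is learned, and imitation then inverts that learned model to find articulator commands for a target sound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an articulatory synthesizer: maps three
# articulator parameters to two formant-like acoustic features.
# A real synthesizer is nonlinear; this linear map is purely illustrative.
W_true = np.array([[1.0, -0.5, 0.2],
                   [0.3,  0.8, -0.4]])

def synthesize(articulators):
    """Toy 'vocal tract': articulator settings -> acoustic features."""
    return articulators @ W_true.T

# Babbling phase: issue random articulator commands and observe the
# auditory consequences, collecting paired training data.
motor_cmds = rng.uniform(-1, 1, size=(500, 3))
auditory_obs = synthesize(motor_cmds)

# Learn a forward model (motor -> auditory) from the babbled pairs.
W_fwd, *_ = np.linalg.lstsq(motor_cmds, auditory_obs, rcond=None)

# Imitation phase: given a target sound, invert the learned forward
# model (pseudo-inverse) to recover articulator commands, then speak.
target = np.array([0.4, -0.2])
cmd = np.linalg.pinv(W_fwd.T) @ target
produced = synthesize(cmd)

print("forward-model error:", np.abs(motor_cmds @ W_fwd - auditory_obs).max())
print("imitation error:", np.abs(produced - target).max())
```

Because the toy synthesizer is linear, both errors collapse to near zero; in the model described above, the analogous mappings are learned by neural networks and refined through auditory and somatosensory feedback.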
Frank Guenther, Professor of Cognitive and Neural Systems at Boston University, is a computational and cognitive neuroscientist specializing in speech and motor control. He received an MS in Electrical Engineering from Princeton University in 1987 and a PhD in Cognitive and Neural Systems from Boston University in 1993. He is also a faculty member in the Harvard University/MIT Speech and Hearing Bioscience and Technology Program and a research affiliate at Massachusetts General Hospital. His research combines theoretical modeling with behavioral and neuroimaging experiments to characterize the neural computations underlying speech and language. He is also involved in developing speech prostheses that use brain-computer interfaces to restore synthetic speech to paralyzed individuals.