Integrating behavioral, neuroscience, and computational methods in the study of human speech – Lynne Bernstein (Department of Communication Neuroscience, House Ear Institute)
Speech perception transforms speech stimuli into neural representations that are then mapped onto word-form representations in the mental lexicon. This process is conventionally thought to involve encoding stimulus information as abstract linguistic categories, such as features or phonemes. We have been using a variety of methods to study auditory and visual speech perception and spoken word recognition. Across behavioral, brain imaging, electrophysiological, and computational methods, we are obtaining evidence for modality-specific speech processing. For example: based on computational modeling and behavioral testing, it appears that modality-specific representations contact the mental lexicon during spoken word recognition; based on fMRI results, there is a visual phonetic processing route in human cortex that is distinct from the auditory phonetic processing route; and direct correlations between optical phonetic similarity measures and visual speech perception are high, approximately .80. These findings imply that speech perception operates on modality-specific representations rather than being mediated by abstract, amodal ones, and that spoken language processing is far more widely distributed in the brain than heretofore thought.
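The reported correlation of approximately .80 is a standard Pearson product-moment correlation between stimulus-level similarity scores and perceptual data. The sketch below shows how such a coefficient is computed; the similarity and confusion values are invented for illustration and are not the speaker's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pairs: an optical phonetic similarity score for a stimulus
# pair, and the rate at which perceivers confused that pair visually.
similarity = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
confusion  = [0.05, 0.20, 0.30, 0.60, 0.65, 0.90]
r = pearson_r(similarity, confusion)
```

A high positive `r` (near 1) would indicate, as in the findings above, that the more optically similar two speech stimuli are, the more often they are perceptually confused.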
Dr. Bernstein received her Ph.D. in Psycholinguistics from the University of Michigan. She currently holds academic appointments at UCLA, the California Institute of Technology, and California State University.