Temporal primitives in auditory cognition and speech perception – David Poeppel (University of Maryland)

December 2, 2008

Generating usable internal representations of speech input requires, among other operations, fractionating the signal into temporal units (chunks) of the appropriate granularity. Adopting (and adapting) Marr’s (1982) approach to vision, I outline a perspective that formulates linking hypotheses between specific neurobiological mechanisms (for example, cortical oscillations and phase-locking) and the representations that underlie auditory cognition (for example, syllables). Focusing on the implementational and algorithmic levels of description, I argue that the perception of sound patterns requires a multi-time resolution analysis. In particular, recent experimental data from psychophysics, MEG (Luo & Poeppel 2007), and concurrent EEG/fMRI (Giraud et al. 2007) suggest that there exist two privileged time scales that form the basis for constructing elementary auditory percepts. These ‘temporal primitives’ permit the construction of the internal representations that mediate the analysis of speech and other acoustic signals.
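The idea of chunking a signal at two privileged granularities can be illustrated with a minimal sketch. The code below is not the analysis used in the cited studies; it simply computes RMS envelopes of a synthetic amplitude-modulated signal over non-overlapping windows at two illustrative scales (~25 ms, roughly segmental; ~200 ms, roughly syllabic), using NumPy. All parameter values here are assumptions for demonstration.

```python
import numpy as np

def windowed_rms(signal, fs, window_s):
    """RMS envelope over non-overlapping windows of window_s seconds."""
    n = max(1, int(round(window_s * fs)))
    usable = (len(signal) // n) * n          # drop the ragged tail
    frames = signal[:usable].reshape(-1, n)  # one row per window
    return np.sqrt((frames ** 2).mean(axis=1))

# Synthetic "speech-like" signal: a fast carrier modulated at a slow,
# roughly syllabic rate (values chosen for illustration only).
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)             # fast fine structure
envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # ~4 Hz modulation
x = carrier * envelope

# Two analysis granularities, echoing the two hypothesized time scales:
fine = windowed_rms(x, fs, 0.025)    # ~25 ms windows: 40 chunks per second
coarse = windowed_rms(x, fs, 0.200)  # ~200 ms windows: 5 chunks per second
```

The short-window envelope tracks the slow amplitude modulation in detail, while the long-window envelope smooths over it, giving one coarse value per syllable-sized chunk: the same input yields different representations depending on the temporal granularity of the analysis.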
David Poeppel is Professor in the Department of Biology, the Department of Linguistics, and the Neuroscience and Cognitive Science Program at the University of Maryland College Park, and in the Department of Psychology at New York University. Trained in neurophysiology, cognitive science, and cognitive neuroscience at MIT and UCSF, he directs a lab focused on the cognitive neuroscience of hearing, speech, and language. Although the lab uses a range of techniques, one principal methodology is magnetoencephalography (MEG).

Center for Language and Speech Processing