Factoring Speech into Linguistic Features – Karen Livescu (Massachusetts Institute of Technology)
Spoken language technologies, such as automatic speech recognition and synthesis, typically treat speech as a string of “phones”. In contrast, humans produce speech through a complex combination of semi-independent articulatory trajectories. Recent theories of phonology acknowledge this, and treat speech as a combination of multiple streams of linguistic “features”. In this talk I will present ways in which the factorization of speech into features can be useful in speech recognition, in both audio and visual (lipreading) settings. The main contribution is a feature-based approach to pronunciation modeling, using dynamic Bayesian networks. In this class of models, the great variety of pronunciations seen in conversational speech is explained as the result of asynchrony among feature streams and changes in individual feature values. I will also discuss the use of linguistic features in observation modeling via feature-specific classifiers. I will describe the application of these ideas in experiments with audio and visual speech recognition, and present analyses suggesting additional potential applications in speech science and technology.
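The asynchrony idea can be illustrated with a toy sketch (this is an invented illustration, not the dynamic-Bayesian-network formulation from the talk): suppose a word specifies target sequences for two articulatory feature streams, and the streams may advance out of step with one another up to some bound. Each legal joint alignment of the streams then yields a different surface realization, so bounded asynchrony alone produces a family of pronunciation variants. The stream names and feature values below are hypothetical.

```python
def alignments(len_a, len_b, max_async=1):
    """Enumerate monotone joint paths through two feature streams.
    At each step one stream (or both) advances one position, and the
    positional difference between streams never exceeds max_async."""
    paths = []

    def step(i, j, path):
        if i == len_a - 1 and j == len_b - 1:
            paths.append(path)
            return
        for di, dj in ((1, 0), (0, 1), (1, 1)):
            ni, nj = i + di, j + dj
            if ni < len_a and nj < len_b and abs(ni - nj) <= max_async:
                step(ni, nj, path + [(ni, nj)])

    step(0, 0, [(0, 0)])
    return paths

# Hypothetical feature targets for one word: a lip stream and a tongue stream.
lips = ["closed", "open", "rounded"]
tongue = ["low", "high", "mid"]

# Each alignment yields one surface realization: a sequence of joint
# (lip, tongue) configurations. Allowing asynchrony multiplies the variants.
for path in alignments(len(lips), len(tongue), max_async=1):
    print([(lips[i], tongue[j]) for i, j in path])
```

With `max_async=0` the streams move in lockstep and only one realization survives; raising the bound grows the variant set, which is the intuition behind explaining conversational pronunciation variety through feature-stream asynchrony.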
Karen Livescu is a Luce Post-doctoral Fellow in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the EECS department at MIT. She completed her PhD in EECS at MIT in 2005 and her BA in Physics at Princeton University in 1996, with a stint in between as a visiting student in EE/CS at the Technion in Israel. In the summer of 2006 she led a team project in JHU’s summer workshop series on speech and language engineering. Her main research interests are in speech and language processing, with a slant toward combining statistical modeling techniques with knowledge from linguistics and speech science.