Quest for the Essence of Language – Steven Greenberg (Centre for Applied Hearing Research, Technical Univ of Denmark; Silicon Speech, Santa Venetia, CA, USA)

November 1, 2005

Spoken language is often conceptualized as a mere sequence of words and phonemes. From this traditional perspective, the listener's task is to decode the speech signal into constituent elements derived from spectral decomposition of the acoustic signal. This presentation outlines a multi-tier theory of spoken language in which utterances are composed not only of words and phones, but also of syllables, articulatory-acoustic features and, most importantly, prosemes, which encapsulate the prosodic pattern in terms of prominence and accent. This multi-tier framework portrays pronunciation variation and the phonetic micro-structure of the utterance with far greater precision than the conventional lexico-phonetic approach, thereby offering the prospect of efficiently modeling the information-bearing elements of spoken language for automatic speech recognition and synthesis.

In the early part of his career, Steven Greenberg studied Linguistics, first at the University of Pennsylvania (A.B.) and then at the University of California, Los Angeles (Ph.D.). He also studied Neuroscience (UCLA), Psychoacoustics (Northwestern) and Auditory Physiology (Northwestern, University of Wisconsin). He was a principal researcher in the Neurophysiology Department at the University of Wisconsin-Madison for many years before migrating back to the “Golden West” in 1991 to assume directorship of a speech laboratory at the University of California, Berkeley, where he also held a tenure-level position in the Department of Linguistics. In 1995, Dr. Greenberg moved a few blocks further west to join the scientific research staff at the International Computer Science Institute (affiliated with, but independent of, UC-Berkeley). During his time at ICSI, he published many papers on the phonetic and prosodic properties of spontaneous spoken language and conducted perceptual studies on the underlying acoustic and visual basis of speech intelligibility. He also developed, with Brian Kingsbury, the Modulation Spectrogram for robust representation of speech in automatic speech recognition, as well as syllable-centric classifiers of phonetic features for speech technology applications. Since 2002, he has been President of Silicon Speech, a company based in the San Francisco Bay Area that is dedicated to developing future-generation speech technology based on principles of human brain function and information theory. Since 2004, Dr. Greenberg has also been a Visiting Professor at the Centre for Applied Hearing Research at the Technical University of Denmark, where he performs speech-perception research.

Center for Language and Speech Processing