Quest for the Essence of Language

Steven Greenberg, Centre for Applied Hearing Research, Technical Univ of Denmark; Silicon Speech, Santa Venetia, CA, USA

November 1, 2005



Spoken language is often conceptualized as mere sequences of words and phonemes. From this traditional perspective, the listener's task is to decode the speech signal into constituent elements derived from spectral decomposition of the acoustic signal. This presentation outlines a multi-tier theory of spoken language in which utterances are composed not only of words and phones, but also of syllables, articulatory-acoustic features and (most importantly) prosemes, which encapsulate the prosodic pattern in terms of prominence and accent. This multi-tier framework portrays pronunciation variation and the phonetic micro-structure of the utterance with far greater precision than the conventional lexico-phonetic approach, thereby offering the prospect of efficiently modeling the information-bearing elements of spoken language for automatic speech recognition and synthesis.


In the early part of his career, Steven Greenberg studied Linguistics, first at the University of Pennsylvania (A.B.) and then at the University of California, Los Angeles (Ph.D.). He also studied Neuroscience (UCLA), Psychoacoustics (Northwestern) and Auditory Physiology (Northwestern, University of Wisconsin). He was a principal researcher in the Neurophysiology Department at the University of Wisconsin-Madison for many years before migrating back to the "Golden West" in 1991 to assume directorship of a speech laboratory at the University of California, Berkeley, where he also held a tenure-level position in the Department of Linguistics. In 1995, Dr. Greenberg migrated a few blocks further west to join the scientific research staff at the International Computer Science Institute (affiliated with, but independent from, UC-Berkeley). During his time at ICSI, he published many papers on the phonetic and prosodic properties of spontaneous spoken language, and conducted perceptual studies on the underlying acoustic (and visual) basis of speech intelligibility. He also developed (with Brian Kingsbury) the Modulation Spectrogram for robust representation of speech in automatic speech recognition, as well as syllable-centric classifiers of phonetic features for speech technology applications. Since 2002, he has been President of Silicon Speech, a company based in the San Francisco Bay Area that is dedicated to developing future-generation speech technology based on principles of human brain function and information theory. Since 2004, Dr. Greenberg has also been a Visiting Professor at the Centre for Applied Hearing Research at the Technical University of Denmark, where he performs speech-perception-related research.