Exploiting Both Local and Global Constraints for Multi-Span Statistical Language Modeling – Jerome R. Bellegarda (Apple Technology Group, Apple Computer, Inc.)
Abstract
A new framework is proposed to integrate the various constraints, both local and global, that are present in the language. Local constraints are captured via n-gram language modeling, while global constraints are taken into account through the use of latent semantic analysis. An integrative formulation is derived for the combination of these two paradigms, resulting in several families of multi-span language models for large vocabulary speech recognition. Because of the inherent complementarity in the two types of constraints, the performance of the integrated language models, as measured by perplexity, compares favorably with the corresponding n-gram performance.
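The combination described above can be illustrated with a minimal sketch: a smoothed bigram model supplies the local constraints, and a word-by-document co-occurrence representation stands in for the global semantic component (the paper uses an SVD-reduced latent semantic space and derives an integrative formulation; here, raw count vectors and simple linear interpolation are assumptions made purely for illustration, as is the toy corpus).

```python
from collections import Counter
import math

# Toy corpus of three "documents" (assumed data, for illustration only).
docs = [
    "the bank approved the loan".split(),
    "the loan rate at the bank rose".split(),
    "the river bank flooded after rain".split(),
]

# --- Local constraints: add-one smoothed bigram model ---
unigrams = Counter(w for d in docs for w in d)
bigrams = Counter((a, b) for d in docs for a, b in zip(d, d[1:]))
vocab = sorted(unigrams)

def p_bigram(w, prev):
    # P(w | prev) with add-one smoothing over the vocabulary.
    return (bigrams[(prev, w)] + 1) / (unigrams[prev] + len(vocab))

# --- Global constraints: word-by-document count vectors ---
# (A stand-in for the SVD-reduced LSA space; no dimensionality reduction here.)
wordvec = {w: [d.count(w) for d in docs] for w in vocab}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def p_semantic(w, history):
    # Pseudo-probability from similarity of w to the history's centroid vector.
    hist_vec = [sum(wordvec[h][i] for h in history) for i in range(len(docs))]
    sims = {v: max(cosine(wordvec[v], hist_vec), 0.0) for v in vocab}
    total = sum(sims.values()) or 1.0
    return sims[w] / total

def p_multispan(w, prev, history, lam=0.5):
    # Linear interpolation of the local (bigram) and global (semantic) scores.
    return lam * p_bigram(w, prev) + (1 - lam) * p_semantic(w, history)
```

Because the semantic component conditions on the entire preceding history rather than a fixed window, a long-range topical cue (e.g. earlier mentions of "loan" and "rate") can raise the score of a semantically related word even when the immediate bigram context is uninformative; this is the complementarity the abstract refers to.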
Biography
Jerome R. Bellegarda received the Diplôme d'Ingénieur degree (summa cum laude) from the École Nationale Supérieure d'Électricité et de Mécanique, Nancy, France, in 1984, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Rochester, Rochester, NY, in 1984 and 1987, respectively. In 1987 he was a Research Associate in the Department of Electrical Engineering at the University of Rochester, developing multiple access coding techniques. From 1988 to 1994 he was a Research Staff Member at the IBM T.J. Watson Research Center, Yorktown Heights, NY, working on various improvements to the modeling component of the IBM speech recognition system, and developing advanced feature extraction and recognition modeling algorithms for cursive on-line handwriting. In 1994 he joined Apple Computer, Cupertino, CA, where he is currently Principal Scientist in the Spoken Language Research Group. At Apple he has worked on speaker adaptation, Asian dictation, statistical language modeling, and advanced dialog interactions. His research interests include voice-driven man-machine communications, multiple input/output modalities, and multimedia knowledge management.