How May I Help You? — understanding fluently spoken language – Allen Gorin (AT&T Labs Research)

February 3, 1998 all-day

We are interested in providing automated services via natural spoken dialog systems. By natural, we mean that the machine understands and acts upon what people actually say, in contrast to what one would like them to say. Many issues arise when such systems are targeted at large populations of non-expert users. In this talk, we focus on the task of automatically routing telephone calls based on a user’s fluently spoken response to the open-ended prompt “How may I help you?” We first describe a database generated from 10,000 spoken transactions between customers and human agents. We then describe methods for automatically acquiring language models for both recognition and understanding from such data. Experimental results evaluating call classification from speech are reported for that database. These methods have been embedded and further evaluated within a spoken dialog system, with subsequent processing for information retrieval and form-filling.
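The core idea of routing a call from the caller's open-ended response can be illustrated with a toy sketch: train a simple smoothed unigram language model per call type from labeled transcriptions, then route a new utterance to the type whose model best explains it. This is only a minimal illustration of the general approach, not the talk's actual method; the training data, call-type labels, and add-one smoothing here are all assumptions made for the example.

```python
from collections import Counter, defaultdict
import math

# Hypothetical toy transcriptions standing in for real caller responses;
# the system described in the talk was trained on 10,000 human-agent transactions.
TRAIN = [
    ("collect", "i want to make a collect call please"),
    ("collect", "collect call to new jersey"),
    ("billing", "i have a question about my bill"),
    ("billing", "there is a charge on my bill i do not recognize"),
]

def train(examples):
    """Estimate unigram models P(word | call type) with add-one smoothing."""
    counts = defaultdict(Counter)
    for label, text in examples:
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    models = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(vocab)
        models[label] = {w: (c[w] + 1) / total for w in vocab}
        models[label]["<unk>"] = 1 / total  # mass for unseen words
    return models

def route(models, utterance):
    """Pick the call type whose language model best explains the utterance."""
    def score(label):
        m = models[label]
        return sum(math.log(m.get(w, m["<unk>"])) for w in utterance.split())
    return max(models, key=score)

models = train(TRAIN)
print(route(models, "i need to make a collect call"))  # -> collect
```

In a deployed system the utterance would come from a speech recognizer rather than a clean transcript, and the classifier would rely on automatically acquired salient phrases rather than raw unigrams, but the routing decision has the same shape: score the utterance under each call type and take the best match.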
Allen Gorin received the B.S. and M.A. degrees in Mathematics from SUNY at Stony Brook in 1975 and 1976 respectively, then the Ph.D. in Mathematics from the CUNY Graduate Center in 1980. From 1980 to 1983 he worked at Lockheed investigating algorithms for target recognition from time-varying imagery. In 1983 he joined AT&T Bell Labs in Whippany, where he was the Principal Investigator for AT&T’s ASPEN project within the DARPA Strategic Computing Program, investigating parallel architectures and algorithms for pattern recognition. In 1987, he was appointed a Distinguished Member of the Technical Staff. In 1988, he joined the Speech Research Department at Bell Labs in Murray Hill, and is now at AT&T Labs Research in Florham Park. His long-term research interest focuses on machine learning methods for spoken language understanding. He has served as a guest editor for the IEEE Transactions on Speech and Audio, and was a visiting researcher at the ATR Interpreting Telecommunications Research Laboratory in Japan during 1994. He is a member of the Acoustical Society of America and a Senior Member of the IEEE.

Center for Language and Speech Processing