Rational Kernels — General Framework, Algorithms and Applications – Patrick Haffner (AT&T Research)

When:
January 28, 2003 all-day

Abstract
Joint work with Corinna Cortes and Mehryar Mohri. Kernel methods are widely used in statistical learning techniques because of their excellent performance and their computational efficiency in high-dimensional feature spaces. However, text or speech data cannot always be represented by the fixed-length vectors that traditional kernels handle. In this talk, we introduce a general framework, Rational Kernels, that extends kernel techniques to variable-length sequences and, more generally, to large sets of weighted alternative sequences represented by weighted automata. Far from being abstract and computationally complex objects, rational kernels can be readily implemented using general weighted automata algorithms that have been extensively used in text and speech processing and that we will briefly review. Rational kernels provide a general framework for the definition and design of similarity measures between word or phone lattices, which are particularly useful in speech mining applications. Viewed as similarity measures, they can also be used in Support Vector Machines and significantly improve spoken-dialog classification performance on difficult tasks such as the AT&T ‘How May I Help You’ (HMIHY) system. We present several examples of rational kernels to illustrate these applications. Finally, we show that many string kernels commonly considered in computational biology are specific instances of rational kernels.
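
One of the string kernels alluded to at the end of the abstract, the n-gram count kernel, makes the connection concrete: K(x, y) sums, over every n-gram z, the product of its occurrence counts in x and in y. The minimal Python sketch below (function names and example utterances are illustrative, not from the talk) computes this value directly from counts; in the rational-kernel view the same quantity is the total path weight of the composition of x, an n-gram counting transducer T, its inverse, and y, obtained with general weighted-automata algorithms such as composition and shortest-distance.

from collections import Counter

def ngram_counts(tokens, n):
    # Count all runs of n consecutive tokens (n-grams) in the sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_kernel(x, y, n=2):
    # K(x, y) = sum over n-grams z of count_x(z) * count_y(z).
    # A rational kernel obtains the same value as the total weight of the
    # transducer composition x o T o inverse(T) o y for an n-gram counting
    # transducer T; here it is computed directly from counts for clarity.
    cx, cy = ngram_counts(x, n), ngram_counts(y, n)
    return sum(cx[z] * cy[z] for z in cx.keys() & cy.keys())

# Illustrative utterances (not taken from the HMIHY corpus).
a = "how may i help you".split()
b = "how can i help you today".split()
print(ngram_kernel(a, b))  # shared bigrams "i help" and "help you" -> 2

The transducer formulation, rather than the direct count computation, is what lets the same kernel apply unchanged when the inputs are word or phone lattices (weighted automata) instead of single strings.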

Biography
Patrick Haffner graduated from École Polytechnique, Paris, France in 1987 and from École Nationale Supérieure des Télécommunications (ENST), Paris, France in 1989. He received his PhD in speech and signal processing from ENST in 1994. His research interests center on statistical learning techniques that can be used to globally optimize real-world processes with speech or image input. At France Télécom Research, he developed multi-state time-delay neural networks (MS-TDNNs) and applied them to telephone speech recognition. In 1995, he joined AT&T Laboratories, where he worked on image classification using convolutional neural networks (with Yann LeCun) and Support Vector Machines (with Vladimir Vapnik). Using information-theoretic principles, he also developed and implemented the segmenter used in the DjVu document compression system. Since 2001, he has been working on kernel methods and information-theoretic learning for spoken language understanding.

Center for Language and Speech Processing