Kernel Machines for Pattern Classification and Sequence Decoding – Gert Cauwenberghs (Johns Hopkins University)

February 11, 2003

Recently it has been shown that a simple learning paradigm, the support vector machine (SVM), outperforms elaborately tuned expert systems and neural networks in learning to recognize patterns from sparse training examples. Underlying its success are mathematical foundations of statistical learning theory. I will present a general class of kernel machines that fit the statistical learning paradigm, and that extend to class probability estimation and MAP forward sequence decoding. Sparsity in the kernel expansion (number of support vectors) relates to the shape of the loss function, and (more fundamentally) to the rank of the kernel matrix. Applications will be illustrated with examples in image classification and phoneme sequence recognition. I will also briefly present the Kerneltron, a silicon support vector “machine” for high-performance, real-time, and low-power parallel kernel computation.
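The kernel expansion the abstract refers to is the decision function f(x) = Σᵢ αᵢ yᵢ K(xᵢ, x) + b, where only the support vectors (examples with αᵢ > 0) contribute; the number of such terms is the sparsity mentioned above. A minimal sketch with a Gaussian kernel and hand-picked, purely illustrative coefficients (not from the talk):

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def decision(x, support_vectors, labels, alphas, bias, gamma=1.0):
    """Kernel expansion f(x) = sum_i alpha_i * y_i * K(x_i, x) + bias.
    Only support vectors (alpha_i > 0) appear in the sum, which is
    the sparsity the abstract relates to the loss function shape."""
    return sum(a * y * rbf_kernel(sv, x, gamma)
               for a, y, sv in zip(alphas, labels, support_vectors)) + bias

# Toy support vectors and multipliers, chosen for illustration only.
svs    = [(0.0, 0.0), (2.0, 2.0)]
ys     = [+1, -1]
alphas = [1.0, 1.0]
b      = 0.0

# A query near the first support vector is classified by sign(f(x)).
label = 1 if decision((0.2, 0.1), svs, ys, alphas, b) > 0 else -1
```

In a trained SVM the αᵢ and b come from solving the dual quadratic program; here they are fixed by hand to keep the sketch self-contained.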

Dr. Cauwenberghs’ research focuses on algorithms, architectures, and VLSI systems for signal processing and adaptive neural computation, including speech and acoustic processors, focal-plane image processors, adaptive classifiers, and low-power coding and instrumentation. He has served as chair of the Analog Signal Processing Technical Committee of the IEEE Circuits and Systems Society, and is an associate editor of the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing and the newly established IEEE Sensors Journal.

Center for Language and Speech Processing