Learning Hierarchies of Features – Yann LeCun (Courant Institute of Mathematical Sciences and Center for Neural Science, NYU)

When:
November 30, 2010 all-day

View Seminar Video
Abstract
Intelligent perceptual tasks such as vision and audition require the construction of good internal representations. Theoretical and empirical evidence suggests that the perceptual world is best represented by a multi-stage hierarchy in which features in successive stages are increasingly global, invariant, and abstract. An important challenge for Machine Learning, artificial perception, and AI is to devise “deep learning” methods that can automatically learn good feature hierarchies from labeled and unlabeled data. A class of such methods that combines unsupervised sparse coding and supervised refinement will be described. A number of applications will be shown through videos and live demos, including a category-level object recognition system that can be trained online, and a trainable vision system for an off-road mobile robot. An implementation of these systems on an FPGA will be shown, based on a new programmable and reconfigurable “dataflow” architecture dubbed NeuFlow.
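To make the unsupervised-sparse-coding component of the abstract concrete, the sketch below infers a sparse code for an input vector given a fixed dictionary, using the standard Iterative Shrinkage-Thresholding Algorithm (ISTA). This is a generic illustration of sparse coding, not the specific method presented in the talk; the dictionary, sizes, and parameters here are made up for the example.

```python
import numpy as np

def ista(x, D, lam=0.1, n_iter=100):
    """Infer a sparse code z minimizing ||x - D z||^2 / 2 + lam * ||z||_1
    with the Iterative Shrinkage-Thresholding Algorithm (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth term
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)             # gradient of the reconstruction error
        z = z - grad / L                     # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return z

# Toy demonstration: recover a 2-sparse code from its noiseless mixture.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
z_true = np.zeros(32)
z_true[[3, 17]] = [1.0, -0.5]
x = D @ z_true
z = ista(x, D, lam=0.01, n_iter=500)
```

In a learned feature hierarchy, the inferred codes `z` would serve as the features for the next stage, and the dictionary `D` itself would be learned from data rather than drawn at random; a subsequent supervised phase can then refine all stages jointly.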
Biography
Yann LeCun is Silver Professor of Computer Science and Neural Science at the Courant Institute of Mathematical Sciences and at the Center for Neural Science of New York University. He received an Electrical Engineer Diploma from École Supérieure d’Ingénieurs en Électronique et Électrotechnique (ESIEE), Paris in 1983, and a PhD in Computer Science from Université Pierre-et-Marie-Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in Holmdel, NJ, in 1988. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU in 2003, after a brief period as Fellow at the NEC Research Institute in Princeton. His current interests include machine learning, computer vision, pattern recognition, mobile robotics, and computational neuroscience. He has published over 150 technical papers on these topics as well as on neural networks, handwriting recognition, image processing and compression, and VLSI design. His handwriting recognition technology is used by several banks around the world to read checks. His image compression technology, called DjVu, is used by hundreds of web sites and publishers and millions of users to distribute and access scanned documents on the Web, and his image recognition technique, called the Convolutional Network, has been deployed by companies such as Google, Microsoft, NEC, France Telecom, and several startup companies for document recognition, human-computer interaction, image indexing, and video analytics. He has been on the editorial boards of IJCV, IEEE PAMI, and IEEE Transactions on Neural Networks, was program chair of CVPR’06, and is chair of the annual Learning Workshop. He is on the science advisory board of the Institute for Pure and Applied Mathematics, and is the co-founder of MuseAmi, a music technology company.

Johns Hopkins University, Whiting School of Engineering

Center for Language and Speech Processing
Hackerman 226
3400 North Charles Street, Baltimore, MD 21218-2680