Discriminative Learning of Generative Models – Tony Jebara (Columbia)

When:
November 9, 2004 all-day

View Seminar Video
Abstract
Generative models such as Bayesian networks, distributions, and hidden Markov models are elegant formalisms for setting up and specifying prior knowledge about a learning problem. However, the standard estimation methods they rely on, including maximum likelihood and Bayesian integration, do not focus modeling resources on a particular input-output task; they only generically describe the data. In applied settings, when models are imperfectly matched to real data, more discriminative learning, as in support vector machines, is crucial for improving performance. In this talk, I show how we can learn generative models optimally for a given task such as classification and obtain large-margin discrimination boundaries. Through maximum entropy discrimination, all exponential family models can be learned discriminatively via convex programming. Furthermore, the method handles interesting latent models such as mixtures and hidden Markov models. This is done via a variant of maximum entropy discrimination that uses variational bounding of the classification constraints to make computations tractable in the latent case. Interestingly, the method gives rise to Lagrange multipliers that behave like posteriors over the hidden variables. Preliminary experiments are shown.
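
As a rough sketch of the optimization the abstract refers to (notation assumed from the maximum entropy discrimination literature, not taken verbatim from the talk): the learner maintains a distribution P over the generative-model parameters and margin variables, kept close to a prior P_0 while satisfying large-margin classification constraints on the training pairs (X_t, y_t).

\begin{align}
\min_{P(\Theta,\gamma)} \;\; & \mathrm{KL}\!\left( P(\Theta,\gamma) \,\middle\|\, P_0(\Theta,\gamma) \right) \\
\text{s.t.} \;\; & \int P(\Theta,\gamma)\,\big[\, y_t\,\mathcal{L}(X_t;\Theta) - \gamma_t \,\big]\, d\Theta\, d\gamma \;\ge\; 0, \qquad t = 1,\dots,T,
\end{align}

where \mathcal{L}(X;\Theta) = \log p(X \mid \theta_+) - \log p(X \mid \theta_-) + b is the log-likelihood-ratio discriminant formed from the two generative models. For exponential-family p(X \mid \theta) the constraints are linear in P and the problem is convex; with latent-variable models such as mixtures and hidden Markov models, \mathcal{L} is replaced by a variational bound, and the resulting Lagrange multipliers behave like posteriors over the hidden variables, as noted in the abstract.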

Biography
Tony Jebara is an Assistant Professor of Computer Science at Columbia University. He is Director of the Columbia Machine Learning Laboratory, whose research focuses on machine learning, computer vision, and related application areas such as human-computer interaction. Jebara is also a Principal Investigator at Columbia's Vision and Graphics Center. He has published over 30 papers in the above areas, including the book Machine Learning: Discriminative and Generative (Kluwer). Jebara is the recipient of a CAREER award from the National Science Foundation and has also received honors for his papers from the International Conference on Machine Learning and from the Pattern Recognition Society. He has served as chair or program committee member for various conferences, including ICDL, ICML, COLT, UAI, and IJCAI, and on the editorial board of the Machine Learning Journal. Jebara's research has been featured on television (ABC, BBC, New York One, TechTV, etc.) as well as in the popular press (Wired Online, Scientific American, Newsweek, Science Photo Library, etc.). Jebara obtained his Bachelor's degree from McGill University (at the McGill Center for Intelligent Machines) in 1996. He obtained his Master's in 1998 and his PhD in 2002, both from the Massachusetts Institute of Technology (at the MIT Media Laboratory). He is currently a member of the IEEE, ACM, and AAAI. Professor Jebara's research and laboratory are supported in part by Microsoft, Alpha Star Corporation, and the National Science Foundation.

Center for Language and Speech Processing