Richard Socher (MetaMind) “Multimodal Question Answering for Language and Vision”

When:
February 16, 2016 @ 12:00 pm – 1:15 pm
Where:
Hackerman Hall B17
3400 N Charles St
Baltimore, MD 21218
USA

Abstract

Deep learning has enabled tremendous breakthroughs in visual understanding and speech recognition. Ostensibly, this is not the case in natural language processing (NLP) and higher-level reasoning.
However, it only appears that way because there are so many different tasks in NLP, and no single one of them, by itself, captures the complexity of language understanding. In this talk, I introduce dynamic memory networks, our attempt to solve a large variety of NLP and vision problems through the lens of question answering.
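The unifying idea of the abstract, treating many NLP tasks as question answering, can be sketched as casting each task into an (input, question) → answer triple. The toy tasks and the lookup "model" below are my own illustrative examples, not material from the talk:

```python
# Illustrative sketch: casting different NLP tasks as (input, question) -> answer
# triples, the common interface behind QA-based models such as dynamic memory
# networks. The example data below is invented for illustration.

tasks = [
    # (input text, question, answer)
    ("Mary moved to the hallway. John went to the garden.",
     "Where is Mary?", "hallway"),                       # factoid QA
    ("The movie was breathtaking from start to finish.",
     "What is the sentiment?", "positive"),              # sentiment analysis
    ("Jane bought a car.",
     "What is the part of speech of 'bought'?", "verb"), # POS tagging
]

def answer(context: str, question: str) -> str:
    """Stand-in for a trained model; here we simply look up the gold answer."""
    for ctx, q, a in tasks:
        if ctx == context and q == question:
            return a
    return "unknown"

for ctx, q, a in tasks:
    assert answer(ctx, q) == a
```

A single model trained on such triples can, in principle, handle all of these tasks through one interface, which is the appeal of the QA framing.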

Biography

Richard Socher is the CEO and founder of MetaMind, a startup that seeks to improve artificial intelligence and make it widely accessible. He obtained his PhD from Stanford, where he worked on deep learning with Chris Manning and Andrew Ng, and won the Stanford CS best PhD thesis award. He is interested in developing new AI models that perform well across multiple different tasks in natural language processing and computer vision.

He was awarded the Distinguished Application Paper Award at the International Conference on Machine Learning (ICML) 2011, the 2011 Yahoo! Key Scientific Challenges Award, a Microsoft Research PhD Fellowship in 2012, a 2013 “Magic Grant” from the Brown Institute for Media Innovation, and the 2014 GigaOM Structure Award.

Johns Hopkins University, Whiting School of Engineering

Center for Language and Speech Processing
Hackerman 226
3400 North Charles Street, Baltimore, MD 21218-2608
