Bowen Zhou (IBM, T.J. Watson Research Center) “Recent Advances of Deep Learning for Question Answering” @ Hackerman Hall B17
Mar 29 @ 12:00 pm – 1:15 pm


Question Answering (QA) is one of the most exciting areas in artificial intelligence today. It aims to augment human cognitive capabilities with computing systems that can generate meaningful and insightful answers to natural-language questions, ultimately helping people reason and make decisions more effectively.

In this talk, I will present a few recent results from my group that have advanced QA using a variety of new deep learning models, including improved representation learning for passage-based non-factoid QA and a novel two-way attention mechanism with neural networks. Our two-way attention mechanism is a general framework independent of the underlying representation learning, and in our studies it has been applied to both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). I will also share a large-scale non-factoid QA dataset we created that researchers can use to benchmark their models and report progress.
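The core idea of two-way attention, as described above, is that the question and the candidate answer each attend over the other before a fixed-size match is scored. A minimal NumPy sketch of one such scheme follows; the function name, the interaction matrix `U`, and the max-pooling choice are illustrative assumptions, not the exact model presented in the talk:

```python
import numpy as np

def two_way_attention(Q, A, U):
    """Hypothetical sketch of two-way attention over a question/answer pair.

    Q: (d, m) question token representations (e.g., from a CNN or RNN encoder)
    A: (d, n) answer token representations
    U: (d, d) learned interaction matrix
    Returns a fixed-size attended vector for each side.
    """
    G = np.tanh(Q.T @ U @ A)      # (m, n) soft alignment between tokens
    q_scores = G.max(axis=1)      # importance of each question token w.r.t. the answer
    a_scores = G.max(axis=0)      # importance of each answer token w.r.t. the question

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    r_q = Q @ softmax(q_scores)   # (d,) attention-weighted question vector
    r_a = A @ softmax(a_scores)   # (d,) attention-weighted answer vector
    return r_q, r_a

# Toy usage with random "encodings" (d=4, 5 question tokens, 7 answer tokens).
rng = np.random.default_rng(0)
r_q, r_a = two_way_attention(rng.normal(size=(4, 5)),
                             rng.normal(size=(4, 7)),
                             rng.normal(size=(4, 4)))
```

Because the attention operates on whatever token representations are supplied, the same pooling step works unchanged over CNN or RNN encoder outputs, which is what makes the framework encoder-independent.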

Time permitting, I will also give an overview of our work on DL-based natural language generation, and our proposed structured memory architecture for general-purpose learning frameworks such as Neural Turing Machines.


Dr. Bowen Zhou leads IBM’s research efforts, across IBM Research and Watson Group, around the creation and application of new state-of-the-art statistical learning techniques, such as deep learning, to make algorithmic advancements in the fields of question answering, natural language understanding, reading comprehension, knowledge acquisition, reasoning, and decision making. Bowen also leads the development of cloud services that expose these capabilities in the IBM cloud, enabling other researchers, developers, and offerings to integrate these state-of-the-art algorithms.

Since joining IBM Research in 2003, Bowen has held a number of positions in research and management. He has published more than 80 papers in the fields of speech recognition, machine translation, speech-to-speech translation, natural language understanding, deep learning, and question answering. Bowen was the main technical contributor, and later the leader, of IBM’s speech translation technologies. He has also served as Principal Investigator for a number of large-scale research projects funded by DARPA and others since 2008. He has received numerous technical awards at IBM, including a number of Outstanding Innovation Awards, Outstanding Technical Achievement Awards, and the “Best of IBM” award in 2015.

Bowen has been active in both the IEEE and ACL communities. He was elected to serve on the IEEE Speech and Language Processing Technical Committee from 2010 to 2015. He has been an Associate Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing since 2013. He has also frequently served as an area chair for conferences such as ICASSP and NAACL, and as an invited speaker and panelist at international conferences such as Interspeech and NAACL.

Annie Louis (Google) “Understanding Indirect Answers to Polar Questions” @ via Zoom
Oct 19 @ 12:00 pm – 1:15 pm


For the polar (yes/no) question “Want to get dinner?”, there are many perfectly natural responses in addition to ‘yes’ and ‘no’. Humans can spontaneously interpret responses such as “I’m starving.”, “You are up for Chinese?” or “Let’s do lunch tomorrow.”. Allowing such indirect yet natural responses in dialog systems can be hugely beneficial compared to closed vocabularies. However, today’s systems are only as sensitive to these pragmatic moves as their language model allows.

In this talk, I will present the first large-scale English language corpus with around 34,000 (yes/no question, indirect answer) pairs to enable progress on understanding indirect responses. The data was collected via elaborate crowd-sourcing, and contains utterances with yes/no meaning, as well as uncertain, middle-ground, and conditional responses. I will also present experiments with BERT-based neural models to predict such categories for a question-answer pair. We find that while our performance is reasonable, it is not yet sufficient for robust dialog.
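The experiments above frame indirect-answer understanding as classifying a (question, answer) pair into an interpretation category. A toy sketch of that setup follows; the label names and the `format_pair` helper are illustrative assumptions rather than the corpus's exact label set or the models used in the work:

```python
# Hypothetical interpretation categories for an indirect answer to a
# polar question (illustrative; not necessarily the corpus's exact labels).
LABELS = ["yes", "no", "probably_yes", "probably_no",
          "middle_ground", "conditional", "other"]

def format_pair(question: str, answer: str) -> str:
    """BERT-style sentence-pair input: both utterances in one sequence,
    separated so a fine-tuned classifier can attend across them."""
    return f"[CLS] {question} [SEP] {answer} [SEP]"

example = format_pair("Want to get dinner?", "I'm starving.")
# A fine-tuned pair classifier would map `example` to one of LABELS
# (here, ideally "yes").
```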


Annie Louis is a Research Scientist at Google Research in London. Before that, she worked as a postdoc at the University of Edinburgh and as a Research Fellow at the Alan Turing Institute. Prior to that, she completed her PhD with the NLP group at the University of Pennsylvania. Her research interests are in NLP and machine learning, particularly discourse and pragmatic phenomena, and applications such as summarisation and conversation systems.


Tom McCoy (Johns Hopkins University) “Opening the Black Box of Deep Learning: Representations, Inductive Biases, and Robustness” @ Ames Hall 234
Jan 31 @ 12:00 pm – 1:15 pm


Natural language processing has been revolutionized by neural networks, which perform impressively well in applications such as machine translation and question answering. Despite their success, neural networks still have some substantial shortcomings: their internal workings are poorly understood, and they are notoriously brittle, failing on example types that are rare in their training data. In this talk, I will use the unifying thread of hierarchical syntactic structure to discuss approaches for addressing these shortcomings. First, I will argue for a new evaluation paradigm based on targeted, hypothesis-driven tests that better illuminate what models have learned; using this paradigm, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g., wrongly concluding that “The book on the table is blue” implies “The table is blue”). Second, I will show how these behavioral failings can be explained through analysis of models’ inductive biases and internal representations, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning, structured architectures, and data augmentation.
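The entailment error mentioned in the abstract can be reproduced by a deliberately structure-blind baseline: a heuristic that checks only lexical overlap, ignoring syntax, judges the hypothesis entailed whenever all of its words appear in the premise. This toy function (an illustrative sketch, not a model from the talk) shows why hierarchical structure is needed to rule the inference out:

```python
def overlap_heuristic(premise: str, hypothesis: str) -> bool:
    """Structure-blind 'entailment': True iff every hypothesis word
    occurs somewhere in the premise, regardless of syntax."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return h <= p

# Every word of the hypothesis appears in the premise, so the heuristic
# says "entailed" -- the wrong answer a syntax-aware model should avoid.
print(overlap_heuristic("the book on the table is blue",
                        "the table is blue"))   # prints True (incorrectly)
```

Targeted, hypothesis-driven test sets probe exactly such cases, where a shallow heuristic and genuine hierarchical understanding make different predictions.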


Tom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. As an undergraduate, he studied computational linguistics at Yale. His research combines natural language processing, cognitive science, and machine learning to study how we can achieve robust generalization in models of language, as this remains one of the main areas where current AI systems fall short. In particular, he focuses on inductive biases and representations of linguistic structure, since these are two of the major components that determine how learners generalize to novel types of input.

Center for Language and Speech Processing