Don P. Giddens Inaugural Professorial Lecture: Jason Eisner @ Mason Hall Auditorium
Mar 28 @ 3:30 pm – 5:00 pm

Join us at the Don P. Giddens Inaugural Professorial Lecture recognizing Jason Eisner’s promotion to full professor in the Department of Computer Science. The lecture will be held from 3:30 to 5 p.m. on March 28, 2015, in the Mason Hall Auditorium. A reception will follow in the Malone Hall lobby.

In a lecture titled “Probabilistic Models of Natural Language,” Eisner will describe mathematical tools for modeling how the parts of a sentence relate to one another, to the grammar of the language, and to facts in the world. He will also discuss algorithms for learning the parameters of such models and for drawing inferences from them. Eisner is affiliated with the Center for Language and Speech Processing, the Department of Cognitive Science, and the Human Language Technology Center of Excellence, and he leads JHU’s cross-departmental machine learning group.

The Don P. Giddens Inaugural Professorial Lecture series, named for the fifth dean of the Whiting School of Engineering, was established in 1993 to honor newly promoted full professors.

Yejin Choi (University of Washington) “Procedural Language and Knowledge” @ Hackerman Hall B17
Oct 7 @ 12:00 pm – 1:15 pm


Various types of how-to knowledge are encoded in natural language instructions, from setting up a tent to preparing a dish for dinner to executing biology lab experiments. These instructions are written in procedural language, which poses unique challenges. For example, verbal arguments are commonly elided when they can be inferred from context: “bake for 30 minutes” does not specify what to bake or where. Entities frequently merge and split: “vinegar” and “oil” merge into “dressing,” creating challenges for reference resolution. And disambiguation often requires world knowledge: the implicit location argument of “stir frying” is the “stove.” In this talk, I will present our recent approaches to interpreting and composing cooking recipes that aim to address these challenges.

In the first part of the talk, I will present an unsupervised approach to interpreting recipes as action graphs, which define what actions should be performed on which objects and in what order. Our work demonstrates that it is possible to recover action graphs without access to gold labels, virtual environments, or simulations. The key insight is to exploit the redundancy across different variations of similar instructions, which provides the learning bias needed to infer various types of background knowledge, such as the typical sequence of actions applied to an ingredient, or how a combination of ingredients (e.g., “flour,” “milk,” “eggs”) becomes a new entity (e.g., “wet mixture”).
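The abstract does not give implementation details, but the core idea of an action graph — actions consuming entities and producing new, possibly merged ones — can be illustrated with a small hypothetical sketch. All names and data structures below are illustrative, not the paper’s actual model.

```python
# Hypothetical sketch of a recipe action graph: each action consumes entities
# and produces a new one, and edges link an entity's producing action to the
# later actions that consume it.
from dataclasses import dataclass

@dataclass
class Action:
    verb: str        # e.g. "whisk"
    inputs: list     # entities this action is applied to
    output: str      # entity it produces (possibly a new merged entity)

def build_action_graph(actions):
    """Return (producer_index, consumer_index) edges tracing entity flow."""
    produced_by = {}   # entity name -> index of the action that created it
    edges = []
    for i, act in enumerate(actions):
        for entity in act.inputs:
            if entity in produced_by:          # made by an earlier step
                edges.append((produced_by[entity], i))
        produced_by[act.output] = i
    return edges

recipe = [
    Action("whisk", ["flour", "milk", "eggs"], "wet mixture"),
    Action("pour", ["wet mixture"], "batter in pan"),
    Action("bake", ["batter in pan"], "cake"),
]
edges = build_action_graph(recipe)   # [(0, 1), (1, 2)]
```

Here “flour,” “milk,” and “eggs” merge into the new entity “wet mixture,” mirroring the merging phenomenon described above; the learning problem the talk addresses is recovering such links without supervision.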

In the second part of the talk, I will present an approach to composing new recipes given a target dish name and a set of ingredients. The key challenge is to maintain global coherence while generating goal-oriented text. We propose a Neural Checklist Model that attains global coherence by storing and updating a checklist of the agenda (e.g., an ingredient list), with paired attention mechanisms for tracking what has already been mentioned and what has yet to be introduced. The model also achieves strong performance on dialogue response generation. I will conclude the talk by discussing the challenges of modeling procedural language and acquiring the necessary background knowledge, pointing to avenues for future research.
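The checklist idea can be conveyed with a toy, non-neural sketch: keep an agenda, restrict attention to items not yet mentioned, and tick items off as text is generated. The real model does this with learned soft attention; the discrete version below is only an illustration, and every name in it is hypothetical.

```python
# Toy sketch of checklist-style generation: attend only to agenda items that
# have not been mentioned yet, and update the checklist after each step.
def generate_with_checklist(agenda, step_templates):
    used = {item: False for item in agenda}
    text = []
    for template in step_templates:
        remaining = [it for it in agenda if not used[it]]  # "new item" scope
        if not remaining:
            break
        item = remaining[0]    # stand-in for attention selecting one item
        used[item] = True      # tick it off the checklist
        text.append(template.format(item))
    leftover = [it for it in agenda if not used[it]]
    return text, leftover

steps, leftover = generate_with_checklist(
    ["flour", "milk", "eggs"],
    ["Add the {}.", "Whisk in the {}.", "Fold in the {}."],
)
```

Because each step consults only the unused portion of the agenda, every ingredient is introduced exactly once — the discrete analogue of the global coherence the model aims for.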


Yejin Choi is an assistant professor in the Computer Science & Engineering Department at the University of Washington. Her recent research focuses on language grounding, integrating language and vision, and modeling nonliteral meaning in text. She was named one of IEEE AI’s 10 to Watch in 2015 and was a co-recipient of the Marr Prize at ICCV 2013. Her work on detecting deceptive reviews, predicting literary success, and learning to interpret connotation has been featured by numerous media outlets, including NBC News for New York, NPR, the New York Times, and Bloomberg Businessweek. She received her Ph.D. in Computer Science from Cornell University.

Eunsol Choi (University of Texas at Austin) “Learning to Understand Language in Context” @ via Zoom
Feb 15 @ 12:00 pm – 1:15 pm


Many applications of natural language processing need to understand text in the rich context in which it occurs and to present information in a new context. Interpreting the rich context of a sentence, whether conversation history, social context, or the preceding content of a document, is challenging yet crucial to understanding the sentence. In the first part of the talk, we study the context-reduction process by defining the problem of sentence decontextualization: taking a sentence together with its context and rewriting it to be interpretable out of context, while preserving its meaning. A sentence taken out of context is typically unintelligible, but decontextualization recovers key pieces of information and makes the sentence stand alone. We demonstrate the utility of this process as a preprocessing step for open-domain question answering and for generating informative, concise answers to information-seeking queries. In the latter half of the talk, we focus on building models that integrate rich context to interpret single utterances more accurately. We study the challenges of interpreting rich context in question answering, first by integrating conversational history and then by integrating entity information. Together, these works show how modeling the interaction between text and the rich context in which it occurs can improve the performance of NLP systems.
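To make the decontextualization task concrete: given a sentence and the context it depends on, the goal is to rewrite it so it stands alone. Real systems learn this rewriting; the string substitution below is only a hypothetical toy that shows the input/output shape of the task, not the talk’s method.

```python
# Toy illustration of decontextualization: replace context-dependent phrases
# (e.g. pronouns) with explicit referents so the sentence stands alone.
def decontextualize(sentence, resolutions):
    """resolutions: mapping from context-dependent phrase -> explicit referent."""
    for phrase, referent in resolutions.items():
        sentence = sentence.replace(phrase, referent)
    return sentence

context = "Marie Curie won the Nobel Prize in Physics in 1903."
sentence = "She won it again, in Chemistry, in 1911."
standalone = decontextualize(
    sentence, {"She": "Marie Curie", "it": "the Nobel Prize"}
)
```

The rewritten sentence, “Marie Curie won the Nobel Prize again, in Chemistry, in 1911.”, is interpretable without the preceding context — which is what makes decontextualized sentences useful as self-contained answers in open-domain question answering.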


Eunsol Choi is an assistant professor in the computer science department at the University of Texas at Austin. Her research focuses on natural language processing, in particular on various ways to recover semantics from unstructured text. Recently, her research has focused on question answering and entity analysis. Prior to UT, she was a visiting faculty researcher at Google AI. She received a Ph.D. from the University of Washington, where she worked with Luke Zettlemoyer and Yejin Choi, and an undergraduate degree in mathematics and computer science from Cornell University. She is a recipient of a Facebook Research Fellowship and has co-organized many workshops on question answering at NLP and ML venues.

Center for Language and Speech Processing