Aren’t You Tired of Gradable Adjectives? They are Fascinating: Automatically Deriving Adjectival Scales – Marie-Catherine de Marneffe (Ohio State University)

When:
December 6, 2013 all-day

View Seminar Video

Abstract
In this talk, I will discuss how to automatically derive the orderings and meanings of gradable adjectives (such as okay < good < great < wonderful). To determine whether the intended answer is “yes” or “no” in a dialogue such as “Was the movie wonderful? It was worth seeing”, we need to evaluate how “worth seeing” relates to “wonderful”. Can we automatically learn from real texts the scalar orderings people assign to these modifiers? I will show how we can exploit the availability of large amounts of text on the web (such as online review ratings) to approximate these orderings. Then I will turn to neural network language models. I will show that continuous space word representations extracted from such models can be used to derive adjectival scales of high quality, emphasizing that neural network language models do capture semantic regularities. I will evaluate the quality of the adjectival scales on several datasets. Next, I will briefly turn to biomedical data: what does it mean to show “severe symptoms of cardiac disease” or “mild pulmonary symptoms”? I will outline work in progress targeting the meaning of gradable adjectives in that domain. Not only do we want to get an ordering between such adjectives, but we also want to learn what counts as “severe” or “mild” symptoms of a disease.
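One way to picture the review-ratings idea from the abstract is to order adjectives by the average star rating of the reviews in which they occur. The sketch below is a minimal illustration of that intuition only; the reviews and ratings are invented toy data, and the talk's actual method is not specified here.

```python
# Toy sketch: approximate a scalar ordering of gradable adjectives by the
# mean star rating of reviews that mention them. All data is hypothetical.
from collections import defaultdict

# (review text, star rating on a 1-5 scale) -- invented examples
reviews = [
    ("the movie was okay", 3),
    ("an okay film, nothing more", 2),
    ("a good story", 4),
    ("good acting throughout", 4),
    ("a great performance", 5),
    ("great fun", 4),
    ("a wonderful experience", 5),
    ("simply wonderful", 5),
]

adjectives = ["okay", "good", "great", "wonderful"]

def scale_from_ratings(reviews, adjectives):
    """Order adjectives by the mean rating of reviews mentioning them."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for text, stars in reviews:
        for adj in adjectives:
            if adj in text.split():
                totals[adj] += stars
                counts[adj] += 1
    means = {a: totals[a] / counts[a] for a in adjectives if counts[a]}
    return sorted(means, key=means.get)

print(scale_from_ratings(reviews, adjectives))
# -> ['okay', 'good', 'great', 'wonderful']
```

On this toy data the mean ratings (okay 2.5, good 4.0, great 4.5, wonderful 5.0) recover the intuitive ordering okay < good < great < wonderful.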

Biography
Marie-Catherine de Marneffe is an assistant professor in Linguistics at The Ohio State University. She received her PhD from Stanford University in December 2012 under the supervision of Christopher D. Manning. She develops computational linguistic methods that capture what speakers convey beyond the literal meaning of the words they say. Her primary aim is to ground meanings in corpus data and to show how such meanings can drive pragmatic inference. She has also worked on Recognizing Textual Entailment and contributed to defining the Stanford Dependencies representation, which is designed to be a practical representation of grammatical relations and predicate-argument structure.

Johns Hopkins University, Whiting School of Engineering

Center for Language and Speech Processing
Hackerman 226
3400 North Charles Street, Baltimore, MD 21218-2680
