Learning Semantic Parsers for More Languages and with Less Supervision – Luke Zettlemoyer (University of Washington)
Recent work has demonstrated effective learning algorithms for a variety of semantic parsing problems, where the goal is to automatically recover the underlying meaning of input sentences. Although these algorithms can work well, annotating data and gathering other language-specific resources for each new application remains costly. This talk focuses on efforts to address these challenges by developing scalable, probabilistic CCG grammar induction algorithms. I will present recent work on methods that incorporate new notions of lexical generalization, thereby enabling effective learning for a variety of natural languages and formal meaning representations. I will also describe a new approach for learning semantic parsers from conversational data, which requires no manual annotation of sentence meaning. Finally, I will sketch future directions, including our recurring focus on building scalable learning techniques while attempting to minimize application-specific engineering effort.

Joint work with Yoav Artzi, Tom Kwiatkowski, Sharon Goldwater, and Mark Steedman.
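To make the task concrete, a semantic parser maps a sentence to a formal meaning representation, such as a lambda-calculus expression. The toy lexicon and logical forms below are invented for illustration only and are far simpler than the learned CCG grammars the talk describes; they merely show the input-output behavior of the task.

```python
# Toy sketch of the semantic parsing task: mapping a sentence to a
# conjunctive lambda-calculus meaning representation. The lexicon
# entries and predicate names here are illustrative assumptions,
# not part of the systems discussed in the talk.

LEXICON = {
    "flights": "flight(x)",        # a unary predicate over the variable x
    "boston": "const:boston",      # a constant (entity)
    "to": "rel:to",                # a binary relation taking the next entity
}

def parse(tokens):
    """Naively compose word meanings into 'lambda x. p1 & p2 & ...'."""
    predicates = []
    i = 0
    while i < len(tokens):
        entry = LEXICON.get(tokens[i])
        if entry is None:
            i += 1  # skip words with no lexical entry (e.g. "show")
        elif entry.startswith("rel:") and i + 1 < len(tokens):
            rel = entry.split(":", 1)[1]
            arg = LEXICON[tokens[i + 1]].split(":", 1)[1]
            predicates.append(f"{rel}(x, {arg})")
            i += 2  # consume the relation and its argument
        elif ":" not in entry:
            predicates.append(entry)  # plain unary predicate
            i += 1
        else:
            i += 1
    return "lambda x. " + " & ".join(predicates)

print(parse("show flights to boston".split()))
# lambda x. flight(x) & to(x, boston)
```

A learned semantic parser replaces this hand-written lexicon with per-word CCG categories and logical forms induced automatically from data, which is exactly what makes annotation cost the central bottleneck.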
Luke Zettlemoyer is an Assistant Professor at the University of Washington. His research interests lie at the intersection of natural language processing, machine learning, and decision making under uncertainty. He spends much of his time developing learning algorithms that attempt to recover and make use of detailed representations of the meaning of natural language text. He was a postdoctoral research fellow at the University of Edinburgh and received his Ph.D. from MIT.