Lea Frermann (University of Melbourne) “Learning Representations of Long Narratives for Summarization and Inference”
3400 N Charles St
Baltimore, MD 21218
USA
Abstract
Humans have an impressive ability to understand long and complex narratives, and to draw on commonsense knowledge to quickly make sense of novel situations. NLP systems, by contrast, tend to scale poorly to long texts and to rely on extended batch training before they can make inferences. In this talk, I will present two projects aimed at improving the automatic understanding and modeling of long and complex narratives.
The first part presents work on leveraging topical document structure for improved and more flexible summarization. Given a topic and a news article, our topic-aware summarization models summarize the article with respect to that topic. I will present a scalable synthetic training setup and show that modeling document structure is particularly useful for long documents.
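To make the task's input/output contract concrete, here is a minimal, purely illustrative Python sketch: it treats topic-conditioned summarization as extractive sentence selection, scoring each sentence by word overlap with the query topic. The models in the talk are neural and structure-aware; the function names and the scoring heuristic below are assumptions for illustration, not taken from that work.

```python
# Toy topic-conditioned extractive summarizer (illustration only):
# rank sentences by lexical overlap with the topic, keep the top k,
# and return them in their original article order.
def topic_summary(article_sentences, topic, k=2):
    topic_words = set(topic.lower().split())

    def score(sentence):
        return len(set(sentence.lower().split()) & topic_words)

    chosen = set(sorted(article_sentences, key=score, reverse=True)[:k])
    return [s for s in article_sentences if s in chosen]

article = [
    "The council approved a new transit budget on Monday.",
    "Critics say the transit plan ignores cycling infrastructure.",
    "The mayor also announced a parks initiative.",
]
print(topic_summary(article, topic="transit budget"))
# Conditioning on a different topic yields a different summary
# of the same article, which is the point of the task.
```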
The second part of the talk focuses on incremental inference in a complex, multi-modal, and evolving world, considering the task of incrementally identifying the perpetrator in episodes of a TV crime series (CSI). I will present a dataset, task formulation, and model, together with an extensive analysis comparing model predictions to human predictions on the same task.
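As a rough illustration of what incremental inference means here, the sketch below is a simplified, text-only stand-in (the actual task is multi-modal): an LSTM reads an episode one utterance at a time and, after each utterance, outputs a probability that the perpetrator has been mentioned so far. The architecture, class name, and hyperparameters are assumptions for illustration, not the model from the talk.

```python
import torch
import torch.nn as nn

class IncrementalTagger(nn.Module):
    """Hypothetical simplification: reads an episode utterance by
    utterance and emits, after each one, the probability that the
    perpetrator has been mentioned so far."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        # Each utterance is represented as a bag of word ids.
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, utterances):
        # utterances: list of 1-D LongTensors of word ids.
        h = torch.zeros(1, self.lstm.hidden_size)
        c = torch.zeros(1, self.lstm.hidden_size)
        probs = []
        for utt in utterances:
            x = self.embed(utt.unsqueeze(0))        # (1, embed_dim)
            h, c = self.lstm(x, (h, c))
            # Prediction is available after every utterance, i.e.
            # the model commits incrementally, as a viewer would.
            probs.append(torch.sigmoid(self.out(h)).squeeze())
        return torch.stack(probs)

# Usage with a toy vocabulary of 100 word ids and 3 utterances:
model = IncrementalTagger(vocab_size=100)
episode = [torch.tensor([5, 17, 42]), torch.tensor([8, 8, 3]), torch.tensor([99, 1])]
print(model(episode))  # one probability per utterance
```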
Biography
Lea is a postdoc at Amazon Core AI (Berlin), currently on a five-week visit at Columbia University, New York. In July 2019 she will take up a lecturer position at the University of Melbourne. Previously, she was a research associate at the University of Edinburgh and a visiting scholar at Stanford. She obtained her PhD from the University of Edinburgh in 2017 (supervised by Mirella Lapata). Her research investigates the efficiency and robustness of human learning and inference in the face of the complexity of the world, as approximated, for example, through large corpora of child-directed speech or the plots of books and films.