When Topic Models Go Bad: Diagnosing and Improving Models for Exploring Large Corpora – Jordan Boyd-Graber (University of Maryland)

When:
September 20, 2011 all-day

View Seminar Video
Abstract
Imagine you need to get the gist of what’s going on in a large text dataset such as all tweets that mention Obama, all e-mails sent within a company, or all newspaper articles published by the New York Times in the 1990s. Topic models, which automatically discover the themes that permeate a corpus, are a popular tool for discovering what’s being discussed. However, topic models aren’t perfect; their errors hamper adoption of the model, performance on downstream computational tasks, and human understanding of the data. Fortunately, humans can easily diagnose and fix these errors. We describe crowdsourcing experiments to detect problematic topics and to determine which models produce comprehensible topics. Next, we present a statistically sound model that incorporates hints and suggestions from humans to iteratively refine topic models to better model large datasets. If time permits, we will also examine how topic models can be used to understand topic control in debates and discussions.
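
To make the setting concrete, here is a minimal sketch of fitting a standard topic model (LDA) to a toy corpus with scikit-learn and printing the top words of each topic, the lists a human reader would inspect to judge whether a topic makes sense. The documents, topic count, and parameter choices are illustrative assumptions, not the speaker's own system or data.

# Minimal sketch: fitting an LDA topic model with scikit-learn.
# The documents and settings below are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "the senator debated the health care bill in congress",
    "the team won the championship game last night",
    "stock markets fell as investors worried about inflation",
    "the new vaccine trial showed promising health results",
    "the quarterback threw three touchdowns in the game",
    "the central bank raised interest rates to fight inflation",
]

# Turn raw text into bag-of-words counts.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

# Fit LDA with a hand-picked number of topics.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(counts)

# Show the highest-weight words in each discovered topic.
vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = [vocab[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top_words)}")

A reader scanning these word lists can quickly spot an incoherent or mixed topic; the talk's crowdsourcing experiments and human-in-the-loop refinements target exactly that kind of diagnosis and repair.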
Biography
Jordan Boyd-Graber is an assistant professor in the College of Information Studies and the Institute for Advanced Computer Studies at the University of Maryland. His research focuses on the interaction of users and machine learning: how algorithms can better learn from human behaviors and how users can better communicate their needs to machine learning algorithms. Previously, he worked as a postdoc with Philip Resnik at the University of Maryland. Until 2009, he was a graduate student at Princeton University, working with David Blei on linguistic extensions of topic models. His current work is supported by NSF, IARPA, and ARL.
