Jonathan May (Information Sciences Institute – USC) “How I Learned to Stop Worrying and Love Evaluations (and Keep Worrying)”

November 29, 2016 @ 12:00 pm – 1:15 pm
Hackerman Hall B17
3400 N Charles St
Baltimore, MD 21218
Center for Language and Speech Processing


Bake-offs, shared tasks, evaluations: these are names for the short, high-stress periods in many CS researchers’ lives when their algorithms and models are exposed to unseen data, often with reputations and funding on the line.  Evaluations are sometimes perceived as the bane of our working lives.  We grouse about metrics, procedures, glitches, and all the time “wasted” chasing scores rather than doing Real Science (TM).  In this talk I will argue that, despite valid criticisms of the approach, coordinated evaluation is a net benefit to NLP research and has led to accomplishments that might not otherwise have arisen.  This argument will frame a more in-depth discussion of several pieces of recent evaluation-grounded work: rapid generation of translation and information extraction for low-resource surprise languages (DARPA LORELEI) and organization of SemEval shared tasks in semantic parsing and generation.


Jonathan May is a Research Assistant Professor at the University of Southern California’s Information Sciences Institute (USC/ISI).  Previously, he was a research scientist at SDL Research (formerly Language Weaver) and a scientist at Raytheon BBN Technologies.  He received a Ph.D. in Computer Science from the University of Southern California in 2010 and a BSE and MSE in Computer Science and Engineering and Computer and Information Science, respectively, from the University of Pennsylvania in 2001.  Jon’s research interests include automata theory, natural language processing, machine translation, and machine learning.