He He (New York University) “What We Talk about When We Talk about Spurious Correlations in NLP”

When:
October 14, 2022 @ 12:00 pm – 1:15 pm
Where:
Hackerman Hall B17
3400 N. Charles Street
Baltimore
MD 21218
Cost:
Free

Abstract

Model robustness and spurious correlations have received increasing attention in the NLP community, in both methods and evaluation. The term "spurious correlation," however, is overloaded: it can refer to any undesirable shortcut learned by the model, as judged by domain experts.

When designing mitigation algorithms, we often (implicitly) assume that a spurious feature is irrelevant to prediction. However, many features in NLP (e.g., word overlap and negation) are not spurious in the way that the background is spurious for classifying objects in an image. On the contrary, they carry important information that humans need to make predictions. In this talk, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization for representation, learning, and evaluation.
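(One common way to formalize necessity and sufficiency, offered here only as an illustrative sketch and not necessarily the exact formulation used in the talk, follows Pearl's probabilities of causation. With X indicating the presence of a feature and Y the label, the probability of necessity is PN = P(Y_{X=0} = 0 | X = 1, Y = 1), i.e., the chance the prediction would change if the feature were removed, and the probability of sufficiency is PS = P(Y_{X=1} = 1 | X = 0, Y = 0), i.e., the chance that adding the feature would flip the prediction. Under this view, an image background is typically neither necessary nor sufficient, whereas a feature like negation may be necessary for the label without being sufficient on its own.)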

Biography

He He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science from the University of Maryland, College Park. Before joining NYU, she spent a year at AWS AI and, before that, was a postdoc at Stanford University. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focuses on robust language understanding, collaborative text generation, and understanding the capabilities and issues of large language models.

Center for Language and Speech Processing