Najoung Kim – “Probing What Different NLP Tasks Teach Machines About Function Word Comprehension and Where To Go Next”

When:
October 14, 2019 @ 12:00 pm – 1:00 pm

Abstract:

In this talk, I will mainly discuss a task-based approach to measuring the effect of various pretraining objectives (e.g., language modeling, CCG supertagging) for sentence encoders on their understanding of function words. The tasks are created by structurally mutating sentences from existing datasets to target the comprehension of specific types of function words in English (e.g., prepositions, wh-words). Our results show that language modeling pretraining performs best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, with CCG supertagging and NLI pretraining performing comparably. Overall, no pretraining objective dominates across the board, and our function word probing tasks highlight several intuitive differences between pretraining objectives. In addition to these findings, I will discuss ongoing follow-up work and some promising future directions for probing analyses in NLP.

Johns Hopkins University, Whiting School of Engineering

Center for Language and Speech Processing
Hackerman 226
3400 North Charles Street, Baltimore, MD 21218-2680