Abstract
Kelly’s research spans three broad directions in multilingual NLP and representation learning: (1) diagnosing and fixing failure modes in translation technologies, (2) data-efficient and low-resource NLP, and (3) compute-efficient NLP. This talk presents an overview of five years of PhD work, covering projects on unsupervised machine translation and bilingual lexicon induction, the mathematical framing of translation tasks, and the efficient adaptation of large language models to new languages. Kelly will also discuss future research directions, including multi-modal representation learning, compression, speech translation, and sign-language translation.