Yulia Tsvetkov (University of Washington) “LLMs under the Microscope: Illuminating the Blind Spots and Improving the Reliability of Language Models”

When:
April 26, 2024 @ 12:00 pm – 1:15 pm
Where:
Hackerman Hall B17
3400 N. Charles Street
Baltimore
MD 21218
Cost:
Free

Abstract

Large language models (LMs) are pretrained on diverse data sources: news, discussion forums, books, and online encyclopedias. A significant portion of this data includes facts and opinions that, on one hand, celebrate democracy and the diversity of ideas, and, on the other hand, are inherently socially biased. In this talk, I'll present our recent work proposing new methods to (1) measure media biases in LMs trained on such corpora, along social and economic axes, and (2) measure the fairness of downstream NLP models trained on top of politically biased LMs. We find that pretrained LMs do have political leanings that reinforce the polarization present in their pretraining corpora, propagating social biases into socially oriented tasks such as hate speech and misinformation detection. In the second part of my talk, I'll discuss ideas for mitigating LMs' unfairness. Rather than debiasing models (which, our work shows, is impossible), we propose to understand, calibrate, and better control for their social impacts using modular methods in which diverse LMs collaborate at inference time.

Bio

Yulia Tsvetkov is an associate professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research group works on fundamental advancements to large language models, multilingual NLP, and AI ethics. This research is motivated by a unified goal: to extend the capabilities of human language technology beyond individual populations and across language boundaries, thereby making NLP tools available to all users. Prior to joining UW, Yulia was an assistant professor at Carnegie Mellon University and a postdoc at Stanford. Yulia is a recipient of an NSF CAREER award, a Sloan Fellowship, an Okawa Research Award, and several paper awards and runner-up awards at NLP, ML, and CSS conferences.

Center for Language and Speech Processing