Seminars

Mark Yatskar (University of Pennsylvania) “Understanding Dataset Biases: Behavioral Indicators During Annotation and Contrastive Mitigations” @ Hackerman Hall B17
Fri, Feb 10 @ 12:00 pm – 1:15 pm

Abstract

Biases in datasets, or unintentionally introduced spurious cues, are a common source of misspecification in machine learning. Performant models trained on such data can reproduce gender stereotypes or be brittle under distribution shift. In this talk, we present several results in multimodal and question answering applications studying sources of dataset bias, along with several mitigation methods. We propose approaches where known dimensions of dataset bias are explicitly factored out of a model during learning, without needing to modify the data. Finally, we ask whether dataset biases can be attributed to annotator behavior during annotation. Drawing inspiration from work in psychology on cognitive biases, we show that certain behavioral patterns are highly indicative of the creation of problematic (but valid) data instances in question answering. We give evidence that many existing observations about how dataset bias propagates to models can be attributed to data samples created by the annotators we identify.

Biography

Mark Yatskar is an Assistant Professor at the University of Pennsylvania in the Department of Computer and Information Science. He did his PhD at the University of Washington, co-advised by Luke Zettlemoyer and Ali Farhadi. He was a Young Investigator at the Allen Institute for Artificial Intelligence for several years, working with its computer vision team, Prior. His work spans Natural Language Processing, Computer Vision, and Fairness in Machine Learning. He received a Best Paper Award at EMNLP for work on gender bias amplification, and his work has been featured in Wired and the New York Times.

Daniel Khashabi (Johns Hopkins University) “Building More Helpful Language Models” @ Hackerman Hall B17
Fri, Sep 8 @ 12:00 pm – 1:15 pm

Abstract

The arms race to build increasingly large, powerful language models (LMs) in the past year has been remarkable. Yet incorporating LMs effectively into practical applications that facilitate manual workflows remains challenging. I will discuss LMs’ limiting factors and our efforts to overcome them. I will start with challenges surrounding efficient and robust LM alignment. I will share insights from our recent paper “Self-Instruct” (ACL 2023), where we used a vanilla (unaligned) LM to align itself, an approach that has yielded some success. Then, I will move on to the challenge of tracing the output of LMs to reliable sources, a weakness that makes them prone to hallucination. I will discuss our recent approach of ‘according-to’ prompting, which steers LMs to quote directly from sources observed in their pre-training. If time permits, I will discuss our ongoing project to adapt LMs to interact with web pages. Throughout the presentation, I will highlight our progress and end with open questions for future work.
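For readers unfamiliar with ‘according-to’ prompting, the core idea is to prepend a grounding directive asking the model to answer using only information attributable to a named source such as Wikipedia. The sketch below is illustrative only, not the speaker’s implementation: the exact directive wording may differ from the paper, and build_according_to_prompt and query_model are hypothetical names standing in for whatever LM interface is used.

# Minimal sketch of the "according-to" prompting idea (assumed wording, not the paper's exact prompt).

def build_according_to_prompt(question: str, source: str = "Wikipedia") -> str:
    """Wrap a question with a directive steering the model to ground its answer in `source`."""
    directive = f"Respond to this question using only information that can be attributed to {source}."
    return f"{directive}\n\nQuestion: {question}\nAnswer:"

def query_model(prompt: str) -> str:
    # Hypothetical placeholder: swap in whatever LM API or local model you actually use.
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_according_to_prompt("Where were the 1992 Summer Olympics held?")
    print(prompt)  # In practice, pass this prompt to query_model(...) and compare against an ungrounded baseline.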

Biography

Daniel Khashabi is an assistant professor in computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is interested in building reasoning-driven modular NLP systems that are robust, transparent, and communicative, particularly those that use natural language as the communication medium. Khashabi has published over 40 papers on natural language processing and AI in top-tier venues. His research has received an ACL 2023 Outstanding Paper Award, a NAACL 2022 Best Paper Award, research gifts from the Allen Institute for AI, and an Amazon Research Award (2023). Before joining Hopkins, he was a postdoctoral fellow at the Allen Institute for AI (2019–2022) and obtained a Ph.D. from the University of Pennsylvania in 2019.
