Seminars

Ariya Rastrow (Amazon) @ Hackerman Hall B17
Fri, Oct 7 @ 12:00 pm – 1:15 pm

He He (New York University) “What We Talk about When We Talk about Spurious Correlations in NLP” @ Hackerman Hall B17
Fri, Oct 14 @ 12:00 pm – 1:15 pm

Abstract

Model robustness and spurious correlations have received increasing attention in the NLP community, in both methods and evaluation. The term “spurious correlation” is overloaded, however, and can refer to any undesirable shortcut learned by the model, as judged by domain experts.

When designing mitigation algorithms, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However, many features in NLP (e.g., word overlap and negation) are not spurious in the way that the background is spurious when classifying objects in an image. On the contrary, they carry important information that humans need to make predictions. In this talk, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization for representation, learning, and evaluation.

Biography

He He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland, College Park. Before joining NYU, she spent a year at AWS AI and was a post-doc at Stanford University before that. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research interests include robust language understanding, collaborative text generation, and understanding the capabilities and issues of large language models.

David Chiang (University of Notre Dame) “Exact Recursive Probabilistic Programming” (with Colin McDonald, Darcey Riley, Kenneth Sible (Notre Dame) and Chung-chieh Shan (Indiana)) @ Hackerman Hall B17
Mon, Oct 17 @ 12:00 pm – 1:15 pm

Abstract

Recursive calls over recursive data are widely useful for generating probability distributions, and probabilistic programming allows computations over these distributions to be expressed in a modular and intuitive way. Exact inference is also useful, but unfortunately, existing probabilistic programming languages do not perform exact inference on recursive calls over recursive data, forcing programmers to code many applications manually. We introduce a probabilistic language in which a wide variety of recursion can be expressed naturally, and inference carried out exactly. For instance, probabilistic pushdown automata and their generalizations are easy to express, and polynomial-time parsing algorithms for them are derived automatically. We eliminate recursive data types using program transformations related to defunctionalization and refunctionalization. These transformations are assured correct by a linear type system, and a successful choice of transformations, if there is one, is guaranteed to be found by a greedy algorithm. I will also describe the implementation of this language in two phases: first, compilation to a factor graph grammar, and second, computing the sum-product of the factor graph grammar.

Biography

David Chiang (PhD, University of Pennsylvania, 2004) is an associate professor in the Department of Computer Science and Engineering at the University of Notre Dame. His research is on computational models for learning human languages, particularly how to translate from one language to another. His work on applying formal grammars and machine learning to translation has been recognized with two best paper awards (at ACL 2005 and NAACL HLT 2009). He has received research grants from DARPA, NSF, Google, and Amazon, has served on the executive board of NAACL and the editorial board of Computational Linguistics and JAIR, and is currently on the editorial board of Transactions of the ACL.

Fei Sha (University of Southern California) “Extracting Information from Text into Memory for Knowledge-Intensive Tasks” @ Hackerman Hall B17
Mon, Oct 24 @ 12:00 pm – 1:15 pm

Abstract

Modern learning architectures for natural language processing have been very successful at incorporating huge amounts of text into their parameters. By and large, however, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk, I will give a few examples of alternative architectures that tackle these challenges. In particular, we can improve the performance of such (language) models by representing, storing, and accessing knowledge in a dedicated memory component.

This talk is based on several joint works with Yury Zemlyanskiy (Google Research), Michiel de Jong (USC and Google Research), William Cohen (Google Research and CMU) and our other collaborators in Google Research.

Biography

Fei is a research scientist at Google Research. Before that, he was a Professor of Computer Science at the University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing, computer vision, robotics, and, more recently, weather forecasting and climate modeling. He holds a PhD (2007) in Computer and Information Science from the University of Pennsylvania, and B.Sc. and M.Sc. degrees in Biomedical Engineering from Southeast University (Nanjing, China).

Lei Li (Carnegie Mellon University) “Empowering Responsible Use of Large Language Models” @ Hackerman Hall B17
Fri, Sep 1 @ 12:00 pm – 1:15 pm

Abstract

Large language models (LLMs) have demonstrated incredible power, but they also possess vulnerabilities that can lead to misuse and potential attacks. In this presentation, we will address two fundamental questions regarding the responsible utilization of LLMs: (1) How can we accurately identify AI-generated text? (2) What measures can safeguard the intellectual property of LLMs? We will introduce two recent watermarking techniques designed for text and models, respectively. Our discussion will encompass the theoretical underpinnings that ensure the correctness of watermark detection, along with robustness against evasion attacks. Furthermore, we will showcase empirical evidence validating their effectiveness. These findings establish a solid technical groundwork for policymakers, legal professionals, and generative AI practitioners alike.

Biography

Lei Li is an Assistant Professor in the Language Technologies Institute at Carnegie Mellon University. He received his Ph.D. from the Carnegie Mellon University School of Computer Science. He is a recipient of the ACL 2021 Best Paper Award, the CCF Young Elite Award in 2019, CCF Distinguished Speaker in 2017, the Wu Wen-tsün AI Prize in 2017, and the 2012 ACM SIGKDD dissertation award (runner-up), and was recognized as a Notable Area Chair of ICLR 2023. Previously, he was a faculty member at UC Santa Barbara. Prior to that, he founded ByteDance AI Lab in 2016 and led its research in NLP, ML, robotics, and drug discovery. He launched ByteDance’s machine translation system VolcTrans and AI writing system Xiaomingbot, serving one billion users.

CLSP Student Seminar – Anna Favaro @ Hackerman Hall B17
Mon, Oct 2 @ 12:00 pm – 1:15 pm

CLSP Student Seminar – Andrew Blair-Stanek “Shelter Check and GPT-4’s Bad Legal Performance” @ Hackerman Hall B17
Fri, Oct 6 @ 12:00 pm – 1:15 pm

Abstract

Our goal is to use AI to automatically find tax minimization strategies, an approach we call “Shelter Check.” It would come in two variants. Existing-Authority Shelter Check would aim to find whether existing tax law authorities can be combined to create tax minimization strategies, so the IRS or Congress can shut them down. New-Authority Shelter Check would automate checking whether a new tax law authority – like proposed legislation or a draft court decision – would combine with existing authorities to create a new tax minimization strategy. We initially had high hopes for GPT-* large language models for implementing Shelter Check, but our tests have shown that they do very poorly at basic legal reasoning and at handling legal text. So we are now creating a benchmark and training data for LLMs’ handling of legal text, hoping to spur improvements.

Wei-Ning Hsu (Meta Foundational AI Research) “Large Scale Universal Speech Generative Models” @ Hackerman Hall B17
Mon, Oct 9 @ 12:00 pm – 1:15 pm

Abstract

Large-scale generative models such as GPT and DALL-E have revolutionized natural language processing and computer vision research. These models not only generate high fidelity text or image outputs, but also demonstrate impressive domain and task generalization capabilities. In contrast, audio generative models are relatively primitive in scale and generalization.

In this talk, I will start with a brief introduction to conventional neural speech generative models and discuss why they are unfit for scaling to Internet-scale data. Next, by reviewing the latest large-scale generative models for text and images, I will outline a few promising approaches to building scalable speech models. Finally, I will present Voicebox, our latest work to advance this area. Voicebox is the most versatile generative model for speech. It is trained with a simple task — text-conditioned speech infilling — on over 50K hours of multilingual speech with a powerful flow-matching objective. Through in-context learning, Voicebox can perform monolingual/cross-lingual zero-shot TTS, holistic style conversion, transient noise removal, content editing, and diverse sample generation. Moreover, Voicebox achieves state-of-the-art performance and excellent run-time efficiency.

Biography

Wei-Ning Hsu is a research scientist at Meta Foundational AI Research (FAIR) and currently the lead of the audio generation team. His research focuses on self-supervised learning and generative models for speech and audio. His pioneering work includes HuBERT, AV-HuBERT, TextlessNLP, data2vec, wav2vec-U, textless speech translation, and Voicebox. 

Prior to joining Meta, Wei-Ning worked at MERL and Google Brain as a research intern. He received his Ph.D. and S.M. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2020 and 2018, respectively, under the supervision of Dr. James Glass. He received his B.S. degree in Electrical Engineering from National Taiwan University in 2014, under the supervision of Prof. Lin-shan Lee and Prof. Hsuan-Tien Lin.

Antoine Bosselut (EPFL) “From Mechanistic Interpretability to Mechanistic Reasoning” @ Hackerman Hall B17
Fri, Oct 13 @ 12:00 pm – 1:15 pm

Abstract

Pretrained language models (LMs) encode implicit representations of knowledge in their parameters. Despite this observation, our best methods for interpreting these representations yield few actionable insights on how to manipulate this parameter space for downstream benefit. In this talk, I will present work on methods that simulate machine reasoning by localizing and modifying parametric knowledge representations. First, I will present a method for discovering knowledge-critical subnetworks within pretrained language models, and show that these sparse computational subgraphs are responsible for the model’s ability to encode specific pieces of knowledge. Then, I will present a new reasoning algorithm, RECKONING, a bi-level optimisation procedure that dynamically encodes and reasons over new knowledge at test-time using the model’s existing learned knowledge representations as a scratchpad. Finally, I will discuss next steps and challenges in using internal model mechanisms for reasoning.

Biography

Antoine Bosselut is an assistant professor in the School of Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL). He was a postdoctoral scholar at Stanford University and a Young Investigator at the Allen Institute for AI (AI2). He completed his PhD at the University of Washington and was a student researcher at Microsoft Research. His research interests are in building systems that mix knowledge and language representations to solve problems in NLP, specializing in commonsense representation and reasoning.

CLSP Student Seminar – Maliha Jahan
Mon, Oct 16 @ 12:00 pm – 1:15 pm
