Abstract
Large language models (LLMs) have demonstrated remarkable capabilities, but they also have vulnerabilities that can lead to misuse and attacks. In this presentation, we will address two fundamental questions about the responsible use of LLMs: (1) How can we accurately identify AI-generated text? (2) What measures can safeguard the intellectual property of LLMs? We will introduce two recent watermarking techniques designed for text and models, respectively. Our discussion will cover the theoretical underpinnings that guarantee correct watermark detection, along with robustness against evasion attacks. Furthermore, we will present empirical evidence validating their effectiveness. These findings establish a solid technical foundation for policymakers, legal professionals, and generative AI practitioners alike.
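As a point of reference for the kind of text watermark discussed here, the sketch below illustrates one well-known approach: a pseudo-random "green-list" token bias detected with a z-test. The hashing scheme, the 50% list fraction, and the detector are illustrative assumptions, not necessarily the techniques presented in this talk.

```python
# Minimal sketch of a "green-list" text watermark detector (illustrative only).
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Return a z-score: how often tokens fall in the green list versus chance."""
    hits = sum(
        tok in green_list(prev, vocab, fraction)
        for prev, tok in zip(tokens, tokens[1:])
    )
    n = len(tokens) - 1
    expected, var = n * fraction, n * fraction * (1 - fraction)
    return (hits - expected) / (var ** 0.5)
```

A generator that softly boosts green-list tokens at each decoding step would produce text whose z-score is far above that of unwatermarked text, which is what makes detection statistically testable.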
Biography
Lei Li is an Assistant Professor in the Language Technologies Institute at Carnegie Mellon University. He received his Ph.D. from Carnegie Mellon University School of Computer Science. He is a recipient of the ACL 2021 Best Paper Award, the CCF Young Elite Award in 2019, the CCF Distinguished Speaker award in 2017, the Wu Wen-tsün AI Prize in 2017, and the 2012 ACM SIGKDD Dissertation Award (runner-up), and was recognized as a Notable Area Chair of ICLR 2023. Previously, he was a faculty member at UC Santa Barbara. Prior to that, he founded ByteDance AI Lab in 2016 and led its research in NLP, ML, robotics, and drug discovery. He launched ByteDance’s machine translation system VolcTrans and its AI writing system Xiaomingbot, which serve one billion users.
Abstract
The arms race to build increasingly large, powerful language models (LMs) over the past year has been remarkable. Yet incorporating LMs effectively into practical applications that facilitate manual workflows remains challenging. I will discuss LMs’ limiting factors and our efforts to overcome them. I will start with challenges surrounding efficient and robust LM alignment. I will share insights from our recent paper “Self-Instruct” (ACL 2023), where we used a vanilla (unaligned) LM to align itself, an approach that has yielded some success. Then, I will move on to the challenge of tracing the output of LMs to reliable sources, a weakness that makes them prone to hallucination. I will discuss our recent approach of ‘according-to’ prompting, which steers LMs to quote directly from sources observed in their pre-training. If time permits, I will discuss our ongoing project to adapt LMs to interact with web pages. Throughout the presentation, I will highlight our progress and end with open questions about future directions.
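To make the ‘according-to’ idea concrete, here is a minimal sketch of how such a prompt might be constructed; the exact wording and the according_to_prompt helper are illustrative assumptions, not the prompts used in the paper.

```python
# Sketch of "according-to" prompting: rewrite the prompt to nudge the model
# toward quoting a trusted source seen in pre-training (illustrative phrasing).
def according_to_prompt(question: str, source: str = "Wikipedia") -> str:
    return f"{question} Respond by quoting directly from {source} where possible."


prompt = according_to_prompt("What causes tides on Earth?")
print(prompt)
# The modified prompt is then sent to any instruction-following LM; grounding can be
# estimated by measuring n-gram overlap between the response and the source corpus.
```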
Biography
Daniel Khashabi is an assistant professor of computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is interested in building reasoning-driven modular NLP systems that are robust, transparent, and communicative, particularly those that use natural language as the communication medium. Khashabi has published over 40 papers on natural language processing and AI in top-tier venues. His research has won the ACL 2023 Outstanding Paper Award, the NAACL 2022 Best Paper Award, research gifts from the Allen Institute for AI, and an Amazon Research Award in 2023. Before joining Hopkins, he was a postdoctoral fellow at the Allen Institute for AI (2019-2022) and obtained his Ph.D. from the University of Pennsylvania in 2019.
Abstract
Embedding text sequences is a widespread requirement in modern language understanding. Existing approaches focus largely on constant-size representations. This is problematic, as the amount of information contained in text often varies with the length of the input. We propose a solution called Nugget, which encodes language into a representation based on a dynamically selected subset of input tokens. These nuggets are learned through tasks like autoencoding and machine translation, and intuitively segment language into meaningful units. We demonstrate that Nugget outperforms related approaches in tasks involving semantic comparison. Finally, we illustrate that these compact units allow for expanding the contextual window of a language model (LM), suggesting future LMs that can condition on significantly larger amounts of content.
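A minimal sketch of the selection idea described above, assuming a simple linear scorer and a fixed selection ratio; both are illustrative assumptions rather than the actual Nugget architecture.

```python
# Score each token's encoding and keep only a dynamically selected subset
# of hidden states as the sequence representation (illustrative sketch).
import numpy as np


def select_nuggets(token_states: np.ndarray, scorer_w: np.ndarray, ratio: float = 0.1):
    """token_states: (seq_len, hidden); returns the top-scoring states as 'nuggets'."""
    scores = token_states @ scorer_w              # one scalar score per token
    k = max(1, int(len(token_states) * ratio))    # representation size grows with input
    keep = np.sort(np.argsort(scores)[-k:])       # keep selected tokens in original order
    return token_states[keep], keep


rng = np.random.default_rng(0)
states = rng.normal(size=(50, 16))                # stand-in for encoder outputs
nuggets, idx = select_nuggets(states, rng.normal(size=16))
print(nuggets.shape, idx)
```

In training, the scorer would be learned end to end through tasks such as autoencoding or translation, so the selected positions come to mark meaningful segment boundaries.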
Abstract
The growing power of computing and AI promises a near-term future of human-machine teamwork. In this talk, I will present my research group’s efforts in understanding the complex dynamics of human-machine interaction and designing intelligent machines that assist and collaborate with people. I will focus on 1) tools for onboarding machine teammates and authoring machine assistance, 2) methods for detecting, and broadly managing, errors in collaboration, and 3) building blocks of knowledge needed to enable ad hoc human-machine teamwork. I will also highlight our recent work on designing assistive, collaborative machines to support older adults aging in place.
Biography
Chien-Ming Huang is the John C. Malone Assistant Professor in the Department of Computer Science at Johns Hopkins University. His research focuses on designing interactive AI that assists and collaborates with people. He publishes in top-tier venues in HRI, HCI, and robotics, including Science Robotics, HRI, CHI, and CSCW. His research has received media coverage from MIT Technology Review, Tech Insider, and Science Nation. Huang completed his postdoctoral training at Yale University and received his Ph.D. in Computer Science from the University of Wisconsin–Madison. He is a recipient of the NSF CAREER award. https://www.cs.jhu.edu/~cmhuang/
Abstract
The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in the literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general-purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, we explain our modeling choices, training process, and evaluation methodology.
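To illustrate what mixed-dataset training can look like at the data-loading level, the sketch below samples a source corpus in proportion to the token counts quoted above; the sampler itself is an assumption for illustration, not Bloomberg’s training pipeline.

```python
# Sample which corpus the next training document comes from, weighted by
# the approximate token counts from the abstract (illustrative sketch).
import random

TOKENS = {"financial": 363e9, "general": 345e9}
total = sum(TOKENS.values())
weights = {name: n / total for name, n in TOKENS.items()}   # ~0.51 / ~0.49


def sample_source(rng: random.Random) -> str:
    """Pick the corpus for the next training document."""
    return rng.choices(list(weights), weights=list(weights.values()))[0]


rng = random.Random(0)
counts = {name: 0 for name in TOKENS}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
print(counts)  # counts are roughly proportional to the corpus sizes
```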
Biography
Mark Dredze is the John C. Malone Professor of Computer Science at Johns Hopkins University and the Director of Research (Foundations of AI) for the JHU AI-X Foundry. He develops artificial intelligence systems based on natural language processing and explores applications to public health and medicine.
Prof. Dredze is affiliated with the Malone Center for Engineering in Healthcare and the Center for Language and Speech Processing, among others. He holds a joint appointment in the Biomedical Informatics & Data Science Section (BIDS) under the Department of Medicine (DOM), Division of General Internal Medicine (GIM), in the School of Medicine. He obtained his PhD from the University of Pennsylvania in 2009.
Abstract
Visually rich documents (scanned or digital) remain important for many consumer and business use cases. In this talk, we will share recent work from our team in the Document Intelligence Lab at Adobe Research on understanding, creating, and interacting with these documents. First, we’ll share a line of work on building models that decompose and understand the structure of documents to support use cases around document analysis and accessibility. Next, we’ll explore document semantic understanding through a project in which we convert natural language contract clauses to code to support business automation. Finally, we’ll discuss DocEdit, a model and dataset that enables editing structured documents from natural language.
Biographies
Rajiv Jain is a Senior Research Scientist in the Document Intelligence Lab at Adobe Research, where his research focuses on understanding the layout and content of documents and how people interact with them. Prior to joining Adobe, Rajiv was a consultant at DARPA, where he worked on the Media Forensics program to secure digital imagery. He previously served for 10 years as a researcher at the Department of Defense, where he worked on projects involving large-scale systems, computer vision, and network security. He received his PhD in computer science from the University of Maryland, College Park, working in the field of document image analysis and retrieval.
Chris Tensmeyer is a Research Scientist in the Document Intelligence Lab at Adobe Research, where he primarily focuses on multi-modal document layout and content understanding. Since he joined Adobe five years ago, his work has directly impacted popular Adobe features such as mobile Acrobat Liquid Mode, PDF table extraction, handwriting recognition, and scanned document detection. His other research interests include general computer vision and deep learning. He received his PhD in Computer Science from Brigham Young University on the topic of Deep Learning for Document Image Analysis.
Abstract
The field of NLP is in the midst of a disruptive shift, fueled most recently by the advent of large language models (LLMs), with impacts on our methodologies, funding and public perception. While the core technologies and scope of real-world impact of our field may be changing (everything is different!), many of the same key challenges faced since the inception of our field remain (nothing has changed). In this talk I’ll describe recent work characterizing and tackling some of these challenges, notably: data-efficient domain adaptation and lifelong learning. I will also anchor discussion of cycles and shifts in the field by describing findings from a qualitative study of factors shaping the community over time, including culture, incentives, and infrastructure. Through these complementary lenses into the past, present and future, I aim to inspire shared hope, excitement and discussion.
Bio
Emma Strubell is the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University, and a Visiting Scientist at the Allen Institute for Artificial Intelligence. Previously she held research scientist roles at Google and FAIR after earning her doctoral degree in 2019 from the University of Massachusetts Amherst. Her research lies at the intersection of natural language processing and machine learning, with a focus on providing pragmatic solutions to practitioners who wish to gain insights from natural language text via computation- and data-efficient AI. Her work has been recognized with a Madrona AI Impact Award, best paper awards at ACL and EMNLP, and cited in news outlets including the New York Times and Wall Street Journal.
Abstract
Valuable NLP datasets have traditionally shipped with crowdsourced categorical labels. Instructions for collecting these labels are easy to communicate, and the labels themselves are easy to annotate. However, as self-supervised methods get better at nearly everything, human annotations may need to provide more nuanced supervision or enable more detailed evaluation in order to remain worth collecting. One natural extension of existing categorical annotation schemes is to obtain uncertainty information beyond a single hard label. In this talk, I will discuss my recent efforts on introducing scalar labels in place of categorical labels as a form of uncertainty annotation. We demonstrate that, compared to other more obvious annotation schemes for eliciting uncertainty information, scalar labels are significantly more cost-effective to annotate, provide reliable evaluation, and have a theoretical connection to existing predictive uncertainty metrics. In particular, they motivate using other losses as surrogates for calibration evaluation.
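As a rough illustration of how scalar labels can plug into a surrogate loss for calibration-style evaluation, the sketch below compares a soft squared-error loss against scalar labels with a standard hard-label Brier score; the specific loss choice is an illustrative assumption, not necessarily the metric developed in this work.

```python
# Compare a soft loss against scalar human labels with a hard-label Brier score
# (illustrative sketch with made-up numbers).
import numpy as np

model_prob = np.array([0.9, 0.2, 0.7, 0.55])   # model P(label = 1) per example
scalar_label = np.array([0.8, 0.1, 0.9, 0.4])  # human-annotated scalar judgments in [0, 1]
hard_label = (scalar_label >= 0.5).astype(float)

soft_loss = np.mean((model_prob - scalar_label) ** 2)   # surrogate using scalar labels
brier = np.mean((model_prob - hard_label) ** 2)         # standard hard-label Brier score
print(f"soft surrogate: {soft_loss:.3f}  hard Brier: {brier:.3f}")
```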