Seminars

Mon, Apr 3
Student Seminar – Samik Sadhu (JHU) “Importance of Different Temporal Modulations of Speech: A Tale of Two Perspectives” @ Hackerman Hall B17
Apr 3 @ 12:00 pm – 1:15 pm

Abstract

How important are different temporal speech modulations for speech recognition? We answer this question from two complementary perspectives. First, we quantify the amount of phonetic information in the modulation spectrum of speech by computing the mutual information between temporal modulations and frame-wise phoneme labels. From the second perspective, we ask which speech modulations an Automatic Speech Recognition (ASR) system prefers for its operation: data-driven weights are learned over the modulation spectrum and optimized for an end-to-end ASR task. Both methods agree that speech information is mostly contained in slow modulations. Mutual information peaks around 3-6 Hz, which is also the range of modulations most preferred by the ASR. In addition, we show that incorporating this knowledge into ASR systems significantly reduces their dependence on the amount of training data.
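As a rough illustration of the first perspective, the sketch below estimates the mutual information between per-band modulation energies and frame-wise phoneme labels. The frame rate, band edges, input files, and feature pipeline are assumptions made for illustration, not the speaker's exact setup.

```python
# Minimal sketch: mutual information between modulation bands and phonemes.
# Assumes a precomputed log-energy envelope and aligned frame-wise phoneme
# labels; the band edges and frame rate below are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.feature_selection import mutual_info_classif

FRAME_RATE = 100  # frames per second (10 ms hop), an assumption

def modulation_band_energy(envelope, lo_hz, hi_hz, frame_rate=FRAME_RATE):
    """Band-pass the temporal envelope to isolate one modulation band."""
    sos = butter(4, [lo_hz, hi_hz], btype="band", fs=frame_rate, output="sos")
    return sosfiltfilt(sos, envelope) ** 2  # per-frame band energy

envelope = np.load("envelope.npy")  # (n_frames,) log-energy contour (hypothetical file)
phonemes = np.load("phonemes.npy")  # (n_frames,) integer phoneme labels (hypothetical file)

bands = [(0.5, 1.0), (1.0, 3.0), (3.0, 6.0), (6.0, 12.0), (12.0, 25.0)]  # Hz
feats = np.stack([modulation_band_energy(envelope, lo, hi) for lo, hi in bands], axis=1)

# The talk's finding would show up here as a peak in the 3-6 Hz band.
for (lo, hi), mi in zip(bands, mutual_info_classif(feats, phonemes)):
    print(f"{lo}-{hi} Hz: MI ~ {mi:.3f} nats")
```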


Fri, Apr 7
JHU CLSP APSA Roundtable on Learning How to Play with the Machines @ Hackerman Hall B17
Apr 7 @ 12:00 pm – 1:15 pm

Learning How to Play With The Machines: Taking Stock of Where the Collaboration Between Computational and Social Science Stands


Speakers: Jeff Gill, Ernesto Calvo, Hale Sirin, and Antonios Anastasopoulos

Mon, Apr 10
Student Seminar – Ruizhe Huang @ Hackerman Hall B17
Apr 10 @ 12:00 pm – 1:15 pm
Fri, Apr 14
Larry Heck (Georgia Institute of Technology) “The AVA Digital Human: Improving Conversational Interactions through Visually Situated Context” @ Hackerman Hall B17
Apr 14 @ 12:00 pm – 1:15 pm

Abstract

Advances in open-domain Large Language Models (LLMs), starting with BERT and more recently with GPT-4, PaLM, and LLaMA, have facilitated dramatic improvements in conversational systems. These improvements include an unprecedented breadth of conversational interactions between humans and machines while maintaining, and sometimes surpassing, the accuracy of systems trained specifically for known, closed domains. However, many applications still require higher levels of accuracy than pre-trained LLMs can provide, and many studies are underway to close this gap. Broadly speaking, these methods assume the pre-trained models are fixed (due to cost and time) and instead turn to various augmentation methods, including prompting strategies and model adaptation/fine-tuning.

One augmentation strategy leverages the context of the conversation: who the participants are and what is known about them (personal context), what was just said (dialogue context), where the conversation is taking place (geo context), what time of day and season it is (time context), and so on. A powerful form of context is the shared visual setting of the conversation between the human(s) and machine. The shared visual scene may come from a device (phone, smart glasses) or be represented on a screen (browser, maps, etc.). The elements in the visual context can be exploited to ground the natural language interaction, changing the priors of certain concepts and increasing the accuracy of the system. In this talk, I will present some of my historical work in this area as well as my recent work in the AI Virtual Assistant (AVA) Lab at Georgia Tech.
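One way to picture the grounding mechanism described above is to re-rank a recognizer's n-best hypotheses by boosting those that mention entities currently visible in the shared scene. The function, entity set, and boost weight below are a hypothetical sketch, not the AVA Lab's implementation.

```python
# Hypothetical sketch of visually situated grounding: shift the priors of an
# n-best list toward hypotheses that mention on-screen entities.
def rerank_with_visual_context(nbest, visual_entities, boost=2.0):
    """nbest: list of (hypothesis, log_score); visual_entities: strings for
    objects or labels currently present in the shared visual scene."""
    def grounded_score(hyp, log_score):
        mentions = sum(1 for e in visual_entities if e.lower() in hyp.lower())
        return log_score + boost * mentions  # raise prior of grounded readings
    return sorted(nbest, key=lambda h: grounded_score(*h), reverse=True)

# "Pixel's settings" is acoustically close to "picks all settings"; a phone
# product page on screen shifts the prior toward the grounded reading.
nbest = [("picks all settings", -4.1), ("Pixel's settings", -4.3)]
print(rerank_with_visual_context(nbest, {"Pixel", "settings", "camera"}))
```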

Bio

Dr. Larry Heck is a Professor with a joint appointment in the School of Electrical and Computer Engineering and the School of Interactive Computing at the Georgia Institute of Technology. He holds the Rhesa S. Farmer Distinguished Chair of Advanced Computing Concepts and is a Georgia Research Alliance Eminent Scholar. He received the BSEE from Texas Tech University (1986) and the MSEE and PhD in EE from the Georgia Institute of Technology (1989 and 1991). He is a Fellow of the IEEE, was inducted into the Academy of Distinguished Engineering Alumni at Georgia Tech, and received the Distinguished Engineer Award from the Texas Tech University Whitacre College of Engineering. He was a Senior Research Engineer with SRI (1992-98), Vice President of R&D at Nuance (1998-2005), Vice President of Search and Advertising Sciences at Yahoo! (2005-2009), Chief Scientist of the Microsoft Speech products and Distinguished Engineer in Microsoft Research (2009-2014), Principal Scientist with Google Research (2014-2017), and CEO of Viv Labs and SVP at Samsung (2017-2021).


Mon, Apr 17
Paco Guzman (Meta AI) “Building a Universal Translation System to Break Down Language Barriers” @ Hackerman Hall B17
Apr 17 @ 12:00 pm – 1:15 pm

Abstract

Machine Translation has the ultimate goal of eliminating language barriers. However, the field has focused mainly on a few languages, leaving many low-resource languages without support. In this talk, I will discuss the challenges of bringing translation support to 200 written languages and beyond.
First, I will talk about the No Language Left Behind project, where we took on this challenge by contextualizing the need for low-resource translation support through exploratory interviews with native speakers. We then created datasets and models aimed at narrowing the performance gap between low- and high-resource languages, and proposed multiple architectural and training improvements to counteract overfitting while training on thousands of language pairs/tasks. We evaluated performance on over 40,000 different translation directions.
Afterwards, I will discuss the challenges of pushing translation beyond text for languages without written standards, such as Hokkien.

Our models achieve state-of-the-art performance and lay important groundwork towards realizing a universal translation system. At the same time, we continue to make open-source contributions so that everyone can advance research on the languages they care about.

Bio

Paco is a Research Scientist Manager supporting translation teams at Meta AI (FAIR). He works in the field of machine translation, with a focus on low-resource translation (e.g., NLLB, FLORES) and the aim of breaking language barriers. He joined Meta in 2016. His research has been published in top-tier NLP venues such as ACL and EMNLP. He served as research co-chair at AMTA (2020-2022) and has organized several research competitions focused on low-resource translation and data filtering. Paco obtained his PhD from ITESM in Mexico, was a visiting scholar at the LTI at CMU from 2008-2009, and participated in DARPA’s GALE evaluation program. He was a post-doc and scientist at the Qatar Computing Research Institute in Qatar from 2012 to 2016.

Fri, Apr 21
Karthik Narasimhan (Princeton University) “Towards General-Purpose Language-Enabled Agents: Machines that can Read, Think and Act” @ Hackerman Hall B17
Apr 21 @ 12:00 pm – 1:15 pm

Abstract

Large language models (LLMs) have ushered in exciting capabilities in language understanding and text generation, with systems like ChatGPT holding fluent dialogs with users and being almost indistinguishable from humans. While this has raised conversational systems and chatbots to a new level, it also presents exciting new opportunities for building artificial agents with improved decision-making capabilities. Specifically, the ability to reason with language allows us to build agents that can 1) execute complex action sequences to effect change in the world, 2) learn new skills by ‘reading’ in addition to ‘doing’, and 3) support easier personalization and control over their behavior. In this talk, I will demonstrate how we can build such language-enabled agents across various use cases such as multi-hop question answering, web interaction, and robotic tool manipulation. Finally, I will discuss some dangers of using these LLM-based systems and some challenges that lie ahead in ensuring their safe use.
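As a schematic of the read-think-act loop behind such agents (in the spirit of ReAct-style prompting), the sketch below interleaves model “thoughts” with tool calls. The llm callable, tool registry, and the "Action:"/"Final Answer:" markers are illustrative conventions, not the exact systems from the talk.

```python
# Schematic read-think-act loop for a language-enabled agent. The model is any
# prompt-to-text function; tools map names to callables, e.g. {"search": fn}.
def run_agent(llm, tools, task, max_steps=8):
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")        # model reasons in language
        transcript += f"Thought:{step}\n"
        if "Final Answer:" in step:                # agent decides it is done
            return step.split("Final Answer:")[1].strip()
        if "Action:" in step:                      # e.g. "Action: search[query]"
            name, _, arg = step.split("Action:")[1].strip().partition("[")
            observation = tools[name.strip()](arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"  # feed result back
    return None  # step budget exhausted
```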

Biography

Karthik Narasimhan is an assistant professor in the Computer Science department at Princeton University and a co-director of the Princeton NLP group. His research spans natural language processing and reinforcement learning, with the goal of building intelligent agents that learn to operate in the world through both their own experience (“doing things”) and existing human knowledge (“reading about things”). Karthik received his PhD from MIT in 2017 and spent a year as a visiting research scientist at OpenAI, contributing to the GPT language model, before joining Princeton in 2018. His research has been recognized by an NSF CAREER award, a Google Research Scholar Award, an Amazon Research Award (2019), a Bell Labs runner-up prize, and outstanding paper awards at EMNLP (2015, 2016) and NeurIPS (2022).

Mon, Apr 24
Student Seminar – Brian Lu @ Hackerman Hall B17
Apr 24 @ 12:00 pm – 1:15 pm
Fri, Apr 28
Becky Passonneau (Penn State University) “Automated Support to Scaffold Students’ Short- and Long-form STEM Writing” @ Hackerman Hall B17
Apr 28 @ 12:00 pm – 1:15 pm

Abstract

Automated analysis of student writing has the potential to provide alternatives to selected-response questions such as multiple choice, and to enable teachers and instructors to assess students’ reasoning skills based on their long-form writing. Further, automated support for assessing both short answers and long passages could give students a smoother trajectory towards mastery of written communication. Our methods focus on the specific ideas students express, in order to support formative assessment through different kinds of feedback that scaffold students’ abilities to reason and communicate. In this talk, I review the PSU NLP lab’s methods for automated assessment of different forms of writing from younger and older students. I will briefly illustrate highly curated datasets created in collaboration with researchers in STEM education, results from deploying an earlier content analysis tool on middle school physics essays, and very preliminary results on assessing college students’ physics lab reports. I will also present our current work on short answer assessment using a novel recurrent relation network that incorporates contrastive learning.

Bio

Becky Passonneau has been a Professor in the Department of Computer Science and Engineering at Penn State University since 2016, when she joined as its first NLP researcher. Since then, the NLP faculty has grown to include Rui Zhang and Wenpeng Yin. Becky’s research in natural language processing addresses computational pragmatics: the investigation of language as a system of interactive behavior that serves a wide range of purposes. She received her PhD in Linguistics from the University of Chicago in 1985 and worked at several academic and industry research labs before joining Penn State. Her work is reported in over 140 publications in journals and refereed conference proceedings, and has been funded through 27 sponsored projects from 16 sources, including government agencies, corporate sponsors, corporate gifts, and foundations.

Fri, Sep 1
Lei Li (Carnegie Mellon University) “Empowering Responsible Use of Large Language Models” @ Hackerman Hall B17
Sep 1 @ 12:00 pm – 1:15 pm

Abstract

Large language models (LLMs) have demonstrated incredible power, but they also possess vulnerabilities that can lead to misuse and potential attacks. In this presentation, we will address two fundamental questions regarding the responsible utilization of LLMs: (1) How can we accurately identify AI-generated text? (2) What measures can safeguard the intellectual property of LLMs? We will introduce two recent watermarking techniques designed for text and models, respectively. Our discussion will encompass the theoretical underpinnings that ensure the correctness of watermark detection, along with robustness against evasion attacks. Furthermore, we will showcase empirical evidence validating their effectiveness. These findings establish a solid technical groundwork for policymakers, legal professionals, and generative AI practitioners alike.
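For concreteness, here is a minimal sketch of detecting a “green-list” text watermark, one representative scheme from the recent literature rather than necessarily the techniques presented in this talk. At generation time, the sampler favors tokens whose keyed hash with the preceding token falls in a “green” set; detection then counts green tokens and computes a z-score.

```python
# Minimal green-list watermark detector (illustrative scheme, not necessarily
# the talk's). Under no watermark, each token is green with probability
# ~gamma, so a large z-score indicates the text was likely watermarked.
import hashlib
import math

def is_green(prev_token: str, token: str, gamma: float = 0.5) -> bool:
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] / 255.0 < gamma  # pseudo-random bit keyed by the token pair

def watermark_z_score(tokens, gamma=0.5):
    n = len(tokens) - 1  # number of adjacent token pairs
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

text = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {watermark_z_score(text):.2f}")  # near 0 for unwatermarked text
```

In schemes of this kind, the correctness guarantee is statistical: unwatermarked text yields an approximately standard-normal z-score, so the detection threshold directly bounds the chance of flagging human-written text.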

Biography

Lei Li is an Assistant Professor in the Language Technologies Institute at Carnegie Mellon University. He received his Ph.D. from the Carnegie Mellon University School of Computer Science. He is a recipient of the ACL 2021 Best Paper Award, the CCF Young Elite Award (2019), the CCF Distinguished Speaker honor (2017), the Wu Wen-tsün AI Prize (2017), and the 2012 ACM SIGKDD dissertation award (runner-up), and was recognized as a Notable Area Chair of ICLR 2023. Previously, he was a faculty member at UC Santa Barbara. Prior to that, he founded the ByteDance AI Lab in 2016 and led its research in NLP, ML, robotics, and drug discovery. He launched ByteDance’s machine translation system VolcTrans and AI writing system Xiaomingbot, serving one billion users.

Fri, Sep 8
Daniel Khashabi (Johns Hopkins University) “Building More Helpful Language Models” @ Hackerman Hall B17
Sep 8 @ 12:00 pm – 1:15 pm

Abstract

The arms race to build increasingly large, powerful language models (LMs) in the past year has been remarkable. Yet incorporating LMs effectively into practical applications that facilitate manual workflows remains challenging. I will discuss LMs’ limiting factors and our efforts to overcome them. I will start with challenges surrounding efficient and robust LM alignment, sharing insights from our recent paper “Self-Instruct” (ACL 2023), where we used a vanilla (unaligned) LM to align itself, an approach that has yielded some success. Then, I will move on to the challenge of tracing the output of LMs to reliable sources, a weakness that makes them prone to hallucination. I will discuss our recent approach of ‘according-to’ prompting, which steers LMs to quote directly from sources observed in their pre-training. If time permits, I will discuss our ongoing project to adapt LMs to interact with web pages. Throughout the presentation, I will highlight our progress and end with questions for future work.
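As a small illustration of the idea, ‘according-to’ prompting simply appends a grounding phrase that steers the model toward quoting a trusted source it saw during pre-training. The helper and the placeholder model call below are hypothetical.

```python
# Minimal sketch of 'according-to' prompting: append a grounding phrase so the
# model quotes from a source observed in pre-training. `ask_llm` is a
# placeholder for whatever completion API is available.
def according_to_prompt(question: str, source: str = "Wikipedia") -> str:
    return f"{question} Respond by quoting directly from {source}."

prompt = according_to_prompt("What causes the aurora borealis?")
# answer = ask_llm(prompt)  # hypothetical model call
print(prompt)
```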

Biography

Daniel Khashabi is an assistant professor in computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is interested in building reasoning-driven modular NLP systems that are robust, transparent, and communicative, particularly those that use natural language as the communication medium. Khashabi has published over 40 papers on natural language processing and AI in top-tier venues. His research has won the ACL 2023 Outstanding Paper Award, the NAACL 2022 Best Paper Award, research gifts from the Allen Institute for AI, and an Amazon Research Award (2023). Before joining Hopkins, he was a postdoctoral fellow at the Allen Institute for AI (2019-2022) and obtained his Ph.D. from the University of Pennsylvania in 2019.
