Seminars

Hui Guan (University of Massachusetts Amherst) “Towards Accurate and Efficient Edge Computing Via Multi-Task Learning” @ Hackerman Hall B17
Fri, Nov 11 @ 12:00 pm – 1:15 pm

Abstract

AI-powered applications increasingly adopt Deep Neural Networks (DNNs) for many prediction tasks, leading to multiple DNNs running on resource-constrained devices. Supporting many models simultaneously on a device is challenging because computation, energy, and storage costs grow linearly with the number of models. An effective approach to this problem is multi-task learning (MTL), in which a set of tasks is learned jointly so that some parameters can be shared among tasks. MTL creates multi-task models based on common DNN architectures and has shown significantly reduced inference costs and improved generalization performance in many machine learning applications. In this talk, we will introduce our recent efforts to leverage MTL to improve accuracy and efficiency for edge computing. The talk will introduce multi-task architecture design systems that can automatically identify resource-efficient multi-task models with low inference costs and high task accuracy.
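
The parameter-sharing idea is easy to see in code. Below is a minimal, hypothetical PyTorch sketch of hard parameter sharing (an illustration of the general technique, not the speaker's system): one shared backbone feeds several lightweight task heads, so serving k tasks costs roughly one backbone instead of k full models.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Hard parameter sharing: one shared backbone, one small head per task."""
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Shared parameters: paid for once, regardless of the number of tasks.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Task-specific parameters: a cheap linear head per task.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, out_dim) for out_dim in task_out_dims]
        )

    def forward(self, x):
        shared = self.backbone(x)                      # one backbone pass
        return [head(shared) for head in self.heads]  # k cheap head passes

model = MultiTaskModel(in_dim=64, hidden_dim=128, task_out_dims=[10, 5, 2])
outputs = model(torch.randn(8, 64))                    # one output per task
print([o.shape for o in outputs])
```
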
Biography
Hui Guan is an Assistant Professor in the College of Information and Computer Sciences (CICS) at the University of Massachusetts Amherst, the flagship campus of the UMass system. She received her Ph.D. in Electrical Engineering from North Carolina State University in 2020. Her research lies at the intersection of machine learning and systems, with an emphasis on improving the speed, scalability, and reliability of machine learning through innovations in algorithms and programming systems. Her current research focuses on both algorithm and system optimizations of deep multi-task learning and graph machine learning.
Angela Fan (Meta AI Research) “No Language Left Behind: Scaling Human-Centered Machine Translation” @ Hackerman Hall B17
Fri, Nov 18 @ 12:00 pm – 1:15 pm

Abstract

Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe, high-quality results, all while keeping ethical considerations in mind? In this talk, I introduce No Language Left Behind, an initiative to break language barriers for low-resource languages. In No Language Left Behind, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system in an open-source manner.
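
The direction count follows from simple combinatorics: roughly 200 languages give about 200 × 199 ≈ 40,000 ordered source-target pairs. Because the NLLB models were released openly, they can also be tried directly; the sketch below assumes the public facebook/nllb-200-distilled-600M checkpoint on Hugging Face, and the exact API details may shift between transformers versions.

```python
# A minimal sketch using an openly released NLLB checkpoint via Hugging Face
# transformers. The checkpoint name and FLORES-style language codes come from
# the public release; treat the exact call signatures as assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Language barriers should not decide who gets information.",
                   return_tensors="pt")
generated = model.generate(
    **inputs,
    # Force the decoder to start in the target language (here: Zulu).
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("zul_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```
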

Biography

Angela is a research scientist at Meta AI Research in New York, focusing on supporting efforts in speech and language research. Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before working on translation, Angela focused on research in text generation and on-device models for NLP and computer vision.

Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning” @ Hackerman Hall B17
Fri, Dec 2 @ 12:00 pm – 1:15 pm

Abstract

One of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can also be a more resource-efficient solution than a general-purpose model, because it focuses on a particular sub-problem for which a smaller model architecture can be good enough. However, training a personalized model requires data from the particular test-time user, which are not always available due to their private nature and technical challenges. Furthermore, such data tend to be unlabeled, as they can be collected only at test time, after the system is deployed to user devices. One could rely on the generalization power of a generic model, but such a model can be too computationally and spatially complex for real-time processing on a resource-constrained device. In this talk, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models require zero or only a few data samples from the test-time users, while still achieving the personalization goal. To this end, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way, it is a step towards a more available and affordable AI for society.
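
One illustrative recipe for adapting to a user without labeled data is teacher-student pseudo-labeling: a large generic enhancer produces targets from the user's own unlabeled noisy recordings, and a compact on-device model is fine-tuned on them. The PyTorch sketch below is a toy illustration of that idea under stated assumptions (randomly initialized stand-in networks, simulated audio), not the specific methods from the talk.

```python
# Toy teacher-student personalization sketch. Networks and data are random
# stand-ins; in practice the teacher would be a large pretrained enhancer.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Conv1d(1, 64, 9, padding=4), nn.ReLU(),
                        nn.Conv1d(64, 1, 9, padding=4))   # big, generic
student = nn.Sequential(nn.Conv1d(1, 8, 9, padding=4), nn.ReLU(),
                        nn.Conv1d(8, 1, 9, padding=4))    # small, personal

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(10):
    noisy = torch.randn(4, 1, 16000)       # unlabeled audio from the device
    with torch.no_grad():
        pseudo_clean = teacher(noisy)      # teacher output as pseudo-label
    opt.zero_grad()
    loss = loss_fn(student(noisy), pseudo_clean)
    loss.backward()
    opt.step()
# After adaptation, only the small student needs to run on-device.
```
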

Biography

Minje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University, where he leads his research group, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic, consulting for Amazon Lab126. At IU, he is affiliated with various programs and labs such as Data Science, Cognitive Science, the Dept. of Statistics, and the Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC, he worked as a researcher at ETRI, a national lab in Korea, from 2006 to 2011. Before then, he received his Master’s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honors) in 2006 and 2004, respectively. He is a recipient of various awards including the NSF CAREER Award (2021), the IU Trustees Teaching Award (2021), the IEEE SPS Best Paper Award (2020), and Google and Starkey’s grants for outstanding student papers in ICASSP 2013 and 2014, respectively. He is an IEEE Senior Member and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He serves as an Associate Editor for the EURASIP Journal on Audio, Speech, and Music Processing, and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer, program committee member, or area chair for major machine learning and signal processing venues. He has filed more than 50 patent applications as an inventor.

Mark Hasegawa-Johnson (University of Illinois Urbana-Champaign) “Zipf’s Law Suggests a Three-Pronged Approach to Inclusive Speech Recognition” @ Hackerman Hall B17
Fri, Dec 9 @ 12:00 pm – 1:15 pm

Abstract

Zipf’s law is commonly glossed by the aphorism “infrequent words are frequent,” but in practice, it has often meant that there are three types of words: frequent, infrequent, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language, for example, is approximately 1.44 billion / N^1.09. In languages with sufficient data, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data, unsupervised ASR methods can be proven to converge, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability, for example, can cause speech patterns that were never seen in the training database, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
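
Taking the quoted power law at face value makes the long tail concrete. The short computation below is only an illustration of that formula, not data from the talk:

```python
# Predicted native speakers of the N-th most spoken language under the
# quoted power law: speakers(N) ≈ 1.44e9 / N**1.09.
for n in (1, 10, 100, 1000):
    print(f"rank {n:>4}: ~{1.44e9 / n**1.09:,.0f} native speakers")
# Rank 1 predicts ~1.44 billion; by rank 1000 the prediction falls below a
# million, which is why most languages have little or no training data.
```
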

Biography

Mark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. He has published research in speech production and perception, source separation, voice conversion, and low-resource automatic speech recognition.

CLSP Student Seminar – Kelly Marchisio – “Efficient Multilingual NLP” @ Hackerman Hall B17
Mon, Jan 23 @ 12:00 pm – 1:15 pm

Abstract

Kelly’s research spans three broad directions in multilingual NLP and representation learning: (1) diagnosing and fixing failure modes in translation technologies, (2) data-efficient and low-resource NLP, and (3) compute-efficient NLP. This talk is an overview of five years of PhD work, spanning projects on unsupervised machine translation and bilingual lexicon induction, the mathematical framing of translation tasks, and efficient adaptation of large language models to new languages. Kelly will also discuss future research directions, including multi-modal representation learning, compression, speech translation, and sign-language translation.

CLSP Student Seminar @ Hackerman Hall B17
Fri, Jan 27 @ 12:00 pm – 1:15 pm

Daniel Fried (CMU) @ Hackerman Hall B17
Mon, Jan 30 @ 12:00 pm – 1:15 pm

Sasha Rush (Cornell University) “Pretraining Without Attention” @ Hackerman Hall B17
Fri, Feb 3 @ 12:00 pm – 1:15 pm

Abstract

Transformers are essential to pretraining. As we approach five years of BERT, the connection between attention as an architecture and transfer learning remains key to this central thread in NLP. Other architectures such as CNNs and RNNs have been used to replicate pretraining results, but these either fail to reach the same accuracy or require supplemental attention layers. This work revisits the seminal BERT result and considers pretraining without attention. We consider replacing self-attention layers with recently developed approaches for long-range sequence modeling and with transformer architecture variants. Specifically, inspired by recent papers like the structured state space sequence model (S4), we use simple routing layers based on state-space models (SSMs) and a bidirectional model architecture based on multiplicative gating. We discuss the results of the proposed Bidirectional Gated SSM (BiGS) and present a range of analyses of its properties. Results show that architecture does seem to have a notable impact on downstream performance and a different inductive bias that is worth exploring further.
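
To make the two named ingredients concrete, here is a toy, attention-free sequence mixer in PyTorch: a learned exponential-moving-average recurrence (a crude stand-in for an SSM layer) run in both directions and combined through multiplicative gating. This is an illustrative sketch of the general mechanisms, not the BiGS implementation.

```python
import torch
import torch.nn as nn

class ToyBiGatedMixer(nn.Module):
    """Attention-free mixer: a per-channel EMA recurrence (crude SSM stand-in)
    run forward and backward, combined with a multiplicative gate."""
    def __init__(self, d_model):
        super().__init__()
        self.decay = nn.Parameter(torch.zeros(d_model))  # per-channel decay
        self.gate_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(2 * d_model, d_model)

    def scan(self, x):  # x: (batch, seq, d_model)
        a = torch.sigmoid(self.decay)            # decay in (0, 1)
        h, state = [], torch.zeros_like(x[:, 0])
        for t in range(x.size(1)):               # linear-time recurrence
            state = a * state + (1 - a) * x[:, t]
            h.append(state)
        return torch.stack(h, dim=1)

    def forward(self, x):
        fwd = self.scan(x)                       # left-to-right context
        bwd = self.scan(x.flip(1)).flip(1)       # right-to-left context
        mixed = self.out_proj(torch.cat([fwd, bwd], dim=-1))
        return mixed * torch.sigmoid(self.gate_proj(x))  # multiplicative gate

layer = ToyBiGatedMixer(d_model=16)
print(layer(torch.randn(2, 10, 16)).shape)  # torch.Size([2, 10, 16])
```
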

Biography

Alexander “Sasha” Rush is an Associate Professor at Cornell Tech. His work is at the intersection of natural language processing and generative modeling with applications in text generation, efficient inference, and controllability. He has written several popular open-source software projects supporting NLP research and data science, and works part-time as a researcher at Hugging Face. He is the secretary of ICLR and developed software used to run virtual conferences during COVID. His work has received paper and demo awards at major NLP, visualization, and hardware conferences, an NSF Career Award, and a Sloan Fellowship. He tweets and blogs, mostly about coding and ML, at @srush_nlp.
Sharon Levy (University of California, Santa Barbara) “Responsible AI via Responsible Large Language Models” @ Hackerman Hall B17
Mon, Feb 6 @ 12:00 pm – 1:15 pm

Abstract

While large language models have advanced the state-of-the-art in natural language processing, these models are trained on large-scale datasets, which may include harmful information. Studies have shown that, as a result, the models exhibit social biases and generate misinformation after training. In this talk, I will discuss my work on analyzing and interpreting the risks of large language models across the areas of fairness, trustworthiness, and safety. I will first describe my research on the detection of dialect bias between African American English (AAE) and Standard American English (SAE). The second part investigates the trustworthiness of models through the memorization and subsequent generation of conspiracy theories. I will end my talk with recent work in AI safety regarding text that may lead to physical harm.
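
One crude way to probe for this kind of dialect disparity (sketched below as an illustration, not necessarily the methodology from the talk) is to compare a pretrained language model's loss on meaning-matched sentence pairs written in AAE and SAE; the pairs here are hypothetical examples.

```python
# Compare a language model's per-token loss on meaning-matched AAE/SAE pairs.
# A systematic gap is a simple red flag for dialect bias. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

pairs = [  # hypothetical, meaning-matched pairs
    ("He be working every day.", "He works every day."),
    ("They ain't got no car.", "They do not have a car."),
]

def nll(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()  # mean token NLL

for aae, sae in pairs:
    print(f"AAE {nll(aae):.2f} vs. SAE {nll(sae):.2f}")
```
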

Biography

Sharon is a 5th-year Ph.D. candidate at the University of California, Santa Barbara, where she is advised by Professor William Wang. Her research interests lie in natural language processing, with a focus on Responsible AI. Sharon’s research spans the subareas of fairness, trustworthiness, and safety, with publications in ACL, EMNLP, WWW, and LREC. She has spent summers interning at AWS, Meta, and Pinterest. Sharon is a 2022 EECS Rising Star and a current recipient of the Amazon Alexa AI Fellowship for Responsible AI.

Mark Yatskar (University of Pennsylvania) “Understanding Dataset Biases: Behavioral Indicators During Annotation and Contrastive Mitigations” @ Hackerman Hall B17
Fri, Feb 10 @ 12:00 pm – 1:15 pm

Abstract

Biases in datasets, or unintentionally introduced spurious cues, are a common source of misspecification in machine learning. Performant models trained on such data can reproduce gender stereotypes or be brittle under distribution shift. In this talk, we present several results in multimodal and question answering applications studying sources of dataset bias, along with several mitigation methods. We propose approaches where known dimensions of dataset bias are explicitly factored out of a model during learning, without needing to modify the data. Finally, we ask whether dataset biases can be attributed to annotator behavior during annotation. Drawing inspiration from work in psychology on cognitive biases, we show that certain behavioral patterns are highly indicative of the creation of problematic (but valid) data instances in question answering. We give evidence that many existing observations about how dataset bias propagates to models can be attributed to data samples created by the annotators we identify.
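
One common way to factor a known bias out of a model without modifying the data is a product-of-experts ensemble, as in ensemble-based debiasing work such as Clark et al. (2019): a weak bias-only model absorbs the spurious cue during training, and only the main model is kept at test time. The sketch below illustrates that general technique, not necessarily the speaker's exact method.

```python
import torch
import torch.nn.functional as F

def debiased_loss(main_logits, bias_logits, labels):
    """Product-of-experts debiasing sketch: train the main model on the
    combination of its logits with a frozen bias-only model's logits.
    Examples the bias model already gets right contribute little gradient,
    pushing the main model toward the non-spurious signal."""
    combined = F.log_softmax(main_logits, dim=-1) + \
               F.log_softmax(bias_logits.detach(), dim=-1)
    return F.cross_entropy(combined, labels)

# Usage: bias_logits come from a weak model that sees only the biased feature
# (e.g., the hypothesis alone in NLI, or the question alone in QA).
main_logits = torch.randn(8, 3, requires_grad=True)
bias_logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
loss = debiased_loss(main_logits, bias_logits, labels)
loss.backward()
print(loss.item())
```
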

Biography

Mark Yatskar is an Assistant Professor at the University of Pennsylvania in the Department of Computer and Information Science. He did his PhD at the University of Washington, co-advised by Luke Zettlemoyer and Ali Farhadi. He was a Young Investigator at the Allen Institute for Artificial Intelligence for several years, working with their computer vision team, PRIOR. His work spans Natural Language Processing, Computer Vision, and Fairness in Machine Learning. He received a Best Paper Award at EMNLP for work on gender bias amplification, and his work has been featured in Wired and the New York Times.
