Seminars

Oct 7 (Fri)
Ariya Rastrow (Amazon) @ Hackerman Hall B17
Oct 7 @ 12:00 pm – 1:15 pm
Oct 14 (Fri)
He He (New York University) “What We Talk about When We Talk about Spurious Correlations in NLP” @ Hackerman Hall B17
Oct 14 @ 12:00 pm – 1:15 pm

Abstract

Model robustness and spurious correlations have received increasing attention in the NLP community, both in methods and in evaluation. However, the term “spurious correlation” is overloaded and can refer to any undesirable shortcut learned by the model, as judged by domain experts.

When designing mitigation algorithms, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However, many features in NLP (e.g. word overlap and negation) are not spurious in the way that a background is spurious for classifying objects in an image. On the contrary, they carry important information that humans need to make predictions. In this talk, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization in representation, learning, and evaluation.

Biography

He He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland, College Park. Before joining NYU, she spent a year at AWS AI, and before that she was a postdoc at Stanford University. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focuses on robust language understanding, collaborative text generation, and understanding the capabilities and issues of large language models.

Oct 17 (Mon)
David Chiang (University of Notre Dame) “Exact Recursive Probabilistic Programming” (with Colin McDonald, Darcey Riley, Kenneth Sible (Notre Dame) and Chung-chieh Shan (Indiana)) @ Hackerman Hall B17
Oct 17 @ 12:00 pm – 1:15 pm

Abstract

Recursive calls over recursive data are widely useful for generating probability distributions, and probabilistic programming allows computations over these distributions to be expressed in a modular and intuitive way. Exact inference is also useful, but unfortunately, existing probabilistic programming languages do not perform exact inference on recursive calls over recursive data, forcing programmers to code many applications manually. We introduce a probabilistic language in which a wide variety of recursion can be expressed naturally, and inference carried out exactly. For instance, probabilistic pushdown automata and their generalizations are easy to express, and polynomial-time parsing algorithms for them are derived automatically. We eliminate recursive data types using program transformations related to defunctionalization and refunctionalization. These transformations are assured correct by a linear type system, and a successful choice of transformations, if there is one, is guaranteed to be found by a greedy algorithm. I will also describe the implementation of this language in two phases: first, compilation to a factor graph grammar, and second, computing the sum-product of the factor graph grammar.
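To make concrete the kind of quantity that exact inference over recursive programs must compute, here is a toy Python sketch (my own illustration under simplified assumptions, not the language or implementation presented in the talk). It computes, both in closed form and by fixed-point iteration, the termination probability of the recursive rule S → S S (probability p) versus S → “a” (probability 1 − p):

```python
# A toy illustration (not the language from the talk) of the kind of quantity
# exact inference over recursive programs must compute. For the recursive rule
#   S -> S S   with probability p
#   S -> "a"   with probability 1 - p
# the probability z that S eventually terminates is the least fixed point of
#   z = (1 - p) + p * z**2.

def termination_prob(p: float) -> float:
    """Least nonnegative solution of z = (1 - p) + p*z^2 (exact, closed form)."""
    if p == 0.0:
        return 1.0
    # p*z^2 - z + (1 - p) = 0 factors as (z - 1)(p*z - (1 - p)) = 0,
    # so the roots are 1 and (1 - p)/p; take the smaller one.
    return min(1.0, (1.0 - p) / p)

def termination_prob_iter(p: float, iters: int = 5000) -> float:
    """The same quantity by fixed-point iteration, converging from below."""
    z = 0.0
    for _ in range(iters):
        z = (1.0 - p) + p * z * z
    return z

for p in (0.3, 0.5, 0.7):
    print(p, termination_prob(p), round(termination_prob_iter(p), 4))
```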

Biography

David Chiang (PhD, University of Pennsylvania, 2004) is an associate professor in the Department of Computer Science and Engineering at the University of Notre Dame. His research is on computational models for learning human languages, particularly how to translate from one language to another. His work on applying formal grammars and machine learning to translation has been recognized with two best paper awards (at ACL 2005 and NAACL HLT 2009). He has received research grants from DARPA, NSF, Google, and Amazon, has served on the executive board of NAACL and the editorial board of Computational Linguistics and JAIR, and is currently on the editorial board of Transactions of the ACL.
Oct 24 (Mon)
Fei Sha (University of Southern California) “Extracting Information from Text into Memory for Knowledge-Intensive Tasks” @ Hackerman Hall B17
Oct 24 @ 12:00 pm – 1:15 pm

Abstract

Modern learning architectures for natural language processing have been very successful in incorporating a huge amount of text into their parameters. However, by and large, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk, I will give a few examples of alternative architectures that tackle these challenges. In particular, we can improve the performance of such (language) models by representing, storing, and accessing knowledge in a dedicated memory component.

This talk is based on several joint works with Yury Zemlyanskiy (Google Research), Michiel de Jong (USC and Google Research), William Cohen (Google Research and CMU) and our other collaborators in Google Research.
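For readers unfamiliar with the idea, the sketch below is a generic illustration of a dedicated key-value memory (names and sizes are invented; this is not the specific architecture from these works): facts are stored as key/value vectors, retrieved with dot-product attention over the keys, and mixed back into the model’s hidden state.

```python
# Generic sketch of a dedicated memory component (illustrative assumptions only):
# facts live in an external table of key/value vectors; the model reads from it
# with dot-product attention and adds the retrieved summary to its hidden state.

import numpy as np

rng = np.random.default_rng(0)
d = 64                        # hidden / memory dimension (made up)
num_facts = 1000              # size of the external memory (made up)

memory_keys = rng.normal(size=(num_facts, d))     # one key per stored fact
memory_values = rng.normal(size=(num_facts, d))   # value returned for that fact

def read_memory(query: np.ndarray, top_k: int = 8) -> np.ndarray:
    """Attend over the top-k closest memory entries and return a summary vector."""
    scores = memory_keys @ query                     # dot-product relevance
    top = np.argpartition(scores, -top_k)[-top_k:]   # restrict to top-k entries
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                         # softmax over the top-k
    return weights @ memory_values[top]              # weighted value summary

hidden = rng.normal(size=d)                # stand-in for a token's hidden state
augmented = hidden + read_memory(hidden)   # inject retrieved knowledge
```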

Biography

Fei is a research scientist at Google Research. Before that, he was a Professor of Computer Science at the University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing, computer vision, robotics, and, more recently, weather forecasting and climate modeling. He has a Ph.D. (2007) in Computer and Information Science from the University of Pennsylvania, and B.Sc. and M.Sc. degrees in Biomedical Engineering from Southeast University (Nanjing, China).

Nov 4 (Fri)
Berrak Sisman (University of Texas at Dallas) “Speech Synthesis and Voice Conversion: Machine Learning can Mimic Anyone’s Voice” @ Hackerman Hall B17
Nov 4 @ 12:00 pm – 1:15 pm

Abstract

Voice conversion (VC) is a significant aspect of artificial intelligence. It is the study of how to convert one’s voice to sound like that of another without changing the linguistic content. Voice conversion belongs to the broader technical field of speech synthesis, which converts text to speech or changes the properties of speech, for example, voice identity, emotion, and accent. Voice conversion involves multiple speech processing techniques, such as speech analysis, spectral conversion, prosody conversion, speaker characterization, and vocoding. With the recent advances in theory and practice, we are now able to produce human-like voice quality with high speaker similarity. In this talk, Dr. Sisman will present the recent advances in voice conversion and discuss their promise and limitations. Dr. Sisman will also provide a summary of the available resources for expressive voice conversion research.

Biography

Dr. Berrak Sisman (Member, IEEE) received the Ph.D. degree in electrical and computer engineering from the National University of Singapore in 2020, fully funded by the A*STAR Graduate Academy under the Singapore International Graduate Award (SINGA). She is currently a tenure-track Assistant Professor in the Department of Electrical and Computer Engineering of the Erik Jonsson School at the University of Texas at Dallas, United States. Prior to joining UT Dallas, she was a faculty member at the Singapore University of Technology and Design (2020-2022) and a Postdoctoral Research Fellow at the National University of Singapore (2019-2020). She was an exchange doctoral student at the University of Edinburgh and a visiting scholar at its Centre for Speech Technology Research (CSTR) in 2019, and a visiting researcher at the RIKEN Advanced Intelligence Project in Japan in 2018. Her research is focused on machine learning, signal processing, emotion, speech synthesis and voice conversion.

Dr. Sisman has served as an Area Chair at INTERSPEECH 2021, INTERSPEECH 2022, and IEEE SLT 2022, and as Publication Chair at ICASSP 2022. She has been elected a member of the IEEE Speech and Language Processing Technical Committee (SLTC) in the area of Speech Synthesis for the term January 2022 to December 2024. She plays leadership roles in conference organization and is active in technical committees. She has served as the General Coordinator of the Student Advisory Committee (SAC) of the International Speech Communication Association (ISCA).

Nov 11 (Fri)
Hui Guan (University of Massachusetts Amherst) “Towards Accurate and Efficient Edge Computing Via Multi-Task Learning” @ Hackerman Hall B17
Nov 11 @ 12:00 pm – 1:15 pm

Abstract

AI-powered applications increasingly adopt Deep Neural Networks (DNNs) for solving many prediction tasks, leading to more than one DNN running on resource-constrained devices. Supporting many models simultaneously on a device is challenging due to the linearly increasing computation, energy, and storage costs. An effective approach to address the problem is multi-task learning (MTL), where a set of tasks is learned jointly to allow some parameter sharing among tasks. MTL creates multi-task models based on common DNN architectures and has shown significantly reduced inference costs and improved generalization performance in many machine learning applications. In this talk, we will introduce our recent efforts on leveraging MTL to improve accuracy and efficiency for edge computing. The talk will introduce multi-task architecture design systems that can automatically identify resource-efficient multi-task models with low inference costs and high task accuracy.
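As a minimal illustration of the parameter sharing that MTL exploits, here is a generic hard-parameter-sharing sketch (invented sizes and task names; not the architecture design systems described in the talk): several tasks reuse one shared backbone and add only small task-specific heads, so the on-device cost grows far more slowly than one full network per task.

```python
# Generic hard-parameter-sharing sketch (illustrative only): one shared backbone,
# one lightweight head per task, so per-task cost is just the head.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_shared = 128, 256                         # input and shared sizes (made up)
num_classes = {"taskA": 10, "taskB": 3}           # two hypothetical tasks

W_shared = rng.normal(scale=0.02, size=(d_in, d_shared))   # shared backbone weights
heads = {t: rng.normal(scale=0.02, size=(d_shared, c))     # one small head per task
         for t, c in num_classes.items()}

def predict(x: np.ndarray, task: str) -> np.ndarray:
    """Run the shared backbone once, then the requested task's head."""
    h = np.maximum(x @ W_shared, 0.0)             # shared features (ReLU)
    logits = h @ heads[task]
    return np.exp(logits) / np.exp(logits).sum()  # softmax over that task's classes

x = rng.normal(size=d_in)                         # one input example
probs_a = predict(x, "taskA")
probs_b = predict(x, "taskB")
```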

Biography

Hui Guan is an Assistant Professor in the College of Information and Computer Sciences (CICS) at the University of Massachusetts Amherst, the flagship campus of the UMass system. She received her Ph.D. in Electrical Engineering from North Carolina State University in 2020. Her research lies at the intersection of machine learning and systems, with an emphasis on improving the speed, scalability, and reliability of machine learning through innovations in algorithms and programming systems. Her current research focuses on both algorithm and system optimizations of deep multi-task learning and graph machine learning.
Nov 18 (Fri)
Angela Fan (Meta AI Research) “No Language Left Behind: Scaling Human-Centered Machine Translation” @ Hackerman Hall B17
Nov 18 @ 12:00 pm – 1:15 pm

Abstract

Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe, high-quality results, all while keeping ethical considerations in mind? In this talk, I introduce No Language Left Behind, an initiative to break language barriers for low-resource languages. In No Language Left Behind, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system in an open-source manner.

Biography

Angela is a research scientist at Meta AI Research in New York, focusing on supporting efforts in speech and language research. Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before working on translation, Angela focused on research in on-device models for NLP and computer vision, and on text generation.

Dec 2 (Fri)
Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning” @ Hackerman Hall B17
Dec 2 @ 12:00 pm – 1:15 pm

Abstract

One of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can also be a more resource-efficient solution than a general-purpose model, because it focuses on a particular sub-problem, for which a smaller model architecture can be good enough. However, training a personalized model requires data from the particular test-time user, which are not always available due to their private nature and technical challenges. Furthermore, such data tend to be unlabeled, as they can be collected only at test time, after the system has been deployed to user devices. One could rely on the generalization power of a generic model, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users, while they can still achieve the personalization goal. To this end, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way, it is a step towards a more available and affordable AI for society.

Biography

Minje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University, where he leads his research group, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic, consulting for Amazon Lab126. At IU, he is affiliated with various programs and labs such as Data Science, Cognitive Science, the Dept. of Statistics, and the Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC, he worked as a researcher at ETRI, a national lab in Korea, from 2006 to 2011. Before then, he received his Master’s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honors) in 2006 and 2004, respectively. He is a recipient of various awards including the NSF CAREER Award (2021), the IU Trustees Teaching Award (2021), the IEEE SPS Best Paper Award (2020), and Google and Starkey grants for outstanding student papers at ICASSP 2013 and 2014, respectively. He is an IEEE Senior Member and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He serves as an Associate Editor for the EURASIP Journal of Audio, Speech, and Music Processing and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer, program committee member, or area chair for the major machine learning and signal processing venues, and has filed more than 50 patent applications as an inventor.

Dec 9 (Fri)
Mark Hasegawa-Johnson (University of Illinois Urbana-Champaign) “Zipf’s Law Suggests a Three-Pronged Approach to Inclusive Speech Recognition” @ Hackerman Hall B17
Dec 9 @ 12:00 pm – 1:15 pm

Abstract

Zipf’s law is commonly glossed by the aphorism “infrequent words are frequent,” but in practice, it has often meant that there are three types of words: frequent, infrequent, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping).  Hidden Markov models worked well for moderately infrequent words, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word.  Many other social phenomena follow power-law distributions.  The number of native speakers of the N’th most spoken language, for example, is 1.44 billion over N to the 1.09.  In languages with sufficient data, we have shown that monolingual pre-training outperforms multilingual pre-training.  In less-frequent languages, multilingual knowledge transfer can significantly reduce phone error rates.  In languages with no training data, unsupervised ASR methods can be proven to converge, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions.  Disability, for example, can cause speech patterns that were never seen in the training database, but not all disabilities need do so.  The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
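In symbols, the power law quoted above is

\[
\text{native speakers}(N) \;\approx\; \frac{1.44 \times 10^{9}}{N^{1.09}},
\]

so, as a quick arithmetic check, the tenth most spoken language would be predicted to have roughly \(1.44 \times 10^{9} / 10^{1.09} \approx 1.2 \times 10^{8}\), i.e. about 120 million native speakers.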

Biography

Mark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign.  He has published research in speech production and perception, source separation, voice conversion, and low-resource automatic speech recognition.

Center for Language and Speech Processing