Seminars

Dec
2
Fri
Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning” @ Hackerman Hall B17
Dec 2 @ 12:00 pm – 1:15 pm

Abstract

One of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can also be a more resource-efficient solution than a general-purpose model, because it focuses on a particular sub-problem for which a smaller model architecture can suffice. However, training a personalized model requires data from the particular test-time user, which are not always available due to their private nature and to technical challenges. Furthermore, such data tend to be unlabeled, as they can be collected only at test time, after the system has been deployed to user devices. One could rely on the generalization power of a generic model, but such a model can be too complex in computation and memory for real-time processing on a resource-constrained device. In this talk, I will present techniques that circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models require zero or only a few data samples from the test-time users, while still achieving the personalization goal. To this end, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way, it is a step towards more available and affordable AI for society.

Biography

Minje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University, where he leads his research group, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic, consulting for Amazon Lab126. At IU, he is affiliated with various programs and labs, such as Data Science, Cognitive Science, the Dept. of Statistics, and the Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC, he worked as a researcher at ETRI, a national lab in Korea, from 2006 to 2011. Before then, he received his Master’s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honors) in 2006 and 2004, respectively. He is a recipient of various awards, including the NSF CAREER Award (2021), the IU Trustees Teaching Award (2021), the IEEE SPS Best Paper Award (2020), and Google and Starkey grants for outstanding student papers at ICASSP 2013 and 2014, respectively. He is an IEEE Senior Member and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He serves as an Associate Editor for the EURASIP Journal on Audio, Speech, and Music Processing and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer, program committee member, or area chair for the major machine learning and signal processing conferences. He has filed more than 50 patent applications as an inventor.

Dec
9
Fri
Mark Hasegawa-Johnson (University of Illinois Urbana-Champaign) “Zipf’s Law Suggests a Three-Pronged Approach to Inclusive Speech Recognition” @ Hackerman Hall B17
Dec 9 @ 12:00 pm – 1:15 pm

Abstract

Zipf’s law is commonly glossed by the aphorism “infrequent words are frequent,” but in practice it has often meant that there are three types of words: frequent, infrequent, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language, for example, is approximately 1.44 billion divided by N to the power 1.09. In languages with sufficient data, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data, unsupervised ASR methods can be proven to converge, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability, for example, can cause speech patterns that were never seen in the training database, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
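The power-law claim above can be illustrated numerically. This is a minimal sketch using only the constants quoted in the abstract (1.44 billion and the exponent 1.09); the `estimated_speakers` helper name is hypothetical:

```python
# Power-law sketch from the abstract: native speakers of the N'th most
# spoken language ~ 1.44 billion / N**1.09. Constants come from the talk
# abstract; the function name is an illustrative invention.

def estimated_speakers(rank: int) -> float:
    """Estimated native speakers of the rank-th most spoken language."""
    return 1.44e9 / rank ** 1.09

for rank in (1, 2, 10, 100):
    millions = estimated_speakers(rank) / 1e6
    print(f"rank {rank:>3}: ~{millions:,.0f} million native speakers")
```

Because the exponent is slightly above 1, the estimate decays a bit faster than a simple 1/N law, which is one way to see why the long tail of languages is so data-poor.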

Biography

Mark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign.  He has published research in speech production and perception, source separation, voice conversion, and low-resource automatic speech recognition.

Feb
24
Fri
Wei Xu (Georgia Tech) “GPT-3 vs Humans: Rethinking Evaluation of Natural Language Generation” @ Hackerman Hall B17
Feb 24 @ 12:00 pm – 1:15 pm

Abstract

While GPT models have shown impressive performance on summarization and open-ended text generation, it’s important to assess their abilities on more constrained text generation tasks that require significant and diverse rewritings. In this talk, I will discuss the challenges of evaluating systems that are highly competitive and perform close to humans on two such tasks: (i) paraphrase generation and (ii) text simplification. To address these challenges, we introduce an interactive Rank-and-Rate evaluation framework. Our results show that GPT-3.5 has made a major step up from fine-tuned T5 in paraphrase generation, but still lacks the diversity and creativity of humans who spontaneously produce large quantities of paraphrases.

Additionally, we demonstrate that GPT-3.5 performs similarly to a single human in text simplification, which makes it difficult for existing automatic evaluation metrics to distinguish between the two. To overcome this shortcoming, we propose LENS, a learnable evaluation metric that outperforms SARI, BERTScore, and other existing methods in both automatic evaluation and minimal risk decoding for text generation.

Biography

Wei Xu is an assistant professor in the School of Interactive Computing at the Georgia Institute of Technology, where she is also affiliated with the new NSF AI CARING Institute and the Machine Learning Center. She received her Ph.D. in Computer Science from New York University and her B.S. and M.S. from Tsinghua University. Xu’s research interests are in natural language processing, machine learning, and social media, with a focus on text generation, stylistics, robustness and controllability of machine learning models, and reading and writing assistive technology. She is a recipient of the NSF CAREER Award, the CrowdFlower AI for Everyone Award, the Criteo Faculty Research Award, and a Best Paper Award at COLING’18. She has also received funding from DARPA and IARPA. She is an elected member of the NAACL executive board and regularly serves as a senior area chair for AI/NLP conferences.

Dec
1
Fri
Karen Livescu (Toyota Technological Institute at Chicago) “What Do Pre-Trained Speech Representation Models Know? Layer-Wise Analysis and Benchmarking” @ Hackerman Hall B17
Dec 1 @ 12:00 pm – 1:15 pm

Abstract

Pre-trained speech representation models have become ubiquitous in speech processing over the past few years.  They have both improved the state of the art and made it feasible to learn task-specific models with very little labeled data.  However, it is not well understood what linguistic information is encoded in pre-trained models and how best to apply them to downstream tasks. In this talk I will describe recent work that begins to build an understanding of the layer-wise information learned by pre-trained speech models.  We consider a number of popular pre-trained models and investigate the extent to which their layers encode spectral, phonetic, and word-level information.  The results of these analyses also suggest some ways to improve or simplify the application of pre-trained models for downstream tasks.  Finally, I will describe our efforts to benchmark model performance on a variety of spoken language understanding tasks, in order to broaden our understanding of the capabilities of state-of-the-art models.
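Layer-wise analysis of this kind is often carried out by fitting a simple probe on each layer’s frozen representations and comparing probe performance across layers. A minimal sketch with synthetic activations standing in for real pre-trained model features (the data, per-layer noise levels, and `probe_accuracy` helper are all hypothetical):

```python
import numpy as np

# Layer-wise probing sketch: for each "layer" of frozen features, fit a
# linear probe and compare how well each layer predicts the labels.
# Synthetic activations stand in for real pre-trained model features.

rng = np.random.default_rng(0)

def probe_accuracy(features, labels):
    """Fit a least-squares linear probe and report its training accuracy."""
    n_classes = labels.max() + 1
    onehot = np.eye(n_classes)[labels]              # one-hot targets
    W, *_ = np.linalg.lstsq(features, onehot, rcond=None)
    preds = (features @ W).argmax(axis=1)
    return (preds == labels).mean()

# Three synthetic "layers"; the middle one is made most predictive by
# giving its label-bearing dimensions the least noise.
labels = rng.integers(0, 4, size=200)
layers = []
for noise in (2.0, 0.5, 1.5):                       # hypothetical noise per layer
    signal = np.eye(4)[labels]
    features = np.hstack([signal + noise * rng.standard_normal((200, 4)),
                          rng.standard_normal((200, 12))])
    layers.append(features)

for i, feats in enumerate(layers):
    print(f"layer {i}: probe accuracy = {probe_accuracy(feats, labels):.2f}")
```

In practice the probe is fit on held-out activations from a real pre-trained model, and the resulting accuracy-versus-depth curve is what reveals where phonetic or word-level information concentrates.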

This talk is based in part on work presented in

A. Pasad et al., “Comparative layer-wise analysis of self-supervised speech models,” ICASSP 2023.

A. Pasad et al., “What do self-supervised speech models know about words?,” arXiv:2307.00162, 2023.

S. Shon et al., “SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks,” ACL 2023.

Biography

Karen Livescu is a Professor at TTI-Chicago. She completed her PhD at MIT in 2005. She is an ISCA Fellow and a recent IEEE Distinguished Lecturer.  She has served as a program chair/co-chair for ICLR, Interspeech, and ASRU, and is an Associate Editor for TACL and IEEE T-PAMI.  Her group’s work spans a variety of topics in spoken, written, and signed language processing.

Dec
4
Mon
Alvaro Velasquez @ Hackerman Hall B17
Dec 4 @ 12:00 pm – 1:15 pm

Center for Language and Speech Processing