Seminars

Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning” @ Hackerman Hall B17
Fri, Dec 2 @ 12:00 pm – 1:15 pm

Abstract

One of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can also be a more resource-efficient solution than a general-purpose model, because it focuses on a particular sub-problem for which a smaller model architecture can be good enough. However, training a personalized model requires data from the particular test-time user, which are not always available due to their private nature and technical challenges. Furthermore, such data tend to be unlabeled, as they can be collected only at test time, after the system has been deployed to user devices. One could rely on the generalization power of a generic model, but such a model can be too complex in computation and memory for real-time processing on a resource-constrained device. In this talk, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models require zero or only a few data samples from the test-time users, while still achieving the personalization goal. To this end, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way, it is a step toward more available and affordable AI for society.

Biography

Minje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University, where he leads his research group, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic, consulting for Amazon Lab126. At IU, he is affiliated with various programs and labs, including Data Science, Cognitive Science, the Dept. of Statistics, and the Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC, he worked as a researcher at ETRI, a national lab in Korea, from 2006 to 2011. Before then, he received his Master’s degree in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) in 2006 and his Bachelor’s degree in the Division of Information and Computer Engineering at Ajou University (with honors) in 2004. He is a recipient of various awards, including the NSF CAREER Award (2021), IU Trustees Teaching Award (2021), IEEE SPS Best Paper Award (2020), and Google and Starkey grants for outstanding student papers at ICASSP 2013 and 2014, respectively. He is an IEEE Senior Member and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He serves as an Associate Editor for the EURASIP Journal on Audio, Speech, and Music Processing, and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer, program committee member, or area chair for the major machine learning and signal processing venues. He has filed more than 50 patent applications as an inventor.

Mark Hasegawa-Johnson (University of Illinois Urbana-Champaign) “Zipf’s Law Suggests a Three-Pronged Approach to Inclusive Speech Recognition” @ Hackerman Hall B17
Fri, Dec 9 @ 12:00 pm – 1:15 pm

Abstract

Zipf’s law is commonly glossed by the aphorism “infrequent words are frequent,” but in practice, it has often meant that there are three types of words: frequent, infrequent, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping).  Hidden Markov models worked well for moderately infrequent words, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word.  Many other social phenomena follow power-law distributions.  The number of native speakers of the N’th most spoken language, for example, is approximately 1.44 billion divided by N^1.09.  In languages with sufficient data, we have shown that monolingual pre-training outperforms multilingual pre-training.  In less-frequent languages, multilingual knowledge transfer can significantly reduce phone error rates.  In languages with no training data, unsupervised ASR methods can be proven to converge, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions.  Disability, for example, can cause speech patterns that were never seen in the training database, but not all disabilities need do so.  The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
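To make the quoted power law concrete, here is a quick back-of-the-envelope computation (my own illustration, using only the constants from the abstract): rank 1 comes out to about 1.44 billion speakers, rank 10 to roughly 120 million, and rank 100 to under 10 million.

```python
# Back-of-the-envelope evaluation of the power law quoted in the abstract:
# speakers(N) ≈ 1.44 billion / N ** 1.09, where N is the language's rank.
for N in (1, 2, 10, 100):
    speakers = 1.44e9 / N ** 1.09
    print(f"rank {N:>3}: ~{speakers / 1e6:,.0f} million native speakers")
```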

Biography

Mark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign.  He has published research in speech production and perception, source separation, voice conversion, and low-resource automatic speech recognition.

Sasha Rush (Cornell University) “Pretraining Without Attention” @ Hackerman Hall B17
Fri, Feb 3 @ 12:00 pm – 1:15 pm

Abstract

Transformers are essential to pretraining. As we approach the five-year anniversary of BERT, the connection between attention as an architecture and transfer learning remains key to this central thread in NLP. Other architectures such as CNNs and RNNs have been used to replicate pretraining results, but these either fail to reach the same accuracy or require supplemental attention layers. This work revisits the seminal BERT result and considers pretraining without attention. We consider replacing self-attention layers with recently developed approaches for long-range sequence modeling and with transformer architecture variants. Specifically, inspired by recent work such as the structured state space sequence model (S4), we use simple routing layers based on state-space models (SSMs) and a bidirectional model architecture based on multiplicative gating. We discuss the results of the proposed Bidirectional Gated SSM (BiGS) and present a range of analyses of its properties. Results show that architecture does have a notable impact on downstream performance and provides a different inductive bias that is worth exploring further.
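For intuition about the kind of layer the abstract describes, below is a small, heavily hedged sketch (my own illustration, not the BiGS reference implementation): a bidirectional sequence-mixing step combined with multiplicative gating, where a fixed exponential-moving-average recurrence stands in for the learned S4/SSM kernel and all weight matrices are random placeholders.

```python
# Hedged sketch of a bidirectional, gated sequence-mixing block.
# NOT the BiGS implementation: a fixed exponential-moving-average (EMA)
# recurrence stands in for the learned state-space (S4) kernel, and the
# weight matrices are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
T, d = 16, 32                          # sequence length, model dimension
X = rng.normal(size=(T, d))            # token representations (placeholder input)
W_u, W_v, W_o = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))

def ema_scan(x, alpha=0.9):
    """Causal recurrence h_t = alpha * h_{t-1} + (1 - alpha) * x_t (SSM stand-in)."""
    h = np.zeros_like(x)
    prev = np.zeros(x.shape[1])
    for t in range(x.shape[0]):
        prev = alpha * prev + (1 - alpha) * x[t]
        h[t] = prev
    return h

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))

# Bidirectional mixing: run the recurrence left-to-right and right-to-left.
fwd = ema_scan(X)
bwd = ema_scan(X[::-1])[::-1]

# Multiplicative gating: an elementwise gate branch modulates the mixed branch.
gate = gelu(X @ W_u)                   # token-wise gate values
mixed = (fwd + bwd) @ W_v              # bidirectional context for each token
Y = (gate * mixed) @ W_o               # gated output, same shape as X
print(Y.shape)                         # -> (16, 32)
```

In the actual model the recurrence is a learned state-space kernel and such blocks are stacked and pretrained with a masked-language-modeling objective; the sketch only shows how gating plus bidirectional scans can take over the token-mixing role that self-attention normally plays.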

Biography

Alexander “Sasha” Rush is an Associate Professor at Cornell Tech. His work is at the intersection of natural language processing and generative modeling with applications in text generation, efficient inference, and controllability. He has written several popular open-source software projects supporting NLP research and data science, and works part-time as a researcher at Hugging Face. He is the secretary of ICLR and developed software used to run virtual conferences during COVID. His work has received paper and demo awards at major NLP, visualization, and hardware conferences, an NSF CAREER Award, and a Sloan Fellowship. He tweets and blogs, mostly about coding and ML, at @srush_nlp.

Karen Livescu (Toyota Technological Institute at Chicago) “What Do Pre-Trained Speech Representation Models Know? Layer-Wise Analysis and Benchmarking” @ Hackerman Hall B17
Fri, Dec 1 @ 12:00 pm – 1:15 pm

Abstract

Pre-trained speech representation models have become ubiquitous in speech processing over the past few years.  They have both improved the state of the art and made it feasible to learn task-specific models with very little labeled data.  However, it is not well understood what linguistic information is encoded in pre-trained models and how best to apply them to downstream tasks. In this talk I will describe recent work that begins to build an understanding of the layer-wise information learned by pre-trained speech models.  We consider a number of popular pre-trained models and investigate the extent to which their layers encode spectral, phonetic, and word-level information.  The results of these analyses also suggest some ways to improve or simplify the application of pre-trained models for downstream tasks.  Finally, I will describe our efforts to benchmark model performance on a variety of spoken language understanding tasks, in order to broaden our understanding of the capabilities of state-of-the-art models.

This talk is based in part on work presented in

A. Pasad et al., “Comparative layer-wise analysis of self-supervised speech models,” ICASSP 2023.

A. Pasad et al., “What do self-supervised speech models know about words?,” arXiv:2307.00162, 2023.

S. Shon et al., “SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks,” ACL 2023.
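
As a concrete but heavily simplified illustration of the layer-wise analysis described above (my own sketch, not the protocol used in these papers): given frame-level hidden states extracted from each layer of a pre-trained model and some frame-level property of interest, one can fit a small probe per layer and compare held-out scores across layers. All shapes and data below are placeholders.

```python
# Hedged sketch of layer-wise probing (synthetic placeholder data, not the
# papers' setup): fit a linear probe on each layer's hidden states and compare
# how well each layer predicts a target property (e.g., a spectral feature).
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_frames, dim = 12, 2000, 768               # assumed model sizes
layers = rng.normal(size=(n_layers, n_frames, dim))   # stand-in hidden states
target = rng.normal(size=(n_frames, 1))               # stand-in frame-level property

def ridge_probe_r2(features, y, lam=1e-2, train_frac=0.8):
    """Fit a ridge-regression probe on a train split; return held-out R^2."""
    n_train = int(train_frac * len(y))
    Xtr, Xte = features[:n_train], features[n_train:]
    ytr, yte = y[:n_train], y[n_train:]
    W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(features.shape[1]), Xtr.T @ ytr)
    resid = yte - Xte @ W
    return 1.0 - resid.var() / yte.var()

for i, feats in enumerate(layers):
    print(f"layer {i:2d}: held-out R^2 = {ridge_probe_r2(feats, target):.3f}")
```

With real features one would expect, for example, phonetic and word-level content to peak at intermediate layers; with the random placeholders above the scores simply hover around zero.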

Biography

Karen Livescu is a Professor at TTI-Chicago. She completed her PhD at MIT in 2005. She is an ISCA Fellow and a recent IEEE Distinguished Lecturer.  She has served as a program chair/co-chair for ICLR, Interspeech, and ASRU, and is an Associate Editor for TACL and IEEE T-PAMI.  Her group’s work spans a variety of topics in spoken, written, and signed language processing.

Alvaro Velasquez (DARPA) “Foundation Models and the Transfer of Embodied Autonomy” @ Hackerman Hall B17
Mon, Dec 4 @ 12:00 pm – 1:15 pm

Abstract

Foundation models, including ChatGPT and its many variants, have come into prominence in the natural language processing (NLP) community thanks to the ubiquity of text data readily available on the internet and the design of modern transformer architectures that can effectively learn from such data. However, the development of a foundation model for sequential decision-making (e.g., reinforcement learning, planning) faces additional challenges not present in NLP. In this talk, we discuss some of these challenges with the hope of informing future investments by funding agencies and the academic community. The problem of transfer learning in the context of sequential decision-making is also discussed and constitutes one of the challenges that foundation models must address.

Biography

Alvaro Velasquez is a program manager at the Defense Advanced Research Projects Agency (DARPA), where he currently leads programs on neuro-symbolic AI. Before that, he oversaw the machine intelligence portfolio for the Information Directorate of the Air Force Research Laboratory (AFRL). He is a recipient of a distinguished paper award from AAAI, best paper and patent awards from AFRL, and the National Science Foundation Graduate Research Fellowship. He has authored over 70 papers, holds two patents, and serves as an Associate Editor of the IEEE Transactions on Artificial Intelligence.

Center for Language and Speech Processing