Abstract
In recent years, the field of Natural Language Processing has seen a profusion of tasks, datasets, and systems that facilitate reasoning about real-world situations through language (e.g., RTE, MNLI, COMET). Such systems might, for example, be trained to consider a situation where “somebody dropped a glass on the floor,” and conclude it is likely that “the glass shattered” as a result. In this talk, I will discuss three pieces of work that revisit assumptions made by or about these systems. In the first work, I develop a Defeasible Inference task, which enables a system to recognize when a prior assumption it has made may no longer be true in light of new evidence it receives. The second work I will discuss revisits partial-input baselines, which have highlighted issues of spurious correlations in natural language reasoning datasets and led to unfavorable assumptions about models’ reasoning abilities. In particular, I will discuss experiments that show models may still learn to reason in the presence of spurious dataset artifacts. Finally, I will touch on work analyzing harmful assumptions made by reasoning models in the form of social stereotypes, particularly in the case of free-form generative reasoning models.
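For readers less familiar with partial-input baselines, the sketch below shows a common instance for natural language inference: a hypothesis-only classifier that never sees the premise. The bag-of-words model and data interface here are illustrative assumptions, not the experimental setup of the work discussed; accuracy well above chance from such a baseline is the usual signal of spurious dataset artifacts.

```python
# Minimal sketch of a partial-input (hypothesis-only) baseline for NLI.
# The classifier and data interface are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def hypothesis_only_accuracy(train_pairs, train_labels, test_pairs, test_labels):
    """Train and evaluate on the hypothesis alone, ignoring the premise.

    Accuracy well above chance suggests annotation artifacts (spurious
    correlations) in the dataset rather than genuine inference.
    """
    train_hyps = [hyp for _premise, hyp in train_pairs]  # drop the premise
    test_hyps = [hyp for _premise, hyp in test_pairs]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(train_hyps, train_labels)
    return clf.score(test_hyps, test_labels)
```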
Biography
Rachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland, College Park. She holds joint appointments in the Department of Linguistics and the Institute for Advanced Computer Studies (UMIACS). In 2019, Rachel completed her Ph.D. in Computer Science at Johns Hopkins University in the Center for Language and Speech Processing. From 2019-2020, she was a Young Investigator at the Allen Institute for AI in Seattle, and a visiting researcher at the University of Washington. Her research interests include computational semantics, common-sense reasoning, and issues of social bias and fairness in NLP.
Abstract
I will present our work on data augmentation using style transfer as a way to improve domain adaptation in sequence labeling tasks. The target domain is social media data, and the task is named entity recognition (NER). The premise is that we can transform labeled out-of-domain data into data that is stylistically closer to the target domain. We can then train a model on a combination of the generated data and the smaller amount of in-domain data to improve NER prediction performance. I will show recent empirical results on these efforts.
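As a rough illustration of the recipe above, the sketch below restyles labeled out-of-domain sentences toward the social-media target domain and trains on the union of the generated and in-domain data. The `restyle` function is a hypothetical placeholder for an actual style-transfer model, not the system presented in the talk.

```python
# Sketch of data augmentation via style transfer for target-domain NER.
# `restyle` is a hypothetical stand-in for a trained style-transfer model.
from typing import Callable, List, Tuple

Sentence = List[Tuple[str, str]]  # (token, BIO tag) pairs

def build_training_set(
    out_of_domain: List[Sentence],
    in_domain: List[Sentence],
    restyle: Callable[[Sentence], Sentence],
) -> List[Sentence]:
    """Combine restyled out-of-domain data with the small in-domain set."""
    # Style transfer must preserve (or realign) the entity labels.
    generated = [restyle(sentence) for sentence in out_of_domain]
    return generated + in_domain  # train any sequence labeler on this mixture
```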
If time allows, I will also give an overview of other research projects I’m currently leading at the RiTUAL (Research in Text Understanding and Analysis of Language) Lab. The common thread among all these research problems is the scarcity of labeled data.
Biography
Thamar Solorio is a Professor of Computer Science at the University of Houston (UH). She holds graduate degrees in Computer Science from the Instituto Nacional de Astrofísica, Óptica y Electrónica in Puebla, Mexico. Her research interests include information extraction from social media data, enabling technology for code-switched data, stylistic modeling of text, and, more recently, multimodal approaches for online content understanding. She is the director and founder of the RiTUAL Lab at UH. She is the recipient of an NSF CAREER award for her work on authorship attribution and of the 2014 Emerging Leader ABIE Award in Honor of Denice Denton. She is currently serving a second term as an elected board member of the North American Chapter of the Association for Computational Linguistics and was PC co-chair for NAACL 2019. She recently joined the team of Editors-in-Chief of the ACL Rolling Review (ARR) system. Her research is currently funded by the NSF and by Adobe.
Abstract
The availability of large multilingual pre-trained language models has opened up exciting pathways for developing NLP technologies for languages with scarce resources. In this talk, I will advocate for the need to go beyond the most common languages in multilingual evaluation and discuss the challenges of handling new languages and varieties that were unseen during training. I will also share some of my experiences working with indigenous and other endangered language communities and activists.
Biography
Antonios Anastasopoulos is an Assistant Professor in Computer Science at George Mason University. In 2019, Antonis received his PhD in Computer Science from the University of Notre Dame and then worked as a postdoctoral researcher at the Language Technologies Institute at Carnegie Mellon University. His research interests revolve around computational linguistics and natural language processing with a focus on low-resource settings, endangered languages, and cross-lingual learning.
Abstract
Model robustness and spurious correlations have received increasing attention in the NLP community, in both methods and evaluation. The term “spurious correlation” is overloaded, though, and can refer to any undesirable shortcut learned by the model, as judged by domain experts.
When designing mitigation algorithms, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However, many features in NLP (e.g., word overlap and negation) are not spurious in the way that the background is spurious for classifying objects in an image; rather, they carry important information that humans need to make predictions. In this talk, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization for representation, learning, and evaluation.
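As a rough, informal illustration of this characterization, the sketch below probes a feature's necessity and sufficiency for a model's predictions through perturbations. The perturbation functions and the flip/agreement estimates are illustrative assumptions, not the formal definitions used in the talk.

```python
# Hedged sketch: estimate how necessary and how sufficient a feature is
# for a model's predictions. All functions passed in are hypothetical.

def necessity(model, examples, remove_feature):
    """Fraction of examples whose prediction flips when the feature is removed.

    A high value suggests the feature is (nearly) necessary for the prediction.
    """
    flips = [model(x) != model(remove_feature(x)) for x in examples]
    return sum(flips) / len(flips)

def sufficiency(model, examples, keep_only_feature):
    """Fraction of examples whose prediction survives ablating everything else.

    A high value suggests the feature alone suffices for the prediction.
    """
    agreements = [model(x) == model(keep_only_feature(x)) for x in examples]
    return sum(agreements) / len(agreements)
```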
Biography
He He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland, College Park. Before joining NYU, she spent a year at AWS AI and was a post-doc at Stanford University before that. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focus includes robust language understanding, collaborative text generation, and understanding capabilities and issues of large language models.
Abstract
Modern learning architectures for natural language processing have been very successful at incorporating huge amounts of text into their parameters. By and large, however, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information expressed in language. In this talk, I will give a few examples of exploring alternative architectures to tackle these challenges. In particular, we can improve the performance of such (language) models by representing, storing, and accessing knowledge in a dedicated memory component.
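As a minimal sketch of what a dedicated memory component can look like, the snippet below stores facts as key and value embeddings and reads them out with attention. The shapes and retrieval rule are illustrative assumptions, not the specific architectures presented in the talk.

```python
# Illustrative key-value memory layer: facts are stored as (key, value)
# embeddings and retrieved by attention over the keys. Dimensions and the
# retrieval rule are assumptions, not the talk's actual architectures.
import torch
import torch.nn.functional as F

class KeyValueMemory(torch.nn.Module):
    def __init__(self, num_facts: int, dim: int):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(num_facts, dim))    # what each fact is about
        self.values = torch.nn.Parameter(torch.randn(num_facts, dim))  # what each fact says

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, dim). Attend over stored keys, read out a mixture of values.
        scores = query @ self.keys.T / self.keys.shape[-1] ** 0.5
        attention = F.softmax(scores, dim=-1)
        return attention @ self.values  # (batch, dim) readout, to be fused with the LM
```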
This talk is based on several joint works with Yury Zemlyanskiy (Google Research), Michiel de Jong (USC and Google Research), William Cohen (Google Research and CMU) and our other collaborators in Google Research.
Biography
Fei is a research scientist at Google Research. Before that, he was a Professor of Computer Science at the University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing, computer vision, robotics, and, more recently, weather forecasting and climate modeling. He holds a Ph.D. (2007) in Computer and Information Science from the University of Pennsylvania and a B.Sc. and an M.Sc. in Biomedical Engineering from Southeast University (Nanjing, China).
Abstract
Voice conversion (VC) is a significant aspect of artificial intelligence: the study of how to convert one person’s voice to sound like that of another without changing the linguistic content. Voice conversion belongs to the general technical field of speech synthesis, which converts text to speech or changes properties of speech such as voice identity, emotion, and accent. Voice conversion involves multiple speech processing techniques, including speech analysis, spectral conversion, prosody conversion, speaker characterization, and vocoding. With recent advances in theory and practice, we are now able to produce human-like voice quality with high speaker similarity. In this talk, Dr. Sisman will present recent advances in voice conversion and discuss their promise and limitations. Dr. Sisman will also provide a summary of the available resources for expressive voice conversion research.
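As a schematic of the pipeline described above, the snippet below chains the components named in the abstract: speech analysis, spectral conversion, prosody conversion, and vocoding. Every function passed in is a hypothetical placeholder rather than a concrete system.

```python
# Schematic voice-conversion pipeline; all component functions are
# hypothetical placeholders for real analysis/conversion/vocoder models.
import numpy as np

def convert_voice(source_wav: np.ndarray, analyze, convert_spectrum,
                  convert_prosody, vocode) -> np.ndarray:
    """Make the source speaker sound like the target speaker,
    keeping the linguistic content unchanged."""
    spectrum, f0 = analyze(source_wav)            # speech analysis
    target_spectrum = convert_spectrum(spectrum)  # spectral (timbre) conversion
    target_f0 = convert_prosody(f0)               # prosody (pitch) conversion
    return vocode(target_spectrum, target_f0)     # waveform generation
```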
Biography
Dr. Berrak Sisman (Member, IEEE) received the Ph.D. degree in electrical and computer engineering from the National University of Singapore in 2020, fully funded by the A*STAR Graduate Academy under the Singapore International Graduate Award (SINGA). She is currently a tenure-track Assistant Professor in the Department of Electrical and Computer Engineering at the Erik Jonsson School of Engineering and Computer Science, University of Texas at Dallas, United States. Prior to joining UT Dallas, she was a faculty member at the Singapore University of Technology and Design (2020-2022) and a Postdoctoral Research Fellow at the National University of Singapore (2019-2020). She was an exchange doctoral student at the University of Edinburgh and a visiting scholar at The Centre for Speech Technology Research (CSTR), University of Edinburgh (2019), and a visiting researcher at the RIKEN Advanced Intelligence Project in Japan (2018). Her research is focused on machine learning, signal processing, emotion, speech synthesis, and voice conversion.
Dr. Sisman has served as an Area Chair at INTERSPEECH 2021, INTERSPEECH 2022, and IEEE SLT 2022, and as Publication Chair at ICASSP 2022. She has been elected as a member of the IEEE Speech and Language Processing Technical Committee (SLTC) in the area of Speech Synthesis for the term from January 2022 to December 2024. She plays leadership roles in conference organization and is active in technical committees. She has served as the General Coordinator of the Student Advisory Committee (SAC) of the International Speech Communication Association (ISCA).
Abstract