Abstract
Understanding the implications underlying a text is critical to assessing its impact, in particular the social dynamics that may result from a reading of the text. This requires endowing artificial intelligence (AI) systems with pragmatic reasoning, for example to correctly conclude that the statement “Epidemics and cases of disease in the 21st century are ‘staged’” relates to unfounded conspiracy theories. In this talk, I discuss how shortcomings in the ability of current AI systems to reason about pragmatics present challenges to the equitable detection of false or harmful language. I demonstrate how these shortcomings can be addressed by imposing human-interpretable structure on deep learning architectures, using insights from linguistics.
In the first part of the talk, I describe how adversarial text generation algorithms can be used to improve the robustness of content moderation systems. I then introduce a pragmatic formalism for reasoning about harmful implications conveyed by social media text, and show how this pragmatic approach can be combined with generative neural language models to uncover implications of news headlines. I also address the bottleneck to progress in text generation posed by gaps in the evaluation of factuality. I conclude by showing how context-aware content moderation can be used to ensure safe interactions with conversational agents.
Biography
Saadia Gabriel is a PhD candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, advised by Prof. Yejin Choi and Prof. Franziska Roesner. Her research revolves around natural language processing and machine learning, with a particular focus on building systems for understanding how social commonsense manifests in text (i.e., how people typically behave in social scenarios) and on mitigating the spread of false or harmful text (e.g., COVID-19 misinformation). Her work has been covered by a wide range of media outlets, including Forbes and TechCrunch. It has also received a 2019 ACL best short paper nomination and a 2019 IROS RoboCup best paper nomination, and won a best paper award at the 2020 WeCNLP Summit. Prior to her PhD, Saadia received a BA, summa cum laude, in Computer Science and Mathematics from Mount Holyoke College.
Abstract
The growing power of computing and AI promises a near-term future of human-machine teamwork. In this talk, I will present my research group’s efforts to understand the complex dynamics of human-machine interaction and to design intelligent machines that assist and collaborate with people. I will focus on 1) tools for onboarding machine teammates and authoring machine assistance, 2) methods for detecting, and broadly managing, errors in collaboration, and 3) the building blocks of knowledge needed to enable ad hoc human-machine teamwork. I will also highlight our recent work on designing assistive, collaborative machines to support older adults aging in place.
Biography
Chien-Ming Huang is the John C. Malone Assistant Professor in the Department of Computer Science at the Johns Hopkins University. His research focuses on designing interactive AI that assists and collaborates with people. He publishes in top-tier venues in HRI, HCI, and robotics, including Science Robotics, HRI, CHI, and CSCW. His research has received media coverage from MIT Technology Review, Tech Insider, and Science Nation. Huang completed his postdoctoral training at Yale University and received his Ph.D. in Computer Science from the University of Wisconsin–Madison. He is a recipient of the NSF CAREER award. https://www.cs.jhu.edu/~cmhuang/