Kai-Wei Chang (UCLA) “Towards Trustworthy AI: Teaching Machines Through Language and Rules”
3400 N Charles St, Baltimore, MD 21218
Abstract
Over the past decades, machine learning has primarily relied on labeled data, with success often depending on the availability of vast, high-quality annotations and on the assumption that test conditions mirror training conditions. In contrast, humans learn efficiently from conceptual explanations, instructions, rules, and contextual understanding. With advances in large language models, AI systems can now understand descriptions and follow instructions, paving the way for a paradigm shift. This talk explores how teaching machines through language and rules can enable AI systems to earn human trust and enhance their inclusivity, robustness, and ability to learn new concepts. I will highlight our journey in developing vision-language models capable of detecting unseen objects from rich natural language descriptions. I will also discuss techniques for guiding the behavior of language models and text-to-image models through language, and describe our efforts to control language models effectively by incorporating constraints. I will conclude by discussing future directions and challenges in building trustworthy language agents.
Bio
Kai-Wei Chang is an Associate Professor in the Department of Computer Science at UCLA and an Amazon Scholar at Amazon AGI. His research focuses on trustworthy AI and multimodal language models. He has published extensively in natural language processing and machine learning, and his work has been recognized with multiple paper awards at top conferences, including EMNLP, ACL, KDD, and CVPR. In 2021, Kai-Wei was named a Sloan Fellow for his contributions to trustworthy AI as a junior faculty member. He was recently elected as an officer of SIGDAT, the organizing body behind EMNLP, and will serve as Vice President in 2025 and President in 2026. He is an associate editor for leading journals and venues such as JAIR, JMLR, TACL, and ARR. He also served as an Associate Program Chair at AAAI 2023 and as a Senior Area Chair for most major NLP/ML/AI conferences. Since 2021, Kai-Wei has organized five editions of the Trustworthy NLP Workshop at *ACL conferences, a platform that fosters research on fairness, robustness, and inclusivity in NLP. He has also delivered tutorials on topics such as Fairness, Robustness, and Multimodal NLP at EMNLP (2019, 2021) and ACL (2023). Kai-Wei received his Ph.D. from the University of Illinois at Urbana-Champaign in 2015 and subsequently worked as a postdoctoral researcher at Microsoft Research in 2016. For more details, visit http://kwchang.net.