BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-23312@www.clsp.jhu.edu
DTSTAMP:20240328T092607Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nAdvanced neural language models have grown ever larger and more complex\, pushing forward the limits of language understanding and generation while diminishing interpretability. The black-box nature of deep neural networks prevents humans from understanding\, trusting\, and using them in real-world applications. This talk will introduce interpretation techniques that bridge the gap between humans and models for developing trustworthy natural language processing
\n (NLP). I will first show how to explain black-box models and evaluate their explanations for understanding their prediction behavior. Then I will introduce how to improve the interpretability of neural language models by making their decision-making transparent and rationalized. Finally\, I will discuss how to diagnose and improve models (e.g.\, robustness) through the lens of explanations. I will conclude with future research directions that are centered around model interpretability and committed to facilitating communication and interaction between intelligent machines\, system developers\, and end users for long-term trustworthy AI.
Biography
\nHanjie Chen is a Ph.D. candidate in Computer Science at the University of Virginia\, advised by Prof. Yangfeng Ji. Her research interests lie in Trustworthy AI\, Natural Language Processing (NLP)\, and Interpretable Machine Learning. She develops interpretation techniques to explain neural language models and make their prediction behavior transparent and reliable. She is a recipient of the Carlos and Esther Farrar Fellowship and the Best Poster Award at the ACM CAPWIC 2021. Her work has been published at top-tier NLP/AI conferences (e.g.\, ACL\, AAAI\, EMNLP\, NAACL)\, and she was selected as a National Center for Women & Information Technology (NCWIT) Collegiate Award Finalist in 2021. She (as the primary instructor) co-designed and taught the course Interpretable Machine Learning\, and was awarded the UVA CS Outstanding Graduate Teaching Award and was a University-wide Graduate Teaching Awards Nominee (top 5% of graduate instructors). More details can be found at https://www.cs.virginia.edu/~hc9mx
DTSTART;TZID=America/New_York:20230313T120000
DTEND;TZID=America/New_York:20230313T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Hanjie Chen (University of Virginia) “Bridging Humans and Machines: Techniques for Trustworthy NLP”
URL:https://www.clsp.jhu.edu/events/hanjie-chen-university-of-virginia/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Chen\,February
END:VEVENT
END:VCALENDAR