BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21621@www.clsp.jhu.edu
DTSTAMP:20240329T122243Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nSystems that support expressive\, situated natural language interactions are essential for expanding access to complex computing systems\, such as robots and databases\, to non-experts. Reasoning and learning in such natural language interactions is a challenging open problem. For example\, resolving sentence meaning requires reasoning not only about word meaning\, but also about the interaction context\, including the history of the interaction and the situated environment. In addition\, the sequential dynamics that arise between user and system in and across interactions make learning from static data\, i.e.\, supervised data\, both challenging and ineffective. However\, these same interaction dynamics result in ample opportunities for learning from implicit and explicit feedback that arises naturally in the interaction. This lays the foundation for systems that continually learn\, improve\, and adapt their language use through interaction\, without additional annotation effort. In this talk\, I will focus on these challenges and opportunities. First\, I will describe our work on modeling dependencies between language meaning and interaction context when mapping natural language in interaction to executable code. In the second part of the talk\, I will describe our work on language understanding and generation in collaborative interactions\, focusing on continual learning from explicit and implicit user feedback.
\nBiography
\nAlane Suhr is a PhD Candidate in the Department of Computer Science at Cornell University\, advised by Yoav Artzi. Her research spans natural language processing\, machine learning\, and computer vision\, with a focus on building systems that participate and continually learn in situated natural language interactions with human users. Alane’s work has been recognized by paper awards at ACL and NAACL\, and has been supported by fellowships and grants\, including an NSF Graduate Research Fellowship\, a Facebook PhD Fellowship\, and research awards from AI2\, ParlAI\, and AWS. Alane has also co-organized multiple workshops and tutorials appearing at NeurIPS\, EMNLP\, NAACL\, and ACL. Previously\, Alane received a BS in Computer Science and Engineering as an Eminence Fellow at the Ohio State University.
DTSTART;TZID=America/New_York:20220314T120000
DTEND;TZID=America/New_York:20220314T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Alane Suhr (Cornell University) “Reasoning and Learning in Interactive Natural Language Systems”
URL:https://www.clsp.jhu.edu/events/alane-suhr-cornell-university-reasoning-and-learning-in-interactive-natural-language-systems/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,March\,Suhr
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23314@www.clsp.jhu.edu
DTSTAMP:20240329T122243Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nWhile GPT models have shown impressive performance on summarization and open-ended text generation\, it’s important to assess their abilities on more constrained text generation tasks that require significant and diverse rewritings. In this talk\, I will discuss the challenges of evaluating systems that are highly competitive and perform close to humans on two such tasks: (i) paraphrase generation and (ii) text simplification. To address these challenges\, we introduce an interactive Rank-and-Rate evaluation framework. Our results show that GPT-3.5 has made a major step up from fine-tuned T5 in paraphrase generation\, but still lacks the diversity and creativity of humans who spontaneously produce large quantities of paraphrases.
\nAdditionally\, we demonstrate that GPT-3.5 performs similarly to a single human in text simplification\, which makes it difficult for existing automatic evaluation metrics to distinguish between the two. To overcome this shortcoming\, we propose LENS\, a learnable evaluation metric that outperforms SARI\, BERTScore\, and other existing methods in both automatic evaluation and minimal risk decoding for text generation.
\nBiography
\nWei Xu is an assistant professor in the School of Interactive Computing at the Georgia Institute of Technology\, where she is also affiliated with the new NSF AI CARING Institute and Machine Learning Center. She received her Ph.D. in Computer Science from New York University and her B.S. and M.S. from Tsinghua University. Xu’s research interests are in natural language processing\, machine learning\, and social media\, with a focus on text generation\, stylistics\, robustness and controllability of machine learning models\, and reading and writing assistive technology. She is a recipient of the NSF CAREER Award\, CrowdFlower AI for Everyone Award\, Criteo Faculty Research Award\, and Best Paper Award at COLING’18. She has also received funds from DARPA and IARPA. She is an elected member of the NAACL executive board and regularly serves as a senior area chair for AI/NLP conferences.
DTSTART;TZID=America/New_York:20230224T120000
DTEND;TZID=America/New_York:20230224T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Wei Xu (Georgia Tech) “GPT-3 vs Humans: Rethinking Evaluation of Natural Language Generation”
URL:https://www.clsp.jhu.edu/events/wei-xu-georgia-tech/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,February\,Xu
END:VEVENT
END:VCALENDAR