BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21621@www.clsp.jhu.edu
DTSTAMP:20240328T212520Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nSystems that support expressive\, situated natural language interactions are essential for expanding access to complex computing systems\, such as robots and databases\, to non-experts. Reasoning and learning in such natural language interactions is a challenging open problem. For example\, resolving sentence meaning requires reasoning not only about word meaning\, but also about the interaction context\, including the history of the interaction and the situated environment. In addition\, the sequential dynamics that arise between user and system in and across interactions make learning from static data\, i.e.\, supervised data\, both challenging and ineffective. However\, these same interaction dynamics result in ample opportunities for learning from implicit and explicit feedback that arises naturally in the interaction. This lays the foundation for systems that continually learn\, improve\, and adapt their language use through interaction\, without additional annotation effort. In this talk\, I will focus on these challenges and opportunities. First\, I will describe our work on modeling dependencies between language meaning and interaction context when mapping natural language in interaction to executable code. In the second part of the talk\, I will describe our work on language understanding and generation in collaborative interactions\, focusing on continual learning from explicit and implicit user feedback.\nBiography\nAlane Suhr is a PhD Candidate in the Department of Computer Science at Cornell University\, advised by Yoav Artzi. Her research spans natural language processing\, machine learning\, and computer vision\, with a focus on building systems that participate and continually learn in situated natural language interactions with human users. Alane’s work has been recognized by paper awards at ACL and NAACL\, and has been supported by fellowships and grants\, including an NSF Graduate Research Fellowship\, a Facebook PhD Fellowship\, and research awards from AI2\, ParlAI\, and AWS. Alane has also co-organized multiple workshops and tutorials appearing at NeurIPS\, EMNLP\, NAACL\, and ACL. Previously\, Alane received a BS in Computer Science and Engineering as an Eminence Fellow at the Ohio State University.
DTSTART;TZID=America/New_York:20220314T120000
DTEND;TZID=America/New_York:20220314T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Alane Suhr (Cornell University) “Reasoning and Learning in Interactive Natural Language Systems”
URL:https://www.clsp.jhu.edu/events/alane-suhr-cornell-university-reasoning-and-learning-in-interactive-natural-language-systems/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,March\,Suhr
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22374@www.clsp.jhu.edu
DTSTAMP:20240328T212520Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nIn recent years\, the field of Natural Language Processing has seen a profusion of tasks\, datasets\, and systems that facilitate reasoning about real-world situations through language (e.g.\, RTE\, MNLI\, COMET). Such systems might\, for example\, be trained to consider a situation where “somebody dropped a glass on the floor\,” and conclude it is likely that “the glass shattered” as a result. In this talk\, I will discuss three pieces of work that revisit assumptions made by or about these systems. In the first work\, I develop a Defeasible Inference task\, which enables a system to recognize when a prior assumption it has made may no longer be true in light of new evidence it receives. The second work I will discuss revisits partial-input baselines\, which have highlighted issues of spurious correlations in natural language reasoning datasets and led to unfavorable assumptions about models’ reasoning abilities. In particular\, I will discuss experiments that show models may still learn to reason in the presence of spurious dataset artifacts. Finally\, I will touch on work analyzing harmful assumptions made by reasoning models in the form of social stereotypes\, particularly in the case of free-form generative reasoning models.\nBiography\nRachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland\, College Park. She holds joint appointments in the Department of Linguistics and the Institute for Advanced Computer Studies (UMIACS). In 2019\, Rachel completed her Ph.D. in Computer Science at Johns Hopkins University in the Center for Language and Speech Processing. From 2019-2020\, she was a Young Investigator at the Allen Institute for AI in Seattle\, and a visiting researcher at the University of Washington. Her research interests include computational semantics\, common-sense reasoning\, and issues of social bias and fairness in NLP.
DTSTART;TZID=America/New_York:20220916T120000
DTEND;TZID=America/New_York:20220916T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Rachel Rudinger (University of Maryland\, College Park) “Not So Fast!: Revisiting Assumptions in (and about) Natural Language Reasoning”
URL:https://www.clsp.jhu.edu/events/rachel-rudinger-university-of-maryland-college-park-not-so-fast-revisiting-assumptions-in-and-about-natural-language-reasoning/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Rudinger\,September
END:VEVENT
END:VCALENDAR