BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21259@www.clsp.jhu.edu
DTSTAMP:20240328T205433Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nNatural language processing has been revolutionized by neural networks\, which perform impressively well in applications such as machine translation and question answering. Despite their success\, neural networks still have some substantial shortcomings: their internal workings are poorly understood\, and they are notoriously brittle\, failing on example types that are rare in their training data. In this talk\, I will use the unifying thread of hierarchical syntactic structure to discuss approaches for addressing these shortcomings. First\, I will argue for a new evaluation paradigm based on targeted\, hypothesis-driven tests that better illuminate what models have learned\; using this paradigm\, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g.\, concluding that “The book on the table is blue” implies “The table is blue”). Second\, I will show how these behavioral failings can be explained through analysis of models’ inductive biases and internal representations\, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning\, structured architectures\, and data augmentation.\nBiography\nTom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. As an undergraduate\, he studied computational linguistics at Yale. His research combines natural language processing\, cognitive science\, and machine learning to study how we can achieve robust generalization in models of language\, as this remains one of the main areas where current AI systems fall short. In particular\, he focuses on inductive biases and representations of linguistic structure\, since these are two of the major components that determine how learners generalize to novel types of input.
DTSTART;TZID=America/New_York:20220131T120000
DTEND;TZID=America/New_York:20220131T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Tom McCoy (Johns Hopkins University) “Opening the Black Box of Deep Learning: Representations\, Inductive Biases\, and Robustness”
URL:https://www.clsp.jhu.edu/events/tom-mccoy-johns-hopkins-university-opening-the-black-box-of-deep-learning-representations-inductive-biases-and-robustness/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:Abstract\nNatural language processing has been revolutionized by neural networks\, which perform impressively well in applications such as machine translation and question answering. Despite their success\, neural networks still have some substantial shortcomings: their internal workings are poorly understood\, and they are notoriously brittle\, failing on example types that are rare in their training data. In this talk\, I will use the unifying thread of hierarchical syntactic structure to discuss approaches for addressing these shortcomings. First\, I will argue for a new evaluation paradigm based on targeted\, hypothesis-driven tests that better illuminate what models have learned\; using this paradigm\, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g.\, concluding that “The book on the table is blue” implies “The table is blue”). Second\, I will show how these behavioral failings can be explained through analysis of models’ inductive biases and internal representations\, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning\, structured architectures\, and data augmentation.\nBiography\nTom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. As an undergraduate\, he studied computational linguistics at Yale. His research combines natural language processing\, cognitive science\, and machine learning to study how we can achieve robust generalization in models of language\, as this remains one of the main areas where current AI systems fall short. In particular\, he focuses on inductive biases and representations of linguistic structure\, since these are two of the major components that determine how learners generalize to novel types of input.
X-TAGS;LANGUAGE=en-US:2022\,January\,McCoy
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22394@www.clsp.jhu.edu
DTSTAMP:20240328T205433Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nModel robustness and spurious correlations have received increasing attention in the NLP community\, both in methods and evaluation. The term “spurious correlation” is overloaded\, though\, and can refer to any undesirable shortcut learned by the model\, as judged by domain experts.\nWhen designing mitigation algorithms\, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However\, many features in NLP (e.g.\, word overlap and negation) are not spurious in the way that the background is spurious for classifying objects in an image. On the contrary\, they carry important information that humans need to make predictions. In this talk\, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization for representation\, learning\, and evaluation.\nBiography\nHe He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland\, College Park. Before joining NYU\, she spent a year at AWS AI and\, before that\, was a postdoc at Stanford University. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focuses on robust language understanding\, collaborative text generation\, and understanding the capabilities and issues of large language models.
DTSTART;TZID=America/New_York:20221014T120000
DTEND;TZID=America/New_York:20221014T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:He He (New York University) “What We Talk about When We Talk about Spurious Correlations in NLP”
URL:https://www.clsp.jhu.edu/events/he-he-new-york-university/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:Abstract\nModel robustness and spurious correlations have received increasing attention in the NLP community\, both in methods and evaluation. The term “spurious correlation” is overloaded\, though\, and can refer to any undesirable shortcut learned by the model\, as judged by domain experts.\nWhen designing mitigation algorithms\, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However\, many features in NLP (e.g.\, word overlap and negation) are not spurious in the way that the background is spurious for classifying objects in an image. On the contrary\, they carry important information that humans need to make predictions. In this talk\, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization for representation\, learning\, and evaluation.\nBiography\nHe He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland\, College Park. Before joining NYU\, she spent a year at AWS AI and\, before that\, was a postdoc at Stanford University. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focuses on robust language understanding\, collaborative text generation\, and understanding the capabilities and issues of large language models.
END:VEVENT
BEGIN:VEVENT
X-ALT-DESC;FMTTYPE=text/html:Abstract\nNon-invasive neural interfaces have the potential to transform human-computer interaction by providing users with low-friction\, information-rich\, always-available inputs. Reality Labs at Meta is developing such an interface for the control of augmented reality devices\, based on electromyographic (EMG) signals captured at the wrist. Speech and audio technologies turn out to be especially well suited to unlocking the full potential of these signals and interactions\, and this talk will present several specific problems and the speech and audio approaches that have advanced us toward the ultimate goal of effortless and joyful interfaces. We will provide the necessary neuroscientific background to understand these signals\, describe automatic speech recognition-inspired interfaces for generating text and beamforming-inspired interfaces for identifying individual neurons\, and then explain how they connect with egocentric machine intelligence tasks that might reside on these devices.\nBiography\nMichael I Mandel is a Research Scientist in Reality Labs at Meta. Previously\, he was an Associate Professor of Computer and Information Science at Brooklyn College and the CUNY Graduate Center\, working at the intersection of machine learning\, signal processing\, and psychoacoustics. He earned his BSc in Computer Science from the Massachusetts Institute of Technology and his MS and PhD with distinction in Electrical Engineering from Columbia University as a Fu Foundation Presidential Scholar. He was an FQRNT Postdoctoral Research Fellow in the Machine Learning laboratory (LISA/MILA) at the Université de Montréal\, an Algorithm Developer at Audience Inc.\, and a Research Scientist in Computer Science and Engineering at the Ohio State University. His work has been supported by the National Science Foundation\, including via a CAREER award\, the Alfred P. Sloan Foundation\, and Google\, Inc.
X-TAGS;LANGUAGE=en-US:2024\,January\,Mandel
END:VEVENT
END:VCALENDAR