BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-22394@www.clsp.jhu.edu
DTSTAMP:20240328T211247Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\n\nModel robustness and spurious correlations have received increasing attention in the NLP community\, both in methods and evaluation. The term “spurious correlation” is overloaded though and can refer to any undesirable shortcuts learned by the model\, as judged by domain experts.\n\n\nWhen designing mitigation algorithms\, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However\, many features in NLP (e.g. word overlap and negation) are not spurious in the sense that the background is spurious for classifying objects in an image. In contrast\, they carry important information that’s needed to make predictions by humans. In this talk\, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization in representation\, learning\, and evaluation.\nBiography\nHe He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland\, College Park. Before joining NYU\, she spent a year at AWS AI and was a post-doc at Stanford University before that. She is interested in building robust and trustworthy NLP systems in human-centered settings.
 Her recent research focus includes robust language understanding\, collaborative text generation\, and understanding capabilities and issues of large language models.
DTSTART;TZID=America/New_York:20221014T120000
DTEND;TZID=America/New_York:20221014T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:He He (New York University) “What We Talk about When We Talk about Spurious Correlations in NLP”
URL:https://www.clsp.jhu.edu/events/he-he-new-york-university/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nModel robustness and spurious correlations have received increasing attention in the NLP community\, both in methods and evaluation. The term “spurious correlation” is overloaded though and can refer to any undesirable shortcuts learned by the model\, as judged by domain experts.
\nWhen designing mitigation algorithms\, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However\, many features in NLP (e.g. word overlap and negation) are not spurious in the sense that the background is spurious for classifying objects in an image. In contrast\, they carry important information that’s needed to make predictions by humans. In this talk\, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization in representation\, learning\, and evaluation.
\nBiography
\nHe He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland\, College Park. Before joining NYU\, she spent a year at AWS AI and was a post-doc at Stanford University before that. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focus includes robust language understanding\, collaborative text generation\, and understanding capabilities and issues of large language models.
\nAbstract
\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once after the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.
\nBiography
\n