BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-22394@www.clsp.jhu.edu
DTSTAMP:20240328T161241Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\n\nModel robustness and spurious correlations have received increasing attention in the NLP community\, both in methods and evaluation. The term “spurious correlation” is overloaded\, though\, and can refer to any undesirable shortcuts learned by the model\, as judged by domain experts.\n\nWhen designing mitigation algorithms\, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However\, many features in NLP (e.g. word overlap and negation) are not spurious in the sense that the background is spurious for classifying objects in an image. In contrast\, they carry important information that humans need to make predictions. In this talk\, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization in representation\, learning\, and evaluation.\nBiography\nHe He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland\, College Park. Before joining NYU\, she spent a year at AWS AI and was a post-doc at Stanford University before that. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focuses on robust language understanding\, collaborative text generation\, and understanding the capabilities and issues of large language models.
DTSTART;TZID=America/New_York:20221014T120000
DTEND;TZID=America/New_York:20221014T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:He He (New York University) “What We Talk about When We Talk about Spurious Correlations in NLP”
URL:https://www.clsp.jhu.edu/events/he-he-new-york-university/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:Abstract\n\nModel robustness and spurious correlations have received increasing attention in the NLP community\, both in methods and evaluation. The term “spurious correlation” is overloaded\, though\, and can refer to any undesirable shortcuts learned by the model\, as judged by domain experts.
 \n\nWhen designing mitigation algorithms\, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However\, many features in NLP (e.g. word overlap and negation) are not spurious in the sense that the background is spurious for classifying objects in an image. In contrast\, they carry important information that humans need to make predictions. In this talk\, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization in representation\, learning\, and evaluation.
 \nBiography\nHe He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland\, College Park. Before joining NYU\, she spent a year at AWS AI and was a post-doc at Stanford University before that. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focuses on robust language understanding\, collaborative text generation\, and understanding the capabilities and issues of large language models.
 \nAbstract\nZipf’s law is commonly glossed by the aphorism “infrequent words are frequent\,” but in practice it has often meant that there are three types of words: frequent\, infrequent\, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words\, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language\, for example\, is 1.44 billion over N to the 1.09. In languages with sufficient data\, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages\, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data\, unsupervised ASR methods can be proven to converge\, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability\, for example\, can cause speech patterns that were never seen in the training database\, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data\, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
 \nBiography\nMark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign. He has published research in speech production and perception\, source separation\, voice conversion\, and low-resource automatic speech recognition.
X-TAGS;LANGUAGE=en-US:2022\,December\,Hasegawa-Johnson
END:VEVENT
END:VCALENDAR