BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21041@www.clsp.jhu.edu
DTSTAMP:20240329T115400Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nNarration is a universal human practice that serves as a key site of education\, collective memory\, fostering social belief systems\, and furthering human creativity. Recent studies in economics (Shiller\, 2020)\, climate science (Bushell et al.\, 2017)\, political polarization (Kubin et al.\, 2021)\, and mental health (Adler et al.\, 2016) suggest an emerging interdisciplinary consensus that narrative is a central concept for understanding human behavior and beliefs. For close to half a century\, the field of narratology has developed a rich set of theoretical frameworks for understanding narrative. And yet these theories have largely gone untested on large\, heterogeneous collections of texts. Scholars continue to generate schemas by extrapolating from small numbers of manually observed documents. In this talk\, I will discuss how we can use machine learning to develop data-driven theories of narration to better understand what Labov and Waletzky called “the simplest and most fundamental narrative structures.” How can machine learning help us approach what we might call a minimal theory of narrativity?\nBiography\nAndrew Piper is Professor and William Dawson Scholar in the Department of Languages\, Literatures\, and Cultures at McGill University.
He is the director of .txtlab\, a laboratory for cultural analytics\, and editor of the Journal of Cultural Analytics\, an open-access journal dedicated to the computational study of culture. He is the author of numerous books and articles on the relationship of technology and reading\, including Book Was There: Reading in Electronic Times (Chicago 2012)\, Enumerations: Data and Literary Study (Chicago 2018)\, and most recently\, Can We Be Wrong? The Problem of Textual Evidence in a Time of Data (Cambridge 2020).
DTSTART;TZID=America/New_York:20211112T120000
DTEND;TZID=America/New_York:20211112T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Andrew Piper (McGill University) “How can we use machine learning to understand narration?”
URL:https://www.clsp.jhu.edu/events/andrew-piper-mcgill-university-how-can-we-use-machine-learning-to-understand-narration/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,November\,Piper
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22412@www.clsp.jhu.edu
DTSTAMP:20240329T115400Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nDriven by the goal of eradicating language barriers on a global scale\, machine translation has solidified itself as a key focus of artificial intelligence research today. However\, such efforts have coalesced around a small subset of languages\, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe\, high-quality results\, all while keeping ethical considerations in mind? In this talk\, I introduce No Language Left Behind\, an initiative to break language barriers for low-resource languages. In No Language Left Behind\, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically\, we evaluated the performance of over 40\,000 different translation directions using a human-translated benchmark\, Flores-200\, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art\, laying important groundwork towards realizing a universal translation system in an open-source manner.\nBiography\nAngela is a research scientist at Meta AI Research in New York\, focusing on supporting efforts in speech and language research.
Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before working on translation\, Angela focused on research in on-device models for NLP and computer vision\, and on text generation.
DTSTART;TZID=America/New_York:20221118T120000
DTEND;TZID=America/New_York:20221118T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Angela Fan (Meta AI Research) “No Language Left Behind: Scaling Human-Centered Machine Translation”
URL:https://www.clsp.jhu.edu/events/angela-fan-facebook/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Fan\,November
END:VEVENT
END:VCALENDAR