BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21041@www.clsp.jhu.edu
DTSTAMP:20240329T101236Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nNarration is a universal human practice that serves as a key site of education\, collective memory\, fostering social belief systems\, and furthering human creativity. Recent studies in economics (Shiller\, 2020)\, climate science (Bushell et al.\, 2017)\, political polarization (Kubin et al.\, 2021)\, and mental health (Adler et al.\, 2016) suggest an emerging interdisciplinary consensus that narrative is a central concept for understanding human behavior and beliefs. For close to half a century\, the field of narratology has developed a rich set of theoretical frameworks for understanding narrative. And yet these theories have largely gone untested on large\, heterogeneous collections of texts. Scholars continue to generate schemas by extrapolating from small numbers of manually observed documents. In this talk\, I will discuss how we can use machine learning to develop data-driven theories of narration to better understand what Labov and Waletzky called “the simplest and most fundamental narrative structures.” How can machine learning help us approach what we might call a minimal theory of narrativity?\nBiography\nAndrew Piper is Professor and William Dawson Scholar in the Department of Languages\, Literatures\, and Cultures at McGill University. He is the director of .txtlab\, a laboratory for cultural analytics\, and editor of the Journal of Cultural Analytics\, an open-access journal dedicated to the computational study of culture. He is the author of numerous books and articles on the relationship of technology and reading\, including Book Was There: Reading in Electronic Times (Chicago 2012)\, Enumerations: Data and Literary Study (Chicago 2018)\, and most recently\, Can We Be Wrong? The Problem of Textual Evidence in a Time of Data (Cambridge 2020).
DTSTART;TZID=America/New_York:20211112T120000
DTEND;TZID=America/New_York:20211112T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Andrew Piper (McGill University) “How can we use machine learning to understand narration?”
URL:https://www.clsp.jhu.edu/events/andrew-piper-mcgill-university-how-can-we-use-machine-learning-to-understand-narration/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,November\,Piper
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21497@www.clsp.jhu.edu
DTSTAMP:20240329T101236Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nWhile the “deep learning tsunami” continues to define the state of the art in speech and language processing\, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial\, highly multilingual settings\, particularly for symbolic\, “front-end” speech applications. In this talk\, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I will then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors which lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion\, loanword detection\, abbreviation expansion\, and back-transliteration\, among other tasks.\nBiography\nKyle Gorman is an assistant professor of linguistics at the Graduate Center\, City University of New York\, and director of the master’s program in computational linguistics\; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat\, he is the coauthor of Finite-State Text Processing (Morgan & Claypool\, 2021) and the creator of Pynini\, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models\, text normalization\, grapheme-to-phoneme conversion\, and morphological analysis\, as well as many topics in linguistic theory.
DTSTART;TZID=America/New_York:20220401T120000
DTEND;TZID=America/New_York:20220401T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Kyle Gorman (City University of New York) “Weighted Finite-State Transducers: The Later Years”
URL:https://www.clsp.jhu.edu/events/kyle-gorman-city-university-of-new-york-weighted-finite-state-transducers-the-later-years/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Gorman\,March
END:VEVENT
END:VCALENDAR