BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20723@www.clsp.jhu.edu
DTSTAMP:20240329T103820Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nText simplification aims to help audiences read and understand a piece of text through lexical\, syntactic\, and discourse modifications\, while remaining faithful to its central idea and meaning. Thanks to large-scale parallel corpora derived from Wikipedia and news\, much of modern-day text simplification research focuses on sentence simplification\, transforming original\, more complex sentences into simplified versions. In this talk\, I present new frontiers that focus on discourse operations. First\, we consider the challenging task of simplifying highly technical language\, in our case\, medical texts. We introduce a new corpus of parallel texts in English comprising technical and lay summaries of all published evidence pertaining to different clinical topics. We then propose a new metric to quantify stylistic differences between the two\, and models for paragraph-level simplification. Second\, we present the first data-driven study of inserting elaborations and explanations during simplification\, and illustrate the richness and complexities of this phenomenon.
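The paired technical/lay corpus setup above can be illustrated with a toy comparison. This is NOT the metric the talk proposes; it uses two crude stylistic proxies (mean word length and mean sentence length in words), and the two example sentences are invented, purely to show how stylistic differences between a technical summary and its lay counterpart can be quantified.

```python
# Toy illustration of quantifying stylistic differences between a technical
# and a lay version of the same content. The proxies and examples are
# hypothetical, not the metric from the talk.
import re

def style_profile(text):
    """Return (mean word length, mean sentence length in words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    mean_word_len = sum(len(w) for w in words) / len(words)
    mean_sent_len = len(words) / len(sentences)
    return mean_word_len, mean_sent_len

technical = ("Percutaneous coronary intervention demonstrated no statistically "
             "significant reduction in all-cause mortality.")
lay = "The heart procedure did not clearly help people live longer."

t_profile = style_profile(technical)
l_profile = style_profile(lay)
# The technical sentence scores higher on both proxies.
print(t_profile, l_profile)
```

A real paragraph-level metric would of course look at far more than surface statistics, but the comparison scaffold (profile each side of the parallel corpus, then contrast) is the same.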
\nBiography
\nAbstract
\nWhile the “deep learning tsunami” continues to define the state of the art in speech and language processing\, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial\, highly-multilingual settings\, particularly for symbolic\, “front-end” speech applications. In this talk\, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly-effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors which lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion\, loanword detection\, abbreviation expansion\, and back-transliteration\, among other tasks.
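The pair n-gram idea in the abstract can be sketched in plain Python: aligned (grapheme, phoneme) pairs are fused into single symbols, an n-gram model is trained over those pair symbols, and candidate transductions are scored by the model. The real tools compile such a model into a weighted finite-state transducer; this sketch shows only the scoring logic, with a hand-aligned, made-up two-word training set.

```python
# Toy sketch of a pair n-gram model: a bigram model over fused
# (grapheme, phoneme) pair symbols, with add-one smoothing.
# Training data and alignments are invented for illustration.
from collections import Counter

# Each word is a pre-aligned sequence of (grapheme, phoneme) pair symbols.
train = [
    [("ph", "f"), ("o", "oʊ"), ("ne", "n")],   # "phone"
    [("ph", "f"), ("o", "ɑ"), ("n", "n")],     # "phon-" as in "phonic"
]

BOS = ("<s>", "<s>")  # sequence-start pair symbol
bigrams = Counter()
unigrams = Counter()
for word in train:
    seq = [BOS] + word
    unigrams.update(seq)
    bigrams.update(zip(seq, seq[1:]))

def score(word):
    """Add-one-smoothed bigram probability of a pair-symbol sequence."""
    vocab = len(unigrams) + 1
    p = 1.0
    seq = [BOS] + word
    for prev, cur in zip(seq, seq[1:]):
        p *= (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
    return p

# Score two candidate pronunciations for the graphemes "ph o ne":
cand_good = [("ph", "f"), ("o", "oʊ"), ("ne", "n")]
cand_bad = [("ph", "p"), ("o", "oʊ"), ("ne", "n")]
assert score(cand_good) > score(cand_bad)
```

In the FST formulation, each pair symbol becomes an input:output arc label, so composing an input string with the model yields a lattice of pronunciations whose shortest path is the predicted output.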
\nBiography
\nKyle Gorman is an assistant professor of linguistics at the Graduate Center\, City University of New York\, and director of the master’s program in computational linguistics\; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat\, he is the coauthor of Finite-State Text Processing (Morgan & Claypool\, 2021) and the creator of Pynini\, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models\, text normalization\, grapheme-to-phoneme conversion\, and morphological analysis\, as well as many topics in linguistic theory.
DTSTART;TZID=America/New_York:20220401T120000
DTEND;TZID=America/New_York:20220401T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Kyle Gorman (City University of New York) “Weighted Finite-State Transducers: The Later Years”
URL:https://www.clsp.jhu.edu/events/kyle-gorman-city-university-of-new-york-weighted-finite-state-transducers-the-later-years/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Gorman\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23882@www.clsp.jhu.edu
DTSTAMP:20240329T103820Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nLarge language models (LLMs) have demonstrated incredible power\, but they also possess vulnerabilities that can lead to misuse and potential attacks. In this presentation\, we will address two fundamental questions regarding the responsible utilization of LLMs: (1) How can we accurately identify AI-generated text? (2) What measures can safeguard the intellectual property of LLMs? We will introduce two recent watermarking techniques designed for text and models\, respectively. Our discussion will encompass the theoretical underpinnings that ensure the correctness of watermark detection\, along with robustness against evasion attacks. Furthermore\, we will showcase empirical evidence validating their effectiveness. These findings establish a solid technical groundwork for policymakers\, legal professionals\, and generative AI practitioners alike.
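For intuition about how text watermark detection can work\, here is a minimal sketch of one well-known family of schemes (the “green list” watermark of Kirchenbauer et al.\, 2023) — not necessarily the exact techniques presented in this talk. At generation time\, the previous token seeds a hash that marks a fraction gamma of the vocabulary “green\,” and sampling is biased toward green tokens; detection needs no model access\, only a count of green tokens and a z-score against the null hypothesis that each token is green with probability gamma.

```python
# Minimal green-list watermark detection sketch (Kirchenbauer-style),
# shown for intuition only; constants and token handling are simplified.
import hashlib
import math

GAMMA = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Deterministic pseudo-random vocabulary partition keyed on prev token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GAMMA

def detect_z_score(tokens):
    """z-score for 'too many green tokens to be unwatermarked text'."""
    n = len(tokens) - 1
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (greens - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

# Unwatermarked text hovers near z = 0; a generator biased toward green
# tokens drives z up with sequence length, flagging the text.
```

The correctness and robustness guarantees mentioned in the abstract concern exactly this detector: bounding its false-positive rate and showing the green-token excess survives paraphrasing and editing attacks.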
\nBiography
\nLei Li is an Assistant Professor in the Language Technologies Institute at Carnegie Mellon University. He received his Ph.D. from the Carnegie Mellon University School of Computer Science. He is a recipient of the ACL 2021 Best Paper Award\, the CCF Young Elite Award in 2019\, CCF distinguished speaker in 2017\, the Wu Wen-tsün AI prize in 2017\, and the 2012 ACM SIGKDD dissertation award (runner-up)\, and is recognized as a Notable Area Chair of ICLR 2023. Previously\, he was a faculty member at UC Santa Barbara. Prior to that\, he founded ByteDance AI Lab in 2016 and led its research in NLP\, ML\, Robotics\, and Drug Discovery. He launched ByteDance’s machine translation system VolcTrans and AI writing system Xiaomingbot\, serving one billion users.
DTSTART;TZID=America/New_York:20230901T120000
DTEND;TZID=America/New_York:20230901T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Lei Li (Carnegie Mellon University) “Empowering Responsible Use of Large Language Models”
URL:https://www.clsp.jhu.edu/events/lei-li-carnegie-mellon-university-empowering-responsible-use-of-large-language-models/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Li\,September
END:VEVENT
END:VCALENDAR