BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20115@www.clsp.jhu.edu
DTSTAMP:20240329T095712Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nData science in small medical datasets usually means doing precision guesswork on unreliable data provided by those with high expectations. The first part of this talk will focus on issues that data scientists and engineers have to address when working with this kind of data (e.g. unreliable labels\, the effect of confounding factors\, the necessity of clinical interpretability\, difficulties with fusing multiple data sets). The second part of the talk will include some real examples of this kind of data science in the fields of neurology (prediction of motor deficits in Parkinson’s disease based on acoustic analysis of speech\, diagnosis of Parkinson’s disease dysgraphia utilising online handwriting\, exploring the Mozart effect in epilepsy based on music information retrieval) and psychology (assessment of graphomotor disabilities in children with developmental dysgraphia).\nBiography\nJiri Mekyska is the head of the BDALab (Brain Diseases Analysis Laboratory) at the Brno University of Technology\, where he leads a multidisciplinary team of researchers (signal processing engineers\, data scientists\, neurologists\, psychologists) with a special focus on the development of new digital endpoints and digital biomarkers enabling better understanding\, diagnosis and monitoring of neurodegenerative (e.g. Parkinson’s disease) and neurodevelopmental (e.g. dysgraphia) diseases.
DTSTART;TZID=America/New_York:20210329T120000
DTEND;TZID=America/New_York:20210329T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Jiri Mekyska (Brno University of Technology) “Data Science in Small Medical Data Sets: From Logistic Regression Towards Logistic Regression”
URL:https://www.clsp.jhu.edu/events/jiri-mekyska-brno-university-of-technology/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nData science in small medical datasets usually means doing precision guesswork on unreliable data provided by those with high expectations. The first part of this talk will focus on issues that data scientists and engineers have to address when working with this kind of data (e.g. unreliable labels\, the effect of confounding factors\, the necessity of clinical interpretability\, difficulties with fusing multiple data sets). The second part of the talk will include some real examples of this kind of data science in the fields of neurology (prediction of motor deficits in Parkinson’s disease based on acoustic analysis of speech\, diagnosis of Parkinson’s disease dysgraphia utilising online handwriting\, exploring the Mozart effect in epilepsy based on music information retrieval) and psychology (assessment of graphomotor disabilities in children with developmental dysgraphia).
\nBiography
\nAbstract
\nText simplification aims to help audiences read and understand a piece of text through lexical\, syntactic\, and discourse modifications\, while remaining faithful to its central idea and meaning. Thanks to large-scale parallel corpora derived from Wikipedia and News\, much of modern-day text simplification research focuses on sentence simplification\, transforming original\, more complex sentences into simplified versions. In this talk\, I present new frontiers that focus on discourse operations. First\, we consider the challenging task of simplifying highly technical language\, in our case\, medical texts. We introduce a new corpus of parallel texts in English comprising technical and lay summaries of all published evidence pertaining to different clinical topics. We then propose a new metric to quantify stylistic differences between the two\, and models for paragraph-level simplification. Second\, we present the first data-driven study of inserting elaborations and explanations during simplification\, and illustrate the richness and complexities of this phenomenon.\n
Biography
\nAbstract
\nWhile the “deep learning tsunami” continues to define the state of the art in speech and language processing\, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial\, highly-multilingual settings\, particularly for symbolic\, “front-end” speech applications. In this talk\, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I will then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly-effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors which lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion\, loanword detection\, abbreviation expansion\, and back-transliteration\, among other tasks.
\nBiography
\nKyle Gorman is an assistant professor of linguistics at the Graduate Center\, City University of New York\, and director of the master’s program in computational linguistics\; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat\, he is the coauthor of Finite-State Text Processing (Morgan & Claypool\, 2021) and the creator of Pynini\, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models\, text normalization\, grapheme-to-phoneme conversion\, and morphological analysis\, as well as many topics in linguistic theory.
\n
X-TAGS;LANGUAGE=en-US:2022\,Gorman\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23882@www.clsp.jhu.edu
DTSTAMP:20240329T095712Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nLarge language models (LLMs) have demonstrated incredible power\, but they also possess vulnerabilities that can lead to misuse and potential attacks. In this presentation\, we will address two fundamental questions regarding the responsible utilization of LLMs: (1) How can we accurately identify AI-generated text? (2) What measures can safeguard the intellectual property of LLMs? We will introduce two recent watermarking techniques designed for text and models\, respectively. Our discussion will encompass the theoretical underpinnings that ensure the correctness of watermark detection\, along with robustness against evasion attacks. Furthermore\, we will showcase empirical evidence validating their effectiveness. These findings establish a solid technical groundwork for policymakers\, legal professionals\, and generative AI practitioners alike.\nBiography\nLei Li is an Assistant Professor in the Language Technologies Institute at Carnegie Mellon University. He received his Ph.D. from the Carnegie Mellon University School of Computer Science. He is a recipient of the ACL 2021 Best Paper Award\, the CCF Young Elite Award in 2019\, CCF distinguished speaker in 2017\, the Wu Wen-tsün AI prize in 2017\, and the 2012 ACM SIGKDD dissertation award (runner-up)\, and was recognized as a Notable Area Chair of ICLR 2023. Previously\, he was a faculty member at UC Santa Barbara. Prior to that\, he founded ByteDance AI Lab in 2016 and led its research in NLP\, ML\, Robotics\, and Drug Discovery. He launched ByteDance’s machine translation system VolcTrans and AI writing system Xiaomingbot\, serving one billion users.
DTSTART;TZID=America/New_York:20230901T120000
DTEND;TZID=America/New_York:20230901T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Lei Li (Carnegie Mellon University) “Empowering Responsible Use of Large Language Models”
URL:https://www.clsp.jhu.edu/events/lei-li-carnegie-mellon-university-empowering-responsible-use-of-large-language-models/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nLarge language models (LLMs) have demonstrated incredible power\, but they also possess vulnerabilities that can lead to misuse and potential attacks. In this presentation\, we will address two fundamental questions regarding the responsible utilization of LLMs: (1) How can we accurately identify AI-generated text? (2) What measures can safeguard the intellectual property of LLMs? We will introduce two recent watermarking techniques designed for text and models\, respectively. Our discussion will encompass the theoretical underpinnings that ensure the correctness of watermark detection\, along with robustness against evasion attacks. Furthermore\, we will showcase empirical evidence validating their effectiveness. These findings establish a solid technical groundwork for policymakers\, legal professionals\, and generative AI practitioners alike.
\nBiography
\nLei Li is an Assistant Professor in the Language Technologies Institute at Carnegie Mellon University. He received his Ph.D. from the Carnegie Mellon University School of Computer Science. He is a recipient of the ACL 2021 Best Paper Award\, the CCF Young Elite Award in 2019\, CCF distinguished speaker in 2017\, the Wu Wen-tsün AI prize in 2017\, and the 2012 ACM SIGKDD dissertation award (runner-up)\, and was recognized as a Notable Area Chair of ICLR 2023. Previously\, he was a faculty member at UC Santa Barbara. Prior to that\, he founded ByteDance AI Lab in 2016 and led its research in NLP\, ML\, Robotics\, and Drug Discovery. He launched ByteDance’s machine translation system VolcTrans and AI writing system Xiaomingbot\, serving one billion users.
\n
X-TAGS;LANGUAGE=en-US:2023\,Li\,September
END:VEVENT
END:VCALENDAR