BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-20117@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nNeural sequence generation systems oftentimes generat e sequences by searching for the most likely sequence under the learnt pro bability distribution. This assumes that the most likely sequence\, i.e. t he mode\, under such a model must also be the best sequence it has to offe r (often in a given context\, e.g. conditioned on a source sentence in tra nslation). Recent findings in neural machine translation (NMT) show that t he true most likely sequence oftentimes is empty under many state-of-the-a rt NMT models. This follows a large list of other pathologies and biases o bserved in NMT and other sequence generation models: a length bias\, large r beams degrading performance\, exposure bias\, and many more. Many of the se works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search \, e.g. beam search\, that introduces many of these pathologies and biases \, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass ove r many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture imp ortant aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribu tion and not just its mode. We provide one example of such a decision rule \, and show that this is a fruitful research direction.\nBiography\nI am a n assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Langu age Learning group.\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Example s of specific problems I am interested in include language modelling\, mac hine translation\, syntactic parsing\, textual entailment\, text classific ation\, and question answering.\nI also develop techniques to approach gen eral machine learning problems such as probabilistic inference\, gradient and density estimation.\nMy interests sit at the intersection of disciplin es such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.\n \n DTSTART;TZID=America/New_York:20210419T120000 DTEND;TZID=America/New_York:20210419T131500 LOCATION:via Zoom SEQUENCE:0 SUMMARY:Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation” URL:https://www.clsp.jhu.edu/events/wilker-aziz-university-of-amsterdam/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstr act
\nNeural sequence generation systems oftentimes generat e sequences by searching for the most likely sequence under the learnt pro bability distribution. This assumes that the most likely sequence\, i.e. t he mode\, under such a model must also be the best sequence it has to offe r (often in a given context\, e.g. conditioned on a source sentence in tra nslation). Recent findings in neural machine translation (NMT) show that t he true most likely sequence oftentimes is empty under many state-of-the-a rt NMT models. This follows a large list of other pathologies and biases o bserved in NMT and other sequence generation models: a length bias\, large r beams degrading performance\, exposure bias\, and many more. Many of the se works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search \, e.g. beam search\, that introduces many of these pathologies and biases \, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass ove r many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture imp ortant aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribu tion and not just its mode. We provide one example of such a decision rule \, and show that this is a fruitful research direction.
\nBi ography
\nI am an assistant professor (UD) in natu ral language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.
\nMy work concerns the design of models and algorithms that learn to represe nt\, understand\, and generate language data. Examples of specific problem s I am interested in include language modelling\, machine translation\, sy ntactic parsing\, textual entailment\, text classification\, and question answering.
\nI also develop techniques to approach general machine l earning problems such as probabilistic inference\, gradient and density es timation.
\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimiz ation\, formal languages\, and computational linguistics.
\n\n< p> \n X-TAGS;LANGUAGE=en-US:2021\,April\,Aziz END:VEVENT BEGIN:VEVENT UID:ai1ec-20120@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments an d instructions. This model is focused on scalable data-driven learning\, w hich is task-agnostic\, leverages simulation\, learns from past experience \, and can be quickly adapted to work in the real-world through limited in teractions. In this talk\, we’ll share some of our recent work in this dir ection in both manipulation and locomotion applications.\nBiography\nCarol ina Parada is a Senior Engineering Manager at Google Robotics. She leads t he robot-mobility group\, which focuses on improving robot motion planning \, navigation\, and locomotion\, using reinforcement learning. Prior to th at\, she led the camera perception team for self-driving cars at Nvidia fo r 2 years. She was also a lead with Speech @ Google for 7 years\, where sh e drove multiple research and engineering efforts that enabled Ok Google\, the Google Assistant\, and Voice-Search. Carolina grew up in Venezuela an d moved to the US to pursue a B.S. and M.S. degree in Electrical Engineeri ng at University of Washington and her Phd at Johns Hopkins University at the Center for Language and Speech Processing (CLSP). DTSTART;TZID=America/New_York:20210423T120000 DTEND;TZID=America/New_York:20210423T131500 LOCATION:via Zoom SEQUENCE:0 SUMMARY:Carolina Parada (Google AI) “State of Robotics @ Google” URL:https://www.clsp.jhu.edu/events/carolina-parada-google-ai/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Abstr act
\nRobotics@Google’s mission is to make robots useful i n the real world through machine learning. We are excited about a new mode l for robotics\, designed for generalization across diverse environments a nd instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experienc e\, and can be quickly adapted to work in the real-world through limited i nteractions. In this talk\, we’ll share some of our recent work in this di rection in both manipulation and locomotion applications.
\n< strong>Biography
\nCarolina Parada is a Senior Engineering Manager at Google Robotics. She leads the robot-mobility group\, which focuses on improving robot motion planning\, navigation\, and locomotion\, using reinforcement learning. Prior to that\, she led the camera perception team for self-driving cars at Nvidia for 2 years. She was also a lead with Speech @ Google for 7 years\, where she drove multiple research and engineering efforts that enabled Ok Google\, the Google Assistant\, and Voice Search. Carolina grew up in Venezuela and moved to the US to pursue B.S. and M.S. degrees in Electrical Engineering at the University of Washington and her PhD at Johns Hopkins University at the Center for Language and Speech Processing (CLSP).
\n X-TAGS;LANGUAGE=en-US:2021\,April\,Parada END:VEVENT BEGIN:VEVENT UID:ai1ec-20716@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nOver the last few years\, deep neural models have tak en over the field of natural language processing (NLP)\, brandishing great improvements on many of its sequence-level tasks. But the end-to-end natu re of these models makes it hard to figure out whether the way they repres ent individual words aligns with how language builds itself from the botto m up\, or how lexical changes in register and domain can affect the untest ed aspects of such representations.\nIn this talk\, I will present NYTWIT\ , a dataset created to challenge large language models at the lexical leve l\, tasking them with identification of processes leading to the formation of novel English words\, as well as with segmentation and recovery of the specific subclass of novel blends. I will then present XRayEmb\, a method which alleviates the hardships of processing these novelties by fitting a character-level encoder to the existing models’ subword tokenizers\; and conclude with a discussion of the drawbacks of current tokenizers’ vocabul ary creation schemes.\nBiography\nYuval Pinter is a Senior Lecturer in the Department of Computer Science at Ben-Gurion University of the Negev\, fo cusing on natural language processing. Yuval got his PhD at the Georgia In stitute of Technology School of Interactive Computing as a Bloomberg Data Science PhD Fellow. Before that\, he worked as a Research Engineer at Yaho o Labs and as a Computational Linguist at Ginger Software\, and obtained a n MA in Linguistics and a BSc in CS and Mathematics\, both from Tel Aviv U niversity. Yuval blogs (in Hebrew) about language matters on Dagesh Kal. DTSTART;TZID=America/New_York:20210910T120000 DTEND;TZID=America/New_York:20210910T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD SEQUENCE:0 SUMMARY:Yuval Pinter (Ben-Gurion University – Virtual Visit) “Challenging a nd Adapting NLP Models to Lexical Phenomena” URL:https://www.clsp.jhu.edu/events/yuval-pinter/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nOver the last few years\, deep neural models have tak en over the field of natural language processing (NLP)\, brandishing great improvements on many of its sequence-level tasks. But the end-to-end natu re of these models makes it hard to figure out whether the way they repres ent individual words aligns with how language builds itself from the botto m up\, or how lexical changes in register and domain can affect the untest ed aspects of such representations.
\nIn this talk\, I will present NYTWIT\, a dataset created to challenge large language models at the lexic al level\, tasking them with identification of processes leading to the fo rmation of novel English words\, as well as with segmentation and recovery of the specific subclass of novel blends. I will then present XRayEmb\, a method which alleviates the hardships of processing these novelties by fi tting a character-level encoder to the existing models’ subword tokenizers \; and conclude with a discussion of the drawbacks of current tokenizers’ vocabulary creation schemes.
\nBiography
\nYuval Pinter is a Senior Lecturer in the Department of Computer Science at Ben-Gurion University of the Negev\, focusing on natural language processing. Yuval got his PhD at the Georgia Institute of Technology School of Interactive Computing as a Bloomberg Data Science PhD Fellow. Before that\, he worked as a Research Engineer at Yahoo Labs and as a Computational Linguist at Ginger Software\, and obtained an MA in Linguistics and a BSc in CS and Mathematics\, both from Tel Aviv University. Yuval blogs (in Hebrew) about language matters on Dagesh Kal.
Abstr act
\nText simplification aims to help audiences read and understand a piece of text through lexical\, syntactic\, and discourse modifications\, while remaining faithful to its central idea and meaning. Thanks to large-scale parallel corpora derived from Wikipedia and News\, much of modern-day text simplification research focuses on sentence simplification\, transforming original\, more complex sentences into simplified versions. In this talk\, I present new frontiers that focus on discourse operations. First\, we consider the challenging task of simplifying highly technical language\, in our case\, medical texts. We introduce a new corpus of parallel texts in English comprising technical and lay summaries of all published evidence pertaining to different clinical topics. We then propose a new metric to quantify stylistic differences between the two\, and models for paragraph-level simplification. Second\, we present the first data-driven study of inserting elaborations and explanations during simplification\, and illustrate the richness and complexities of this phenomenon.</p>\n
Biography
\nAbstr act
\nRaytheon BBN participated in the IARPA MATERIAL progr am\, whose objective is to enable rapid development of language-independen t methods for cross-lingual information retrieval (CLIR). The challenging CLIR task of retrieving documents written (or spoken) in one language so t hat they satisfy an information need expressed in a different language is exacerbated by unique challenges posed by the MATERIAL program: limited tr aining data for automatic speech recognition and machine translation\, sca nt lexical resources\, non-standardized orthography\, etc. Furthermore\, t he format of the queries and the “Query-Weighted Value” performance measur e are non-standard and not previously studied in the IR community. In this talk\, we will describe the Raytheon BBN CLIR system\, which was successf ul at addressing the above challenges and unique characteristics of the pr ogram.
\nBiography
\nDamianos Karakos has been at Raytheon BBN for the past nine years\, wh ere he is currently a Senior Principal Engineer\, Research. Before that\, he was research faculty at Johns Hopkins University. He has worked on seve ral Government projects (e.g.\, DARPA GALE\, DARPA RATS\, IARPA BABEL\, IA RPA MATERIAL\, IARPA BETTER) and on a variety of HLT-related topics (e.g.\ , speech recognition\, speech activity detection\, keyword search\, inform ation retrieval). He has published more than 60 peer-reviewed papers. His research interests lie at the intersection of human language technology an d machine learning\, with an emphasis on statistical methods. He obtained a PhD in Electrical Engineering from the University of Maryland\, College Park\, in 2002.
\n\n
Abstr act
\nWhile there is a vast amount of text written about ne arly any topic\, this is often difficult for someone unfamiliar with a spe cific field to understand. Automated text simplification aims to reduce th e complexity of a document\, making it more comprehensible to a broader au dience. Much of the research in this field has traditionally focused on si mplification sub-tasks\, such as lexical\, syntactic\, or sentence-level s implification. However\, current systems struggle to consistently produce high-quality simplifications. Phrase-based models tend to make too many po or transformations\; on the other hand\, recent neural models\, while prod ucing grammatical output\, often do not make all needed changes to the ori ginal text. In this thesis\, I discuss novel approaches for improving lexi cal and sentence-level simplification systems. Regarding sentence simplifi cation models\, after noting that encouraging diversity at inference time leads to significant improvements\, I take a closer look at the idea of di versity and perform an exhaustive comparison of diverse decoding technique s on other generation tasks. I also discuss the limitations in the framing of current simplification tasks\, which prevent these models from yet bei ng practically useful. Thus\, I also propose a retrieval-based reformulati on of the problem. Specifically\, starting with a document\, I identify co ncepts critical to understanding its content\, and then retrieve documents relevant for each concept\, re-ranking them based on the desired complexi ty level.
\nBiography
\nI ’m a research scientist at the HLTCOE at Johns Hopkins University. My prim ary research interests are in language generation\, diverse and constraine d decoding\, and information retrieval. During my PhD I focused mainly on the task of text simplification\, and now am working on formulating struct ured prediction problems as end-to-end generation tasks. I received my PhD in July 2021 from the University of Pennsylvania with Chris Callison-Burc h and Marianna Apidianaki.
\n\n X-TAGS;LANGUAGE=en-US:2021\,Kriz\,October END:VEVENT BEGIN:VEVENT UID:ai1ec-21023@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nSpeech data is notoriously difficult to work with due to a variety of codecs\, lengths of recordings\, and meta-data formats. W e present Lhotse\, a speech data representation library that draws upon le ssons learned from Kaldi speech recognition toolkit and brings its concept s into the modern deep learning ecosystem. Lhotse provides a common JSON d escription format with corresponding Python classes and data preparation r ecipes for over 30 popular speech corpora. Various datasets can be easily combined together and re-purposed for different tasks. The library handles multi-channel recordings\, long recordings\, local and cloud storage\, la zy and on-the-fly operations amongst other features. We introduce Cut and CutSet concepts\, which simplify common data wrangling tasks for audio and help incorporate acoustic context of speech utterances. Finally\, we show how Lhotse leverages PyTorch data API abstractions and adopts them to han dle speech data for deep learning.\nBiography\nPiotr Zelasko is an assista nt research scientist in the Center for Language and Speech Processing (CL SP) who specializes in automatic speech recognition (ASR) and spoken langu age understanding (SLU). His current research focuses on applying multilin gual and crosslingual speech recognition systems to categorize the phoneti c inventory of a previously unknown language and on improving defenses aga inst adversarial attacks on both speaker identification and automatic spee ch recognition systems. He is also addressing the question of how to struc ture a spontaneous conversation into high-level semantic units such as dia log acts or topics. Finally\, he is working on Lhotse + K2\, the next-gene ration speech processing research software ecosystem. Before joining Johns Hopkins\, Zelasko worked as a machine learning consultant for Avaya (2017 -2019)\, and as a machine learning engineer for Techmo (2015-2017). Zelask o received his PhD (2019) in electronics engineering\, as well as his mast er’s (2014) and undergraduate degrees (2013) in acoustic engineering from AGH University of Science and Technology in Kraków\, Poland. DTSTART;TZID=America/New_York:20211029T120000 DTEND;TZID=America/New_York:20211029T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore MD 21218 SEQUENCE:0 SUMMARY:Piotr Zelasko (CLSP at JHU) “Lhotse: a speech data representation l ibrary for the modern deep learning ecosystem” URL:https://www.clsp.jhu.edu/events/piotr-zelasko-clsp-at-jhu-lhotse-a-spee ch-data-representation-library-for-the-modern-deep-learning-ecosystem/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstr act
\nSpeech data is notoriously difficult t o work with due to a variety of codecs\, lengths of recordings\, and meta- data formats. We present Lhotse\, a speech data representation library tha t draws upon lessons learned from Kaldi speech recognition toolkit and bri ngs its concepts into the modern deep learning ecosystem. Lhotse provides a common JSON description format with corresponding Python classes and dat a preparation recipes for over 30 popular speech corpora. Various datasets can be easily combined together and re-purposed for different tasks. The library handles multi-channel recordings\, long recordings\, local and clo ud storage\, lazy and on-the-fly operations amongst other features. We int roduce Cut and CutSet concepts\, which simplify common data wrangling task s for audio and help incorporate acoustic context of speech utterances. Fi nally\, we show how Lhotse leverages PyTorch data API abstractions and ado pts them to handle speech data for deep learning.
\nB iography
\nPiotr Zelasko is an assistant research scientist in the Center for Language and Speech Processing (CLSP) who specializes i n automatic speech recognition (ASR) and spoken language understanding (SL U). His current research focuses on applying multilingual and crosslingual speech recognition systems to categorize the phonetic inventory of a prev iously unknown language and on improving defenses against adversarial atta cks on both speaker identification and automatic speech recognition system s. He is also addressing the question of how to structure a spontaneous co nversation into high-level semantic units such as dialog acts or topics. F inally\, he is working on Lhotse + K2\, the next-generation speech process ing research software ecosystem. Before joining Johns Hopkins\, Zelasko wo rked as a machine learning consultant for Avaya (2017-2019)\, and as a mac hine learning engineer for Techmo (2015-2017). Zelasko received his PhD (2 019) in electronics engineering\, as well as his master’s (2014) and under graduate degrees (2013) in acoustic engineering from AGH University of Sci ence and Technology in Kraków\, Poland.
\n X-TAGS;LANGUAGE=en-US:2021\,October\,Zelasko END:VEVENT BEGIN:VEVENT UID:ai1ec-21031@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nMost people take for granted that when they speak\, t hey will be heard and understood. But for the millions who live with speec h impairments caused by physical or neurological conditions\, trying to co mmunicate with others can be difficult and lead to frustration. While ther e have been a great number of recent advances in Automatic Speech Recognit ion (ASR) technologies\, these interfaces can be inaccessible for those wi th speech impairments.\nIn this talk\, we will present Parrotron\, an end- to-end-trained speech-to-speech conversion model that maps an input spectr ogram directly to another spectrogram\, without utilizing any intermediate discrete representation. The system is also trained to emit words in addi tion to a spectrogram\, in parallel. We demonstrate that this model can be trained to normalize speech from any speaker regardless of accent\, pro sody\, and background noise\, into the voice of a single canonical target speaker with a fixed accent and consistent articulation and prosody. We fu rther show that this normalization model can be adapted to normalize highl y atypical speech from speakers with a variety of speech impairments (due to\, ALS\, Cerebral-Palsy\, Deafness\, Stroke\, Brain Injury\, etc.) \, r esulting in significant improvements in intelligibility and naturalness\, measured via a speech recognizer and listening tests. Finally\, demonstrat ing the utility of this model on other speech tasks\, we show that the sam e model architecture can be trained to perform a speech separation task.\n Dimitri will give a brief description of some key moments in development o f speech recognition algorithms that he was involved in and their applicat ions to YouTube closed captions\, Live Transcribe and wearable subtitles. \nFadi will then speak about the development of Parrotron.\nBiographies\nD imitri Kanevsky started his career at Google working on speech recognition algorithms. Prior to joining Google\, Dimitri was a Research staff member in the Speech Algorithms Department at IBM. Prior to IBM\, he worked at a number of centers for higher mathematics\, including Max Planck Institu te in Germany and the Institute for Advanced Studies in Princeton. He curr ently holds 295 US patents and was Master Inventor at IBM. MIT Technology Review recognized Dimitri conversational biometrics based security patent as one of five most influential patents for 2003. In 2012 Dimitri was hono red at the White House as a Champion of Change for his efforts to advance access to science\, technology\, engineering\, and math.\nFadi Biadsy is a senior staff research scientist at Google NY for the past ten years. He h as been exploring and leading multiple projects at Google\, including spee ch recognition\, speech conversion\, language modeling\, and semantic unde rstanding. He received his PhD from Columbia University in 2011. At Colum bia\, he researched a variety of speech and language processing projects i ncluding\, dialect and accent recognition\, speech recognition\, charismat ic speech and question answering. He holds a BSc and MSc in mathematics a nd computer science. He worked on handwriting recognition during his maste rs degree and he worked as a senior software developer for five years at D alet digital media systems building multimedia broadcasting systems. 
DTSTART;TZID=America/New_York:20211105T120000 DTEND;TZID=America/New_York:20211105T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Fadi Biadsy and Dimitri Kanevsky (Google) “Speech Recognition: From Speaker Dependent to Speaker Independent to Full Personalization” “Parrot ron: A Unified E2E Speech-to Speech Conversion and ASR Model for Atypical Speech” URL:https://www.clsp.jhu.edu/events/fadi-biadsy-and-dimitri-kanevsky-google / X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nMost people take for granted that when they speak\, they will be heard and understood. But for the millions who live with speech impairments caused by physical or neurological condi tions\, trying to communicate with others can be difficult and lead to fru stration. While there have been a great number of recent advances in Autom atic Speech Recognition (ASR) technologies\, these interfaces can be inacc essible for those with speech impairments.
\nIn this talk\, we will present Parrotron\, an end-to-end-trained speech-to-sp eech conversion model that maps an input spectrogram directly to another s pectrogram\, without utilizing any intermediate discrete representation. T he system is also trained to emit words in addition to a spectrogram\, in parallel. We demonstrate that this model can be trained to normalize spe ech from any speaker regardless of accent\, prosody\, and background noise \, into the voice of a single canonical target speaker with a fixed accent and consistent articulation and prosody. We further show that this normal ization model can be adapted to normalize highly atypical speech from spea kers with a variety of speech impairments (due to\, ALS\, Cerebral-Palsy\, Deafness\, Stroke\, Brain Injury\, etc.) \, resulting in significant imp rovements in intelligibility and naturalness\, measured via a speech recog nizer and listening tests. Finally\, demonstrating the utility of this mod el on other speech tasks\, we show that the same model architecture can be trained to perform a speech separation task.
\nDimitri will give a brief description of some key moments in development o f speech recognition algorithms that he was involved in and their applicat ions to YouTube closed captions\, Live Transcribe and wearable subtitles.
\nFadi will then speak about the development of Parrotron.
\nBiographies
\nDimitri Kanevsky started his career at Google working on speech recognition algorithms. Prior to joining Google\, Dimitri was a Research staff member in the Speech Algorithms Department at IBM. Prior to IBM\, he worked at a number of centers for higher mathematics\, including the Max Planck Institute in Germany and the Institute for Advanced Study in Princeton. He currently holds 295 US patents and was a Master Inventor at IBM. MIT Technology Review recognized Dimitri's conversational-biometrics-based security patent as one of the five most influential patents of 2003. In 2012 Dimitri was honored at the White House as a Champion of Change for his efforts to advance access to science\, technology\, engineering\, and math.
\nFadi Biadsy has been a senior staff research scientist at Google NY for the past ten years. He has been exploring and leading multiple projects at Google\, including speech recognition\, speech conversion\, language modeling\, and semantic understanding. He received his PhD from Columbia University in 2011. At Columbia\, he researched a variety of speech and language processing projects including dialect and accent recognition\, speech recognition\, charismatic speech\, and question answering. He holds a BSc and MSc in mathematics and computer science. He worked on handwriting recognition during his master's degree and he worked as a senior software developer for five years at Dalet digital media systems building multimedia broadcasting systems.
\n X-TAGS;LANGUAGE=en-US:2021\,Biadsy and Kanevsky\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-21041@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nNarration is a universal human practice that serves a s a key site of education\, collective memory\, fostering social belief sy stems\, and furthering human creativity. Recent studies in economics (Shil ler\, 2020)\, climate science (Bushell et al.\, 2017)\, political polariza tion (Kubin et al.\, 2021)\, and mental health (Adler et al.\, 2016) sugge st an emerging interdisciplinary consensus that narrative is a central con cept for understanding human behavior and beliefs. For close to half a cen tury\, the field of narratology has developed a rich set of theoretical fr ameworks for understanding narrative. And yet these theories have largely gone untested on large\, heterogenous collections of texts. Scholars conti nue to generate schemas by extrapolating from small numbers of manually ob served documents. In this talk\, I will discuss how we can use machine lea rning to develop data-driven theories of narration to better understand wh at Labov and Waletzky called “the simplest and most fundamental narrative structures.” How can machine learning help us approach what we might call a minimal theory of narrativity?\nBiography\nAndrew Piper is Professor and William Dawson Scholar in the Department of Languages\, Literatures\, and Cultures at McGill University. He is the director of _.txtlab \n_\,\n a l aboratory for cultural analytics\, and editor of the /Journal of Cultural Analytics/\, an open-access journal dedicated to the computational study o f culture. He is the author of numerous books and articles on the relation ship of technology and reading\, including /Book Was There: Reading in Ele ctronic Times/(Chicago 2012)\, /Enumerations: Data and Literary Study/(Chi cago 2018)\, and most recently\, /Can We Be Wrong? The Problem of Textual Evidence in a Time of Data/(Cambridge 2020). DTSTART;TZID=America/New_York:20211112T120000 DTEND;TZID=America/New_York:20211112T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Andrew Piper (McGill University) ” How can we use machine learning to understand narration?” URL:https://www.clsp.jhu.edu/events/andrew-piper-mcgill-university-how-can- we-use-machine-learning-to-understand-narration/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nNarration is a universal human practice that serves a s a key site of education\, collective memory\, fostering social belief sy stems\, and furthering human creativity. Recent studies in economics (Shil ler\, 2020)\, climate science (Bushell et al.\, 2017)\, political polariza tion (Kubin et al.\, 2021)\, and mental health (Adler et al.\, 2016) sugge st an emerging interdisciplinary consensus that narrative is a central con cept for understanding human behavior and beliefs. For close to half a cen tury\, the field of narratology has developed a rich set of theoretical fr ameworks for understanding narrative. And yet these theories have largely gone untested on large\, heterogenous collections of texts. Scholars conti nue to generate schemas by extrapolating from small numbers of manually ob served documents. In this talk\, I will discuss how we can use machine lea rning to develop data-driven theories of narration to better understand wh at Labov and Waletzky called “the simplest and most fundamental narrative structures.” How can machine learning help us approach what we might call a minimal theory of narrativity?
\nBiography
\n<p>Andrew Piper is Professor and William Dawson Scholar in the Department of Languages\, Literatures\, and Cultures at McGill University. He is the director of .txtlab\, a laboratory for cultural analytics\, and editor of the <em>Journal of Cultural Analytics</em>\, an open-access journal dedicated to the computational study of culture. He is the author of numerous books and articles on the relationship of technology and reading\, including <em>Book Was There: Reading in Electronic Times</em> (Chicago 2012)\, <em>Enumerations: Data and Literary Study</em> (Chicago 2018)\, and most recently\, <em>Can We Be Wrong? The Problem of Textual Evidence in a Time of Data</em> (Cambridge 2020).
\n X-TAGS;LANGUAGE=en-US:2021\,November\,Piper END:VEVENT BEGIN:VEVENT UID:ai1ec-21057@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nThis talk will outline the major challenging in porti ng mainstream speech technology to the domain of clinical applications\; i n particular\, the need for personalised systems\, the challenge of workin g in an inherently sparse data domain and developing meaningful collaborat ions with all stakeholders. The talk will give an overview of recent state -of-the-art research from current projects including in the areas of recog nition of disordered speech\, automatic processing of conversations and th e automatic detection and tracking of paralinguistic information at the Un iversity of Sheffield (UK)’s Speech and Hearing (SPandH) & Healthcare lab. \nBiography\nHeidi is a Senior Lecturer (associate professor) in Computer Science at the University of Sheffield\, United Kingdom. Her research inte rests are on the application of AI-based voice technologies to healthcare. In particular\, the detection and monitoring of people’s physical and men tal health including verbal and non-verbal traits for expressions of emoti on\, anxiety\, depression and neurodegenerative conditions in e.g.\, thera peutic or diagnostic settings. DTSTART;TZID=America/New_York:20211119T120000 DTEND;TZID=America/New_York:20211119T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Heidi Christensen (University of Sheffield\, UK) Virtual Seminar “A utomated Processing of Pathological Speech: Recent Work and Ongoing Challe nges” URL:https://www.clsp.jhu.edu/events/heidi-christensen-university-of-sheffie ld-uk-virtual-seminar-automated-processing-of-pathological-speech-recent-w ork-and-ongoing-challenges/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nThis talk will outline the major challenges in porting mainstream speech technology to the domain of clinical applications\; in particular\, the need for personalised systems\, the challenge of working in an inherently sparse data domain\, and developing meaningful collaborations with all stakeholders. The talk will give an overview of recent state-of-the-art research from current projects in areas including recognition of disordered speech\, automatic processing of conversations\, and the automatic detection and tracking of paralinguistic information at the University of Sheffield (UK)’s Speech and Hearing (SPandH) & Healthcare lab.
\nBiography
\nHeidi is a Senior Lecturer (as sociate professor) in Computer Science at the University of Sheffield\, Un ited Kingdom. Her research interests are on the application of AI-based vo ice technologies to healthcare. In particular\, the detection and monitori ng of people’s physical and mental health including verbal and non-verbal traits for expressions of emotion\, anxiety\, depression and neurodegenera tive conditions in e.g.\, therapeutic or diagnostic settings.
\n X-TAGS;LANGUAGE=en-US:2021\,Christensen\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-21068@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20211203T120000 DTEND;TZID=America/New_York:20211203T131500 LOCATION:Hackerman HallB17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Eric Ringger (Zillow Group) URL:https://www.clsp.jhu.edu/events/eric-ringger-zillow-group/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,December\,Ringger END:VEVENT BEGIN:VEVENT UID:ai1ec-21072@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nEmotion has intrigued researchers for generations. Th is fascination has permeated the engineering community\, motivating the de velopment of affective computing methods. However\, human emotion remains notoriously difficult to accurately detect. As a result\, emotion classifi cation techniques are not always effective when deployed. This is a probl em because we are missing out on the potential that emotion recognition pr ovides: the opportunity to automatically measure an aspect of behavior tha t provides critical insight into our health and wellbeing\, insight that i s not always easily accessible. In this talk\, I will discuss our efforts in developing emotion recognition approaches that are effective in natura l environments and demonstrate how these approaches can be used to support mental health.\n\nBiography\n\nEmily Mower Provost is an Associate Profes sor in Computer Science and Engineering and Toyota Faculty Scholar at the University of Michigan. She received her Ph.D. in Electrical Engineering f rom the University of Southern California (USC)\, Los Angeles\, CA in 2010 . She has been awarded a National Science Foundation CAREER Award (2017)\, the Oscar Stern Award for Depression Research (2015)\, a National Science Foundation Graduate Research Fellowship (2004-2007). She is a co-author o n the paper\, “Say Cheese vs. Smile: Reducing Speech-Related Variability f or Facial Emotion Recognition\,” winner of Best Student Paper at ACM Multi media\, 2014\, and a co-author of the winner of the Classifier Sub-Challen ge event at the Interspeech 2009 emotion challenge. Her research interests are in human-centered speech and video processing\, multimodal interfaces design\, and speech-based assistive technology. The goals of her research are motivated by the complexities of the perception and expression of hum an behavior. DTSTART;TZID=America/New_York:20211206T120000 DTEND;TZID=America/New_York:20211206T131500 LOCATION:Maryland Hall 110 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Emily Mower-Provost (University of Michigan) “Automatically Measuri ng Emotion from Speech: New Methods to Move from the Lab to the Real World ” URL:https://www.clsp.jhu.edu/events/emily-mower-provost-university-of-michi gan/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nAbstr act
\nAs humans\, our understanding of language is grounded in a rich mental model about “how the world works” – that we learn through perception and interaction. We use this understanding to reason beyond what we literally observe or read\, imagining how situations might unfold in the world. Machines today struggle at this kind of reasoning\, which limits how they can communicate with humans.
In my talk\, I will discuss three lines of work to bridge this gap between machines and humans. I will first discuss how we might measure grounded understanding. I will introduce a suite of approaches for constructing benchmarks\, using machines in the loop to filter out spurious biases. Next\, I will introduce PIGLeT: a model that learns physical commonsense understanding by interacting with the world through simulation\, using this knowledge to ground language. From an English-language description of an event\, PIGLeT can anticipate how the world state might change – outperforming text-only models that are orders of magnitude larger. Finally\, I will introduce MERLOT\, which learns about situations in the world by watching millions of YouTube videos with transcribed speech. Through training objectives inspired by the developmental psychology idea of multimodal reentry\, MERLOT learns to fuse language\, vision\, and sound together into powerful representations.
Together\, these directions suggest a path forward for building machines that learn language rooted in the world.
<strong>Biography</strong>
\nRowan Zellers is a final year PhD candidate at the Univers ity of Washington in Computer Science & Engineering\, advised by Yejin Cho i and Ali Farhadi. His research focuses on enabling machines to understand language\, vision\, sound\, and the world beyond these modalities. He has been recognized through an NSF Graduate Fellowship and a NeurIPS 2021 out standing paper award. His work has appeared in several media outlets\, inc luding Wired\, the Washington Post\, and the New York Times. In the past\, he graduated from Harvey Mudd College with a B.S. in Computer Science & M athematics\, and has interned at the Allen Institute for AI.
\n< /HTML> X-TAGS;LANGUAGE=en-US:2022\,February\,Zellers END:VEVENT BEGIN:VEVENT UID:ai1ec-22417@www.clsp.jhu.edu DTSTAMP:20240329T160358Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nOne of the keys to success in machine learning applic ations is to improve each user’s personal experience via personalized mode ls. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-probl em\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and tech nical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once after the system is deploye d to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I w ill present some techniques to circumvent the lack of labeled personal dat a in the context of speech enhancement. Our machine learning models will r equire zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-sup ervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.\nBio graphy\nMinje Kim is an associate professor in the Dept. of Intelligent Sy stems Engineering at Indiana University\, where he leads his research grou p\, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visi ting Academic\, consulting for Amazon Lab126. At IU\, he is affiliated wit h various programs and labs such as Data Science\, Cognitive Science\, Dep t. of Statistics\, and Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Cha mpaign. Before joining UIUC\, He worked as a researcher at ETRI\, a nation al lab in Korea\, from 2006 to 2011. Before then\, he received his Master’ s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Comput er Engineering at Ajou University (with honor) in 2006 and 2004\, respecti vely. He is a recipient of various awards including NSF Career Award (2021 )\, IU Trustees Teaching Award (2021)\, IEEE SPS Best Paper Award (2020)\, and Google and Starkey’s grants for outstanding student papers in ICASSP 2013 and 2014\, respectively. He is an IEEE Senior Member and also a membe r of the IEEE Audio and Acoustic Signal Processing Technical Committee (20 18-2023). He is serving as an Associate Editor for EURASIP Journal of Audi o\, Speech\, and Music Processing\, and as a Consulting Associate Editor f or IEEE Open Journal of Signal Processing. He is also a reviewer\, program committee member\, or area chair for the major machine learning and signa l processing. He filed more than 50 patent applications as an inventor. DTSTART;TZID=America/New_York:20221202T120000 DTEND;TZID=America/New_York:20221202T131500 LOCATION:Hackerman Hall B17 @ 3400 N. 
Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Minje Kim (Indiana University) “Personalized Speech Enhancement: Da ta- and Resource-Efficient Machine Learning” URL:https://www.clsp.jhu.edu/events/minje-kim-indiana-university/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nOne of the keys to success in machine learning applic ations is to improve each user’s personal experience via personalized mode ls. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-probl em\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and tech nical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once after the system is deploye d to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Ou r machine learning models will require zero or few data samples from the t est-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as w ell as the potential of self-supervised learning for personalized speech e nhancement. Because our research achieves the personalization goal in a da ta- and resource-efficient way\, it is a step towards a more available and affordable AI for society.
\nBiography
\n