BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-21031@www.clsp.jhu.edu DTSTAMP:20240328T232333Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nMost people take for granted that when they speak\, they will be heard and understood. But for the millions who live with speech impairments caused by physical or neurological conditions\, trying to communicate with others can be difficult and lead to frustration. While there have been a great number of recent advances in Automatic Speech Recognition (ASR) technologies\, these interfaces can be inaccessible for those with speech impairments.
\nIn this talk\, we will present Parrotron\, an end-to-end-trained speech-to-speech conversion model that maps an input spectrogram directly to another spectrogram\, without utilizing any intermediate discrete representation. The system is also trained to emit words in addition to a spectrogram\, in parallel. We demonstrate that this model can be trained to normalize speech from any speaker\, regardless of accent\, prosody\, and background noise\, into the voice of a single canonical target speaker with a fixed accent and consistent articulation and prosody. We further show that this normalization model can be adapted to normalize highly atypical speech from speakers with a variety of speech impairments (due to ALS\, cerebral palsy\, deafness\, stroke\, brain injury\, etc.)\, resulting in significant improvements in intelligibility and naturalness\, measured via a speech recognizer and listening tests. Finally\, demonstrating the utility of this model on other speech tasks\, we show that the same model architecture can be trained to perform a speech separation task.\n
Dimitri will give a brief description of some key moments in the development of speech recognition algorithms that he was involved in\, and their applications to YouTube closed captions\, Live Transcribe\, and wearable subtitles.
\nFadi will then speak about the development of Parrotron.
\nBiographies
\nDimitri Kanevsky currently works at Google on speech recognition algorithms. Prior to joining Google\, Dimitri was a research staff member in the Speech Algorithms Department at IBM. Prior to IBM\, he worked at a number of centers for higher mathematics\, including the Max Planck Institute in Germany and the Institute for Advanced Study in Princeton. He currently holds 295 US patents and was a Master Inventor at IBM. MIT Technology Review recognized Dimitri's conversational-biometrics-based security patent as one of the five most influential patents of 2003. In 2012\, Dimitri was honored at the White House as a Champion of Change for his efforts to advance access to science\, technology\, engineering\, and math.
\nFadi Biadsy has been a senior staff research scientist at Google NY for the past ten years. He has been exploring and leading multiple projects at Google\, including speech recognition\, speech conversion\, language modeling\, and semantic understanding. He received his PhD from Columbia University in 2011. At Columbia\, he researched a variety of speech and language processing projects\, including dialect and accent recognition\, speech recognition\, charismatic speech\, and question answering. He holds a BSc and MSc in mathematics and computer science. He worked on handwriting recognition during his master's degree\, and he worked as a senior software developer for five years at Dalet Digital Media Systems\, building multimedia broadcasting systems.
DTSTART;TZID=America/New_York:20211105T120000 DTEND;TZID=America/New_York:20211105T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Fadi Biadsy and Dimitri Kanevsky (Google) “Speech Recognition: From Speaker Dependent to Speaker Independent to Full Personalization” “Parrotron: A Unified E2E Speech-to-Speech Conversion and ASR Model for Atypical Speech” URL:https://www.clsp.jhu.edu/events/fadi-biadsy-and-dimitri-kanevsky-google/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,Biadsy and Kanevsky\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-21489@www.clsp.jhu.edu DTSTAMP:20240328T232333Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nSince it is increasingly harder to opt out from interacting with AI technology\, people demand that AI be capable of maintaining contracts such that it supports the agency and oversight of people who are required to use it or who are affected by it. To help those people create a mental model of how to interact with AI systems\, I extend the underlying models to self-explain: predict the label/answer and explain this prediction. In this talk\, I will present how to generate (1) free-text explanations given in plain English that immediately tell users the gist of the reasoning\, and (2) contrastive explanations that help users understand how they could change the text to get another label.
\nBiography
\nAna Marasović is a postdoctoral researcher at the Allen Institute for AI (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research interests broadly lie in the fields of natural language processing\, explainable AI\, and vision-and-language learning. Her projects are motivated by a unified goal: improve interaction with and control of NLP systems to help people make these systems do what they want\, with the confidence that they’re getting exactly what they need. Prior to joining AI2\, Ana obtained her PhD from Heidelberg University.
\nHow to pronounce my name: the first name is Ana as in Spanish\, i.e.\, with a long “a” as in “water”\; regarding the last name: “mara” as in actress Mara Wilson + “so” + “veetch”.
DTSTART;TZID=America/New_York:20220228T120000 DTEND;TZID=America/New_York:20220228T131500 LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j/96735183473 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Ana Marasović (Allen Institute for AI & University of Washington) “Self-Explaining for Intuitive Interaction with AI” URL:https://www.clsp.jhu.edu/events/ana-marasovic-allen-institute-for-ai-university-of-washington-self-explaining-for-intuitive-interaction-with-ai/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,February\,Marasovic END:VEVENT BEGIN:VEVENT UID:ai1ec-21621@www.clsp.jhu.edu DTSTAMP:20240328T232333Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nSystems that support expressive\, situated natural language interactions are essential for expanding access to complex computing systems\, such as robots and databases\, to non-experts. Reasoning and learning in such natural language interactions is a challenging open problem. For example\, resolving sentence meaning requires reasoning not only about word meaning\, but also about the interaction context\, including the history of the interaction and the situated environment. In addition\, the sequential dynamics that arise between user and system in and across interactions make learning from static data\, i.e.\, supervised data\, both challenging and ineffective. However\, these same interaction dynamics result in ample opportunities for learning from implicit and explicit feedback that arises naturally in the interaction. This lays the foundation for systems that continually learn\, improve\, and adapt their language use through interaction\, without additional annotation effort. In this talk\, I will focus on these challenges and opportunities. First\, I will describe our work on modeling dependencies between language meaning and interaction context when mapping natural language in interaction to executable code. In the second part of the talk\, I will describe our work on language understanding and generation in collaborative interactions\, focusing on continual learning from explicit and implicit user feedback.
\nBiography
\nAlane Suhr is a PhD Candidate in the Department of Computer Science at Cornell University\, advised by Yoav Artzi. Her research spans natural language processing\, machine learning\, and computer vision\, with a focus on building systems that participate and continually learn in situated natural language interactions with human users. Alane’s work has been recognized by paper awards at ACL and NAACL\, and has been supported by fellowships and grants\, including an NSF Graduate Research Fellowship\, a Facebook PhD Fellowship\, and research awards from AI2\, ParlAI\, and AWS. Alane has also co-organized multiple workshops and tutorials appearing at NeurIPS\, EMNLP\, NAACL\, and ACL. Previously\, Alane received a BS in Computer Science and Engineering as an Eminence Fellow at the Ohio State University.
DTSTART;TZID=America/New_York:20220314T120000 DTEND;TZID=America/New_York:20220314T131500 LOCATION:Virtual Seminar SEQUENCE:0 SUMMARY:Alane Suhr (Cornell University) “Reasoning and Learning in Interactive Natural Language Systems” URL:https://www.clsp.jhu.edu/events/alane-suhr-cornell-university-reasoning-and-learning-in-interactive-natural-language-systems/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,March\,Suhr END:VEVENT END:VCALENDAR