BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20987@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nWhile there is a vast amount of text written about nearly any topic\, this is often difficult for someone unfamiliar with a specific field to understand. Automated text simplification aims to reduce the complexity of a document\, making it more comprehensible to a broader audience. Much of the research in this field has traditionally focused on simplification sub-tasks\, such as lexical\, syntactic\, or sentence-level simplification. However\, current systems struggle to consistently produce high-quality simplifications. Phrase-based models tend to make too many poor transformations\; on the other hand\, recent neural models\, while producing grammatical output\, often do not make all needed changes to the original text. In this thesis\, I discuss novel approaches for improving lexical and sentence-level simplification systems. Regarding sentence simplification models\, after noting that encouraging diversity at inference time leads to significant improvements\, I take a closer look at the idea of diversity and perform an exhaustive comparison of diverse decoding techniques on other generation tasks. I also discuss the limitations in the framing of current simplification tasks\, which prevent these models from yet being practically useful. Thus\, I also propose a retrieval-based reformulation of the problem. Specifically\, starting with a document\, I identify concepts critical to understanding its content\, and then retrieve documents relevant for each concept\, re-ranking them based on the desired complexity level.\nBiography\nI’m a research scientist at the HLTCOE at Johns Hopkins University. My primary research interests are in language generation\, diverse and constrained decoding\, and information retrieval. During my PhD I focused mainly on the task of text simplification\, and am now working on formulating structured prediction problems as end-to-end generation tasks. I received my PhD in July 2021 from the University of Pennsylvania with Chris Callison-Burch and Marianna Apidianaki.
DTSTART;TZID=America/New_York:20211022T120000
DTEND;TZID=America/New_York:20211022T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Reno Kriz (HLTCOE – JHU) “Towards a Practically Useful Text Simplification System”
URL:https://www.clsp.jhu.edu/events/reno-kriz-hltcoe-jhu-towards-a-practically-useful-text-simplification-system/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,Kriz\,October
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21023@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nSpeech data is notoriously difficult to work with due to a variety of codecs\, lengths of recordings\, and meta-data formats. We present Lhotse\, a speech data representation library that draws upon lessons learned from the Kaldi speech recognition toolkit and brings its concepts into the modern deep learning ecosystem. Lhotse provides a common JSON description format with corresponding Python classes and data preparation recipes for over 30 popular speech corpora. Various datasets can be easily combined and re-purposed for different tasks. The library handles multi-channel recordings\, long recordings\, local and cloud storage\, and lazy and on-the-fly operations\, amongst other features. We introduce the Cut and CutSet concepts\, which simplify common data wrangling tasks for audio and help incorporate the acoustic context of speech utterances. Finally\, we show how Lhotse leverages PyTorch data API abstractions and adopts them to handle speech data for deep learning.\nBiography\nPiotr Zelasko is an assistant research scientist in the Center for Language and Speech Processing (CLSP) who specializes in automatic speech recognition (ASR) and spoken language understanding (SLU). His current research focuses on applying multilingual and crosslingual speech recognition systems to categorize the phonetic inventory of a previously unknown language and on improving defenses against adversarial attacks on both speaker identification and automatic speech recognition systems. He is also addressing the question of how to structure a spontaneous conversation into high-level semantic units such as dialog acts or topics. Finally\, he is working on Lhotse + K2\, the next-generation speech processing research software ecosystem. Before joining Johns Hopkins\, Zelasko worked as a machine learning consultant for Avaya (2017-2019) and as a machine learning engineer for Techmo (2015-2017). Zelasko received his PhD (2019) in electronics engineering\, as well as his master’s (2014) and undergraduate (2013) degrees in acoustic engineering\, from AGH University of Science and Technology in Kraków\, Poland.
DTSTART;TZID=America/New_York:20211029T120000
DTEND;TZID=America/New_York:20211029T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Piotr Zelasko (CLSP at JHU) “Lhotse: a speech data representation library for the modern deep learning ecosystem”
URL:https://www.clsp.jhu.edu/events/piotr-zelasko-clsp-at-jhu-lhotse-a-speech-data-representation-library-for-the-modern-deep-learning-ecosystem/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,October\,Zelasko
END:VEVENT
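The manifest and Cut concepts named in the abstract above map onto a small Python API. Below is a minimal sketch of that workflow based on Lhotse's documented concepts; the manifest file names are hypothetical, and exact method availability may differ across Lhotse versions.

from lhotse import CutSet, RecordingSet, SupervisionSet

# Hypothetical manifest paths; Lhotse serializes manifests as (gzipped) JSONL.
recordings = RecordingSet.from_file("recordings.jsonl.gz")
supervisions = SupervisionSet.from_file("supervisions.jsonl.gz")

# A Cut binds a time span of a recording to its supervisions (text, speaker, ...).
cuts = CutSet.from_manifests(recordings=recordings, supervisions=supervisions)

# Many operations are lazy; trim to the supervised segments and persist.
cuts = cuts.trim_to_supervisions()
cuts.to_file("cuts.jsonl.gz")

for cut in cuts:
    audio = cut.load_audio()  # numpy array, decoded on demand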
BEGIN:VEVENT
UID:ai1ec-21031@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nMost people take for granted that when they speak\, they will be heard and understood. But for the millions who live with speech impairments caused by physical or neurological conditions\, trying to communicate with others can be difficult and lead to frustration. While there have been a great number of recent advances in Automatic Speech Recognition (ASR) technologies\, these interfaces can be inaccessible for those with speech impairments.\nIn this talk\, we will present Parrotron\, an end-to-end-trained speech-to-speech conversion model that maps an input spectrogram directly to another spectrogram\, without utilizing any intermediate discrete representation. The system is also trained to emit words in addition to a spectrogram\, in parallel. We demonstrate that this model can be trained to normalize speech from any speaker\, regardless of accent\, prosody\, and background noise\, into the voice of a single canonical target speaker with a fixed accent and consistent articulation and prosody. We further show that this normalization model can be adapted to normalize highly atypical speech from speakers with a variety of speech impairments (due to ALS\, cerebral palsy\, deafness\, stroke\, brain injury\, etc.)\, resulting in significant improvements in intelligibility and naturalness\, measured via a speech recognizer and listening tests. Finally\, demonstrating the utility of this model on other speech tasks\, we show that the same model architecture can be trained to perform a speech separation task.\nDimitri will give a brief description of some key moments in the development of speech recognition algorithms that he was involved in and their applications to YouTube closed captions\, Live Transcribe\, and wearable subtitles.\nFadi will then speak about the development of Parrotron.\nBiographies\nDimitri Kanevsky started his career at Google working on speech recognition algorithms. Prior to joining Google\, Dimitri was a research staff member in the Speech Algorithms Department at IBM. Prior to IBM\, he worked at a number of centers for higher mathematics\, including the Max Planck Institute in Germany and the Institute for Advanced Study in Princeton. He currently holds 295 US patents and was a Master Inventor at IBM. MIT Technology Review recognized Dimitri’s conversational-biometrics-based security patent as one of the five most influential patents of 2003. In 2012 Dimitri was honored at the White House as a Champion of Change for his efforts to advance access to science\, technology\, engineering\, and math.\nFadi Biadsy has been a senior staff research scientist at Google NY for the past ten years. He has been exploring and leading multiple projects at Google\, including speech recognition\, speech conversion\, language modeling\, and semantic understanding. He received his PhD from Columbia University in 2011. At Columbia\, he researched a variety of speech and language processing projects\, including dialect and accent recognition\, speech recognition\, charismatic speech\, and question answering. He holds a BSc and MSc in mathematics and computer science. He worked on handwriting recognition during his master’s degree and worked as a senior software developer for five years at Dalet digital media systems\, building multimedia broadcasting systems.
DTSTART;TZID=America/New_York:20211105T120000
DTEND;TZID=America/New_York:20211105T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Fadi Biadsy and Dimitri Kanevsky (Google) “Speech Recognition: From Speaker Dependent to Speaker Independent to Full Personalization” / “Parrotron: A Unified E2E Speech-to-Speech Conversion and ASR Model for Atypical Speech”
URL:https://www.clsp.jhu.edu/events/fadi-biadsy-and-dimitri-kanevsky-google/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,Biadsy and Kanevsky\,November
END:VEVENT
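As a rough illustration of the dual-output design described above, here is a toy sketch of my own (not Google's Parrotron implementation, which is an attention-based sequence-to-sequence model): a single encoder feeds both a spectrogram head and an auxiliary word/phoneme recognition head in parallel.

import torch
import torch.nn as nn

class SpectrogramConverter(nn.Module):
    """Toy spectrogram-to-spectrogram model with an auxiliary ASR output."""
    def __init__(self, n_mels=80, hidden=256, vocab=100):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True)
        self.spec_head = nn.Linear(hidden, n_mels)  # target-voice spectrogram
        self.asr_head = nn.Linear(hidden, vocab)    # parallel recognition loss

    def forward(self, spec):  # spec: (batch, time, n_mels)
        h, _ = self.encoder(spec)
        return self.spec_head(h), self.asr_head(h)

model = SpectrogramConverter()
out_spec, out_logits = model(torch.randn(4, 120, 80))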
BEGIN:VEVENT
UID:ai1ec-21041@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nNarration is a universal human practice that serves as a key site of education\, collective memory\, fostering social belief systems\, and furthering human creativity. Recent studies in economics (Shiller\, 2020)\, climate science (Bushell et al.\, 2017)\, political polarization (Kubin et al.\, 2021)\, and mental health (Adler et al.\, 2016) suggest an emerging interdisciplinary consensus that narrative is a central concept for understanding human behavior and beliefs. For close to half a century\, the field of narratology has developed a rich set of theoretical frameworks for understanding narrative. And yet these theories have largely gone untested on large\, heterogeneous collections of texts. Scholars continue to generate schemas by extrapolating from small numbers of manually observed documents. In this talk\, I will discuss how we can use machine learning to develop data-driven theories of narration to better understand what Labov and Waletzky called “the simplest and most fundamental narrative structures.” How can machine learning help us approach what we might call a minimal theory of narrativity?\nBiography\nAndrew Piper is Professor and William Dawson Scholar in the Department of Languages\, Literatures\, and Cultures at McGill University. He is the director of .txtlab\, a laboratory for cultural analytics\, and editor of the Journal of Cultural Analytics\, an open-access journal dedicated to the computational study of culture. He is the author of numerous books and articles on the relationship of technology and reading\, including Book Was There: Reading in Electronic Times (Chicago 2012)\, Enumerations: Data and Literary Study (Chicago 2018)\, and most recently\, Can We Be Wrong? The Problem of Textual Evidence in a Time of Data (Cambridge 2020).
DTSTART;TZID=America/New_York:20211112T120000
DTEND;TZID=America/New_York:20211112T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Andrew Piper (McGill University) “How can we use machine learning to understand narration?”
URL:https://www.clsp.jhu.edu/events/andrew-piper-mcgill-university-how-can-we-use-machine-learning-to-understand-narration/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,November\,Piper
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21057@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nThis talk will outline the major challenges in porting mainstream speech technology to the domain of clinical applications\; in particular\, the need for personalised systems\, the challenge of working in an inherently sparse data domain\, and the challenge of developing meaningful collaborations with all stakeholders. The talk will give an overview of recent state-of-the-art research from current projects in the areas of recognition of disordered speech\, automatic processing of conversations\, and the automatic detection and tracking of paralinguistic information at the University of Sheffield (UK)’s Speech and Hearing (SPandH) & Healthcare lab.\nBiography\nHeidi is a Senior Lecturer (associate professor) in Computer Science at the University of Sheffield\, United Kingdom. Her research interests are in the application of AI-based voice technologies to healthcare\; in particular\, the detection and monitoring of people’s physical and mental health\, including verbal and non-verbal traits for expressions of emotion\, anxiety\, depression\, and neurodegenerative conditions in\, e.g.\, therapeutic or diagnostic settings.
DTSTART;TZID=America/New_York:20211119T120000
DTEND;TZID=America/New_York:20211119T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Heidi Christensen (University of Sheffield\, UK) Virtual Seminar “Automated Processing of Pathological Speech: Recent Work and Ongoing Challenges”
URL:https://www.clsp.jhu.edu/events/heidi-christensen-university-of-sheffield-uk-virtual-seminar-automated-processing-of-pathological-speech-recent-work-and-ongoing-challenges/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,Christensen\,November
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22423@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
DTSTART;TZID=America/New_York:20221007T120000
DTEND;TZID=America/New_York:20221007T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Ariya Rastrow (Amazon)
URL:https://www.clsp.jhu.edu/events/ariya-rastrow-amazon-2/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,October\,Rastrow
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22394@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nModel robustness and spurious correlations have received increasing attention in the NLP community\, both in methods and evaluation. The term “spurious correlation” is overloaded\, though\, and can refer to any undesirable shortcuts learned by the model\, as judged by domain experts.\nWhen designing mitigation algorithms\, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However\, many features in NLP (e.g.\, word overlap and negation) are not spurious in the way that the background is spurious for classifying objects in an image. In contrast\, they carry important information that’s needed to make predictions by humans. In this talk\, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization in representation\, learning\, and evaluation.\nBiography\nHe He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland\, College Park. Before joining NYU\, she spent a year at AWS AI and was a post-doc at Stanford University before that. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focus includes robust language understanding\, collaborative text generation\, and understanding capabilities and issues of large language models.
DTSTART;TZID=America/New_York:20221014T120000
DTEND;TZID=America/New_York:20221014T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:He He (New York University) “What We Talk about When We Talk about Spurious Correlations in NLP”
URL:https://www.clsp.jhu.edu/events/he-he-new-york-university/
X-COST-TYPE:free
END:VEVENT
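To make the necessity/sufficiency framing in the abstract above concrete, here is a toy illustration of my own with made-up labels (not the speaker's formulation): empirically, a feature looks more sufficient the higher P(y=1 | feature present) is, and more necessary the lower P(y=1 | feature absent) is.

# (has_negation, label) -- hypothetical toy annotations for illustration only
data = [
    (1, 1), (1, 1), (1, 0), (0, 0), (0, 1), (0, 0), (1, 1), (0, 0),
]

def cond_prob(feature_value):
    """Empirical P(y=1 | feature == feature_value)."""
    matched = [y for f, y in data if f == feature_value]
    return sum(matched) / len(matched)

p_y_given_f = cond_prob(1)      # high => feature is (empirically) sufficient
p_y_given_not_f = cond_prob(0)  # low  => feature is (empirically) necessary
print(f"P(y=1|f=1)={p_y_given_f:.2f}, P(y=1|f=0)={p_y_given_not_f:.2f}")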
BEGIN:VEVENT
DESCRIPTION:Abstract\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of text into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing\, and accessing knowledge in a dedicated memory component.\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU)\, and our other collaborators in Google Research.\nBiography\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at the University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics\, and\, recently\, weather forecasting and climate modeling. He has a PhD (2007) in Computer and Information Science from the University of Pennsylvania and a B.Sc and M.Sc in Biomedical Engineering from Southeast University (Nanjing\, China).
X-TAGS;LANGUAGE=en-US:2022\,October\,Sha
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22403@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nVoice conversion (VC) is a significant aspect of artificial intelligence. It is the study of how to convert one’s voice to sound like that of another without changing the linguistic content. Voice conversion belongs to the general technical field of speech synthesis\, which converts text to speech or changes the properties of speech\, for example\, voice identity\, emotion\, and accents. Voice conversion involves multiple speech processing techniques\, such as speech analysis\, spectral conversion\, prosody conversion\, speaker characterization\, and vocoding. With the recent advances in theory and practice\, we are now able to produce human-like voice quality with high speaker similarity. In this talk\, Dr. Sisman will present the recent advances in voice conversion and discuss their promise and limitations. Dr. Sisman will also provide a summary of the available resources for expressive voice conversion research.\nBiography\nDr. Berrak Sisman (Member\, IEEE) received the Ph.D. degree in electrical and computer engineering from the National University of Singapore in 2020\, fully funded by the A*STAR Graduate Academy under the Singapore International Graduate Award (SINGA). She is currently working as a tenure-track Assistant Professor in the Erik Jonsson School Department of Electrical and Computer Engineering at the University of Texas at Dallas\, United States. Prior to joining UT Dallas\, she was a faculty member at the Singapore University of Technology and Design (2020-2022). She was a Postdoctoral Research Fellow at the National University of Singapore (2019-2020). She was an exchange doctoral student at the University of Edinburgh and a visiting scholar at The Centre for Speech Technology Research (CSTR)\, University of Edinburgh (2019). She was a visiting researcher at the RIKEN Advanced Intelligence Project in Japan (2018). Her research is focused on machine learning\, signal processing\, emotion\, speech synthesis\, and voice conversion.\nDr. Sisman has served as Area Chair at INTERSPEECH 2021\, INTERSPEECH 2022\, and IEEE SLT 2022\, and as Publication Chair at ICASSP 2022. She has been elected as a member of the IEEE Speech and Language Processing Technical Committee (SLTC) in the area of Speech Synthesis for the term from January 2022 to December 2024. She plays leadership roles in conference organizations and is active in technical committees. She has served as the General Coordinator of the Student Advisory Committee (SAC) of the International Speech Communication Association (ISCA).
DTSTART;TZID=America/New_York:20221104T120000
DTEND;TZID=America/New_York:20221104T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Berrak Sisman (University of Texas at Dallas) “Speech Synthesis and Voice Conversion: Machine Learning can Mimic Anyone’s Voice”
URL:https://www.clsp.jhu.edu/events/berrak-sisman-university-of-texas-at-dallas/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,November\,Sisman
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22408@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nAI-powered applications increasingly adopt Deep Neural Networks (DNNs) for solving many prediction tasks\, leading to more than one DNN running on resource-constrained devices. Supporting many models simultaneously on a device is challenging due to the linearly increased computation\, energy\, and storage costs. An effective approach to address the problem is multi-task learning (MTL)\, where a set of tasks are learned jointly to allow some parameter sharing among tasks. MTL creates multi-task models based on common DNN architectures and has shown significantly reduced inference costs and improved generalization performance in many machine learning applications. In this talk\, we will introduce our recent efforts on leveraging MTL to improve accuracy and efficiency for edge computing. The talk will introduce multi-task architecture design systems that can automatically identify resource-efficient multi-task models with low inference costs and high task accuracy.\nBiography\nHui Guan is an Assistant Professor in the College of Information and Computer Sciences (CICS) at the University of Massachusetts Amherst\, the flagship campus of the UMass system. She received her Ph.D. in Electrical Engineering from North Carolina State University in 2020. Her research lies in the intersection between machine learning and systems\, with an emphasis on improving the speed\, scalability\, and reliability of machine learning through innovations in algorithms and programming systems. Her current research focuses on both algorithm and system optimizations of deep multi-task learning and graph machine learning.
DTSTART;TZID=America/New_York:20221111T120000
DTEND;TZID=America/New_York:20221111T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Hui Guan (University of Massachusetts Amherst) “Towards Accurate and Efficient Edge Computing Via Multi-Task Learning”
URL:https://www.clsp.jhu.edu/events/hui-guan-university-of-massachusetts-amherst/
X-COST-TYPE:free
END:VEVENT
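The parameter sharing that the abstract above credits for reduced inference cost can be pictured with a standard hard-sharing PyTorch module. This is a generic sketch of the basic MTL idea, not the speaker's architecture-search system: the shared trunk is computed once per input, and only the small task heads are task-specific.

import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, in_dim=32, hidden=64, task_dims=(10, 5)):
        super().__init__()
        # Shared trunk: its parameters (and compute) are amortized over tasks.
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One lightweight head per task.
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_dims)

    def forward(self, x):
        h = self.shared(x)                 # computed once for all tasks
        return [head(h) for head in self.heads]

model = MultiTaskModel()
outs = model(torch.randn(8, 32))
loss = sum(out.pow(2).mean() for out in outs)  # placeholder joint objective
loss.backward()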
BEGIN:VEVENT
DESCRIPTION:Abstract\nDriven by the goal of eradicating language barriers on a global scale\, machine translation has solidified itself as a key focus of artificial intelligence research today. However\, such efforts have coalesced around a small subset of languages\, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe\, high-quality results\, all while keeping ethical considerations in mind? In this talk\, I introduce No Language Left Behind\, an initiative to break language barriers for low-resource languages. In No Language Left Behind\, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically\, we evaluated the performance of over 40\,000 different translation directions using a human-translated benchmark\, Flores-200\, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art\, laying important groundwork towards realizing a universal translation system in an open-source manner.\nBiography\nAngela is a research scientist at Meta AI Research in New York\, focusing on supporting efforts in speech and language research. Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before working on translation\, Angela focused on research in on-device models for NLP and computer vision and on text generation.
X-TAGS;LANGUAGE=en-US:2022\,Fan\,November
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23900@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:
DTSTART;TZID=America/New_York:20231002T120000
DTEND;TZID=America/New_York:20231002T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:CLSP Student Seminar – Anna Favaro
URL:https://www.clsp.jhu.edu/events/clsp-student-seminar-anna-favaro/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Favaro\,October
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24115@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nOur goal is to use AI to automatically find tax minimization strategies\, an approach we call “Shelter Check.” It would come in two variants. Existing-Authority Shelter Check would aim to find whether existing tax law authorities can be combined to create tax minimization strategies\, so the IRS or Congress can shut them down. New-Authority Shelter Check would automate checking whether a new tax law authority – like proposed legislation or a draft court decision – would combine with existing authorities to create a new tax minimization strategy. We initially had high hopes for GPT-* large language models for implementing Shelter Check\, but our tests have shown that they do very poorly at basic legal reasoning and handling legal text. So we are now creating a benchmark and training data for LLMs’ handling of legal text\, hoping to spur improvements.
DTSTART;TZID=America/New_York:20231006T120000
DTEND;TZID=America/New_York:20231006T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:CLSP Student Seminar – Andrew Blair-Stanek “Shelter Check and GPT-4’s Bad Legal Performance”
URL:https://www.clsp.jhu.edu/events/clsp-student-seminar-andrew-blair-stanek-shelter-check-and-gpt-4s-bad-legal-performance/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Blair-Stanek\,October
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24005@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nLarge-scale generative models such as GPT and DALL-E have revolutionized natural language processing and computer vision research. These models not only generate high-fidelity text or image outputs\, but also demonstrate impressive domain and task generalization capabilities. In contrast\, audio generative models are relatively primitive in scale and generalization.\nIn this talk\, I will start with a brief introduction to conventional neural speech generative models and discuss why they are unfit for scaling to Internet-scale data. Next\, by reviewing the latest large-scale generative models for text and image\, I will outline a few lines of promising approaches to building scalable speech models. Last\, I will present Voicebox\, our latest work to advance this area. Voicebox is the most versatile generative model for speech. It is trained with a simple task — text-conditioned speech infilling — on over 50K hours of multilingual speech with a powerful flow-matching objective. Through in-context learning\, Voicebox can perform monolingual/cross-lingual zero-shot TTS\, holistic style conversion\, transient noise removal\, content editing\, and diverse sample generation. Moreover\, Voicebox achieves state-of-the-art performance and excellent run-time efficiency.\nBiography\nWei-Ning Hsu is a research scientist at Meta Foundational AI Research (FAIR) and currently the lead of the audio generation team. His research focuses on self-supervised learning and generative models for speech and audio. His pioneering work includes HuBERT\, AV-HuBERT\, TextlessNLP\, data2vec\, wav2vec-U\, textless speech translation\, and Voicebox.\nPrior to joining Meta\, Wei-Ning worked at MERL and Google Brain as a research intern. He received his Ph.D. and S.M. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2020 and 2018\, under the supervision of Dr. James Glass. He received his B.S. degree in Electrical Engineering from National Taiwan University in 2014\, under the supervision of Prof. Lin-shan Lee and Prof. Hsuan-Tien Lin.
DTSTART;TZID=America/New_York:20231009T120000
DTEND;TZID=America/New_York:20231009T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Wei-Ning Hsu (Meta Foundational AI Research) “Large Scale Universal Speech Generative Models”
URL:https://www.clsp.jhu.edu/events/wei-ning-hsu-meta-foundational-ai-research-large-scale-universal-speech-generative-models/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Hsu\,October
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23902@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nPretrained language models (LMs) encode implicit representations of knowledge in their parameters. Despite this observation\, our best methods for interpreting these representations yield few actionable insights on how to manipulate this parameter space for downstream benefit. In this talk\, I will present work on methods that simulate machine reasoning by localizing and modifying parametric knowledge representations. First\, I will present a method for discovering knowledge-critical subnetworks within pretrained language models\, and show that these sparse computational subgraphs are responsible for the model’s ability to encode specific pieces of knowledge. Then\, I will present a new reasoning algorithm\, RECKONING\, a bi-level optimisation procedure that dynamically encodes and reasons over new knowledge at test time using the model’s existing learned knowledge representations as a scratchpad. Finally\, I will discuss next steps and challenges in using internal model mechanisms for reasoning.\nBio\nAntoine Bosselut is an assistant professor in the School of Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL). He was a postdoctoral scholar at Stanford University and a Young Investigator at the Allen Institute for AI (AI2). He completed his PhD at the University of Washington and was a student researcher at Microsoft Research. His research interests are in building systems that mix knowledge and language representations to solve problems in NLP\, specializing in commonsense representation and reasoning.
DTSTART;TZID=America/New_York:20231013T120000
DTEND;TZID=America/New_York:20231013T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Antoine Bosselut (EPFL) “From Mechanistic Interpretability to Mechanistic Reasoning”
URL:https://www.clsp.jhu.edu/events/antoine-bosselut-epfl/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Abstract\nRecent advances in speech technology make heavy use of pre-trained models that learn from large quantities of raw (untranscribed) speech\, using “self-supervised” (i.e.\, unsupervised) learning. These models learn to transform the acoustic input into a different representational format that makes supervised learning (for tasks such as transcription or even translation) much easier. However\, what and how speech-relevant information is encoded in these representations is not well understood. I will talk about some work at various stages of completion in which my group is analyzing the structure of these representations\, to gain a more systematic understanding of how word-level\, phonetic\, and speaker information is encoded.\nBiography\nSharon Goldwater is a Professor in the Institute for Language\, Cognition and Computation at the University of Edinburgh’s School of Informatics. She received her PhD in 2007 from Brown University and spent two years as a postdoctoral researcher at Stanford University before moving to Edinburgh. Her research interests include unsupervised and minimally-supervised learning for speech and language processing\, computer modelling of language acquisition in children\, and computational studies of language use. Her main focus within linguistics has been on the lower levels of structure\, including phonetics\, phonology\, and morphology.\nProf. Goldwater has received awards including the 2016 Roger Needham Award from the British Computer Society for “distinguished research contribution in computer science by a UK-based researcher who has completed up to 10 years of post-doctoral research.” She has served on the editorial boards of several journals\, including Computational Linguistics\, Transactions of the Association for Computational Linguistics\, and the inaugural board of OPEN MIND: Advances in Cognitive Science. She was a program chair for the EACL 2014 Conference and chaired the EACL governing board from 2019-2020.
END:VEVENT
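One common pattern for the kind of representation analysis described in the abstract above is a simple linear probe. This is an illustrative sketch of that standard methodology on synthetic data, not the group's actual experiments: high probe accuracy suggests a property (e.g., speaker identity) is linearly decodable from the representations.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
reps = rng.normal(size=(2000, 128))  # stand-in for model representations
# Toy "property" correlated with one direction of the representation space.
labels = (reps[:, 0] + 0.1 * rng.normal(size=2000) > 0).astype(int)

x_tr, x_te, y_tr, y_te = train_test_split(reps, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
print("probe accuracy:", probe.score(x_te, y_te))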
BEGIN:VEVENT
DESCRIPTION:Abstract\nMultilingual machine translation has proven immensely useful for both parameter efficiency and overall performance for many language pairs via complete parameter sharing. However\, some language pairs in multilingual models can see worse performance than in bilingual models\, especially in the one-to-many translation setting. Motivated by their empirical differences\, we examine the geometric differences in representations from bilingual models versus those from one-to-many multilingual models. Specifically\, we measure the isotropy of these representations using intrinsic dimensionality and IsoScore\, in order to measure how these representations utilize the dimensions in their underlying vector space. We find that for a given language pair\, its multilingual model decoder representations are consistently less isotropic than comparable bilingual model decoder representations. Additionally\, we show that much of this anisotropy in multilingual decoder representations can be attributed to modeling language-specific information\, therefore limiting remaining representational capacity.
X-TAGS;LANGUAGE=en-US:2023\,November\,Verma
END:VEVENT
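As a rough sketch of what an isotropy measurement over decoder representations looks like, the fragment below uses an eigenvalue-entropy proxy of my own choosing, not the exact IsoScore or intrinsic-dimensionality estimators used in the work: representations are isotropic when variance spreads evenly across directions.

import numpy as np

def isotropy_proxy(reps: np.ndarray) -> float:
    """reps: (num_tokens, dim). Returns ~1.0 if isotropic, near 0 if not."""
    centered = reps - reps.mean(axis=0, keepdims=True)
    cov = np.cov(centered, rowvar=False)
    eig = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    p = eig / eig.sum()
    entropy = -(p * np.log(p)).sum()        # entropy of the variance spectrum
    return float(np.exp(entropy) / len(p))  # effective dims / total dims

# Anisotropic toy data: variance concentrated in a few directions.
reps = np.random.randn(1000, 64) @ np.diag(np.linspace(0.1, 2.0, 64))
print(isotropy_proxy(reps))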
BEGIN:VEVENT
UID:ai1ec-24157@www.clsp.jhu.edu
DTSTAMP:20240328T115719Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nIn this talk\, I will present a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE\, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio\, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens\, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder\, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically\, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks\, outperforming other recent models that use external supervised pre-training.\nBio\nFlorian Metze is a Research Scientist Manager at Meta AI in New York\, supporting a team of researchers and engineers working on multi-modal (image\, video\, audio\, text) content understanding for Meta’s Family of Apps (Instagram\, Threads\, Facebook\, WhatsApp). He used to be an Associate Research Professor at Carnegie Mellon University\, in the School of Computer Science’s Language Technologies Institute\, where he is still an Adjunct Professor. He is also a co-founder of Abridge\, a company working on extracting information from doctor-patient conversations. His work covers many areas of speech recognition and multi-media analysis with a focus on end-to-end deep learning. Currently\, he focuses on multi-modal processing of videos and using that information to recommend unconnected content. In the past\, he has worked on low-resource and multi-lingual speech processing\, speech recognition with articulatory features\, large-scale multi-media retrieval and summarization\, information extraction from medical interviews\, and recognition of personality or similar meta-data from speech.\nFor more information\, please see http://www.cs.cmu.edu/directory/fmetze
DTSTART;TZID=America/New_York:20231110T120000
DTEND;TZID=America/New_York:20231110T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Florian Metze (CMU) “Masked Autoencoders that Listen”
URL:https://www.clsp.jhu.edu/events/florian-metze-cmu/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Metze\,November
END:VEVENT
\nIn this talk\, I will present a simple extension of i mage-based Masked Autoencoders (MAE) to self-supervised representation lea rning from audio spectrograms. Following the Transformer encoder-decoder d esign in MAE\, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio\, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens\, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder\, as au dio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target dataset s. Empirically\, Audio-MAE sets new state-of-the-art performance on six au dio and speech classification tasks\, outperforming other recent models th at use external supervised pre-training.
\nBio
\nFlorian Metze is a Research Scientist Manager at Meta AI in New York\ , supporting a team of researchers and engineers working on multi-modal (i mage\, video\, audio\, text) content understanding for Meta’s Family of Ap ps (Instagram\, Threads\, Facebook\, WhatsApp). He used to be an Associate Research Professor at Carnegie Mellon University\, in the School of Compu ter Science’s Language Technologies Institute\, where he still is an Adjun ct Professor. He is also a co-founder of Abridge\, a company working on ex tracting information from doctor patient conversations. His work covers ma ny areas of speech recognition and multi-media analysis with a focus on en d-to-end deep learning. Currently\, he focuses on multi-modal processing o f videos\, and using that information to recommend unconnected content. In the past\, he has worked on low resource and multi-lingual speech process ing\, speech recognition with articulatory features\, large-scale multi-me dia retrieval and summarization\, information extraction from medical inte rviews\, and recognition of personality or similar meta-data from speech.< /p>\n
For more information\, please see http://www.cs.cmu.edu/directory/fmetze
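The mask-encode-reconstruct flow described in the Audio-MAE abstract above can be made concrete with a short sketch. The following PyTorch code is a minimal illustration under assumed toy sizes; it omits positional embeddings and uses plain attention rather than the local window attention the talk describes, and it is not the authors' implementation.

# Minimal sketch (not the authors' code) of the Audio-MAE flow: mask most
# spectrogram patches, encode only the visible ones, then reconstruct the
# full spectrogram with a decoder. Sizes and module names are assumptions.
import torch
import torch.nn as nn

class ToyAudioMAE(nn.Module):
    def __init__(self, dim=256, mask_ratio=0.8):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=4)
        # The paper additionally uses local window attention in the decoder;
        # this sketch keeps plain global attention for brevity.
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, dim)  # predicts the values of each patch

    def forward(self, patches):                  # patches: (B, N, D)
        B, N, D = patches.shape
        n_keep = int(N * (1 - self.mask_ratio))
        noise = torch.rand(B, N, device=patches.device)
        ids_shuffle = noise.argsort(dim=1)       # random permutation per sample
        ids_restore = ids_shuffle.argsort(dim=1) # inverse permutation
        ids_keep = ids_shuffle[:, :n_keep]
        visible = torch.gather(
            patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(visible)           # encode only non-masked tokens
        # Pad with mask tokens, then restore the original patch order.
        masks = self.mask_token.expand(B, N - n_keep, D)
        full = torch.cat([latent, masks], dim=1)
        full = torch.gather(
            full, 1, ids_restore.unsqueeze(-1).expand(-1, -1, D))
        return self.head(self.decoder(full))     # reconstruction: (B, N, D)

# Usage: pre-training would minimize a reconstruction loss (e.g. MSE on the
# masked positions) between the output and the original patches.
model = ToyAudioMAE()
recon = model(torch.randn(2, 512, 256))  # 2 samples, 512 patches of dim 256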
\n\n X-TAGS;LANGUAGE=en-US:2023\,Metze\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-24159@www.clsp.jhu.edu DTSTAMP:20240328T115719Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20231113T120000 DTEND;TZID=America/New_York:20231113T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Kate Sanders URL:https://www.clsp.jhu.edu/events/student-seminar-kate-sanders/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,November\,Sanders END:VEVENT BEGIN:VEVENT UID:ai1ec-24163@www.clsp.jhu.edu DTSTAMP:20240328T115719Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nThe almost unlimited multimedia content available on video-sharing websites has opened new challenges and opportunities for bui lding robust multimodal solutions. This seminar will describe our novel mu ltimodal architectures that (1) are robust to missing modalities\, (2) can identify noisy or less discriminative features\, and (3) can leverage unl abeled data. First\, we present a strategy that effectively combines auxil iary networks\, a transformer architecture\, and an optimized training mec hanism for handling missing features. This problem is relevant since it is expected that during inference the multimodal system will face cases with missing features due to noise or occlusion. We implement this approach fo r audiovisual emotion recognition achieving state-of-the-art performance. Second\, we present a multimodal framework for dealing with scenarios char acterized by noisy or less discriminative features. This situation is comm only observed in audiovisual automatic speech recognition (AV-ASR) with cl ean speech\, where the performance often drops compared to a speech-only s olution due to the variability of visual features. The proposed approach i s a deep learning solution with a gating layer that diminishes the effect of noisy or uninformative visual features\, keeping only useful informatio n. The approach improves\, or at least\, maintains performance when visual features are used. Third\, we discuss alternative strategies to leverage unlabeled multimodal data. A promising approach is to use multimodal prete xt tasks that are carefully designed to learn better representations for p redicting a given task\, leveraging the relationship between acoustic and facial features. Another approach is using multimodal ladder networks wher e intermediate representations are predicted across modalities using later al connections. These models offer principled solutions to increase the ge neralization and robustness of common speech-processing tasks when using m ultimodal architectures. \nBio\nCarlos Busso is a Professor at the Univers ity of Texas at Dallas’s Electrical and Computer Engineering Department\, where he is also the director of the Multimodal Signal Processing (MSP) La boratory. His research interest is in human-centered multimodal machine in telligence and application\, with a focus on the broad areas of affective computing\, multimodal human-machine interfaces\, in-vehicle active safety systems\, and machine learning methods for multimodal processing. He has worked on audio-visual emotion recognition\, analysis of emotional modulat ion in gestures and speech\, designing realistic human-like virtual charac ters\, and detection of driver distractions. He is a recipient of an NSF C AREER Award. In 2014\, he received the ICMI Ten-Year Technical Impact Awar d. 
In 2015\, his student received the third prize IEEE ITSS Best Dissertat ion Award (N. Li). He also received the Hewlett Packard Best Paper Award a t the IEEE ICME 2011 (with J. Jain)\, and the Best Paper Award at the AAAC ACII 2017 (with Yannakakis and Cowie). He received the Best of IEEE Trans actions on Affective Computing Paper Collection in 2021 (with R. Lotfian) and the Best Paper Award from IEEE Transactions on Affective Computing in 2022 (with Yannakakis and Cowie). He received the ACM ICMI Community Servi ce Award in 2023. In 2023\, he received the Distinguished Alumni Award in the Mid-Career/Academia category by the Signal and Image Processing Instit ute (SIPI) at the University of Southern California. He is currently servi ng as an associate editor of the IEEE Transactions on Affective Computing. He is an IEEE Fellow. He is a member of the ISCA\, and AAAC and a senior member of ACM. DTSTART;TZID=America/New_York:20231117T120000 DTEND;TZID=America/New_York:20231117T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Carlos Busso (University of Texas at Dallas) “Multimodal Machine Le arning for Human-Centric Tasks” URL:https://www.clsp.jhu.edu/events/carl-busso-university-of-texas-at-dalla s-multimodal-machine-learning-for-human-centric-tasks/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n X-TAGS;LANGUAGE=en-US:2023\,Busso\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-24165@www.clsp.jhu.edu DTSTAMP:20240328T115719Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20231127T120000 DTEND;TZID=America/New_York:20231127T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Aleem Khan URL:https://www.clsp.jhu.edu/events/student-seminar-aleem-khan/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Khan\,November END:VEVENT END:VCALENDAR