BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-20115@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nData science in small medical datasets usually means doing precision guesswork on unreliable data provided by those with high expectations. The first part of this talk will focus on issues that data scientists and engineers have to address when working with this kind of data (e.g. unreliable labels\, the effect of confounding factors\, the necessity of clinical interpretability\, difficulties with fusing multiple data sets). The second part of the talk will include some real examples of this kind of data science in the field of neurology (prediction of motor deficits in Parkinson’s disease based on acoustic analysis of speech\, diagnosis of Parkinson’s disease dysgraphia utilising online handwriting\, exploring the Mozart effect in epilepsy based on music information retrieval) and psychology (assessment of graphomotor disabilities in children with developmental dysgraphia).
\nAbstract
\nMost people take for granted that when they speak\, they will be heard and understood. But for the millions who live with speech impairments caused by physical or neurological conditions\, trying to communicate with others can be difficult and lead to frustration. While there have been a great number of recent advances in Automatic Speech Recognition (ASR) technologies\, these interfaces can be inaccessible for those with speech impairments.
\nIn this talk\, we will present Parrotron\, an end-to-end-trained speech-to-speech conversion model that maps an input spectrogram directly to another spectrogram\, without utilizing any intermediate discrete representation. The system is also trained to emit words in addition to a spectrogram\, in parallel. We demonstrate that this model can be trained to normalize speech from any speaker\, regardless of accent\, prosody\, and background noise\, into the voice of a single canonical target speaker with a fixed accent and consistent articulation and prosody. We further show that this normalization model can be adapted to normalize highly atypical speech from speakers with a variety of speech impairments (due to ALS\, cerebral palsy\, deafness\, stroke\, brain injury\, etc.)\, resulting in significant improvements in intelligibility and naturalness\, measured via a speech recognizer and listening tests. Finally\, demonstrating the utility of this model on other speech tasks\, we show that the same model architecture can be trained to perform a speech separation task.
Dimitri will give a brief description of some key moments in the development of speech recognition algorithms that he was involved in and their applications to YouTube closed captions\, Live Transcribe\, and wearable subtitles.
\nFadi will then speak about the development of Parrotron.
\nBiographies
\nDimitri Kanevsky works at Google on speech recognition algorithms. Prior to joining Google\, Dimitri was a research staff member in the Speech Algorithms Department at IBM. Prior to IBM\, he worked at a number of centers for higher mathematics\, including the Max Planck Institute in Germany and the Institute for Advanced Study in Princeton. He currently holds 295 US patents and was a Master Inventor at IBM. MIT Technology Review recognized Dimitri’s conversational-biometrics-based security patent as one of the five most influential patents for 2003. In 2012\, Dimitri was honored at the White House as a Champion of Change for his efforts to advance access to science\, technology\, engineering\, and math.
\nFadi Biadsy has been a senior staff research scientist at Google NY for the past ten years. He has been exploring and leading multiple projects at Google\, including speech recognition\, speech conversion\, language modeling\, and semantic understanding. He received his PhD from Columbia University in 2011. At Columbia\, he researched a variety of speech and language processing topics\, including dialect and accent recognition\, speech recognition\, charismatic speech\, and question answering. He holds a BSc and MSc in mathematics and computer science. He worked on handwriting recognition during his master’s degree and worked as a senior software developer for five years at Dalet Digital Media Systems building multimedia broadcasting systems.
DTSTART;TZID=America/New_York:20211105T120000 DTEND;TZID=America/New_York:20211105T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Fadi Biadsy and Dimitri Kanevsky (Google) “Speech Recognition: From Speaker Dependent to Speaker Independent to Full Personalization” “Parrotron: A Unified E2E Speech-to-Speech Conversion and ASR Model for Atypical Speech” URL:https://www.clsp.jhu.edu/events/fadi-biadsy-and-dimitri-kanevsky-google/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,Biadsy and Kanevsky\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-21041@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nNarration is a universal human practice that serves as a key site of education\, collective memory\, fostering social belief systems\, and furthering human creativity. Recent studies in economics (Shiller\, 2020)\, climate science (Bushell et al.\, 2017)\, political polarization (Kubin et al.\, 2021)\, and mental health (Adler et al.\, 2016) suggest an emerging interdisciplinary consensus that narrative is a central concept for understanding human behavior and beliefs. For close to half a century\, the field of narratology has developed a rich set of theoretical frameworks for understanding narrative. And yet these theories have largely gone untested on large\, heterogeneous collections of texts. Scholars continue to generate schemas by extrapolating from small numbers of manually observed documents. In this talk\, I will discuss how we can use machine learning to develop data-driven theories of narration to better understand what Labov and Waletzky called “the simplest and most fundamental narrative structures.” How can machine learning help us approach what we might call a minimal theory of narrativity?
\nAndrew Piper is Professor and William Dawson Scholar in the Department of Languages\, Literatures\, and Cultures at McGill University. He is the director of .txtlab\, a laboratory for cultural analytics\, and editor of the /Journal of Cultural Analytics/\, an open-access journal dedicated to the computational study of culture. He is the author of numerous books and articles on the relationship of technology and reading\, including /Book Was There: Reading in Electronic Times/ (Chicago 2012)\, /Enumerations: Data and Literary Study/ (Chicago 2018)\, and most recently\, /Can We Be Wrong? The Problem of Textual Evidence in a Time of Data/ (Cambridge 2020).
DTSTART;TZID=America/New_York:20211112T120000 DTEND;TZID=America/New_York:20211112T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Andrew Piper (McGill University) “How can we use machine learning to understand narration?” URL:https://www.clsp.jhu.edu/events/andrew-piper-mcgill-university-how-can-we-use-machine-learning-to-understand-narration/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,November\,Piper END:VEVENT BEGIN:VEVENT UID:ai1ec-21057@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nThis talk will outline the major challenges in porting mainstream speech technology to the domain of clinical applications\; in particular\, the need for personalised systems\, the challenge of working in an inherently sparse data domain\, and developing meaningful collaborations with all stakeholders. The talk will give an overview of recent state-of-the-art research from current projects\, including the recognition of disordered speech\, the automatic processing of conversations\, and the automatic detection and tracking of paralinguistic information\, at the University of Sheffield (UK)’s Speech and Hearing (SPandH) & Healthcare lab.
\nBiography
\nHeidi is a Senior Lecturer (associate professor) in Computer Science at the University of Sheffield\, United Kingdom. Her research interests are in the application of AI-based voice technologies to healthcare\; in particular\, the detection and monitoring of people’s physical and mental health\, including verbal and non-verbal traits for expressions of emotion\, anxiety\, depression\, and neurodegenerative conditions in\, e.g.\, therapeutic or diagnostic settings.
DTSTART;TZID=America/New_York:20211119T120000 DTEND;TZID=America/New_York:20211119T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Heidi Christensen (University of Sheffield\, UK) Virtual Seminar “Automated Processing of Pathological Speech: Recent Work and Ongoing Challenges” URL:https://www.clsp.jhu.edu/events/heidi-christensen-university-of-sheffield-uk-virtual-seminar-automated-processing-of-pathological-speech-recent-work-and-ongoing-challenges/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,Christensen\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-21494@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nAdversarial attacks deceive neural network systems by adding carefully crafted perturbations to benign signals. Being almost imperceptible to humans\, these attacks pose a severe security threat to state-of-the-art speech and speaker recognition systems\, making it vital to propose countermeasures against them. In this talk\, we focus on 1) classifying a given adversarial attack by attack algorithm type\, threat model type\, and signal-to-adversarial-noise ratio\, and 2) developing a novel speech denoising solution to further improve classification performance.
\nOur proposed approach uses an x-vector network as a signature extractor to get embeddings\, which we call signatures. These signatures contain information about the attack and can help classify different attack algorithms\, threat models\, and signal-to-adversarial-noise ratios. We demonstrate the transferability of such signatures to other tasks. In particular\, a signature extractor trained to classify attacks against speaker identification can also be used to classify attacks against speaker verification and speech recognition. We also show that signatures can be used to detect unknown attacks\, i.e.\, attacks not included during training. Lastly\, we propose to make the signature extractor’s job easier by removing the clean signal from the adversarial example (which consists of the clean signal plus the perturbation). We train our signature extractor on adversarial perturbations\, and at inference time we use a time-domain denoiser to recover the adversarial perturbation from the adversarial example. Using our improved approach\, we show that common attacks in the literature (Fast Gradient Sign Method (FGSM)\, Projected Gradient Descent (PGD)\, Carlini-Wagner (CW)) can be classified with accuracy as high as 96%. We also detect unknown attacks with an equal error rate (EER) of about 9%\, which is very promising.
DTSTART;TZID=America/New_York:20220304T120000 DTEND;TZID=America/New_York:20220304T131500 LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Sonal Joshi “Classify and Detect Adversarial Attacks Against Speaker and Speech Recognition Systems” URL:https://www.clsp.jhu.edu/events/student-seminar-sonal-joshi/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Joshi\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-21615@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nDTSTART;TZID=America/New_York:20220311T120000 DTEND;TZID=America/New_York:20220311T131500 LOCATION:Virtual Seminar SEQUENCE:0 SUMMARY:Student Seminar – Anton Belyy “Systems for Human-AI Cooperation on Collecting Semantic Annotations” URL:https://www.clsp.jhu.edu/events/student-seminar-anton-belyy-systems-for-human-ai-cooperation-on-collecting-semantic-annotations/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Belyy\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-21621@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nSystems that support expressive\, situated natural language interactions are essential for expanding access to complex computing systems\, such as robots and databases\, to non-experts. Reasoning and learning in such natural language interactions is a challenging open problem. For example\, resolving sentence meaning requires reasoning not only about word meaning\, but also about the interaction context\, including the history of the interaction and the situated environment. In addition\, the sequential dynamics that arise between user and system in and across interactions make learning from static\, i.e.\, supervised\, data both challenging and ineffective. However\, these same interaction dynamics result in ample opportunities for learning from implicit and explicit feedback that arises naturally in the interaction. This lays the foundation for systems that continually learn\, improve\, and adapt their language use through interaction\, without additional annotation effort. In this talk\, I will focus on these challenges and opportunities. First\, I will describe our work on modeling dependencies between language meaning and interaction context when mapping natural language in interaction to executable code. In the second part of the talk\, I will describe our work on language understanding and generation in collaborative interactions\, focusing on continual learning from explicit and implicit user feedback.
\nBiography
\nAlane Suhr is a PhD candidate in the Department of Computer Science at Cornell University\, advised by Yoav Artzi. Her research spans natural language processing\, machine learning\, and computer vision\, with a focus on building systems that participate and continually learn in situated natural language interactions with human users. Alane’s work has been recognized by paper awards at ACL and NAACL\, and has been supported by fellowships and grants\, including an NSF Graduate Research Fellowship\, a Facebook PhD Fellowship\, and research awards from AI2\, ParlAI\, and AWS. Alane has also co-organized multiple workshops and tutorials appearing at NeurIPS\, EMNLP\, NAACL\, and ACL. Previously\, Alane received a BS in Computer Science and Engineering as an Eminence Fellow at the Ohio State University.
DTSTART;TZID=America/New_York:20220314T120000 DTEND;TZID=America/New_York:20220314T131500 LOCATION:Virtual Seminar SEQUENCE:0 SUMMARY:Alane Suhr (Cornell University) “Reasoning and Learning in Interactive Natural Language Systems” URL:https://www.clsp.jhu.edu/events/alane-suhr-cornell-university-reasoning-and-learning-in-interactive-natural-language-systems/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,March\,Suhr END:VEVENT BEGIN:VEVENT UID:ai1ec-21616@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nSocial media allows researchers to track societal and cultural changes over time using language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have shown that the absence of appropriate tuning\, specifically in the presence of semantic shift\, can hinder the robustness of the underlying methods. However\, little is known about the practical effect this sensitivity may have on downstream longitudinal analyses. We explore this gap in the literature through a timely case study: understanding shifts in depression during the course of the COVID-19 pandemic. We find that the inclusion of only a small number of semantically-unstable features can promote significant changes in longitudinal estimates of our target outcome. At the same time\, we demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and\, in turn\, improve predictive generalization.
DTSTART;TZID=America/New_York:20220318T120000 DTEND;TZID=America/New_York:20220318T131500 LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Keith Harrigian “The Problem of Semantic Shift in Longitudinal Monitoring of Social Media” URL:https://www.clsp.jhu.edu/events/student-seminar-keith-harrigian-the-problem-of-semantic-shift-in-longitudinal-monitoring-of-social-media/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Harrigian\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-21497@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nWhile the “deep learning tsunami” continues to define the state of the art in speech and language processing\, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial\, highly-multilingual settings\, particularly for symbolic\, “front-end” speech applications. In this talk\, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I will then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly-effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors that lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion\, loanword detection\, abbreviation expansion\, and back-transliteration\, among other tasks.
\nBiography
\nKyle Gorman is an assistant professor of linguistics at the Graduate Center\, City University of New York\, and director of the master’s program in computational linguistics\; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat\, he is the coauthor of Finite-State Text Processing (Morgan & Claypool\, 2021) and the creator of Pynini\, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models\, text normalization\, grapheme-to-phoneme conversion\, and morphological analysis\, as well as many topics in linguistic theory.
DTSTART;TZID=America/New_York:20220401T120000 DTEND;TZID=America/New_York:20220401T131500 LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Kyle Gorman (City University of New York) “Weighted Finite-State Transducers: The Later Years” URL:https://www.clsp.jhu.edu/events/kyle-gorman-city-university-of-new-york-weighted-finite-state-transducers-the-later-years/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Gorman\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-22403@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nVoice conversion (VC) is a significant aspect of artificial intelligence: the study of how to convert one’s voice to sound like that of another without changing the linguistic content. Voice conversion belongs to the general technical field of speech synthesis\, which converts text to speech or changes the properties of speech\, for example\, voice identity\, emotion\, and accent. Voice conversion involves multiple speech processing techniques\, such as speech analysis\, spectral conversion\, prosody conversion\, speaker characterization\, and vocoding. With recent advances in theory and practice\, we are now able to produce human-like voice quality with high speaker similarity. In this talk\, Dr. Sisman will present the recent advances in voice conversion and discuss their promise and limitations. Dr. Sisman will also provide a summary of the available resources for expressive voice conversion research.
\nBiography
\nDr. Berrak Sisman (Member\, IEEE) received the Ph.D. degree in electrical and computer engineering from the National University of Singapore in 2020\, fully funded by the A*STAR Graduate Academy under the Singapore International Graduate Award (SINGA). She is currently a tenure-track Assistant Professor in the Erik Jonsson School’s Department of Electrical and Computer Engineering at the University of Texas at Dallas\, United States. Prior to joining UT Dallas\, she was a faculty member at the Singapore University of Technology and Design (2020-2022). She was a Postdoctoral Research Fellow at the National University of Singapore (2019-2020)\, an exchange doctoral student at the University of Edinburgh and a visiting scholar at The Centre for Speech Technology Research (CSTR)\, University of Edinburgh (2019)\, and a visiting researcher at the RIKEN Advanced Intelligence Project in Japan (2018). Her research is focused on machine learning\, signal processing\, emotion\, speech synthesis\, and voice conversion.
\nDr. Sisman has served as an Area Chair at INTERSPEECH 2021\, INTERSPEECH 2022\, and IEEE SLT 2022\, and as the Publication Chair at ICASSP 2022. She has been elected as a member of the IEEE Speech and Language Processing Technical Committee (SLTC) in the area of Speech Synthesis for the term from January 2022 to December 2024. She plays leadership roles in conference organization and is active in technical committees. She has served as the General Coordinator of the Student Advisory Committee (SAC) of the International Speech Communication Association (ISCA).
DTSTART;TZID=America/New_York:20221104T120000 DTEND;TZID=America/New_York:20221104T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Berrak Sisman (University of Texas at Dallas) “Speech Synthesis and Voice Conversion: Machine Learning can Mimic Anyone’s Voice” URL:https://www.clsp.jhu.edu/events/berrak-sisman-university-of-texas-at-dallas/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,November\,Sisman END:VEVENT BEGIN:VEVENT UID:ai1ec-22408@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nDriven by the goal of eradicating language barriers on a global scale\, machine translation has solidified itself as a key focus of artificial intelligence research today. However\, such efforts have coalesced around a small subset of languages\, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe\, high-quality results\, all while keeping ethical considerations in mind? In this talk\, I introduce No Language Left Behind\, an initiative to break language barriers for low-resource languages. In No Language Left Behind\, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically\, we evaluated the performance of over 40\,000 different translation directions using a human-translated benchmark\, Flores-200\, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state of the art\, laying important groundwork towards realizing a universal translation system in an open-source manner.
\nBiography
\nAngela is a research scientist at Meta AI Research in New York\, focusing on supporting efforts in speech and language research. Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before translation\, Angela focused on research in on-device models for NLP and computer vision\, and on text generation.
\nDTSTART;TZID=America/New_York:20221118T120000 DTEND;TZID=America/New_York:20221118T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Angela Fan (Meta AI Research) “No Language Left Behind: Scaling Human-Centered Machine Translation” URL:https://www.clsp.jhu.edu/events/angela-fan-facebook/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Fan\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-23320@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nSpeech communication represents a core domain for education\, team problem solving\, social engagement\, and business interactions. The ability of speech technology to extract layers of knowledge and assess engagement content represents the next generation of advanced speech solutions. Today\, the emergence of big data and machine learning\, as well as voice-enabled speech systems\, has driven the need for effective voice capture and automatic speech/speaker recognition. The ability to employ speech and language technology to assess human-to-human interactions offers new research paradigms with profound impact on assessing human interaction. In this talk\, we will focus on big-data naturalistic audio processing relating to (i) child learning spaces and (ii) the NASA Apollo lunar missions. ML-based technology advancements include automatic audio diarization\, speech recognition\, and speaker recognition. Child-teacher assessment of conversational interactions is explored\, including keyword and “WH-word” (e.g.\, who\, what\, etc.) analysis. Diarization processing solutions are applied both to classroom/learning-space child speech and to the massive Apollo data. CRSS-UTDallas is expanding our original Apollo-11 corpus\, resulting in a massive multi-track audio processing challenge to make available 150\,000 hours of Apollo mission data to be shared with science communities: (i) speech/language technology\, (ii) STEM/science and team-based researchers\, and (iii) education/historical/archiving specialists. Our goals are to provide resources which allow researchers to better understand how people work and learn collaboratively and\, for Apollo\, how teams accomplished one of mankind’s greatest scientific/technological challenges of the last century.
\nBiography
\nJohn H.L. Hansen received Ph.D. & M.S. degrees from the Georgia Institute of Technology and a B.S.E.E. from Rutgers Univ. He joined the Univ. of Texas at Dallas (UTDallas) in 2005\, where he currently serves as Associate Dean for Research\, Prof. of ECE\, Distinguished Univ. Chair in Telecom. Engineering\, and directs the Center for Robust Speech Systems (CRSS). He is an ISCA Fellow and IEEE Fellow\, and has served as Member and TC-Chair of the IEEE Signal Proc. Society Speech & Language Proc. Tech. Comm. (SLTC)\, and as Technical Advisor to the U.S. Delegate for NATO (IST/TG-01). He served as ISCA President (2017-21) and continues to serve on the ISCA Board (2015-23) as Treasurer. He has supervised 99 PhD/MS thesis candidates (EE\,CE\,BME\,TE\,CS\,Ling.\,Cog.Sci.\,Spch.Sci.\,Hear.Sci.) and was the recipient of the 2020 UT-Dallas Provost’s Award for Grad. PhD Research Mentoring. He is author/co-author of 865 journal/conference papers\, including 14 textbooks\, in the field of speech/language/hearing processing & technology\, including coauthor of the textbook Discrete-Time Processing of Speech Signals (IEEE Press\, 2000) and lead author of the report “The Impact of Speech Under ‘Stress’ on Military Speech Technology” (NATO RTO-TR-10\, 2000). He served as Organizer\, Chair/Co-Chair/Tech. Chair for ISCA INTERSPEECH-2022\, IEEE ICASSP-2010\, IEEE SLT-2014\, and ISCA INTERSPEECH-2002\, and Tech. Chair for IEEE ICASSP-2024. He received the 2022 IEEE Signal Processing Society Leo Beranek Meritorious Service Award.
\nDTSTART;TZID=America/New_York:20230303T120000 DTEND;TZID=America/New_York:20230303T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:John Hansen (University of Texas at Dallas) “Challenges and Advancements in Speaker Diarization & Recognition for Naturalistic Data Streams” URL:https://www.clsp.jhu.edu/events/john-hansen-university-of-texas-at-dallas/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Hansen\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-23439@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nAs data-based technologies proliferate\, it is increasingly important for researchers to be aware of their work’s wider impact. Concerns like navigating the IRB and figuring out copyright and licensing issues are still key\, but the current shift in focus to matters like inclusivity\, fairness\, and transparency\, and their impact on the research/development life cycle\, has added complexity to the research task. In this talk\, we will take a broad look at the various ways ethics intersects with natural language processing\, machine learning\, and artificial intelligence research\, and discuss strategies and resources for managing these concerns within the broader research framework.
\nBiography
\nDenise is responsible for the overall operation of LDC’s External Relations group\, which includes intellectual property management\, licensing\, regulatory matters\, publications\, membership\, and communications. Before joining LDC\, she practiced law for over 20 years in the areas of international trade\, intellectual property\, and commercial litigation. She has an A.B. in Political Science from Bryn Mawr College and a Juris Doctor degree from the University of Miami School of Law.
DTSTART;TZID=America/New_York:20230310T120000 DTEND;TZID=America/New_York:20230310T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street SEQUENCE:0 SUMMARY:Denise DiPersio (Linguistic Data Consortium\, University of Pennsylvania) “Data and Ethics: Where Does the Twain Meet?” URL:https://www.clsp.jhu.edu/events/denise-dipersio-linguistic-data-consortium-university-of-pennsylvania-data-and-ethics-where-does-the-twain-meet/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,DiPersio\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-23505@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nRecent advances in large pretrained language models have unlocked exciting new applications of Natural Language Generation for creative tasks\, such as lyrics or humour generation. In this talk we will discuss recent works by our team at Alexa AI and current challenges: (1) Pun understanding and generation: we release new datasets for pun understanding and the novel task of context-situated pun generation\, and demonstrate the value of our annotations for pun classification and generation tasks. (2) Song lyric generation: we design a hierarchical lyric generation framework that enables us to generate pleasantly singable lyrics without training on melody-lyric aligned data\, and show that our approach is competitive with strong baselines supervised on parallel data. (3) Create with Alexa: a multimodal story creation experience recently launched on Alexa devices\, which leverages story text generation models in tandem with story visualization and background music generation models to produce multimodal stories for kids.
\nBiography
\nAlessandra Cervone is an Applied Scientist in the Natural Understanding team at Amazon Alexa AI. Alessandra holds an MSc in Speech and Language Processing from the University of Edinburgh and a PhD in CS from the University of Trento (Italy). During her PhD\, Alessandra worked on computational models of coherence in open-domain dialogue\, advised by Giuseppe Riccardi. In the first year of the PhD\, she was the team leader of one of the teams selected to compete in the first edition of the Alexa Prize. More recently\, her research interests have focused on natural language generation and its evaluation\, in particular in the context of creative AI applications.
\nDTSTART;TZID=America/New_York:20230317T120000 DTEND;TZID=America/New_York:20230317T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Alessandra Cervone (Amazon) “Controllable Text Generation for Creative Applications” URL:https://www.clsp.jhu.edu/events/alexxandra-cervone-amazon/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Cervone\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-23555@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20230327T120000 DTEND;TZID=America/New_York:20230327T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Desh Raj URL:https://www.clsp.jhu.edu/events/student-seminar-desh-raj-2/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,March\,Raj END:VEVENT BEGIN:VEVENT UID:ai1ec-23513@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nDespite many recent advances in automatic speech recognition (ASR)\, linguists and language communities engaged in language documentation projects continue to face the obstacle of the “transcription bottleneck”. Researchers in NLP typically do not distinguish between widely spoken languages that currently happen to have few training resources and endangered languages that will never have abundant data. As a result\, we often fail to thoroughly explore when ASR is helpful for language documentation\, what architectures work best for the sorts of languages that are in need of documentation\, and how data can be collected and organized to produce optimal results. In this talk I describe several projects that attempt to bridge the gap between the promise of ASR for language documentation and the reality of using this technology in real-world settings.
\nBiography
\nAbstract
\nMultilingual machine translation has proven immensely useful for both parameter efficiency and overall performance for many language pairs via complete parameter sharing. However\, some language pairs in multilingual models can see worse performance than in bilingual models\, especially in the one-to-many translation setting. Motivated by their empirical differences\, we examine the geometric differences in representations from bilingual models versus those from one-to-many multilingual models. Specifically\, we measure the isotropy of these representations using intrinsic dimensionality and IsoScore\, in order to measure how these representations utilize the dimensions in their underlying vector space. We find that for a given language pair\, its multilingual model decoder representations are consistently less isotropic than comparable bilingual model decoder representations. Additionally\, we show that much of this anisotropy in multilingual decoder representations can be attributed to modeling language-specific information\, therefore limiting remaining representational capacity.
DTSTART;TZID=America/New_York:20231106T120000 DTEND;TZID=America/New_York:20231106T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Neha Verma “Exploring Geometric Representational Disparities Between Multilingual and Bilingual Translation Models” URL:https://www.clsp.jhu.edu/events/student-seminar-neha-verma-exploring-geometric-representational-disparities-between-multilingual-and-bilingual-translation-models/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,November\,Verma END:VEVENT BEGIN:VEVENT UID:ai1ec-24157@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nIn this talk\, I will present a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE\, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio\, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens\, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder\, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically\, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks\, outperforming other recent models that use external supervised pre-training.
\nBio\nFlorian Metze is a Research Scientist Manager at Meta AI in New York\, supporting a team of researchers and engineers working on multi-modal (image\, video\, audio\, text) content understanding for Meta’s Family of Apps (Instagram\, Threads\, Facebook\, WhatsApp). He used to be an Associate Research Professor at Carnegie Mellon University\, in the School of Computer Science’s Language Technologies Institute\, where he is still an Adjunct Professor. He is also a co-founder of Abridge\, a company working on extracting information from doctor-patient conversations. His work covers many areas of speech recognition and multi-media analysis with a focus on end-to-end deep learning. Currently\, he focuses on multi-modal processing of videos\, and using that information to recommend unconnected content. In the past\, he has worked on low-resource and multi-lingual speech processing\, speech recognition with articulatory features\, large-scale multi-media retrieval and summarization\, information extraction from medical interviews\, and recognition of personality or similar meta-data from speech.
\nFor more information\, please see http://www.cs.cmu.edu/directory/fmetze
\nDTSTART;TZID=America/New_York:20231110T120000 DTEND;TZID=America/New_York:20231110T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Florian Metze (CMU) “Masked Autoencoders that Listen” URL:https://www.clsp.jhu.edu/events/florian-metze-cmu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Metze\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-24159@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20231113T120000 DTEND;TZID=America/New_York:20231113T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Kate Sanders URL:https://www.clsp.jhu.edu/events/student-seminar-kate-sanders/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,November\,Sanders END:VEVENT BEGIN:VEVENT UID:ai1ec-24163@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nThe almost unlimited multimedia content available on video-sharing websites has opened new challenges and opportunities for building robust multimodal solutions. This seminar will describe our novel multimodal architectures that (1) are robust to missing modalities\, (2) can identify noisy or less discriminative features\, and (3) can leverage unlabeled data. First\, we present a strategy that effectively combines auxiliary networks\, a transformer architecture\, and an optimized training mechanism for handling missing features. This problem is relevant since it is expected that during inference the multimodal system will face cases with missing features due to noise or occlusion. We implement this approach for audiovisual emotion recognition\, achieving state-of-the-art performance. Second\, we present a multimodal framework for dealing with scenarios characterized by noisy or less discriminative features. This situation is commonly observed in audiovisual automatic speech recognition (AV-ASR) with clean speech\, where the performance often drops compared to a speech-only solution due to the variability of visual features. The proposed approach is a deep learning solution with a gating layer that diminishes the effect of noisy or uninformative visual features\, keeping only useful information. The approach improves\, or at least maintains\, performance when visual features are used. Third\, we discuss alternative strategies to leverage unlabeled multimodal data. A promising approach is to use multimodal pretext tasks that are carefully designed to learn better representations for predicting a given task\, leveraging the relationship between acoustic and facial features. Another approach is using multimodal ladder networks\, where intermediate representations are predicted across modalities using lateral connections. 
These models offer principled solutions to increase the generalization and robustness of common speech-processing tasks when using multimodal architectures.\n
Bio
\nCarlos Busso is a Professor at the University of Texas at Dallas’s Electrical and Computer Engineering Department\, where he is also the director of the Multimodal Signal Processing (MSP) Laboratory. His research interest is in human-centered multimodal machine intelligence and applications\, with a focus on the broad areas of affective computing\, multimodal human-machine interfaces\, in-vehicle active safety systems\, and machine learning methods for multimodal processing. He has worked on audio-visual emotion recognition\, analysis of emotional modulation in gestures and speech\, designing realistic human-like virtual characters\, and detection of driver distractions. He is a recipient of an NSF CAREER Award. In 2014\, he received the ICMI Ten-Year Technical Impact Award. In 2015\, his student received the third-prize IEEE ITSS Best Dissertation Award (N. Li). He also received the Hewlett Packard Best Paper Award at IEEE ICME 2011 (with J. Jain) and the Best Paper Award at AAAC ACII 2017 (with Yannakakis and Cowie). He received the Best of IEEE Transactions on Affective Computing Paper Collection in 2021 (with R. Lotfian) and the Best Paper Award from IEEE Transactions on Affective Computing in 2022 (with Yannakakis and Cowie). He received the ACM ICMI Community Service Award in 2023. In 2023\, he also received the Distinguished Alumni Award in the Mid-Career/Academia category by the Signal and Image Processing Institute (SIPI) at the University of Southern California. He is currently serving as an associate editor of the IEEE Transactions on Affective Computing. He is an IEEE Fellow\, a member of ISCA and AAAC\, and a senior member of the ACM.
DTSTART;TZID=America/New_York:20231117T120000 DTEND;TZID=America/New_York:20231117T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Carlos Busso (University of Texas at Dallas) “Multimodal Machine Learning for Human-Centric Tasks” URL:https://www.clsp.jhu.edu/events/carl-busso-university-of-texas-at-dallas-multimodal-machine-learning-for-human-centric-tasks/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Busso\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-24165@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20231127T120000 DTEND;TZID=America/New_York:20231127T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Aleem Khan URL:https://www.clsp.jhu.edu/events/student-seminar-aleem-khan/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Khan\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-24459@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240301T120000 DTEND;TZID=America/New_York:20240301T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Mohit Iyyer “Improving\, Evaluating and Detecting Long-Form LLM-Generated Text” URL:https://www.clsp.jhu.edu/events/mohit-iyyer-improving-evaluating-and-detecting-long-form-llm-generated-text/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,Iyyer\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24461@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nMost machine translation systems operate on the sentence level\, while humans write and translate within a given context. Operating on individual sentences forces error-prone sentence segmentation into the machine translation pipeline. This limits the upper-bound performance of these systems by creating noisy training bitext. Further\, many grammatical features necessitate inter-sentential context in order to translate\, which makes perfect sentence-level machine translation an impossible task. In this talk\, we will cover the inherent limits of sentence-level machine translation. Following this\, we will explore a key obstacle in the way of true context-aware machine translation: an abject lack of data. Finally\, we will cover recent work that provides (1) a new evaluation dataset that specifically addresses the translation of context-dependent discourse phenomena and (2) reconstructed documents from large-scale sentence-level bitext that can be used to improve performance when translating these types of phenomena.
DTSTART;TZID=America/New_York:20240304T120000 DTEND;TZID=America/New_York:20240304T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Rachel Wicks (JHU) “To Sentences and Beyond: Paving the Way for Context-Aware Machine Translation” URL:https://www.clsp.jhu.edu/events/rachel-wicks-jhu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,March\,Wicks END:VEVENT BEGIN:VEVENT UID:ai1ec-24465@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nLarge Language Models (LLMs) have demonstrated remarkable capabilities across various domains. However\, it is still very challenging to build highly reliable applications with LLMs that support specialized use cases. LLMs trained on web data often excel at capturing general language patterns\, but they can struggle to support specialized domains and personalized user needs. Moreover\, LLMs can produce errors that are deceptively plausible\, making them potentially dangerous for high-trust scenarios. In this talk\, I will discuss some of our recent efforts to address these challenges with data-efficient tuning methods and a novel factuality evaluation framework. Specifically\, my talk will focus on building multilingual applications\, one crucial use case often characterized by limited tuning and evaluation data.
\nBio
Xinyi (Cindy) Wang is a research scientist at Google DeepMind working on Large Language Models (LLMs) and their application to generative question answering. She has worked on multilingual instruction tuning for Gemini and multilingual generative models used in Google Search. Before Google DeepMind\, Cindy Wang obtained her PhD degree in Language Technologies at Carnegie Mellon University. During her PhD\, she mainly worked on developing data-efficient natural language processing (NLP) systems. She has made several contributions in data selection\, data representation\, and model adaptation for multilingual NLP.
DTSTART;TZID=America/New_York:20240308T120000 DTEND;TZID=America/New_York:20240308T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Cindy Wang (Google DeepMind) “Building Data-Efficient and Reliable Applications with Large Language Models” URL:https://www.clsp.jhu.edu/events/cindy-wang-google-deepmind-building-data-efficient-and-reliable-applications-with-large-language-models/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,March\,Wang END:VEVENT BEGIN:VEVENT UID:ai1ec-24479@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nThe speech field is evolving to solve more challenging scenarios\, such as multi-channel recordings with multiple simultaneous talkers. Given the many types of microphone setups out there\, we present the UniX-Encoder\, a universal encoder designed for multiple tasks that works with any microphone array\, in both solo- and multi-talker environments. Our research enhances previous multichannel speech processing efforts in four key areas: 1) Adaptability: Contrasting traditional models constrained to certain microphone array configurations\, our encoder is universally compatible. 2) Multi-Task Capability: Beyond the single-task focus of previous systems\, UniX-Encoder acts as a robust upstream model\, adeptly extracting features for diverse tasks including ASR and speaker recognition. 3) Self-Supervised Training: The encoder is trained without requiring labeled multi-channel data. 4) End-to-End Integration: In contrast to models that first beamform and then process single channels\, our encoder offers an end-to-end solution\, bypassing explicit beamforming or separation. To validate its effectiveness\, we tested the UniX-Encoder on a synthetic multi-channel dataset derived from the LibriSpeech corpus. Across tasks like speech recognition and speaker diarization\, our encoder consistently outperformed combinations like the WavLM model with the BeamformIt frontend.
DTSTART;TZID=America/New_York:20240311T200500 DTEND;TZID=America/New_York:20240311T210500 SEQUENCE:0 SUMMARY:Zili Huang (JHU) “UniX-Encoder: A Universal X-Channel Speech Encoder for Ad-Hoc Microphone Array Speech Processing” URL:https://www.clsp.jhu.edu/events/zili-huang-jhu-unix-encoder-a-universal-x-channel-speech-encoder-for-ad-hoc-microphone-array-speech-processing/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,Huang\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24481@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nNatural language provides an intuitive and powerful interface to access knowledge at scale. Modern language systems draw information from two rich knowledge sources: (1) information stored in their parameters during massive pretraining and (2) documents retrieved at inference time. Yet\, we are far from building systems that can reliably provide information from such knowledge sources. In this talk\, I will discuss paths toward more robust systems. In the first part of the talk\, I will present a module for scaling retrieval-based knowledge augmentation. We learn a compressor that maps retrieved documents into textual summaries prior to in-context integration. This not only reduces the computational costs but also filters out irrelevant or incorrect information. In the second half of the talk\, I will discuss the challenges of updating knowledge stored in model parameters and propose a method to prevent models from reciting outdated information by identifying facts that are prone to rapid change. I will conclude my talk by proposing an interactive system that can elicit information from users when needed.
\nBiography
\nEunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin. Prior to UT\, she spent a year at Google AI as a visiting researcher. Her research area spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in a dynamic real-world context. She is a recipient of a Facebook research fellowship\, a Google faculty research award\, a Sony faculty award\, and an outstanding paper award at EMNLP. She received a Ph.D. in computer science and engineering from the University of Washington and a B.A. in mathematics and computer science from Cornell University.
\nDTSTART;TZID=America/New_York:20240315T120000 DTEND;TZID=America/New_York:20240315T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21209 SEQUENCE:0 SUMMARY:Eunsol Choi (University of Texas at Austin) “Knowledge-Rich Language Systems in a Dynamic World” URL:https://www.clsp.jhu.edu/events/eunsol-choi-university-of-texas-at-austin-knowledge-rich-language-systems-in-a-dynamic-world/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,Choi\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24489@www.clsp.jhu.edu DTSTAMP:20240328T114258Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nOver the past decade\, the field of Speech Generation has seen significant progress in enhancing speech quality and naturalness. Despite these advancements\, persistent challenges remain\, such as speech noise\, limited availability of high-quality data\, and the lack of robustness in speech generation systems. Additionally\, the evaluation of speech presents a significant obstacle to comprehensive assessment at scale. Concurrently\, recent breakthroughs in Large Language Models (LLMs) have revolutionized text generation and natural language processing. However\, the complexity of spoken language introduces unique hurdles\, including managing long speech waveform sequences. In this presentation\, I will explore recent innovations in speech synthesis with spoken language modeling\, evaluation for generative speech systems\, and high-fidelity speech enhancement. Finally\, I will discuss prospective avenues for future research aimed at addressing these challenges.
\nBio
\nSoumi Maiti is a postdoctoral researcher at the Language Technologies Institute\, Carnegie Mellon University\, where she works on speech and language processing. Her research broadly focuses on building intelligent systems that can communicate with humans naturally. She earned a Ph.D. from the Graduate Center\, City University of New York (CUNY)\, with the Graduate Center Fellowship\, advised by Prof. Michael Mandel. She earned her B.Tech. in Computer Science from the Indian Institute of Engineering Science and Technology\, Shibpur. Previously\, she has worked in the Text-To-Speech team at Apple. She has also worked at Google and Interactions LLC as a student researcher and research intern. She worked as an adjunct lecturer at Brooklyn College\, CUNY\, for three years and served as a Math Fellow at Hunter College. She has served as a session chair at ICASSP 2024\, ICASSP 2023\, SLT 2023\, and others\, and as an area chair at EMNLP 2023.
\nDTSTART;TZID=America/New_York:20240329T120000 DTEND;TZID=America/New_York:20240329T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Soumi Maiti (CMU) “Towards Robust Speech Generation” URL:https://www.clsp.jhu.edu/events/soumi-maiti/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,Maiti\,March END:VEVENT END:VCALENDAR