BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-20117@www.clsp.jhu.edu DTSTAMP:20240330T034441Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nNeural sequence generation systems oftentimes generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence\, i.e. the mode\, under such a model must also be the best sequence it has to offer (often in a given context\, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence oftentimes is empty under many state-of-the-art NMT models. This follows a large list of other pathologies and biases observed in NMT and other sequence generation models: a length bias\, larger beams degrading performance\, exposure bias\, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search\, e.g. beam search\, that introduces many of these pathologies and biases\, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribution and not just its mode. We provide one example of such a decision rule\, and show that this is a fruitful research direction.
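\nOne widely used decision rule of the kind advocated above is sampling-based minimum Bayes risk (MBR) decoding\, which scores candidates by their expected utility under the model distribution rather than by model probability alone. The sketch below is only an illustration of that general idea\, not necessarily the rule proposed in the talk; sample_translations and utility are hypothetical placeholders (e.g. ancestral sampling from the NMT model and sentence-level BLEU).

# Illustrative sketch of sampling-based MBR decoding (not a specific toolkit's API).
# sample_translations(source, n) and utility(hypothesis, reference) are assumed helpers.
def mbr_decode(sample_translations, utility, source, num_samples=100):
    samples = sample_translations(source, num_samples)    # draws from p(y | x)
    best, best_score = None, float("-inf")
    for hyp in samples:                                    # candidate pool = the samples
        # Monte Carlo estimate of the expected utility of `hyp` under the model.
        expected = sum(utility(hyp, ref) for ref in samples) / len(samples)
        if expected > best_score:
            best, best_score = hyp, expected
    return best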
\nBiography
\nI am an assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.
\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Examples of specific problems I am interested in include language modelling\, machine translation\, syntactic parsing\, textual entailment\, text classification\, and question answering.
\nI also develop techniques to approach general machine learning problems such as probabilistic inference\, gradient and density estimation.
\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.
\n\n
DTSTART;TZID=America/New_York:20210419T120000 DTEND;TZID=America/New_York:20210419T131500 LOCATION:via Zoom SEQUENCE:0 SUMMARY:Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation” URL:https://www.clsp.jhu.edu/events/wilker-aziz-university-of-amsterdam/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,April\,Aziz END:VEVENT BEGIN:VEVENT UID:ai1ec-20120@www.clsp.jhu.edu DTSTAMP:20240330T034441Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.
\nBiography
\nCarolina
Abstract
\nOver the last few years\, deep neural models have taken over the field of natural language processing (NLP)\, brandishing great improvements on many of its sequence-level tasks. But the end-to-end nature of these models makes it hard to figure out whether the way they represent individual words aligns with how language builds itself from the bottom up\, or how lexical changes in register and domain can affect the untested aspects of such representations.
\nIn this talk\, I will present NYTWIT\, a dataset created to challenge large language models at the lexical level\, tasking them with identification of processes leading to the formation of novel English words\, as well as with segmentation and recovery of the specific subclass of novel blends. I will then present XRayEmb\, a method which alleviates the hardships of processing these novelties by fitting a character-level encoder to the existing models’ subword tokenizers\; and conclude with a discussion of the drawbacks of current tokenizers’ vocabulary creation schemes.
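\nAs a rough illustration of the second idea\, fitting a character-level encoder to an existing model’s subword embedding space can be sketched as below. This is a generic\, assumption-laden sketch (module names\, dimensions\, and the MSE objective are illustrative)\, not the actual XRayEmb architecture.

# Hypothetical sketch: a character-level encoder trained to reproduce an existing
# model's (frozen) subword embeddings, so novel words can later be embedded directly.
import torch
import torch.nn as nn

class CharToSubwordEncoder(nn.Module):
    def __init__(self, n_chars, char_dim=64, hidden=256, subword_dim=768):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.rnn = nn.GRU(char_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, subword_dim)

    def forward(self, char_ids):                 # char_ids: (batch, max_chars)
        x = self.char_emb(char_ids)
        _, h = self.rnn(x)                       # h: (2, batch, hidden), one per direction
        h = torch.cat([h[0], h[1]], dim=-1)
        return self.proj(h)                      # (batch, subword_dim)

def fit_step(encoder, optimizer, char_ids, frozen_subword_vectors):
    # Regress onto the frozen embeddings of known tokens (illustrative objective).
    loss = nn.functional.mse_loss(encoder(char_ids), frozen_subword_vectors)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()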
\nBiography
\nYuval Pinter is a Senior Lecturer in the Department of Computer Science at Ben-Gurion University of the Negev\, focusing on natural language processing. Yuval got his PhD at the Georgia Institute of Technology School of Interactive Computing as a Bloomberg Data Science PhD Fellow. Before that\, he worked as a Research Engineer at Yahoo Labs and as a Computational Linguist at Ginger Software\, and obtained an MA in Linguistics and a BSc in CS and Mathematics\, both from Tel Aviv University. Yuval blogs (in Hebrew) about language matters on Dagesh Kal.
DTSTART;TZID=America/New_York:20210910T120000 DTEND;TZID=America/New_York:20210910T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD SEQUENCE:0 SUMMARY:Yuval Pinter (Ben-Gurion University – Virtual Visit) “Challenging and Adapting NLP Models to Lexical Phenomena” URL:https://www.clsp.jhu.edu/events/yuval-pinter/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,Pinter\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-20723@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nText simplification aims to help audiences read and understand a piece of text through lexical\, syntactic\, and discourse modifications\, while remaining faithful to its central idea and meaning. Thanks to large-scale parallel corpora derived from Wikipedia and News\, much of modern-day text simplification research focuses on sentence simplification\, transforming original\, more complex sentences into simplified versions. In this talk\, I present new frontiers that focus on discourse operations. First\, we consider the challenging task of simplifying highly technical language\, in our case\, medical texts. We introduce a new corpus of parallel texts in English comprising technical and lay summaries of all published evidence pertaining to different clinical topics. We then propose a new metric to quantify stylistic differences between the two\, and models for paragraph-level simplification. Second\, we present the first data-driven study of inserting elaborations and explanations during simplification\, and illustrate the richness and complexities of this phenomenon.
\nBiography
\nAbstract
\nRaytheon BBN participated in the IARPA MATERIAL program\, whose objective is to enable rapid development of language-independent methods for cross-lingual information retrieval (CLIR). The challenging CLIR task of retrieving documents written (or spoken) in one language so that they satisfy an information need expressed in a different language is exacerbated by unique challenges posed by the MATERIAL program: limited training data for automatic speech recognition and machine translation\, scant lexical resources\, non-standardized orthography\, etc. Furthermore\, the format of the queries and the “Query-Weighted Value” performance measure are non-standard and not previously studied in the IR community. In this talk\, we will describe the Raytheon BBN CLIR system\, which was successful at addressing the above challenges and unique characteristics of the program.
\nBiography
\nDamianos Karakos has been at Raytheon BBN for the past nine years\, where he is currently a Senior Principal Engineer\, Research. Before that\, he was research faculty at Johns Hopkins University. He has worked on several Government projects (e.g.\, DARPA GALE\, DARPA RATS\, IARPA BABEL\, IARPA MATERIAL\, IARPA BETTER) and on a variety of HLT-related topics (e.g.\, speech recognition\, speech activity detection\, keyword search\, information retrieval). He has published more than 60 peer-reviewed papers. His research interests lie at the intersection of human language technology and machine learning\, with an emphasis on statistical methods. He obtained a PhD in Electrical Engineering from the University of Maryland\, College Park\, in 2002.
\n\n
Abstract
\nWhile there is a vast amount of text written about nearly any topic\, this is often difficult for someone unfamiliar with a specific field to understand. Automated text simplification aims to reduce the complexity of a document\, making it more comprehensible to a broader audience. Much of the research in this field has traditionally focused on simplification sub-tasks\, such as lexical\, syntactic\, or sentence-level simplification. However\, current systems struggle to consistently produce high-quality simplifications. Phrase-based models tend to make too many poor transformations\; on the other hand\, recent neural models\, while producing grammatical output\, often do not make all needed changes to the original text. In this thesis\, I discuss novel approaches for improving lexical and sentence-level simplification systems. Regarding sentence simplification models\, after noting that encouraging diversity at inference time leads to significant improvements\, I take a closer look at the idea of diversity and perform an exhaustive comparison of diverse decoding techniques on other generation tasks. I also discuss the limitations in the framing of current simplification tasks\, which prevent these models from yet being practically useful. Thus\, I also propose a retrieval-based reformulation of the problem. Specifically\, starting with a document\, I identify concepts critical to understanding its content\, and then retrieve documents relevant for each concept\, re-ranking them based on the desired complexity level.
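\nFor readers unfamiliar with diverse decoding\, one standard ingredient is the Hamming diversity penalty from diverse beam search\, which down-weights tokens that earlier beam groups have already emitted at the same time step. The snippet below is a minimal sketch of that penalty only\, not the specific systems compared in the thesis.

# Minimal sketch of a group-wise diversity penalty at one decoding step (illustrative).
import torch

def apply_diversity_penalty(logprobs, tokens_from_prior_groups, strength=0.5):
    """logprobs: (vocab_size,) scores for the current beam group at this step.
    tokens_from_prior_groups: token ids already chosen by earlier groups."""
    penalized = logprobs.clone()
    for tok in tokens_from_prior_groups:
        penalized[tok] -= strength        # discourage repeating other groups' choices
    return penalized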
\nBiography
\nI’m a research scientist at the HLTCOE at Johns Hopkins University. My primary research interests are in language generation\, diverse and constrained decoding\, and information retrieval. During my PhD I focused mainly on the task of text simplification\, and now am working on formulating structured prediction problems as end-to-end generation tasks. I received my PhD in July 2021 from the University of Pennsylvania with Chris Callison-Burch and Marianna Apidianaki.
\nDTSTART;TZID=America/New_York:20211022T120000 DTEND;TZID=America/New_York:20211022T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Reno Kriz (HLTCOE – JHU) “Towards a Practically Useful Text Simplification System” URL:https://www.clsp.jhu.edu/events/reno-kriz-hltcoe-jhu-towards-a-practically-useful-text-simplification-system/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,Kriz\,October END:VEVENT BEGIN:VEVENT UID:ai1ec-21023@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nSpeech data is notoriously difficult to work with due to a variety of codecs\, lengths of recordings\, and meta-data formats. We present Lhotse\, a speech data representation library that draws upon lessons learned from Kaldi speech recognition toolkit and brings its concepts into the modern deep learning ecosystem. Lhotse provides a common JSON description format with corresponding Python classes and data preparation recipes for over 30 popular speech corpora. Various datasets can be easily combined together and re-purposed for different tasks. The library handles multi-channel recordings\, long recordings\, local and cloud storage\, lazy and on-the-fly operations amongst other features. We introduce Cut and CutSet concepts\, which simplify common data wrangling tasks for audio and help incorporate acoustic context of speech utterances. Finally\, we show how Lhotse leverages PyTorch data API abstractions and adopts them to handle speech data for deep learning.
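\nFor orientation\, a typical workflow with the concepts mentioned above (manifests\, Cuts\, CutSets) might look roughly like the sketch below. It is based on the publicly documented Lhotse API\, but exact names\, paths\, and signatures vary across Lhotse versions\, so treat every call here as an assumption rather than a definitive reference.

# Rough, assumption-level sketch of a Lhotse-style workflow (API details may differ).
from lhotse import CutSet, RecordingSet, SupervisionSet

# Manifests: JSON(L) descriptions of audio and its transcripts (paths are placeholders).
recordings = RecordingSet.from_file("recordings.jsonl.gz")
supervisions = SupervisionSet.from_file("supervisions.jsonl.gz")

# Cuts pair a region of a recording with the supervisions it contains.
cuts = CutSet.from_manifests(recordings=recordings, supervisions=supervisions)
cuts = cuts.trim_to_supervisions()   # one cut per utterance

# CutSets can be combined across corpora and handed to the PyTorch-style
# dataset/sampler classes in lhotse.dataset.
for cut in cuts:
    audio = cut.load_audio()         # decoded on demand as a numpy array
    break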
\nBiography
\nPiotr Zelasko is an assistant research scientist in the Center for Language and Speech Processing (CLSP) who specializes in automatic speech recognition (ASR) and spoken language understanding (SLU). His current research focuses on applying multilingual and crosslingual speech recognition systems to categorize the phonetic inventory of a previously unknown language and on improving defenses against adversarial attacks on both speaker identification and automatic speech recognition systems. He is also addressing the question of how to structure a spontaneous conversation into high-level semantic units such as dialog acts or topics. Finally\, he is working on Lhotse + K2\, the next-generation speech processing research software ecosystem. Before joining Johns Hopkins\, Zelasko worked as a machine learning consultant for Avaya (2017-2019)\, and as a machine learning engineer for Techmo (2015-2017). Zelasko received his PhD (2019) in electronics engineering\, as well as his master’s (2014) and undergraduate degrees (2013) in acoustic engineering from AGH University of Science and Technology in Kraków\, Poland.
DTSTART;TZID=America/New_York:20211029T120000 DTEND;TZID=America/New_York:20211029T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore MD 21218 SEQUENCE:0 SUMMARY:Piotr Zelasko (CLSP at JHU) “Lhotse: a speech data representation library for the modern deep learning ecosystem” URL:https://www.clsp.jhu.edu/events/piotr-zelasko-clsp-at-jhu-lhotse-a-speech-data-representation-library-for-the-modern-deep-learning-ecosystem/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,October\,Zelasko END:VEVENT BEGIN:VEVENT UID:ai1ec-21031@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nMost people take for granted that when they speak\, they will be heard and understood. But for the millions who live with speech impairments caused by physical or neurological conditions\, trying to communicate with others can be difficult and lead to frustration. While there have been a great number of recent advances in Automatic Speech Recognition (ASR) technologies\, these interfaces can be inaccessible for those with speech impairments.
\nIn this talk\, we will present Parrotron\, an end-to-end-trained speech-to-speech conversion model that maps an input spectrogram directly to another spectrogram\, without utilizing any intermediate discrete representation. The system is also trained to emit words in addition to a spectrogram\, in parallel. We demonstrate that this model can be trained to normalize speech from any speaker regardless of accent\, prosody\, and background noise\, into the voice of a single canonical target speaker with a fixed accent and consistent articulation and prosody. We further show that this normalization model can be adapted to normalize highly atypical speech from speakers with a variety of speech impairments (due to ALS\, Cerebral-Palsy\, Deafness\, Stroke\, Brain Injury\, etc.)\, resulting in significant improvements in intelligibility and naturalness\, measured via a speech recognizer and listening tests. Finally\, demonstrating the utility of this model on other speech tasks\, we show that the same model architecture can be trained to perform a speech separation task.
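\nThe two parallel objectives mentioned above (spectrogram reconstruction plus word emission) can be combined as a simple weighted multi-task loss. The sketch below is a generic illustration with hypothetical names and weights\, not Parrotron’s actual training code.

# Illustrative multi-task loss: spectrogram regression + auxiliary ASR decoding.
import torch.nn.functional as F

def speech_conversion_loss(pred_spectrogram, target_spectrogram,
                           asr_logits, target_token_ids, asr_weight=0.1):
    spec_loss = F.l1_loss(pred_spectrogram, target_spectrogram)      # spectrogram term
    asr_loss = F.cross_entropy(                                      # word-emission term
        asr_logits.reshape(-1, asr_logits.size(-1)),
        target_token_ids.reshape(-1),
    )
    return spec_loss + asr_weight * asr_loss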
Dimitri will give a brief description of some key moments in the development of speech recognition algorithms that he was involved in\, and their applications to YouTube closed captions\, Live Transcribe\, and wearable subtitles.
\nFadi will then speak about the development of Parrotron.
\nBiographies
\nDimitri Kanevsky is currently at Google\, where he works on speech recognition algorithms. Prior to joining Google\, Dimitri was a Research Staff Member in the Speech Algorithms Department at IBM. Prior to IBM\, he worked at a number of centers for higher mathematics\, including the Max Planck Institute in Germany and the Institute for Advanced Study in Princeton. He currently holds 295 US patents and was a Master Inventor at IBM. MIT Technology Review recognized Dimitri’s conversational-biometrics-based security patent as one of the five most influential patents for 2003. In 2012 Dimitri was honored at the White House as a Champion of Change for his efforts to advance access to science\, technology\, engineering\, and math.
\nFadi Biadsy has been a senior staff research scientist at Google NY for the past ten years. He has been exploring and leading multiple projects at Google\, including speech recognition\, speech conversion\, language modeling\, and semantic understanding. He received his PhD from Columbia University in 2011. At Columbia\, he researched a variety of speech and language processing projects including dialect and accent recognition\, speech recognition\, charismatic speech\, and question answering. He holds a BSc and MSc in mathematics and computer science. He worked on handwriting recognition during his master’s degree\, and he worked as a senior software developer for five years at Dalet digital media systems building multimedia broadcasting systems.
DTSTART;TZID=America/New_York:20211105T120000 DTEND;TZID=America/New_York:20211105T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Fadi Biadsy and Dimitri Kanevsky (Google) “Speech Recognition: From Speaker Dependent to Speaker Independent to Full Personalization” “Parrotron: A Unified E2E Speech-to-Speech Conversion and ASR Model for Atypical Speech” URL:https://www.clsp.jhu.edu/events/fadi-biadsy-and-dimitri-kanevsky-google/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,Biadsy and Kanevsky\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-21041@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nNarration is a universal human practice that serves as a key site of education\, collective memory\, fostering social belief systems\, and furthering human creativity. Recent studies in economics (Shiller\, 2020)\, climate science (Bushell et al.\, 2017)\, political polarization (Kubin et al.\, 2021)\, and mental health (Adler et al.\, 2016) suggest an emerging interdisciplinary consensus that narrative is a central concept for understanding human behavior and beliefs. For close to half a century\, the field of narratology has developed a rich set of theoretical frameworks for understanding narrative. And yet these theories have largely gone untested on large\, heterogeneous collections of texts. Scholars continue to generate schemas by extrapolating from small numbers of manually observed documents. In this talk\, I will discuss how we can use machine learning to develop data-driven theories of narration to better understand what Labov and Waletzky called “the simplest and most fundamental narrative structures.” How can machine learning help us approach what we might call a minimal theory of narrativity?
\nBiography
\nAndrew Piper is Professor and William Dawson Scholar in the Department of Languages\, Literatures\, and Cultures at McGill University. He is the director of .txtlab\, a laboratory for cultural analytics\, and editor of the /Journal of Cultural Analytics/\, an open-access journal dedicated to the computational study of culture. He is the author of numerous books and articles on the relationship of technology and reading\, including /Book Was There: Reading in Electronic Times/ (Chicago 2012)\, /Enumerations: Data and Literary Study/ (Chicago 2018)\, and most recently\, /Can We Be Wrong? The Problem of Textual Evidence in a Time of Data/ (Cambridge 2020).
DTSTART;TZID=America/New_York:20211112T120000 DTEND;TZID=America/New_York:20211112T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Andrew Piper (McGill University) “How can we use machine learning to understand narration?” URL:https://www.clsp.jhu.edu/events/andrew-piper-mcgill-university-how-can-we-use-machine-learning-to-understand-narration/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,November\,Piper END:VEVENT BEGIN:VEVENT UID:ai1ec-21057@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nThis talk will outline the major challenges in porting mainstream speech technology to the domain of clinical applications\; in particular\, the need for personalised systems\, the challenge of working in an inherently sparse data domain\, and developing meaningful collaborations with all stakeholders. The talk will give an overview of recent state-of-the-art research from current projects\, including the areas of recognition of disordered speech\, automatic processing of conversations\, and the automatic detection and tracking of paralinguistic information\, at the University of Sheffield (UK)’s Speech and Hearing (SPandH) & Healthcare lab.
\nBiography
\nHeidi is a Senior Lecturer (associate professor) in Computer Science at the University of Sheffield\, United Kingdom. Her research interests are in the application of AI-based voice technologies to healthcare\; in particular\, the detection and monitoring of people’s physical and mental health\, including verbal and non-verbal traits for expressions of emotion\, anxiety\, depression\, and neurodegenerative conditions in\, e.g.\, therapeutic or diagnostic settings.
DTSTART;TZID=America/New_York:20211119T120000 DTEND;TZID=America/New_York:20211119T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Heidi Christensen (University of Sheffield\, UK) Virtual Seminar “Automated Processing of Pathological Speech: Recent Work and Ongoing Challenges” URL:https://www.clsp.jhu.edu/events/heidi-christensen-university-of-sheffield-uk-virtual-seminar-automated-processing-of-pathological-speech-recent-work-and-ongoing-challenges/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,Christensen\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-21068@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20211203T120000 DTEND;TZID=America/New_York:20211203T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Eric Ringger (Zillow Group) URL:https://www.clsp.jhu.edu/events/eric-ringger-zillow-group/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,December\,Ringger END:VEVENT BEGIN:VEVENT UID:ai1ec-21072@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of texts into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing and accessing knowledge in a dedicated memory component.
\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU) and our other collaborators in Google Research.
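\nAs a generic illustration of what a dedicated memory component can look like\, the sketch below reads from a key-value memory with attention. It is an assumption-level example of the general idea\, not the specific architectures developed in these works.

# Generic key-value memory reader (illustrative; not the talk's exact architecture).
import torch
import torch.nn as nn

class KeyValueMemoryReader(nn.Module):
    def __init__(self, query_dim, mem_dim, num_slots):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, mem_dim))    # memory keys
        self.values = nn.Parameter(torch.randn(num_slots, mem_dim))  # stored knowledge
        self.query_proj = nn.Linear(query_dim, mem_dim)

    def forward(self, hidden_state):              # hidden_state: (batch, query_dim)
        q = self.query_proj(hidden_state)
        scores = q @ self.keys.t() / self.keys.size(-1) ** 0.5
        weights = scores.softmax(dim=-1)          # attention over memory slots
        return weights @ self.values              # retrieved knowledge: (batch, mem_dim)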
\nBiography
\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at the University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics\, and recently weather forecasting and climate modeling. He has a PhD (2007) in Computer and Information Science from the University of Pennsylvania and a B.Sc and M.Sc in Biomedical Engineering from Southeast University (Nanjing\, China).
DTSTART;TZID=America/New_York:20221024T120000 DTEND;TZID=America/New_York:20221024T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Fei Sha (University of Southern California) “Extracting Information from Text into Memory for Knowledge-Intensive Tasks” URL:https://www.clsp.jhu.edu/events/fei-sha-university-of-southern-california/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,October\,Sha END:VEVENT BEGIN:VEVENT UID:ai1ec-22403@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nVoice conversion (VC) is a significant aspect of artificial intelligence. It is the study of how to convert one’s voice to sound like that of another without changing the linguistic content. Voice conversion belongs to a general technical field of speech synthesis\, which converts text to speech or changes the properties of speech\, for example\, voice identity\, emotion\, and accents. Voice conversion involves multiple speech processing techniques\, such as speech analysis\, spectral conversion\, prosody conversion\, speaker characterization\, and vocoding. With the recent advances in theory and practice\, we are now able to produce human-like voice quality with high speaker similarity. In this talk\, Dr. Sisman will present the recent advances in voice conversion and discuss their promise and limitations. Dr. Sisman will also provide a summary of the available resources for expressive voice conversion research.
\nBiography
\nDr. Berrak Sisman (Member\, IEEE) received the Ph.D. degree in electrical and computer engineering from National University of Singapore in 2020\, fully funded by A*STAR Graduate Academy under Singapore International Graduate Award (SINGA). She is currently working as a tenure-track Assistant Professor at the Erik Jonsson School Department of Electrical and Computer Engineering at University of Texas at Dallas\, United States. Prior to joining UT Dallas\, she was a faculty member at Singapore University of Technology and Design (2020-2022). She was a Postdoctoral Research Fellow at the National University of Singapore (2019-2020). She was an exchange doctoral student at the University of Edinburgh and a visiting scholar at The Centre for Speech Technology Research (CSTR)\, University of Edinburgh (2019). She was a visiting researcher at RIKEN Advanced Intelligence Project in Japan (2018). Her research is focused on machine learning\, signal processing\, emotion\, speech synthesis and voice conversion.
\nDr. Sisman has served as Area Chair at INTERSPEECH 2021\, INTERSPEECH 2022\, and IEEE SLT 2022\, and as Publication Chair at ICASSP 2022. She has been elected as a member of the IEEE Speech and Language Processing Technical Committee (SLTC) in the area of Speech Synthesis for the term from January 2022 to December 2024. She plays leadership roles in conference organizations and is active in technical committees. She has served as the General Coordinator of the Student Advisory Committee (SAC) of the International Speech Communication Association (ISCA).
DTSTART;TZID=America/New_York:20221104T120000 DTEND;TZID=America/New_York:20221104T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Berrak Sisman (University of Texas at Dallas) “Speech Synthesis and Voice Conversion: Machine Learning can Mimic Anyone’s Voice” URL:https://www.clsp.jhu.edu/events/berrak-sisman-university-of-texas-at-dallas/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,November\,Sisman END:VEVENT BEGIN:VEVENT UID:ai1ec-22408@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nDriven by the goal of eradicating language barriers on a global scale\, machine translation has solidified itself as a key focus of artificial intelligence research today. However\, such efforts have coalesced around a small subset of languages\, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe\, high-quality results\, all while keeping ethical considerations in mind? In this talk\, I introduce No Language Left Behind\, an initiative to break language barriers for low-resource languages. In No Language Left Behind\, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically\, we evaluated the performance of over 40\,000 different translation directions using a human-translated benchmark\, Flores-200\, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art\, laying important groundwork towards realizing a universal translation system in an open-source manner.
\nBiography
\nAngela is a research scientist at Meta AI Research in New York\, focusing on supporting efforts in speech and language research. Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before translation\, Angela previously focused on research in on-device models for NLP and computer vision and text generation.
\nDTSTART;TZID=America/New_York:20221118T120000 DTEND;TZID=America/New_York:20221118T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Angela Fan (Meta AI Research) “No Language Left Behind: Scaling Human-Centered Machine Translation” URL:https://www.clsp.jhu.edu/events/angela-fan-facebook/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Fan\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-23910@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nMultilingual machine translation has proven immensely useful for both parameter efficiency and overall performance for many language pairs via complete parameter sharing. However\, some language pairs in multilingual models can see worse performance than in bilingual models\, especially in the one-to-many translation setting. Motivated by their empirical differences\, we examine the geometric differences in representations from bilingual models versus those from one-to-many multilingual models. Specifically\, we measure the isotropy of these representations using intrinsic dimensionality and IsoScore\, in order to measure how these representations utilize the dimensions in their underlying vector space. We find that for a given language pair\, its multilingual model decoder representations are consistently less isotropic than comparable bilingual model decoder representations. Additionally\, we show that much of this anisotropy in multilingual decoder representations can be attributed to modeling language-specific information\, therefore limiting remaining representational capacity.
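\nAs a point of reference\, the isotropy measurements mentioned above (intrinsic dimensionality and IsoScore) can be approximated in spirit by a much simpler eigenvalue-entropy proxy over the representation covariance. The sketch below is that simpler proxy only\, not an implementation of IsoScore or of the paper’s exact measures.

# Simple isotropy proxy: normalized entropy of the covariance eigenvalue spectrum.
import numpy as np

def variance_spread(representations):
    """representations: (n_vectors, dim) array of decoder states."""
    centered = representations - representations.mean(axis=0, keepdims=True)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)
    p = eigvals / eigvals.sum()
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
    return entropy / np.log(len(p))   # 1.0 = perfectly isotropic, lower = more anisotropic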
DTSTART;TZID=America/New_York:20231106T120000 DTEND;TZID=America/New_York:20231106T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Neha Verma “Exploring Geometric Representational Disparities Between Multilingual and Bilingual Translation Models” URL:https://www.clsp.jhu.edu/events/student-seminar-neha-verma-exploring-geometric-representational-disparities-between-multilingual-and-bilingual-translation-models/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,November\,Verma END:VEVENT BEGIN:VEVENT UID:ai1ec-24157@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nIn this talk\, I will present a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE\, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio\, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens\, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder\, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically\, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks\, outperforming other recent models that use external supervised pre-training.
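\nThe high-ratio patch masking step described above can be sketched in a few lines: embed the spectrogram patches\, keep a random subset for the encoder\, and remember the indices so the decoder can restore order. Shapes and names below are illustrative\, not the released Audio-MAE code.

# Illustrative random patch masking for a spectrogram MAE (not the official code).
import torch

def mask_patches(patches, mask_ratio=0.8):
    """patches: (batch, num_patches, dim) embedded spectrogram patches."""
    batch, num_patches, dim = patches.shape
    num_keep = int(num_patches * (1 - mask_ratio))
    noise = torch.rand(batch, num_patches)            # random score per patch
    keep_idx = noise.argsort(dim=1)[:, :num_keep]     # lowest-scoring patches are kept
    kept = torch.gather(patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, dim))
    return kept, keep_idx   # encoder sees only `kept`; decoder later restores order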
\nBio\nFlorian Metze is a Research Scientist Manager at Meta AI in New York\, supporting a team of researchers and engineers working on multi-modal (image\, video\, audio\, text) content understanding for Meta’s Family of Apps (Instagram\, Threads\, Facebook\, WhatsApp). He used to be an Associate Research Professor at Carnegie Mellon University\, in the School of Computer Science’s Language Technologies Institute\, where he still is an Adjunct Professor. He is also a co-founder of Abridge\, a company working on extracting information from doctor-patient conversations. His work covers many areas of speech recognition and multi-media analysis with a focus on end-to-end deep learning. Currently\, he focuses on multi-modal processing of videos\, and using that information to recommend unconnected content. In the past\, he has worked on low resource and multi-lingual speech processing\, speech recognition with articulatory features\, large-scale multi-media retrieval and summarization\, information extraction from medical interviews\, and recognition of personality or similar meta-data from speech.
\nFor more information\, please see http://www.cs.cmu.edu/directory/fmetze
\nDTSTART;TZID=America/New_York:20231110T120000 DTEND;TZID=America/New_York:20231110T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Florian Metze (CMU) “Masked Autoencoders that Listen” URL:https://www.clsp.jhu.edu/events/florian-metze-cmu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Metze\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-24159@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20231113T120000 DTEND;TZID=America/New_York:20231113T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Kate Sanders URL:https://www.clsp.jhu.edu/events/student-seminar-kate-sanders/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,November\,Sanders END:VEVENT BEGIN:VEVENT UID:ai1ec-24163@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nThe almost unlimited multimedia content available on video-sharing websites has opened new challenges and opportunities for building robust multimodal solutions. This seminar will describe our novel multimodal architectures that (1) are robust to missing modalities\, (2) can identify noisy or less discriminative features\, and (3) can leverage unlabeled data. First\, we present a strategy that effectively combines auxiliary networks\, a transformer architecture\, and an optimized training mechanism for handling missing features. This problem is relevant since it is expected that during inference the multimodal system will face cases with missing features due to noise or occlusion. We implement this approach for audiovisual emotion recognition\, achieving state-of-the-art performance. Second\, we present a multimodal framework for dealing with scenarios characterized by noisy or less discriminative features. This situation is commonly observed in audiovisual automatic speech recognition (AV-ASR) with clean speech\, where the performance often drops compared to a speech-only solution due to the variability of visual features. The proposed approach is a deep learning solution with a gating layer that diminishes the effect of noisy or uninformative visual features\, keeping only useful information. The approach improves\, or at least maintains\, performance when visual features are used. Third\, we discuss alternative strategies to leverage unlabeled multimodal data. A promising approach is to use multimodal pretext tasks that are carefully designed to learn better representations for predicting a given task\, leveraging the relationship between acoustic and facial features. Another approach is using multimodal ladder networks where intermediate representations are predicted across modalities using lateral connections. These models offer principled solutions to increase the generalization and robustness of common speech-processing tasks when using multimodal architectures.
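\nThe gating idea in the second part can be illustrated with a small fusion module that learns to scale down unreliable visual features before they are combined with the acoustic ones. This is a generic sketch under that assumption\, not the exact model from the talk.

# Generic gated audio-visual fusion layer (illustrative only).
import torch
import torch.nn as nn

class GatedAudioVisualFusion(nn.Module):
    def __init__(self, audio_dim, visual_dim, out_dim):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(audio_dim + visual_dim, visual_dim),
            nn.Sigmoid(),                              # per-dimension gate in [0, 1]
        )
        self.fuse = nn.Linear(audio_dim + visual_dim, out_dim)

    def forward(self, audio_feats, visual_feats):
        gate = self.gate(torch.cat([audio_feats, visual_feats], dim=-1))
        gated_visual = gate * visual_feats             # noisy visual info can be suppressed
        return self.fuse(torch.cat([audio_feats, gated_visual], dim=-1))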
Bio
\nCarlos Busso is a Professor at the University of Texas at Dallas’s Electrical and Computer Engineering Department\, where he is also the director of the Multimodal Signal Processing (MSP) Laboratory. His research interest is in human-centered multimodal machine intelligence and applications\, with a focus on the broad areas of affective computing\, multimodal human-machine interfaces\, in-vehicle active safety systems\, and machine learning methods for multimodal processing. He has worked on audio-visual emotion recognition\, analysis of emotional modulation in gestures and speech\, designing realistic human-like virtual characters\, and detection of driver distractions. He is a recipient of an NSF CAREER Award. In 2014\, he received the ICMI Ten-Year Technical Impact Award. In 2015\, his student received the third prize IEEE ITSS Best Dissertation Award (N. Li). He also received the Hewlett Packard Best Paper Award at the IEEE ICME 2011 (with J. Jain)\, and the Best Paper Award at the AAAC ACII 2017 (with Yannakakis and Cowie). He received the Best of IEEE Transactions on Affective Computing Paper Collection in 2021 (with R. Lotfian) and the Best Paper Award from IEEE Transactions on Affective Computing in 2022 (with Yannakakis and Cowie). He received the ACM ICMI Community Service Award in 2023. In 2023\, he received the Distinguished Alumni Award in the Mid-Career/Academia category by the Signal and Image Processing Institute (SIPI) at the University of Southern California. He is currently serving as an associate editor of the IEEE Transactions on Affective Computing. He is an IEEE Fellow. He is a member of ISCA and AAAC\, and a senior member of ACM.
DTSTART;TZID=America/New_York:20231117T120000 DTEND;TZID=America/New_York:20231117T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Carlos Busso (University of Texas at Dallas) “Multimodal Machine Learning for Human-Centric Tasks” URL:https://www.clsp.jhu.edu/events/carl-busso-university-of-texas-at-dallas-multimodal-machine-learning-for-human-centric-tasks/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Busso\,November END:VEVENT BEGIN:VEVENT UID:ai1ec-24165@www.clsp.jhu.edu DTSTAMP:20240330T034442Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20231127T120000 DTEND;TZID=America/New_York:20231127T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Aleem Khan URL:https://www.clsp.jhu.edu/events/student-seminar-aleem-khan/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Khan\,November END:VEVENT END:VCALENDAR