BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-22403@www.clsp.jhu.edu DTSTAMP:20240328T100713Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nVoice conversion (VC) is a significant aspect of artificial intelligence. It is the study of how to convert one’s voice to sound like that of another without changing the linguistic content. Voice conversion belongs to a general technical field of speech synthesis\, which converts text to speech or changes the properties of speech\, for example\, voice identity\, emotion\, and accents. Voice conversion involves multiple speech processing techniques\, such as speech analysis\, spectral conversion\, prosody conversion\, speaker characterization\, and vocoding. With the recent advances in theory and practice\, we are now able to produce human-like voice quality with high speaker similarity. In this talk\, Dr. Sisman will present the recent advances in voice conversion and discuss their promise and limitations. Dr. Sisman will also provide a summary of the available resources for expressive voice conversion research.
\nBiography
\nDr. Berrak Sisman (Member\, IEEE) received the Ph.D. degree in electrical and computer engineering from National University of Singapore in 2020\, fully funded by A*STAR Graduate Academy under the Singapore International Graduate Award (SINGA). She is currently working as a tenure-track Assistant Professor in the Erik Jonsson School Department of Electrical and Computer Engineering at the University of Texas at Dallas\, United States. Prior to joining UT Dallas\, she was a faculty member at Singapore University of Technology and Design (2020-2022). She was a Postdoctoral Research Fellow at the National University of Singapore (2019-2020). She was an exchange doctoral student at the University of Edinburgh and a visiting scholar at The Centre for Speech Technology Research (CSTR)\, University of Edinburgh (2019). She was a visiting researcher at the RIKEN Advanced Intelligence Project in Japan (2018). Her research is focused on machine learning\, signal processing\, emotion\, speech synthesis\, and voice conversion.
\nDr. Sisman has served as an Area Chair at INTERSPEECH 2021\, INTERSPEECH 2022\, and IEEE SLT 2022\, and as the Publication Chair at ICASSP 2022. She has been elected as a member of the IEEE Speech and Language Processing Technical Committee (SLTC) in the area of Speech Synthesis for the term from January 2022 to December 2024. She plays leadership roles in conference organization and is active in technical committees. She has served as the General Coordinator of the Student Advisory Committee (SAC) of the International Speech Communication Association (ISCA).
DTSTART;TZID=America/New_York:20221104T120000 DTEND;TZID=America/New_York:20221104T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Berrak Sisman (University of Texas at Dallas) “Speech Synthesis and Voice Conversion: Machine Learning can Mimic Anyone’s Voice” URL:https://www.clsp.jhu.edu/events/berrak-sisman-university-of-texas-at-dallas/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,November\,Sisman END:VEVENT BEGIN:VEVENT UID:ai1ec-22412@www.clsp.jhu.edu DTSTAMP:20240328T100713Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nDriven by the goal of eradicating language barriers on a global scale\, machine translation has solidified itself as a key focus of artificial intelligence research today. However\, such efforts have coalesced around a small subset of languages\, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe\, high-quality results\, all while keeping ethical considerations in mind? In this talk\, I introduce No Language Left Behind\, an initiative to break language barriers for low-resource languages. In No Language Left Behind\, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically\, we evaluated the performance of over 40\,000 different translation directions using a human-translated benchmark\, Flores-200\, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art\, laying important groundwork towards realizing a universal translation system in an open-source manner.
\nBiography
\nAngela is a research scientist at Meta AI Research in New York\, focusing on supporting efforts in speech and language research. Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before working on translation\, Angela focused on research in on-device models for NLP and computer vision\, and on text generation.
\nDTSTART;TZID=America/New_York:20221118T120000 DTEND;TZID=America/New_York:20221118T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Angela Fan (Meta AI Research) “No Language Left Behind: Scaling Human-Centered Machine Translation” URL:https://www.clsp.jhu.edu/events/angela-fan-facebook/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Fan\,November END:VEVENT END:VCALENDAR