BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-22412@www.clsp.jhu.edu
DTSTAMP:20240328T184412Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nDriven by the goal of eradicating language barriers on a global scale\, machine translation has solidified itself as a key focus of artificial intelligence research today. However\, such efforts have coalesced around a small subset of languages\, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe\, high-quality results\, all while keeping ethical considerations in mind? In this talk\, I introduce No Language Left Behind\, an initiative to break language barriers for low-resource languages. In No Language Left Behind\, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically\, we evaluated the performance of over 40\,000 different translation directions using a human-translated benchmark\, Flores-200\, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art\, laying important groundwork towards realizing a universal translation system in an open-source manner.\nBiography\nAngela is a research scientist at Meta AI Research in New York\, focusing on supporting efforts in speech and language research. Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before working on translation\, Angela focused on research in on-device models for NLP and computer vision\, as well as text generation.
DTSTART;TZID=America/New_York:20221118T120000
DTEND;TZID=America/New_York:20221118T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Angela Fan (Meta AI Research) “No Language Left Behind: Scaling Human-Centered Machine Translation”
URL:https://www.clsp.jhu.edu/events/angela-fan-facebook/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nDriven by the goal of eradicating language barriers on a global scale\, machine translation has solidified itself as a key focus of artificial intelligence research today. However\, such efforts have coalesced around a small subset of languages\, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe\, high-quality results\, all while keeping ethical considerations in mind? In this talk\, I introduce No Language Left Behind\, an initiative to break language barriers for low-resource languages. In No Language Left Behind\, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically\, we evaluated the performance of over 40\,000 different translation directions using a human-translated benchmark\, Flores-200\, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art\, laying important groundwork towards realizing a universal translation system in an open-source manner.
\nBiography
\nAngela is a research scientist at Meta AI Research in New York\, focusing on supporting efforts in speech and language research. Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before working on translation\, Angela focused on research in on-device models for NLP and computer vision\, as well as text generation.
\n\n
X-TAGS;LANGUAGE=en-US:2022\,Fan\,November
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22417@www.clsp.jhu.edu
DTSTAMP:20240328T184412Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.\nBiography\nMinje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University\, where he leads his research group\, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic\, consulting for Amazon Lab126. At IU\, he is affiliated with various programs and labs such as Data Science\, Cognitive Science\, the Dept. of Statistics\, and the Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC\, he worked as a researcher at ETRI\, a national lab in Korea\, from 2006 to 2011. Before then\, he received his Master’s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honors) in 2006 and 2004\, respectively. He is a recipient of various awards including the NSF CAREER Award (2021)\, the IU Trustees Teaching Award (2021)\, the IEEE SPS Best Paper Award (2020)\, and Google and Starkey’s grants for outstanding student papers at ICASSP 2013 and 2014\, respectively. He is an IEEE Senior Member and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He serves as an Associate Editor for the EURASIP Journal on Audio\, Speech\, and Music Processing\, and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer\, program committee member\, or area chair for major machine learning and signal processing venues. He has filed more than 50 patent applications as an inventor.
DTSTART;TZID=America/New_York:20221202T120000
DTEND;TZID=America/New_York:20221202T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning”
URL:https://www.clsp.jhu.edu/events/minje-kim-indiana-university/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.
\nBiography
\n