BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-20716@www.clsp.jhu.edu DTSTAMP:20240328T093750Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nOver the last few years\, deep neural models have taken over the field of natural language processing (NLP)\, brandishing great improvements on many of its sequence-level tasks. But the end-to-end nature of these models makes it hard to figure out whether the way they represent individual words aligns with how language builds itself from the bottom up\, or how lexical changes in register and domain can affect the untested aspects of such representations.
\nIn this talk\, I will present NYTWIT\, a dataset created to challenge large language models at the lexical level\, tasking them with identification of processes leading to the formation of novel English words\, as well as with segmentation and recovery of the specific subclass of novel blends. I will then present XRayEmb\, a method which alleviates the hardships of processing these novelties by fitting a character-level encoder to the existing models’ subword tokenizers\; and conclude with a discussion of the drawbacks of current tokenizers’ vocabulary creation schemes.
\nBiography
\nYuval Pinter is a Senior Lecturer in the Department of Computer Science at Ben-Gurion University of the Negev\, focusing on natural language processing. Yuval got his PhD at the Georgia Institute of Technology School of Interactive Computing as a Bloomberg Data Science PhD Fellow. Before that\, he worked as a Research Engineer at Yahoo Labs and as a Computational Linguist at Ginger Software\, and obtained an MA in Linguistics and a BSc in CS and Mathematics\, both from Tel Aviv University. Yuval blogs (in Hebrew) about language matters on Dagesh Kal.
DTSTART;TZID=America/New_York:20210910T120000 DTEND;TZID=America/New_York:20210910T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD SEQUENCE:0 SUMMARY:Yuval Pinter (Ben-Gurion University – Virtual Visit) “Challenging and Adapting NLP Models to Lexical Phenomena” URL:https://www.clsp.jhu.edu/events/yuval-pinter/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,Pinter\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-22417@www.clsp.jhu.edu DTSTAMP:20240328T093750Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled\, as they can be collected only at test time\, once the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards more available and affordable AI for society.
\nBiography
\nMinje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University\, where he leads his research group\, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic\, consulting for Amazon Lab126. At IU\, he is affiliated with various programs and labs such as Data Science\, Cognitive Science\, Dept. of Statistics\, and Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC\, he worked as a researcher at ETRI\, a national lab in Korea\, from 2006 to 2011. Before then\, he received his Master’s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honor) in 2006 and 2004\, respectively. He is a recipient of various awards including the NSF CAREER Award (2021)\, IU Trustees Teaching Award (2021)\, IEEE SPS Best Paper Award (2020)\, and Google and Starkey grants for outstanding student papers in ICASSP 2013 and 2014\, respectively. He is an IEEE Senior Member and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He serves as an Associate Editor for the EURASIP Journal on Audio\, Speech\, and Music Processing\, and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer\, program committee member\, or area chair for major machine learning and signal processing conferences. He has filed more than 50 patent applications as an inventor.
DTSTART;TZID=America/New_York:20221202T120000 DTEND;TZID=America/New_York:20221202T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning” URL:https://www.clsp.jhu.edu/events/minje-kim-indiana-university/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,December\,Kim END:VEVENT END:VCALENDAR