BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-22417@www.clsp.jhu.edu
DTSTAMP:20240329T155451Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once after the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement.
 Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.\nBiography\nMinje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University\, where he leads his research group\, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic\, consulting for Amazon Lab126. At IU\, he is affiliated with various programs and labs such as Data Science\, Cognitive Science\, the Dept. of Statistics\, and the Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC\, he worked as a researcher at ETRI\, a national lab in Korea\, from 2006 to 2011. Before then\, he received his Master’s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honors) in 2006 and 2004\, respectively. He is a recipient of various awards\, including the NSF CAREER Award (2021)\, the IU Trustees Teaching Award (2021)\, the IEEE SPS Best Paper Award (2020)\, and Google and Starkey grants for outstanding student papers at ICASSP 2013 and 2014\, respectively. He is an IEEE Senior Member and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He serves as an Associate Editor for the EURASIP Journal of Audio\, Speech\, and Music Processing\, and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer\, program committee member\, or area chair for major machine learning and signal processing venues. He has filed more than 50 patent applications as an inventor.
DTSTART;TZID=America/New_York:20221202T120000
DTEND;TZID=America/New_York:20221202T131500
LOCATION:Hackerman Hall B17 @ 3400 N.
 Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning”
URL:https://www.clsp.jhu.edu/events/minje-kim-indiana-university/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:
 \nAbstract
 \nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once after the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.
 \nBiography
 \nAbstract
 \nAdvanced neural language models have grown ever larger and more complex\, pushing forward the limits of language understanding and generation\, while diminishing interpretability. The black-box nature of deep neural networks blocks humans from understanding them\, as well as trusting and using them in real-world applications. This talk will introduce interpretation techniques that bridge the gap between humans and models for developing trustworthy natural language processing (NLP). I will first show how to explain black-box models and evaluate their explanations for understanding their prediction behavior. Then I will introduce how to improve the interpretability of neural language models by making their decision-making transparent and rationalized. Finally\, I will discuss how to diagnose and improve models (e.g.\, robustness) through the lens of explanations. I will conclude with future research directions that are centered around model interpretability and committed to facilitating communications and interactions between intelligent machines\, system developers\, and end users for long-term trustworthy AI.
 Hanjie Chen is a Ph.D. candidate in Computer Science at the University of Virginia\, advised by Prof. Yangfeng Ji. Her research interests lie in Trustworthy AI\, Natural Language Processing (NLP)\, and
 Interpretable Machine Learning. She develops interpretation techniques to explain neural language models and make their prediction behavior transparent and reliable. She is a recipient of the Carlos and Esther Farrar Fellowship and the Best Poster Award at ACM CAPWIC 2021. Her work has been published at top-tier NLP/AI conferences (e.g.\, ACL\, AAAI\, EMNLP\, NAACL) and selected as a National Center for Women & Information Technology (NCWIT) Collegiate Award Finalist in 2021. She (as the primary instructor) co-designed and taught the course Interpretable Machine Learning\, and was awarded the UVA CS Outstanding Graduate Teaching Award and was a University-wide Graduate Teaching Awards Nominee (top 5% of graduate instructors). More details can be found at https://www.cs.virginia.edu/~hc9mx
X-TAGS;LANGUAGE=en-US:2023\,Chen\,February
END:VEVENT
END:VCALENDAR