BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-21275@www.clsp.jhu.edu DTSTAMP:20240328T232656Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:
Abstract
\n\n\n\n\nAutomatic discovery of phone or word-like units is one of the core objectives in zero-resource speech processing. Recent attempts employ contrastive predictive coding (CPC)\, where the model learns representations by predicting the next frame given past context. However\, CPC only looks at the audio signal’s structure at the frame level. Speech structure exists beyond the frame level\, i.e.\, at the phone level or even higher. We propose a segmental contrastive predictive coding (SCPC) framework to learn from the signal structure at both the frame and phone levels.\n\n\nSCPC is a hierarchical model with three stages trained in an end-to-end manner. In the first stage\, the model predicts future feature frames and extracts frame-level representations from the raw waveform. In the second stage\, a differentiable boundary detector finds variable-length segments. In the last stage\, the model predicts future segments to learn segment representations. Experiments show that our model outperforms existing phone and word segmentation methods on the TIMIT and Buckeye datasets.
Abstract
\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can also be a more resource-efficient solution than a general-purpose model\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled\, as they can be collected only at test time\, after the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing on a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards more available and affordable AI for society.
\nBiography
\nMinje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University\, where he leads his research group\, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic\, consulting for Amazon Lab126. At IU\, he is affiliated with various programs and labs such as Data Science\, Cognitive Science\, the Dept. of Statistics\, and the Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC\, he worked as a researcher at ETRI\, a national lab in Korea\, from 2006 to 2011. Before then\, he received his Master’s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honors) in 2006 and 2004\, respectively. He is a recipient of various awards including the NSF CAREER Award (2021)\, the IU Trustees Teaching Award (2021)\, the IEEE SPS Best Paper Award (2020)\, and Google and Starkey grants for outstanding student papers at ICASSP 2013 and 2014\, respectively. He is an IEEE Senior Member and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He serves as an Associate Editor for the EURASIP Journal of Audio\, Speech\, and Music Processing\, and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer\, program committee member\, or area chair for the major machine learning and signal processing venues. He has filed more than 50 patent applications as an inventor.
DTSTART;TZID=America/New_York:20221202T120000 DTEND;TZID=America/New_York:20221202T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning” URL:https://www.clsp.jhu.edu/events/minje-kim-indiana-university/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,December\,Kim END:VEVENT BEGIN:VEVENT UID:ai1ec-24507@www.clsp.jhu.edu DTSTAMP:20240328T232656Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them\, or ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine Crisis\, require a time-sensitive comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts when preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness. In this talk I will present SmartBook\, a novel framework for a task that cannot be solved by large language models alone: consuming large volumes of multimodal multilingual news data and producing a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence\, through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection\, and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess. Given “what-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events and automatically generate a complex event graph along with a timeline of news articles that describe new simulated events and character-centric stories\, based on a new Λ-shaped attention mask that can generate text of unbounded length.
By effectively simulating disaster scenarios in both event graph and natural language format\, we expect SmartBook will greatly assist humanitarian workers and policymakers to exercise reality checks\, and thus better prevent and respond to future disasters.
\nBio
\nHeng Ji is a professor in the Computer Science Department\, and an affiliated faculty member in the Electrical and Computer Engineering Department and the Coordinated Science Laboratory\, at the University of Illinois Urbana-Champaign. She is an Amazon Scholar. She is the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing\, especially Multimedia Multilingual Information Extraction\, Knowledge-enhanced Large Language Models\, Knowledge-driven Generation\, and Conversational AI. She was selected as a Young Scientist to attend the 6th World Laureates Association Forum\, and selected to participate in DARPA AI Forward in 2023. She was selected as a “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. The awards she has received include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, the “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, an NSF CAREER award in 2009\, the PACLIC2012 Best Paper runner-up\, the “Best of ICDM2013” paper award\, the “Best of SDM2013” paper award\, an ACL2018 Best Demo Paper nomination\, the ACL2020 Best Demo Paper Award\, the NAACL2021 Best Demo Paper Award\, Google Research Awards in 2009 and 2014\, IBM Watson Faculty Awards in 2012 and 2014\, and Bosch Research Awards in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023.
She is the lead of many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge network construction\, the DARPA ECOLE MIRACLE team\, the DARPA KAIROS RESIN team\, and the DARPA DEFT Tinker Bell team. She coordinated the NIST TAC Knowledge Base Population task 2010-2022. She was an associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as Program Committee Co-Chair of many conferences including NAACL-HLT2018 and AACL-IJCNLP2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney).
DTSTART;TZID=America/New_York:20240405T120000 DTEND;TZID=America/New_York:20240405T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, Maryland 21218 SEQUENCE:0 SUMMARY:Heng Ji (University of Illinois Urbana-Champaign) “SmartBook: an AI Prophetess for Disaster Reporting and Forecasting” URL:https://www.clsp.jhu.edu/events/heng-ji-university-of-illinois-urbana-champaign-smartbook-an-ai-prophetess-for-disaster-reporting-and-forecasting/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Ji END:VEVENT END:VCALENDAR