BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-21068@www.clsp.jhu.edu DTSTAMP:20240328T232308Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20211203T120000 DTEND;TZID=America/New_York:20211203T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Eric Ringger (Zillow Group) URL:https://www.clsp.jhu.edu/events/eric-ringger-zillow-group/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,December\,Ringger END:VEVENT BEGIN:VEVENT UID:ai1ec-21072@www.clsp.jhu.edu DTSTAMP:20240328T232308Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nEmotion has intrigued researchers for generations. This fascination has permeated the engineering community\, motivating the development of affective computing methods. However\, human emotion remains notoriously difficult to accurately detect. As a result\, emotion classification techniques are not always effective when deployed. This is a problem because we are missing out on the potential that emotion recognition provides: the opportunity to automatically measure an aspect of behavior that provides critical insight into our health and wellbeing\, insight that is not always easily accessible.
In this talk\, I will discuss our efforts in developing emotion recognition approaches that are effective in natural environments and demonstrate how these approaches can be used to support mental health.\n\nBiography\n\nEmily Mower Provost is an Associate Professor in Computer Science and Engineering and Toyota Faculty Scholar at the University of Michigan. She received her Ph.D. in Electrical Engineering from the University of Southern California (USC)\, Los Angeles\, CA in 2010. She has been awarded a National Science Foundation CAREER Award (2017)\, the Oscar Stern Award for Depression Research (2015)\, and a National Science Foundation Graduate Research Fellowship (2004-2007). She is a co-author on the paper\, “Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition\,” winner of Best Student Paper at ACM Multimedia\, 2014\, and a co-author of the winner of the Classifier Sub-Challenge event at the Interspeech 2009 emotion challenge. Her research interests are in human-centered speech and video processing\, multimodal interface design\, and speech-based assistive technology. The goals of her research are motivated by the complexities of the perception and expression of human behavior. DTSTART;TZID=America/New_York:20211206T120000 DTEND;TZID=America/New_York:20211206T131500 LOCATION:Maryland Hall 110 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Emily Mower-Provost (University of Michigan) “Automatically Measuring Emotion from Speech: New Methods to Move from the Lab to the Real World” URL:https://www.clsp.jhu.edu/events/emily-mower-provost-university-of-michigan/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nAbstract
\n\n\n\n\nAutomatic discovery of phone or word-like units is one of the core objectives in zero-resource speech processing. Recent attempts employ contrastive predictive coding (CPC)\, where the model learns representations by predicting the next frame given past context. However\, CPC only looks at the audio signal’s structure at the frame level. The speech structure exists beyond the frame level\, i.e.\, at the phone level or even higher. We propose a segmental contrastive predictive coding (SCPC) framework to learn from the signal structure at both the frame and phone levels.\n\n\nSCPC is a hierarchical model with three stages trained in an end-to-end manner. In the first stage\, the model predicts future feature frames and extracts frame-level representation from the raw waveform. In the second stage\, a differentiable boundary detector finds variable-length segments. In the last stage\, the model predicts future segments to learn segment representations. Experiments show that our model outperforms existing phone and word segmentation methods on the TIMIT and Buckeye datasets.
Abstract
\nAs AI-driven language interfaces (such as chat-bots) become more integrated into our lives\, they need to become more versatile and reliable in their communication with human users. How can we make progress toward building more “general” models that are capable of understanding a broader spectrum of language commands\, given practical constraints such as the limited availability of labeled data?
\nIn this talk\, I will describe my research toward addressing this question along two dimensions of generality. First\, I will discuss progress in “breadth” — models that address a wider variety of tasks and abilities\, drawing inspiration from existing statistical learning techniques such as multi-task learning. In particular\, I will showcase a system that works well on several QA benchmarks\, resulting in state-of-the-art results on 10 benchmarks. Furthermore\, I will show its extension to tasks beyond QA (such as text generation or classification) that can be “defined” via natural language. In the second part\, I will focus on progress in “depth” — models that can handle complex inputs such as compositional questions. I will introduce Text Modular Networks\, a general framework that casts problem-solving as natural language communication among simpler “modules.” Applying this framework to compositional questions by leveraging discrete optimization and existing non-compositional closed-box QA models results in a model with strong empirical performance on multiple complex QA benchmarks while providing human-readable reasoning.
\nI will conclude with future research directions toward broader NLP systems by addressing the limitations of the presented ideas and other missing elements needed to move toward more general-purpose interactive language understanding systems.
\nBiography
\nDaniel Khashabi is a postdoctoral researcher at the Allen Institute for Artificial Intelligence (AI2)\, Seattle. Previously\, he completed his Ph.D. in Computer and Information Sciences at the University of Pennsylvania in 2019. His interests lie at the intersection of artificial intelligence and natural language processing\, with a vision toward more general systems through unified algorithms and theories.
\n X-TAGS;LANGUAGE=en-US:2022\,February\,Khashabi END:VEVENT BEGIN:VEVENT UID:ai1ec-22417@www.clsp.jhu.edu DTSTAMP:20240328T232308Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once after the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.\nBiography\nMinje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University\, where he leads his research group\, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic\, consulting for Amazon Lab126.
At IU\, he is affiliated with various programs and labs such as Data Science\, Cognitive Science\, the Dept. of Statistics\, and the Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC\, he worked as a researcher at ETRI\, a national lab in Korea\, from 2006 to 2011. Before then\, he received his Master’s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honors) in 2006 and 2004\, respectively. He is a recipient of various awards including the NSF CAREER Award (2021)\, the IU Trustees Teaching Award (2021)\, the IEEE SPS Best Paper Award (2020)\, and Google and Starkey’s grants for outstanding student papers in ICASSP 2013 and 2014\, respectively. He is an IEEE Senior Member and also a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He is serving as an Associate Editor for the EURASIP Journal on Audio\, Speech\, and Music Processing\, and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer\, program committee member\, or area chair for the major machine learning and signal processing venues. He has filed more than 50 patent applications as an inventor. DTSTART;TZID=America/New_York:20221202T120000 DTEND;TZID=America/New_York:20221202T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning” URL:https://www.clsp.jhu.edu/events/minje-kim-indiana-university/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once after the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.
\nBiography
\nAbstract
\nZipf’s law is commonly glossed by the aphorism “infrequent words are frequent\,” but in practice\, it has often meant that there are three types of words: frequent\, infrequent\, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words\, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language\, for example\, is 1.44 billion over N to the 1.09. In languages with sufficient data\, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages\, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data\, unsupervised ASR methods can be proven to converge\, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability\, for example\, can cause speech patterns that were never seen in the training database\, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data\, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
\nBiography
\nMark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign. He has published research in speech production and perception\, source separation\, voice conversion\, and low-resource automatic speech recognition.
\n X-TAGS;LANGUAGE=en-US:2022\,December\,Hasegawa-Johnson END:VEVENT BEGIN:VEVENT UID:ai1ec-23886@www.clsp.jhu.edu DTSTAMP:20240328T232308Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nThe arms race to build increasingly large\, powerful language models (LMs) in the past year has been remarkable. Yet incorporating LMs effectively into practical applications that facilitate manual workflows remains challenging. I will discuss LMs’ limiting factors and our efforts to overcome them. I will start with challenges surrounding efficient and robust LM alignment. I will share insights from our recent paper “Self-Instruct” (ACL 2023)\, where we used vanilla (unaligned) LMs to align themselves\, an approach that has yielded some success. Then\, I will move on to the challenge of tracing the output of LMs to reliable sources\, a weakness that makes them prone to hallucinations. I will discuss our recent approach of ‘according-to’ prompting\, which steers LMs to quote directly from sources observed in their pre-training. If time permits\, I will discuss our ongoing project to adapt LMs to interact with web pages. Throughout the presentation\, I will highlight our progress and end with questions about our future progress.\nBiography\nDaniel Khashabi is an assistant professor in computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is interested in building reasoning-driven modular NLP systems that are robust\, transparent\, and communicative\, particularly those that use natural language as the communication medium. Khashabi has published over 40 papers on natural language processing and AI in top-tier venues. His research has won the ACL 2023 Outstanding Paper Award\, the NAACL 2022 Best Paper Award\, research gifts from the Allen Institute for AI\, and an Amazon Research Award (2023).
Before joining Hopkins\, he was a postdoctoral fellow at the Allen Institute for AI (2019-2022) and obtained a Ph.D. from the University of Pennsylvania in 2019. DTSTART;TZID=America/New_York:20230908T120000 DTEND;TZID=America/New_York:20230908T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Daniel Khashabi (Johns Hopkins University) “Building More Helpful Language Models” URL:https://www.clsp.jhu.edu/events/daniel-khashabi-johns-hopkins-university/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nThe arms race to build increasingly large\, powerful language models (LMs) in the past year has been remarkable. Yet incorporating LMs effectively into practical applications that facilitate manual workflows remains challenging. I will discuss LMs’ limiting factors and our efforts to overcome them. I will start with challenges surrounding efficient and robust LM alignment. I will share insights from our recent paper “Self-Instruct” (ACL 2023)\, where we used vanilla (unaligned) LMs to align themselves\, an approach that has yielded some success. Then\, I will move on to the challenge of tracing the output of LMs to reliable sources\, a weakness that makes them prone to hallucinations. I will discuss our recent approach of ‘according-to’ prompting\, which steers LMs to quote directly from sources observed in their pre-training. If time permits\, I will discuss our ongoing project to adapt LMs to interact with web pages. Throughout the presentation\, I will highlight our progress and end with questions about our future progress.
\nBiography
\nDaniel Khashabi is an assistant professor in computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is interested in building reasoning-driven modular NLP systems that are robust\, transparent\, and communicative\, particularly those that use natural language as the communication medium. Khashabi has published over 40 papers on natural language processing and AI in top-tier venues. His research has won the ACL 2023 Outstanding Paper Award\, the NAACL 2022 Best Paper Award\, research gifts from the Allen Institute for AI\, and an Amazon Research Award (2023). Before joining Hopkins\, he was a postdoctoral fellow at the Allen Institute for AI (2019-2022) and obtained a Ph.D. from the University of Pennsylvania in 2019.
\n X-TAGS;LANGUAGE=en-US:2023\,Khashabi\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-24167@www.clsp.jhu.edu DTSTAMP:20240328T232308Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nPre-trained speech representation models have become ubiquitous in speech processing over the past few years. They have both improved the state of the art and made it feasible to learn task-specific models with very little labeled data. However\, it is not well understood what linguistic information is encoded in pre-trained models and how best to apply them to downstream tasks. In this talk I will describe recent work that begins to build an understanding of the layer-wise information learned by pre-trained speech models. We consider a number of popular pre-trained models and investigate the extent to which their layers encode spectral\, phonetic\, and word-level information. The results of these analyses also suggest some ways to improve or simplify the application of pre-trained models for downstream tasks. Finally\, I will describe our efforts to benchmark model performance on a variety of spoken language understanding tasks\, in order to broaden our understanding of the capabilities of state-of-the-art models.\nThis talk is based in part on work presented in\nA. Pasad et al.\, “Comparative layer-wise analysis of self-supervised speech models\,” ICASSP 2023.\nA. Pasad et al.\, “What do self-supervised speech models know about words?\,” arXiv:2307.00162\, 2023.\nS. Shon et al.\, “SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks\,” ACL 2023.\nBio\nKaren Livescu is a Professor at TTI-Chicago. She completed her PhD at MIT in 2005. She is an ISCA Fellow and a recent IEEE Distinguished Lecturer. She has served as a program chair/co-chair for ICLR\, Interspeech\, and ASRU\, and is an Associate Editor for TACL and IEEE T-PAMI. Her group’s work spans a variety of topics in spoken\, written\, and signed language processing.
DTSTART;TZID=America/New_York:20231201T120000 DTEND;TZID=America/New_York:20231201T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Karen Livescu (Toyota Technological Institute at Chicago) “What Do Pre-Trained Speech Representation Models Know? Layer-Wise Analysis and Benchmarking” URL:https://www.clsp.jhu.edu/events/karen-livescu-toyota-technological-institute-at-chicago/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nPre-trained speech representation models have become ubiquitous in speech processing over the past few years. They have both improved the state of the art and made it feasible to learn task-specific models with very little labeled data. However\, it is not well understood what linguistic information is encoded in pre-trained models and how best to apply them to downstream tasks. In this talk I will describe recent work that begins to build an understanding of the layer-wise information learned by pre-trained speech models. We consider a number of popular pre-trained models and investigate the extent to which their layers encode spectral\, phonetic\, and word-level information. The results of these analyses also suggest some ways to improve or simplify the application of pre-trained models for downstream tasks. Finally\, I will describe our efforts to benchmark model performance on a variety of spoken language understanding tasks\, in order to broaden our understanding of the capabilities of state-of-the-art models.
\nThis talk is based in part on work presented in
\nA. Pasad et al.\, “Comparative layer-wise analysis of self-supervised speech models\,” ICASSP 2023.
\nA. Pasad et al.\, “What do self-supervised speech models know about words?\,” arXiv:2307.00162\, 2023.
\nS. Shon et al.\, “SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks\,” ACL 2023.
\nBio
\nKaren Livescu is a Professor at TTI-Chicago. She completed her PhD at MIT in 2005. She is an ISCA Fellow and a recent IEEE Distinguished Lecturer. She has served as a program chair/co-chair for ICLR\, Interspeech\, and ASRU\, and is an Associate Editor for TACL and IEEE T-PAMI. Her group’s work spans a variety of topics in spoken\, written\, and signed language processing.
\n X-TAGS;LANGUAGE=en-US:2023\,December\,Livescu END:VEVENT BEGIN:VEVENT UID:ai1ec-24169@www.clsp.jhu.edu DTSTAMP:20240328T232308Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nFoundation models\, including ChatGPT and its many variants\, have come into prominence in the natural language processing (NLP) community thanks to the ubiquity of text data readily available on the internet and the design of modern transformer architectures that can effectively learn from such data. However\, the development of a foundation model for sequential decision-making (e.g.\, reinforcement learning\, planning) is faced with additional challenges not present in NLP. In this talk\, we discuss some of these challenges with the hope of informing future investments that funding agencies and the academic community should engage in. The problem of transfer learning in the context of sequential decision-making is also discussed and constitutes one of the challenges that foundation models must address.\nBio\nAlvaro Velasquez is a program manager at the Defense Advanced Research Projects Agency (DARPA)\, where he currently leads programs on neuro-symbolic AI. Before that\, Alvaro oversaw the machine intelligence portfolio for the Information Directorate of the Air Force Research Laboratory (AFRL). Alvaro is a recipient of the distinguished paper award from AAAI\, best paper and patent awards from AFRL\, and the National Science Foundation Graduate Research Fellowship. He has authored over 70 papers and two patents and serves as Associate Editor of the IEEE Transactions on Artificial Intelligence. DTSTART;TZID=America/New_York:20231204T120000 DTEND;TZID=America/New_York:20231204T131500 LOCATION:Hackerman Hall B17 @ 3400 N.
Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Alvaro Velasquez (DARPA) “Foundation Models and the Transfer of Embodied Autonomy” URL:https://www.clsp.jhu.edu/events/alvaro-velasquez/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nFoundation models\, including ChatGPT and its many variants\, have come into prominence in the natural language processing (NLP) community thanks to the ubiquity of text data readily available on the internet and the design of modern transformer architectures that can effectively learn from such data. However\, the development of a foundation model for sequential decision-making (e.g.\, reinforcement learning\, planning) is faced with additional challenges not present in NLP. In this talk\, we discuss some of these challenges with the hope of informing future investments that funding agencies and the academic community should engage in. The problem of transfer learning in the context of sequential decision-making is also discussed and constitutes one of the challenges that foundation models must address.
\nBio
\nAlvaro Velasquez is a program manager at the Defense Advanced Research Projects Agency (DARPA)\, where he currently leads programs on neuro-symbolic AI. Before that\, Alvaro oversaw the machine intelligence portfolio for the Information Directorate of the Air Force Research Laboratory (AFRL). Alvaro is a recipient of the distinguished paper award from AAAI\, best paper and patent awards from AFRL\, and the National Science Foundation Graduate Research Fellowship. He has authored over 70 papers and two patents and serves as Associate Editor of the IEEE Transactions on Artificial Intelligence.
\n X-TAGS;LANGUAGE=en-US:2023\,December\,Velasquez END:VEVENT END:VCALENDAR