BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20730@www.clsp.jhu.edu
DTSTAMP:20240328T213903Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRaytheon BBN participated in the IARPA MATERIAL program\, whose objective is to enable rapid development of language-independent methods for cross-lingual information retrieval (CLIR). The challenging CLIR task of retrieving documents written (or spoken) in one language so that they satisfy an information need expressed in a different language is exacerbated by unique challenges posed by the MATERIAL program: limited training data for automatic speech recognition and machine translation\, scant lexical resources\, non-standardized orthography\, etc. Furthermore\, the format of the queries and the “Query-Weighted Value” performance measure are non-standard and not previously studied in the IR community. In this talk\, we will describe the Raytheon BBN CLIR system\, which was successful at addressing the above challenges and unique characteristics of the program.\nBiography\nDamianos Karakos has been at Raytheon BBN for the past nine years\, where he is currently a Senior Principal Engineer\, Research. Before that\, he was research faculty at Johns Hopkins University. He has worked on several Government projects (e.g.\, DARPA GALE\, DARPA RATS\, IARPA BABEL\, IARPA MATERIAL\, IARPA BETTER) and on a variety of HLT-related topics (e.g.\, speech recognition\, speech activity detection\, keyword search\, information retrieval). He has published more than 60 peer-reviewed papers. His research interests lie at the intersection of human language technology and machine learning\, with an emphasis on statistical methods. He obtained a PhD in Electrical Engineering from the University of Maryland\, College Park\, in 2002.
DTSTART;TZID=America/New_York:20210924T120000
DTEND;TZID=America/New_York:20210924T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Damianos Karakos (Raytheon BBN) “The Raytheon BBN Cross-lingual Information Retrieval System developed under the IARPA MATERIAL Program”
URL:https://www.clsp.jhu.edu/events/damianos-karakos/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Abstract\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can also be a more resource-efficient solution than a general-purpose model\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled\, as they can be collected only at test time\, once the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing on a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.\nBiography
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Abstract\nZipf’s law is commonly glossed by the aphorism “infrequent words are frequent\,” but in practice it has often meant that there are three types of words: frequent\, infrequent\, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words\, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language\, for example\, is approximately 1.44 billion divided by N to the power 1.09. In languages with sufficient data\, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages\, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data\, unsupervised ASR methods can be proven to converge\, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability\, for example\, can cause speech patterns that were never seen in the training database\, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data\, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.\nBiography\nMark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign. He has published research in speech production and perception\, source separation\, voice conversion\, and low-resource automatic speech recognition.
X-TAGS;LANGUAGE=en-US:2022\,December\,Hasegawa-Johnson
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24167@www.clsp.jhu.edu
DTSTAMP:20240328T213903Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nPre-trained speech representation models have become ubiquitous in speech processing over the past few years. They have both improved the state of the art and made it feasible to learn task-specific models with very little labeled data. However\, it is not well understood what linguistic information is encoded in pre-trained models and how best to apply them to downstream tasks. In this talk I will describe recent work that begins to build an understanding of the layer-wise information learned by pre-trained speech models. We consider a number of popular pre-trained models and investigate the extent to which their layers encode spectral\, phonetic\, and word-level information. The results of these analyses also suggest some ways to improve or simplify the application of pre-trained models for downstream tasks. Finally\, I will describe our efforts to benchmark model performance on a variety of spoken language understanding tasks\, in order to broaden our understanding of the capabilities of state-of-the-art models.\nThis talk is based in part on work presented in:\nA. Pasad et al.\, “Comparative layer-wise analysis of self-supervised speech models\,” ICASSP 2023.\nA. Pasad et al.\, “What do self-supervised speech models know about words?\,” arXiv:2307.00162\, 2023.\nS. Shon et al.\, “SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks\,” ACL 2023.\nBio\nKaren Livescu is a Professor at TTI-Chicago. She completed her PhD at MIT in 2005. She is an ISCA Fellow and a recent IEEE Distinguished Lecturer. She has served as a program chair/co-chair for ICLR\, Interspeech\, and ASRU\, and is an Associate Editor for TACL and IEEE T-PAMI. Her group’s work spans a variety of topics in spoken\, written\, and signed language processing.
DTSTART;TZID=America/New_York:20231201T120000
DTEND;TZID=America/New_York:20231201T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Karen Livescu (Toyota Technological Institute at Chicago) “What Do Pre-Trained Speech Representation Models Know? Layer-Wise Analysis and Benchmarking”
URL:https://www.clsp.jhu.edu/events/karen-livescu-toyota-technological-institute-at-chicago/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,December\,Livescu
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24169@www.clsp.jhu.edu
DTSTAMP:20240328T213903Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nFoundation models\, including ChatGPT and its many variants\, have come into prominence in the natural language processing (NLP) community thanks to the ubiquity of text data readily available on the internet and the design of modern transformer architectures that can effectively learn from such data. However\, the development of a foundation model for sequential decision-making (e.g.\, reinforcement learning\, planning) faces additional challenges not present in NLP. In this talk\, we discuss some of these challenges with the hope of informing future investments that funding agencies and the academic community should engage in. The problem of transfer learning in the context of sequential decision-making is also discussed and constitutes one of the challenges that foundation models must address.\nBio\nAlvaro Velasquez is a program manager at the Defense Advanced Research Projects Agency (DARPA)\, where he currently leads programs on neuro-symbolic AI. Before that\, Alvaro oversaw the machine intelligence portfolio for the Information Directorate of the Air Force Research Laboratory (AFRL). Alvaro is a recipient of a distinguished paper award from AAAI\, best paper and patent awards from AFRL\, and the National Science Foundation Graduate Research Fellowship. He has authored over 70 papers and two patents\, and serves as an Associate Editor of the IEEE Transactions on Artificial Intelligence.
DTSTART;TZID=America/New_York:20231204T120000
DTEND;TZID=America/New_York:20231204T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Alvaro Velasquez (DARPA) “Foundation Models and the Transfer of Embodied Autonomy”
URL:https://www.clsp.jhu.edu/events/alvaro-velasquez/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,December\,Velasquez
END:VEVENT
END:VCALENDAR