BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20730@www.clsp.jhu.edu
DTSTAMP:20240328T115856Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRaytheon BBN participated in the IARPA MATERIAL program\, whose objective is to enable rapid development of language-independent methods for cross-lingual information retrieval (CLIR). The challenging CLIR task of retrieving documents written (or spoken) in one language so that they satisfy an information need expressed in a different language is exacerbated by unique challenges posed by the MATERIAL program: limited training data for automatic speech recognition and machine translation\, scant lexical resources\, non-standardized orthography\, etc. Furthermore\, the format of the queries and the “Query-Weighted Value” performance measure are non-standard and not previously studied in the IR community. In this talk\, we will describe the Raytheon BBN CLIR system\, which was successful at addressing the above challenges and unique characteristics of the program.\nBiography\nDamianos Karakos has been at Raytheon BBN for the past nine years\, where he is currently a Senior Principal Engineer\, Research. Before that\, he was research faculty at Johns Hopkins University. He has worked on several Government projects (e.g.\, DARPA GALE\, DARPA RATS\, IARPA BABEL\, IARPA MATERIAL\, IARPA BETTER) and on a variety of HLT-related topics (e.g.\, speech recognition\, speech activity detection\, keyword search\, information retrieval). He has published more than 60 peer-reviewed papers. His research interests lie at the intersection of human language technology and machine learning\, with an emphasis on statistical methods. He obtained a PhD in Electrical Engineering from the University of Maryland\, College Park\, in 2002.
DTSTART;TZID=America/New_York:20210924T120000
DTEND;TZID=America/New_York:20210924T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Damianos Karakos (Raytheon BBN) “The Raytheon BBN Cross-lingual Information Retrieval System developed under the IARPA MATERIAL Program”
URL:https://www.clsp.jhu.edu/events/damianos-karakos/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nRaytheon BBN participated in the IARPA MATERIAL program\, whose objective is to enable rapid development of language-independent methods for cross-lingual information retrieval (CLIR). The challenging CLIR task of retrieving documents written (or spoken) in one language so that they satisfy an information need expressed in a different language is exacerbated by unique challenges posed by the MATERIAL program: limited training data for automatic speech recognition and machine translation\, scant lexical resources\, non-standardized orthography\, etc. Furthermore\, the format of the queries and the “Query-Weighted Value” performance measure are non-standard and not previously studied in the IR community. In this talk\, we will describe the Raytheon BBN CLIR system\, which was successful at addressing the above challenges and unique characteristics of the program.
\nBiography
\nDamianos Karakos has been at Raytheon BBN for the past nine years\, where he is currently a Senior Principal Engineer\, Research. Before that\, he was research faculty at Johns Hopkins University. He has worked on several Government projects (e.g.\, DARPA GALE\, DARPA RATS\, IARPA BABEL\, IARPA MATERIAL\, IARPA BETTER) and on a variety of HLT-related topics (e.g.\, speech recognition\, speech activity detection\, keyword search\, information retrieval). He has published more than 60 peer-reviewed papers. His research interests lie at the intersection of human language technology and machine learning\, with an emphasis on statistical methods. He obtained a PhD in Electrical Engineering from the University of Maryland\, College Park\, in 2002.
\n\n
Abstract
\nWhile the “deep learning tsunami” continues to define the state of the art in speech and language processing\, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial\, highly-multilingual settings\, particularly for symbolic\, “front-end” speech applications. In this talk\, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I will then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly-effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors which lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion\, loanword detection\, abbreviation expansion\, and back-transliteration\, among other tasks.
\nBiography
\nKyle Gorman is an assistant professor of linguistics at the Graduate Center\, City University of New York\, and director of the master’s program in computational linguistics\; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat\, he is the coauthor of Finite-State Text Processing (Morgan & Claypool\, 2021) and the creator of Pynini\, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models\, text normalization\, grapheme-to-phoneme conversion\, and morphological analysis\, as well as many topics in linguistic theory.
\n
X-TAGS;LANGUAGE=en-US:2022\,Gorman\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22422@www.clsp.jhu.edu
DTSTAMP:20240328T115856Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nZipf’s law is commonly glossed by the aphorism “infrequent words are frequent\,” but in practice\, it has often meant that there are three types of words: frequent\, infrequent\, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words\, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language\, for example\, is 1.44 billion over N to the 1.09. In languages with sufficient data\, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages\, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data\, unsupervised ASR methods can be proven to converge\, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability\, for example\, can cause speech patterns that were never seen in the training database\, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data\, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.\nBiography\nMark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign. He has published research in speech production and perception\, source separation\, voice conversion\, and low-resource automatic speech recognition.
DTSTART;TZID=America/New_York:20221209T120000
DTEND;TZID=America/New_York:20221209T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Hasegawa-Johnson (University of Illinois Urbana-Champaign) “Zipf’s Law Suggests a Three-Pronged Approach to Inclusive Speech Recognition”
URL:https://www.clsp.jhu.edu/events/mark-hasegawa-johnson-university-of-illinois-urbana-champaign/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nZipf’s law is commonly glossed by the aphorism “infrequent words are frequent\,” but in practice\, it has often meant that there are three types of words: frequent\, infrequent\, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words\, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language\, for example\, is 1.44 billion over N to the 1.09. In languages with sufficient data\, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages\, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data\, unsupervised ASR methods can be proven to converge\, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability\, for example\, can cause speech patterns that were never seen in the training database\, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data\, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
\nBiography
\nMark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign. He has published research in speech production and perception\, source separation\, voice conversion\, and low-resource automatic speech recognition.
\n
X-TAGS;LANGUAGE=en-US:2022\,December\,Hasegawa-Johnson
END:VEVENT
END:VCALENDAR