BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20117@www.clsp.jhu.edu
DTSTAMP:20240329T010852Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nNeural sequence generation systems oftentimes generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence\, i.e. the mode\, under such a model must also be the best sequence it has to offer (often in a given context\, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence oftentimes is empty under many state-of-the-art NMT models. This follows a large list of other pathologies and biases observed in NMT and other sequence generation models: a length bias\, larger beams degrading performance\, exposure bias\, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search\, e.g. beam search\, that introduces many of these pathologies and biases\, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribution and not just its mode. We provide one example of such a decision rule\, and show that this is a fruitful research direction.\nBiography\nI am an assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Examples of specific problems I am interested in include language modelling\, machine translation\, syntactic parsing\, textual entailment\, text classification\, and question answering.\nI also develop techniques to approach general machine learning problems such as probabilistic inference\, gradient and density estimation.\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.
DTSTART;TZID=America/New_York:20210419T120000
DTEND;TZID=America/New_York:20210419T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation”
URL:https://www.clsp.jhu.edu/events/wilker-aziz-university-of-amsterdam/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,April\,Aziz
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22422@www.clsp.jhu.edu
DTSTAMP:20240329T010852Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nZipf’s law is commonly glossed by the aphorism “infrequent words are frequent\,” but in practice\, it has often meant that there are three types of words: frequent\, infrequent\, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words\, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language\, for example\, is 1.44 billion over N to the 1.09. In languages with sufficient data\, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages\, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data\, unsupervised ASR methods can be proven to converge\, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability\, for example\, can cause speech patterns that were never seen in the training database\, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data\, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.\nBiography\nMark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign.
 He has published research in speech production and perception\, source separation\, voice conversion\, and low-resource automatic speech recognition.
DTSTART;TZID=America/New_York:20221209T120000
DTEND;TZID=America/New_York:20221209T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Hasegawa-Johnson (University of Illinois Urbana-Champaign) “Zipf’s Law Suggests a Three-Pronged Approach to Inclusive Speech Recognition”
URL:https://www.clsp.jhu.edu/events/mark-hasegawa-johnson-university-of-illinois-urbana-champaign/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,December\,Hasegawa-Johnson
END:VEVENT
END:VCALENDAR