<p><strong>Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation”</strong></p>
<p><strong>Abstract</strong></p>
<p>Neural sequence generation systems often generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence, i.e. the mode, must also be the best sequence the model has to offer (often in a given context, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that under many state-of-the-art NMT models the true most likely sequence is often the empty sequence. This adds to a long list of pathologies and biases observed in NMT and other sequence generation models: a length bias, larger beams degrading performance, exposure bias, and many more. Much of this work blames the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view: it is mode-seeking search, e.g. beam search, that introduces many of these pathologies and biases, and such a decision rule is not suitable for the kind of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations, and that the most likely translation is often a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. We therefore advocate decision rules that take the entire probability distribution into account, not just its mode. We present one example of such a decision rule, and show that this is a fruitful research direction.</p>
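<p>The abstract does not spell out the decision rule; a common distribution-aware alternative to mode-seeking search is minimum Bayes risk (MBR) decoding, which picks the candidate with the highest expected utility under unbiased samples from the model. A minimal sketch of that idea, with a toy sample set and unigram F1 as a stand-in utility (both are illustrative assumptions, not the talk's actual setup):</p>

```python
from collections import Counter

def token_f1(hyp, ref):
    """Unigram F1, a cheap stand-in for a utility such as BLEU or METEOR."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p, q = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * q / (p + q)

def mbr_decode(samples):
    """Return the candidate with highest average utility against all samples.

    Mode-seeking search asks "which single sequence is most probable?";
    MBR instead asks "which candidate agrees most with the mass of the
    distribution?", approximated here by the sample set itself.
    """
    return max(samples,
               key=lambda c: sum(token_f1(c, s) for s in samples) / len(samples))

# Toy model samples: mass is spread over many adequate translations,
# and the (empty) mode shows up only occasionally.
samples = ["the cat sat on the mat",
           "the cat sits on the mat",
           "a cat sat on the mat",
           "",                       # the mode, a rare event in samples
           "the cat sat on a mat"]

print(mbr_decode(samples))  # a non-empty, "consensus" translation wins
```

<p>Even though the empty string may be the single most probable sequence, it has near-zero expected utility, so a distribution-aware rule never selects it.</p>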
<p><strong>Biography</strong></p>
<p>I am an <em>assistant professor</em> (UD) in natural language processing at the <a href="https://www.illc.uva.nl/">Institute for Logic, Language and Computation</a> (University of Amsterdam), where I lead the <a href="https://probabll.github.io/">Probabilistic Language Learning group</a>.</p>
<p>My work concerns the design of models and algorithms that learn to represent, understand, and generate language data. Examples of specific problems I am interested in include language modelling, machine translation, syntactic parsing, textual entailment, text classification, and question answering.</p>
<p>I also develop techniques for general machine learning problems such as probabilistic inference, gradient estimation, and density estimation.</p>
<p>My interests sit at the intersection of disciplines such as statistics, machine learning, approximate inference, global optimization, formal languages, and computational linguistics.</p>
<p><em>Seminar: April 19, 2021, 12:00–1:15 PM (America/New_York), via Zoom. <a href="https://www.clsp.jhu.edu/events/wilker-aziz-university-of-amsterdam/">Event page</a>.</em></p>
<p><strong>Tom McCoy (Johns Hopkins University) “Opening the Black Box of Deep Learning: Representations, Inductive Biases, and Robustness”</strong></p>
<p><strong>Abstract</strong></p>
<p>Natural language processing has been revolutionized by neural networks, which perform impressively well in applications such as machine translation and question answering. Despite their success, neural networks still have some substantial shortcomings: Their internal workings are poorly understood, and they are notoriously brittle, failing on example types that are rare in their training data. In this talk, I will use the unifying thread of hierarchical syntactic structure to discuss approaches for addressing these shortcomings. First, I will argue for a new evaluation paradigm based on targeted, hypothesis-driven tests that better illuminate what models have learned; using this paradigm, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g., to conclude that “The book on the table is blue” implies “The table is blue.”) Second, I will show how these behavioral failings can be explained through analysis of models’ inductive biases and internal representations, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning, structured architectures, and data augmentation.</p>
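<p>The “The table is blue” failure above illustrates what a targeted, hypothesis-driven test looks like: minimal premise–hypothesis pairs where a shallow shortcut and hierarchical structure disagree. A toy sketch of that evaluation idea, using invented examples and a hypothetical lexical-overlap heuristic rather than the actual models or test suites from the talk:</p>

```python
# Each item: (premise, hypothesis, entailed?). The non-entailed pairs are
# built so that every hypothesis word appears in the premise, i.e. a model
# relying on lexical overlap instead of syntax will accept them anyway.
diagnostic = [
    ("The book on the table is blue.", "The book is blue.", True),
    ("The book on the table is blue.", "The table is blue.", False),
    ("The doctor near the actor danced.", "The actor danced.", False),
]

def overlap_heuristic(premise, hypothesis):
    """Predict "entailed" iff every hypothesis word occurs in the premise."""
    p_words = set(premise.lower().strip(".").split())
    return all(w in p_words for w in hypothesis.lower().strip(".").split())

# Aggregate accuracy hides the shortcut; the targeted pairs expose it.
errors = [(p, h) for p, h, gold in diagnostic
          if overlap_heuristic(p, h) != gold]
print(len(errors))  # the heuristic fails on exactly the non-entailed pairs
```

<p>The point of the paradigm is that such pairs are rare in naturally distributed test sets, so a model can score well overall while failing every structure-sensitive case.</p>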
<p><strong>Biography</strong></p>
<p>Tom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. As an undergraduate, he studied computational linguistics at Yale. His research combines natural language processing, cognitive science, and machine learning to study how we can achieve robust generalization in models of language, as this remains one of the main areas where current AI systems fall short. In particular, he focuses on inductive biases and representations of linguistic structure, since these are two of the major components that determine how learners generalize to novel types of input.</p>
<p><em>Seminar: January 31, 2022, 12:00–1:15 PM (America/New_York), Ames Hall 234, 3400 N. Charles Street, Baltimore, MD 21218. <a href="https://www.clsp.jhu.edu/events/tom-mccoy-johns-hopkins-university-opening-the-black-box-of-deep-learning-representations-inductive-biases-and-robustness/">Event page</a>.</em></p>