<p><strong>Wilker Aziz (University of Amsterdam), &#8220;The Inadequacy of the Mode in Neural Machine Translation&#8221;</strong></p>
<p><strong>Abstract</strong></p>
<p>Neural sequence generation systems often generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence, i.e. the mode of the model, must also be the best sequence the model has to offer (typically in a given context, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that under many state-of-the-art NMT models the true most likely sequence is often the empty sequence. This adds to a long list of pathologies and biases observed in NMT and other sequence generation models: a length bias, larger beams degrading performance, exposure bias, and more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We offer a different view: it is mode-seeking search, e.g. beam search, that introduces many of these pathologies and biases, and such a decision rule is not suitable for the kind of distributions NMT systems learn. We show that NMT models spread probability mass over many translations, and that the most likely translation is often a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. We therefore advocate decision rules that take the entire probability distribution into account rather than just its mode. We provide one example of such a decision rule, and show that this is a fruitful research direction.</p>
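<p>One family of decision rules that uses the whole distribution rather than its mode is minimum Bayes risk (MBR) decoding: draw samples from the model and pick the candidate with the highest average utility against them. The sketch below is illustrative, not the paper's implementation; the function names and the toy unigram-overlap utility are assumptions, standing in for a real MT metric and a real model sampler.</p>

```python
def unigram_f1(hyp, ref):
    """Symmetric unigram overlap between two token lists (toy utility)."""
    hyp_set, ref_set = set(hyp), set(ref)
    if not hyp_set or not ref_set:
        return 0.0
    overlap = len(hyp_set & ref_set)
    p, r = overlap / len(hyp_set), overlap / len(ref_set)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)


def mbr_decode(candidates, samples, utility=unigram_f1):
    """Return the candidate maximizing average utility against model samples.

    `samples` is a Monte Carlo approximation of the model distribution, so
    the average utility estimates each candidate's expected utility.
    """
    def expected_utility(cand):
        return sum(utility(cand, s) for s in samples) / len(samples)

    return max(candidates, key=expected_utility)
```

<p>Note that no candidate needs to be the mode: a translation that is individually unlikely can still win if it agrees, on average, with where the model puts its mass.</p>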
<p><strong>Biography</strong></p>
<p>I am an <em>assistant professor</em> (UD) in natural language processing at the <a href="https://www.illc.uva.nl/">Institute for Logic, Language and Computation</a>, where I lead the <a href="https://probabll.github.io/">Probabilistic Language Learning group</a>.</p>
<p>My work concerns the design of models and algorithms that learn to represent, understand, and generate language data. Examples of specific problems I am interested in include language modelling, machine translation, syntactic parsing, textual entailment, text classification, and question answering.</p>
<p>I also develop techniques for general machine learning problems such as probabilistic inference, gradient estimation, and density estimation.</p>
<p>My interests sit at the intersection of disciplines such as statistics, machine learning, approximate inference, global optimization, formal languages, and computational linguistics.</p>
<p><em>Seminar: April 19, 2021, 12:00&#8211;13:15, via Zoom.</em></p>
<p><strong>David Chiang (University of Notre Dame), &#8220;Exact Recursive Probabilistic Programming&#8221;, with Colin McDonald, Darcey Riley, Kenneth Sible (Notre Dame) and Chung-chieh Shan (Indiana)</strong></p>
<p><strong>Abstract</strong></p>
<div dir="ltr">Recursive calls over recursive data are widely useful for generating probability distributions, and probabilistic programming allows computations over these distributions to be expressed in a modular and intuitive way. Exact inference is also useful, but unfortunately, existing probabilistic programming languages do not perform exact inference on recursive calls over recursive data, forcing programmers to code many applications manually. We introduce a probabilistic language in which a wide variety of recursion can be expressed naturally, and inference carried out exactly. For instance, probabilistic pushdown automata and their generalizations are easy to express, and polynomial-time parsing algorithms for them are derived automatically. We eliminate recursive data types using program transformations related to defunctionalization and refunctionalization. These transformations are assured correct by a linear type system, and a successful choice of transformations, if there is one, is guaranteed to be found by a greedy algorithm. I will also describe the implementation of this language in two phases: first, compilation to a factor graph grammar, and second, computing the sum-product of the factor graph grammar.</div>
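<p>The paper's language and its linear type system are not reproduced here, but the flavor of exact inference on recursive calls over recursive data can be illustrated with a memoized sum-product recursion. The generative process below (a tree that is a leaf with a fixed probability, else two independent subtrees) and the probability value are assumptions chosen for the example; the point is that the distribution over outcomes is computed exactly by dynamic programming, with no sampling.</p>

```python
from functools import lru_cache

# Toy recursive probabilistic program: with probability P_LEAF emit a leaf,
# otherwise branch into two independent subtrees generated by the same
# process. The distribution over the number of leaves satisfies a recursive
# sum-product equation, evaluated exactly by memoization.
P_LEAF = 0.6


@lru_cache(maxsize=None)
def prob_leaves(n):
    """Exact probability that the generated tree has n leaves."""
    if n < 1:
        return 0.0
    if n == 1:
        return P_LEAF
    # Branch, then sum over every way to split the n leaves between
    # the left subtree (k leaves) and the right subtree (n - k leaves).
    return (1 - P_LEAF) * sum(
        prob_leaves(k) * prob_leaves(n - k) for k in range(1, n)
    )
```

<p>For instance, a two-leaf tree requires one branch and two leaves, so its probability is (1 &#8722; 0.6) &#183; 0.6 &#183; 0.6 = 0.144.</p>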
<div dir="ltr"><strong>Biography</strong></div>
<div dir="ltr"><span dir="ltr">David Chiang (PhD, University of Pennsylvania, 2004) is an associate professor in the Department of Computer Science and Engineering at the University of Notre Dame. His research is on computational models for learning human languages, particularly how to translate from one language to another. His work on applying formal grammars and machine learning to translation has been recognized with two best paper awards (at ACL 2005 and NAACL HLT 2009). He has received research grants from DARPA, NSF, Google, and Amazon, has served on the executive board of NAACL and the editorial board of Computational Linguistics and JAIR, and is currently on the editorial board of Transactions of the ACL.</span></div>
<p><em>Seminar: October 17, 2022, 12:00&#8211;13:15, Hackerman Hall B17, 3400 N. Charles Street, Baltimore, MD 21218.</em></p>