Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation” @ via Zoom
Apr 19 @ 12:00 pm – 1:15 pm


Neural sequence generation systems often generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence, i.e. the mode of the model's distribution, must also be the best sequence it has to offer (often in a given context, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence is often empty under many state-of-the-art NMT models. This adds to a long list of pathologies and biases observed in NMT and other sequence generation models: a length bias, larger beams degrading performance, exposure bias, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We offer a different view: it is mode-seeking search, e.g. beam search, that introduces many of these pathologies and biases, and such a decision rule is not suitable for the kind of distributions NMT systems learn. We show that NMT models spread probability mass over many translations, and that the most likely translation is often a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. We therefore advocate for decision rules that take the entire probability distribution into account, not just its mode. We provide one example of such a decision rule and show that this is a fruitful research direction.
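One family of decision rules that uses the whole distribution rather than its mode is minimum Bayes risk (MBR) decoding: draw unbiased samples from the model and pick the candidate with the highest expected utility against those samples. The sketch below is illustrative only — the utility function (a toy unigram F1) and all names are assumptions, not the talk's actual method.

```python
# Minimal sketch of sampling-based MBR decoding (illustrative, not the
# talk's exact decision rule). Utility and names are placeholders.
from collections import Counter


def unigram_f1(hyp, ref):
    """Toy utility: unigram F1 between two token sequences."""
    h, r = Counter(hyp), Counter(ref)
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)


def mbr_decode(samples, utility=unigram_f1):
    """Return the sample with the highest expected utility under the
    empirical distribution defined by the (unbiased) model samples."""
    best, best_score = None, float("-inf")
    for cand in samples:
        score = sum(utility(cand, ref) for ref in samples) / len(samples)
        if score > best_score:
            best, best_score = cand, score
    return best


# Samples drawn (with repetition) from a hypothetical model's distribution;
# MBR favours the "consensus" translation, not necessarily the mode.
samples = [
    ("the", "cat", "sat"),
    ("the", "cat", "sat"),
    ("a", "cat", "sat"),
    ("the", "dog", "ran"),
]
print(mbr_decode(samples))  # → ('the', 'cat', 'sat')
```

Because the selected output must score well against many samples, a rare-but-modal sequence (such as the empty translation) cannot win unless the model genuinely concentrates mass on it.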


I am an assistant professor (UD) in natural language processing at the Institute for Logic, Language and Computation where I lead the Probabilistic Language Learning group.

My work concerns the design of models and algorithms that learn to represent, understand, and generate language data. Examples of specific problems I am interested in include language modelling, machine translation, syntactic parsing, textual entailment, text classification, and question answering.

I also develop techniques to approach general machine learning problems such as probabilistic inference, gradient and density estimation.

My interests sit at the intersection of disciplines such as statistics, machine learning, approximate inference, global optimization, formal languages, and computational linguistics.



Carolina Parada (Google AI) “State of Robotics @ Google” @ via Zoom
Apr 23 @ 12:00 pm – 1:15 pm


Robotics@Google's mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics, designed for generalization across diverse environments and instructions. This model focuses on scalable, data-driven learning that is task-agnostic, leverages simulation, learns from past experience, and can be quickly adapted to the real world through limited interactions. In this talk, we'll share some of our recent work in this direction in both manipulation and locomotion applications.


Carolina Parada is a Senior Engineering Manager at Google Robotics. She leads the robot-mobility group, which focuses on improving robot motion planning, navigation, and locomotion using reinforcement learning. Prior to that, she led the camera perception team for self-driving cars at NVIDIA for 2 years. She was also a lead with Speech @ Google for 7 years, where she drove multiple research and engineering efforts that enabled Ok Google, the Google Assistant, and Voice Search. Carolina grew up in Venezuela and moved to the US to pursue B.S. and M.S. degrees in Electrical Engineering at the University of Washington and her PhD at Johns Hopkins University at the Center for Language and Speech Processing (CLSP).

Center for Language and Speech Processing