Michael Auli (Facebook): “Sequence to Sequence Learning: Fast Training and Inference with Gated Convolutions”
3400 N Charles St
Baltimore, MD 21218
USA
Abstract
Neural architectures for machine translation and language modeling are an active research field. The first part of this talk introduces several architectural changes to the original work of Bahdanau et al. (2014): we replace non-linearities with our novel gated linear units, replace recurrent units with convolutions, and introduce multi-hop attention. These changes improve generalization performance, training efficiency, and decoding speed. The second part of the talk analyzes the properties of the distribution predicted by the model and how this influences search.
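For readers unfamiliar with gated linear units, the following is a minimal NumPy sketch (not code from the talk) of the GLU as defined in Dauphin et al. 2016: the output is the elementwise product of a linear projection and a sigmoid gate. The variable names (x, W, b, V, c) are chosen here for illustration.

    import numpy as np

    def gated_linear_unit(x, W, b, V, c):
        """GLU(x) = (xW + b) * sigmoid(xV + c).

        The sigmoid gate controls which features of the linear
        projection are passed on to the next layer.
        """
        gate = 1.0 / (1.0 + np.exp(-(x @ V + c)))
        return (x @ W + b) * gate

    # Example: project a batch of 4 inputs from 8 to 16 features.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8))
    W, V = rng.standard_normal((2, 8, 16))
    b, c = np.zeros(16), np.zeros(16)
    y = gated_linear_unit(x, W, b, V, c)  # shape (4, 16)

In the convolutional models discussed in the talk, the two projections are convolutions over the input sequence rather than dense layers, but the gating mechanism is the same.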
Biography
Michael Auli is a research scientist at Facebook AI Research in Menlo Park. Michael earned his PhD for his work on CCG parsing at the University of Edinburgh, where he was advised by Adam Lopez and Philipp Koehn. He did his postdoc at Microsoft Research, where he worked on neural machine translation and neural dialogue models. Currently, Michael works on machine learning and its application to natural language processing; he is particularly interested in text generation tasks.
http://michaelauli.github.io