BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-20117@www.clsp.jhu.edu DTSTAMP:20240328T164528Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nNeural sequence generation systems oftentimes generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence\, i.e. the mode\, under such a model must also be the best sequence it has to offer (often in a given context\, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence oftentimes is empty under many state-of-the-art NMT models. This follows a large list of other pathologies and biases observed in NMT and other sequence generation models: a length bias\, larger beams degrading performance\, exposure bias\, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search\, e.g. beam search\, that introduces many of these pathologies and biases\, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribution and not just its mode. We provide one example of such a decision rule\, and show that this is a fruitful research direction.
\nBiography
\nI am an assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.
\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Examples of specific problems I am interested in include language modelling\, machine translation\, syntactic parsing\, textual entailment\, text classification\, and question answering.
\nI also develop techniques to approach general machine learning problems such as probabilistic inference\, gradient and density estimation.
\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.
\n\n
DTSTART;TZID=America/New_York:20210419T120000 DTEND;TZID=America/New_York:20210419T131500 LOCATION:via Zoom SEQUENCE:0 SUMMARY:Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation” URL:https://www.clsp.jhu.edu/events/wilker-aziz-university-of-amsterdam/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,April\,Aziz END:VEVENT BEGIN:VEVENT UID:ai1ec-22400@www.clsp.jhu.edu DTSTAMP:20240328T164528Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of text into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing and accessing knowledge in a dedicated memory component.
\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU) and our other collaborators at Google Research.
\nBiography
\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at the University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics\, and recently weather forecasting and climate modeling. He has a PhD (2007) in Computer and Information Science from the University of Pennsylvania and a B.Sc. and M.Sc. in Biomedical Engineering from Southeast University (Nanjing\, China).
DTSTART;TZID=America/New_York:20221024T120000 DTEND;TZID=America/New_York:20221024T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Fei Sha (University of Southern California) “Extracting Information from Text into Memory for Knowledge-Intensive Tasks” URL:https://www.clsp.jhu.edu/events/fei-sha-university-of-southern-california/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,October\,Sha END:VEVENT BEGIN:VEVENT UID:ai1ec-22412@www.clsp.jhu.edu DTSTAMP:20240328T164528Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nDriven by the goal of eradicating language barriers on a global scale\, machine translation has solidified itself as a key focus of artificial intelligence research today. However\, such efforts have coalesced around a small subset of languages\, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe\, high-quality results\, all while keeping ethical considerations in mind? In this talk\, I introduce No Language Left Behind\, an initiative to break language barriers for low-resource languages. In No Language Left Behind\, we took on the low-resource language translation challenge by first contextualizing the need for translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. We proposed multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically\, we evaluated the performance of over 40\,000 different translation directions using a human-translated benchmark\, Flores-200\, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art\, laying important groundwork towards realizing a universal translation system in an open-source manner.
\nBiography
\nAngela is a research scientist at Meta AI Research in New York\, focusing on supporting efforts in speech and language research. Recent projects include No Language Left Behind (https://ai.facebook.com/research/no-language-left-behind/) and Universal Speech Translation for Unwritten Languages (https://ai.facebook.com/blog/ai-translation-hokkien/). Before translation\, Angela focused on research in on-device models for NLP and computer vision\, and on text generation.
\nDTSTART;TZID=America/New_York:20221118T120000 DTEND;TZID=America/New_York:20221118T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Angela Fan (Meta AI Research) “No Language Left Behind: Scaling Human-Centered Machine Translation” URL:https://www.clsp.jhu.edu/events/angela-fan-facebook/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Fan\,November END:VEVENT END:VCALENDAR