BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-23304@www.clsp.jhu.edu
DTSTAMP:20240329T112419Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nTransformers are essential to pretraining. As we approach five years of BERT\, the connection between attention as an architecture and transfer learning remains a central thread in NLP. Other architectures\, such as CNNs and RNNs\, have been used to replicate pretraining results\, but they either fail to reach the same accuracy or require supplemental attention layers. This work revisits the seminal BERT result and considers pretraining without attention. We consider replacing self-attention layers with recently developed approaches for long-range sequence modeling and transformer architecture variants. Specifically\, inspired by recent work such as the structured state space sequence model (S4)\, we use simple routing layers based on state-space models (SSMs) and a bidirectional model architecture based on multiplicative gating. We discuss the results of the proposed Bidirectional Gated SSM (BiGS) and present a range of analyses of its properties. Results show that architecture does seem to have a notable impact on downstream performance and a different inductive bias that is worth exploring further.
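\nAs a rough illustrative sketch only (not the speaker's implementation)\, and assuming a PyTorch setting\, the toy block below shows one way a bidirectional gated SSM layer of this kind might combine forward and backward state-space routing with a multiplicative gate; the diagonal linear recurrence is a hypothetical stand-in for an S4-style kernel\, and all class names and dimensions are assumptions.
import torch
import torch.nn as nn

class ToySSM(nn.Module):
    # Causal diagonal linear recurrence h_t = a * h_{t-1} + b * x_t,
    # a simplified stand-in for an S4-style state-space layer.
    def __init__(self, dim):
        super().__init__()
        self.log_a = nn.Parameter(torch.full((dim,), -1.0))
        self.b = nn.Parameter(torch.ones(dim))

    def forward(self, x):  # x: (batch, length, dim)
        a = torch.sigmoid(self.log_a)  # keep each channel's decay in (0, 1)
        h = torch.zeros_like(x[:, 0])
        outs = []
        for t in range(x.size(1)):  # explicit scan; real SSMs use a convolution kernel
            h = a * h + self.b * x[:, t]
            outs.append(h)
        return torch.stack(outs, dim=1)

class BiGatedSSMBlock(nn.Module):
    # Runs the routing layer left-to-right and right-to-left, then combines
    # the two directions with a multiplicative (sigmoid) gate and a residual.
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fwd = ToySSM(dim)
        self.bwd = ToySSM(dim)
        self.gate = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):
        u = self.norm(x)
        forward_pass = self.fwd(u)
        backward_pass = self.bwd(u.flip(1)).flip(1)  # same routing, right-to-left
        gated = torch.sigmoid(self.gate(u)) * (forward_pass + backward_pass)
        return x + self.out(gated)

block = BiGatedSSMBlock(dim=64)
print(block(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])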
\nBiography
\nAbstract
\nLarge Language Models (LLMs) have demonstrated remarkable capabilities across various domains. However\, it is still very challenging to build highly reliable applications with LLMs that support specialized use cases. LLMs trained on web data often excel at capturing general language patterns\, but they can struggle to support specialized domains and personalized user needs. Moreover\, LLMs can produce errors that are deceptively plausible\, making them potentially dangerous for high-trust scenarios. In this talk\, I will discuss some of our recent efforts to address these challenges with data-efficient tuning methods and a novel factuality evaluation framework. Specifically\, my talk will focus on building multilingual applications\, one crucial use case often characterized by limited tuning and evaluation data.
\nBio
Xinyi (Cindy) Wang is a research scientist at Google DeepMind working on Large Language Models (LLMs) and their application to generative question answering. She has worked on multilingual instruction tuning for Gemini and on multilingual generative models used in Google Search. Before Google DeepMind\, Cindy Wang obtained her PhD in Language Technologies at Carnegie Mellon University\, where she mainly worked on developing data-efficient natural language processing (NLP) systems. She has made several contributions in data selection\, data representation\, and model adaptation for multilingual NLP.
DTSTART;TZID=America/New_York:20240308T120000
DTEND;TZID=America/New_York:20240308T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Cindy Wang (Google DeepMind) “Building Data-Efficient and Reliable Applications with Large Language Models”
URL:https://www.clsp.jhu.edu/events/cindy-wang-google-deepmind-building-data-efficient-and-reliable-applications-with-large-language-models/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,March\,Wang
END:VEVENT
END:VCALENDAR