BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-23304@www.clsp.jhu.edu
DTSTAMP:20240329T140613Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nTransformers are essential to pretraining. As we approach 5 years of BERT\, the connection between attention as an architecture and transfer learning remains key to this central thread in NLP. Other architectures such as CNNs and RNNs have been used to replicate pretraining results\, but these either fail to reach the same accuracy or require supplemental attention layers. This work revisits the seminal BERT result and considers pretraining without attention. We consider replacing self-attention layers with recently developed approaches for long-range sequence modeling and transformer architecture variants. Specifically\, inspired by recent papers like the structured state space sequence model (S4)\, we use simple routing layers based on state-space models (SSMs) and a bidirectional model architecture based on multiplicative gating. We discuss the results of the proposed Bidirectional Gated SSM (BiGS) and present a range of analyses of its properties. Results show that architecture does seem to have a notable impact on downstream performance and a different inductive bias that is worth exploring further.
\nBiography
\nAbstract
\nBiases in datasets\, or unintentionally introduced spurious cues\, are a common source of misspecification in machine learning. Performant models trained on such data can perpetuate gender stereotypes or be brittle under distribution shift. In this talk\, we present several results in multimodal and question answering applications studying sources of dataset bias\, along with several mitigation methods. We propose approaches where known dimensions of dataset bias are explicitly factored out of a model during learning\, without needing to modify the data. Finally\, we ask whether dataset biases can be attributed to annotator behavior during annotation. Drawing inspiration from work in psychology on cognitive biases\, we show certain behavioral patterns are highly indicative of the creation of problematic (but valid) data instances in question answering. We give evidence that many existing observations of how dataset bias propagates to models can be attributed to data samples created by annotators we identify.
\nBiography
\nMark Yatskar is an Assistant Professor at the University of Pennsylvania in the Department of Computer and Information Science. He did his PhD at the University of Washington\, co-advised by Luke Zettlemoyer and Ali Farhadi. He was a Young Investigator at the Allen Institute for Artificial Intelligence for several years\, working with their computer vision team\, PRIOR. His work spans Natural Language Processing\, Computer Vision\, and Fairness in Machine Learning. He received a Best Paper Award at EMNLP for work on gender bias amplification\, and his work has been featured in Wired and the New York Times.
\nDTSTART;TZID=America/New_York:20230210T120000
DTEND;TZID=America/New_York:20230210T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Yatskar (University of Pennsylvania) “Understanding Dataset Biases: Behavioral Indicators During Annotation and Contrastive Mitigations”
URL:https://www.clsp.jhu.edu/events/mark-yatskar-university-of-pennsylvania/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,February\,Yatskar
END:VEVENT
END:VCALENDAR