BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-23304@www.clsp.jhu.edu
DTSTAMP:20240328T111805Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nTransformers are essential to pretraining. As we approach 5 years of BERT\, the connection between attention as architecture and transfer learning remains key to this central thread in NLP. Other architectures such as CNNs and RNNs have been used to replicate pretraining results\, but these either fail to reach the same accuracy or require supplemental attention layers. This work revisits the seminal BERT result and considers pretraining without attention. We consider replacing self-attention layers with recently developed approaches for long-range sequence modeling and transformer architecture variants. Specifically\, inspired by recent papers like the structured state space sequence model (S4)\, we use simple routing layers based on state-space models (SSM) and a bidirectional model architecture based on multiplicative gating. We discuss the results of the proposed Bidirectional Gated SSM (BiGS) and present a range of analyses of its properties. Results show that architecture does seem to have a notable impact on downstream performance and a different inductive bias that is worth exploring further.\nBiography\nAlexander “Sasha” Rush is an Associate Professor at Cornell Tech. His work is at the intersection of natural language processing and generative modeling with applications in text generation\, efficient inference\, and controllability. He has written several popular open-source software projects supporting NLP research and data science\, and works part-time as a researcher at Hugging Face. He is the secretary of ICLR and developed software used to run virtual conferences during COVID. His work has received paper and demo awards at major NLP\, visualization\, and hardware conferences\, an NSF CAREER Award\, and a Sloan Fellowship. He tweets and blogs\, mostly about coding and ML\, at @srush_nlp.
DTSTART;TZID=America/New_York:20230203T120000
DTEND;TZID=America/New_York:20230203T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Sasha Rush (Cornell University) “Pretraining Without Attention”
URL:https://www.clsp.jhu.edu/events/sasha-rush-cornell-university/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Abstract\nUnderstanding the implications underlying a text is critical to assessing its impact\, in particular the social dynamics that may result from a reading of the text. This requires endowing artificial intelligence (AI) systems with pragmatic reasoning\, for example to correctly conclude that the statement “Epidemics and cases of disease in the 21st century are “staged”” relates to unfounded conspiracy theories. In this talk\, I discuss how shortcomings in the ability of current AI systems to reason about pragmatics present challenges to equitable detection of false or harmful language. I demonstrate how these shortcomings can be addressed by imposing human-interpretable structure on deep learning architectures using insights from linguistics.\nIn the first part of the talk\, I describe how adversarial text generation algorithms can be used to improve the robustness of content moderation systems. I then introduce a pragmatic formalism for reasoning about harmful implications conveyed by social media text. I show how this pragmatic approach can be combined with generative neural language models to uncover implications of news headlines. I also address the bottleneck to progress in text generation posed by gaps in the evaluation of factuality. I conclude by showing how context-aware content moderation can be used to ensure safe interactions with conversational agents.\nBiography\nSaadia Gabriel is a PhD candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington\, advised by Prof. Yejin Choi and Prof. Franziska Roesner. Her research revolves around natural language processing and machine learning\, with a particular focus on building systems for understanding how social commonsense manifests in text (i.e. how do people typically behave in social scenarios)\, as well as mitigating the spread of false or harmful text (e.g. Covid-19 misinformation). Her work has been covered by a wide range of media outlets like Forbes and TechCrunch. It has also received a 2019 ACL best short paper nomination\, a 2019 IROS RoboCup best paper nomination\, and won a best paper award at the 2020 WeCNLP summit. Prior to her PhD\, Saadia received a BA summa cum laude from Mount Holyoke College in Computer Science and Mathematics.
X-TAGS;LANGUAGE=en-US:2023\,February\,Gabriel
END:VEVENT
END:VCALENDAR