BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-23304@www.clsp.jhu.edu
DTSTAMP:20240329T064431Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nTransformers are essential to pretraining. As we approach 5 years of BERT\, the connection between attention as architecture and transfer learning remains key to this central thread in NLP. Other architectures such as CNNs and RNNs have been used to replicate pretraining results\, but these either fail to reach the same accuracy or require supplemental attention layers. This work revisits the seminal BERT result and considers pretraining without attention. We consider replacing self-attention layers with recently developed approaches for long-range sequence modeling and with transformer architecture variants. Specifically\, inspired by recent papers like the structured state space sequence model (S4)\, we use simple routing layers based on state-space models (SSMs) and a bidirectional model architecture based on multiplicative gating (an illustrative sketch of such a block follows this event entry). We discuss the results of the proposed Bidirectional Gated SSM (BiGS) and present a range of analyses of its properties. Results show that architecture does have a notable impact on downstream performance and yields a different inductive bias that is worth exploring further.\nBiography\nAlexander “Sasha” Rush is an Associate Professor at Cornell Tech. His work is at the intersection of natural language processing and generative modeling\, with applications in text generation\, efficient inference\, and controllability. He has written several popular open-source software projects supporting NLP research and data science\, and works part-time as a researcher at Hugging Face. He is the secretary of ICLR and developed software used to run virtual conferences during COVID. His work has received paper and demo awards at major NLP\, visualization\, and hardware conferences\, an NSF CAREER Award\, and a Sloan Fellowship. He tweets and blogs\, mostly about coding and ML\, at @srush_nlp.
DTSTART;TZID=America/New_York:20230203T120000
DTEND;TZID=America/New_York:20230203T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Sasha Rush (Cornell University) “Pretraining Without Attention”
URL:https://www.clsp.jhu.edu/events/sasha-rush-cornell-university/
X-COST-TYPE:free
END:VEVENT
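For readers curious how an attention-free block of the kind described in the abstract above fits together, here is a minimal, hypothetical PyTorch sketch of a bidirectional gated state-space block: a diagonal linear SSM scan used as the routing layer, wrapped in gMLP-style multiplicative gating. The diagonal parameterization, class names, and sizes are illustrative assumptions, not the authors' BiGS implementation.

```python
# Illustrative sketch (not the authors' BiGS code): a bidirectional gated
# state-space block. Assumes a diagonal linear SSM as the routing layer
# and gMLP-style multiplicative gating; all sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiagonalSSM(nn.Module):
    """Per-channel diagonal state-space scan: h_t = a*h_{t-1} + b*u_t, y_t = <c, h_t>."""
    def __init__(self, dim, state_size=16):
        super().__init__()
        # log-parameterized decay keeps a in (0, 1), so the scan is stable
        self.log_a = nn.Parameter(torch.rand(dim, state_size).clamp(min=1e-3).log())
        self.b = nn.Parameter(torch.randn(dim, state_size) * 0.1)
        self.c = nn.Parameter(torch.randn(dim, state_size) * 0.1)

    def forward(self, u):                      # u: (batch, length, dim)
        a = self.log_a.exp()
        h = u.new_zeros(u.size(0), u.size(2), self.b.size(1))
        ys = []
        for t in range(u.size(1)):             # plain loop for clarity, not speed
            h = a * h + self.b * u[:, t, :, None]
            ys.append((h * self.c).sum(-1))
        return torch.stack(ys, dim=1)          # (batch, length, dim)

class GatedSSMBlock(nn.Module):
    """Attention-free block: forward + backward SSM routing, multiplicative gate."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.gate = nn.Linear(dim, dim)        # gating branch
        self.value = nn.Linear(dim, dim)       # routed branch
        self.fwd = DiagonalSSM(dim)
        self.bwd = DiagonalSSM(dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):
        u = self.norm(x)
        v = self.value(u)
        routed = self.fwd(v) + self.bwd(v.flip(1)).flip(1)   # bidirectional context
        return x + self.out(F.gelu(self.gate(u)) * routed)   # gate * route, residual

x = torch.randn(2, 32, 64)                     # (batch, length, dim)
print(GatedSSMBlock(64)(x).shape)              # torch.Size([2, 32, 64])
```

A BERT-style encoder would stack such blocks in place of self-attention layers; the sequential scan is written as a loop for clarity, whereas practical SSM implementations use convolutional or parallel-scan forms.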
BEGIN:VEVENT
DESCRIPTION:Abstract\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to reduce or\, ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine Crisis\, require a time-sensitive\, comprehensive understanding of the situation to allow for appropriate decision-making and an effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness. In this talk I will present SmartBook\, a novel framework for a task that cannot be solved by large language models alone: consuming large volumes of multimodal\, multilingual news data and producing a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection\, and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess. Given “what-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events and automatically generate a complex event graph\, along with a timeline of news articles describing new simulated events and character-centric stories\, based on a new Λ-shaped attention mask that can generate text of unbounded length (a generic sketch of such a mask appears after the calendar data below). By effectively simulating disaster scenarios in both event graph and natural language formats\, we expect SmartBook will greatly assist humanitarian workers and policymakers to exercise reality checks and thus better prevent and respond to future disasters.\nBio\nHeng Ji is a professor in the Computer Science Department\, and an affiliated faculty member in the Electrical and Computer Engineering Department and the Coordinated Science Laboratory\, of the University of Illinois Urbana-Champaign. She is an Amazon Scholar and the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing\, especially Multimedia Multilingual Information Extraction\, Knowledge-enhanced Large Language Models\, Knowledge-driven Generation\, and Conversational AI. She was selected as a Young Scientist to attend the 6th World Laureates Association Forum and to participate in DARPA AI Forward in 2023. She was selected as a “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. Her awards include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, the “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, an NSF CAREER Award in 2009\, the PACLIC 2012 Best Paper Runner-up\, the “Best of ICDM 2013” paper award\, the “Best of SDM 2013” paper award\, an ACL 2018 Best Demo Paper nomination\, the ACL 2020 Best Demo Paper Award\, the NAACL 2021 Best Demo Paper Award\, Google Research Awards in 2009 and 2014\, IBM Watson Faculty Awards in 2012 and 2014\, and a Bosch Research Award in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and was invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023. She leads many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge network construction\, the DARPA ECOLE MIRACLE team\, the DARPA KAIROS RESIN team\, and the DARPA DEFT Tinker Bell team. She coordinated the NIST TAC Knowledge Base Population task from 2010 to 2022. She was an associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as Program Committee Co-Chair of many conferences\, including NAACL-HLT 2018 and AACL-IJCNLP 2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and by industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney).
X-TAGS;LANGUAGE=en-US:2024\,April\,Ji
END:VEVENT
END:VCALENDAR
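For concreteness, the Λ-shaped attention mask mentioned in the SmartBook abstract can be sketched generically: each token attends to a handful of leading "global" tokens plus a sliding window of recent tokens, so the allowed region of the attention matrix traces a Λ. The prefix and window sizes below are illustrative assumptions, not values from the talk.

```python
# Hypothetical sketch of a Λ-shaped (lambda-shaped) attention mask: every
# token may attend to a few leading tokens and to a sliding local window,
# keeping per-token cost bounded as the sequence grows. Sizes are assumed.
import torch

def lambda_shaped_mask(seq_len: int, n_prefix: int = 4, window: int = 64) -> torch.Tensor:
    i = torch.arange(seq_len)[:, None]   # query positions (rows)
    j = torch.arange(seq_len)[None, :]   # key positions (columns)
    causal = j <= i                      # never attend to future tokens
    local = (i - j) < window             # recent tokens: one arm of the Λ
    prefix = j < n_prefix                # leading tokens: the other arm
    return causal & (local | prefix)     # True where attention is allowed

# Each row shows which keys a given query may attend to.
print(lambda_shaped_mask(8, n_prefix=2, window=3).int())
```

Masked-out positions would have their attention logits set to -inf before the softmax; because each row admits at most n_prefix + window keys, the per-token cost stays bounded no matter how long generation continues.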