BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-22374@www.clsp.jhu.edu DTSTAMP:20240329T012007Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nIn recent years\, the field of Natural Language Processing has seen a profusion of tasks\, datasets\, and systems that facilitate reasoning about real-world situations through language (e.g.\, RTE\, MNLI\, COMET). Such systems might\, for example\, be trained to consider a situation where “somebody dropped a glass on the floor\,” and conclude it is likely that “the glass shattered” as a result. In this talk\, I will discuss three pieces of work that revisit assumptions made by or about these systems. In the first work\, I develop a Defeasible Inference task\, which enables a system to recognize when a prior assumption it has made may no longer be true in light of new evidence it receives. The second work I will discuss revisits partial-input baselines\, which have highlighted issues of spurious correlations in natural language reasoning datasets and led to unfavorable assumptions about models’ reasoning abilities. In particular\, I will discuss experiments that show models may still learn to reason in the presence of spurious dataset artifacts. Finally\, I will touch on work analyzing harmful assumptions made by reasoning models in the form of social stereotypes\, particularly in the case of free-form generative reasoning models.
\nBiography
\nRachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland\, College Park. She holds joint appointments in the Department of Linguistics and the Institute for Advanced Computer Studies (UMIACS). In 2019\, Rachel completed her Ph.D. in Computer Science at Johns Hopkins University in the Center for Language and Speech Processing. From 2019-2020\, she was a Young Investigator at the Allen Institute for AI in Seattle\, and a visiting researcher at the University of Washington. Her research interests include computational semantics\, common-sense reasoning\, and issues of social bias and fairness in NLP.
DTSTART;TZID=America/New_York:20220916T120000 DTEND;TZID=America/New_York:20220916T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Rachel Rudinger (University of Maryland\, College Park) “Not So Fast!: Revisiting Assumptions in (and about) Natural Language Reasoning” URL:https://www.clsp.jhu.edu/events/rachel-rudinger-university-of-maryland-college-park-not-so-fast-revisiting-assumptions-in-and-about-natural-language-reasoning/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Rudinger\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-22422@www.clsp.jhu.edu DTSTAMP:20240329T012007Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nZipf’s law is commonly glossed by the aphorism “infrequent words are frequent\,” but in practice\, it has often meant that there are three types of words: frequent\, infrequent\, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words\, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language\, for example\, is 1.44 billion over N to the 1.09. In languages with sufficient data\, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages\, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data\, unsupervised ASR methods can be proven to converge\, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability\, for example\, can cause speech patterns that were never seen in the training database\, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data\, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
\nBiography
\nMark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign. He has published research in speech production and perception\, source separation\, voice conversion\, and low-resource automatic speech recognition.
DTSTART;TZID=America/New_York:20221209T120000 DTEND;TZID=America/New_York:20221209T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Mark Hasegawa-Johnson (University of Illinois Urbana-Champaign) “Zipf’s Law Suggests a Three-Pronged Approach to Inclusive Speech Recognition” URL:https://www.clsp.jhu.edu/events/mark-hasegawa-johnson-university-of-illinois-urbana-champaign/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,December\,Hasegawa-Johnson END:VEVENT BEGIN:VEVENT UID:ai1ec-23312@www.clsp.jhu.edu DTSTAMP:20240329T012007Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nAdvanced neural language models have grown ever larger and more complex\, pushing forward the limits of language understanding and generation\, while diminishing interpretability. The black-box nature of deep neural networks blocks humans from understanding them\, as well as trusting and using them in real-world applications. This talk will introduce interpretation techniques that bridge the gap between humans and models for developing trustworthy natural language processing (NLP). I will first show how to explain black-box models and evaluate their explanations for understanding their prediction behavior. Then I will introduce how to improve the interpretability of neural language models by making their decision-making transparent and rationalized. Finally\, I will discuss how to diagnose and improve models (e.g.\, robustness) through the lens of explanations. I will conclude with future research directions that are centered around model interpretability and committed to facilitating communications and interactions between intelligent machines\, system developers\, and end users for long-term trustworthy AI.
\nBiography
\nHanjie Chen is a Ph.D. candidate in Computer Science at the University of Virginia\, advised by Prof. Yangfeng Ji. Her research interests lie in Trustworthy AI\, Natural Language Processing (NLP)\, and Interpretable Machine Learning. She develops interpretation techniques to explain neural language models and make their prediction behavior transparent and reliable. She is a recipient of the Carlos and Esther Farrar Fellowship and the Best Poster Award at ACM CAPWIC 2021. Her work has been published at top-tier NLP/AI conferences (e.g.\, ACL\, AAAI\, EMNLP\, NAACL)\, and she was selected as a National Center for Women & Information Technology (NCWIT) Collegiate Award Finalist in 2021. She (as the primary instructor) co-designed and taught the course Interpretable Machine Learning\, and was awarded the UVA CS Outstanding Graduate Teaching Award and was a University-wide Graduate Teaching Awards Nominee (top 5% of graduate instructors). More details can be found at https://www.cs.virginia.edu/~hc9mx
DTSTART;TZID=America/New_York:20230313T120000 DTEND;TZID=America/New_York:20230313T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Hanjie Chen (University of Virginia) “Bridging Humans and Machines: Techniques for Trustworthy NLP” URL:https://www.clsp.jhu.edu/events/hanjie-chen-university-of-virginia/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Chen\,February END:VEVENT BEGIN:VEVENT UID:ai1ec-24507@www.clsp.jhu.edu DTSTAMP:20240329T012007Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them\, or ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine Crisis\, require a time-sensitive comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts when preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness. In this talk I will present SmartBook\, a novel framework for a task that cannot be solved by large language models alone: consuming large volumes of multimodal multilingual news data and producing a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess. Given “What-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events\, and automatically generate a complex event graph along with a timeline of news articles that describe new simulated events and character-centric stories based on a new Λ-shaped attention mask that can generate text with infinite length.
By effectively simulating disaster scenarios in both event graph and natural language format\, we expect SmartBook will greatly assist humanitarian workers and policymakers to exercise reality checks\, and thus better prevent and respond to future disasters.
\nBio
\nHeng Ji is a professor in the Computer Science Department\, and an affiliated faculty member in the Electrical and Computer Engineering Department and the Coordinated Science Laboratory\, at the University of Illinois Urbana-Champaign. She is an Amazon Scholar. She is the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing\, especially on Multimedia Multilingual Information Extraction\, Knowledge-enhanced Large Language Models\, Knowledge-driven Generation and Conversational AI. She was selected as a Young Scientist to attend the 6th World Laureates Association Forum\, and selected to participate in DARPA AI Forward in 2023. She was selected as a “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. The awards she has received include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, the “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, the NSF CAREER award in 2009\, PACLIC2012 Best Paper Runner-up\, the “Best of ICDM2013” paper award\, the “Best of SDM2013” paper award\, an ACL2018 Best Demo Paper nomination\, the ACL2020 Best Demo Paper Award\, the NAACL2021 Best Demo Paper Award\, Google Research Awards in 2009 and 2014\, IBM Watson Faculty Awards in 2012 and 2014\, and Bosch Research Awards in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023.
She is the lead of many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge networks construction\, the DARPA ECOLE MIRACLE team\, the DARPA KAIROS RESIN team and the DARPA DEFT Tinker Bell team. She has coordinated the NIST TAC Knowledge Base Population task 2010-2022. She was an associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as the Program Committee Co-Chair of many conferences including NAACL-HLT2018 and AACL-IJCNLP2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney).
DTSTART;TZID=America/New_York:20240405T120000 DTEND;TZID=America/New_York:20240405T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, Maryland 21218 SEQUENCE:0 SUMMARY:Heng Ji (University of Illinois Urbana-Champaign) “SmartBook: an AI Prophetess for Disaster Reporting and Forecasting” URL:https://www.clsp.jhu.edu/events/heng-ji-university-of-illinois-urbana-champaign-smartbook-an-ai-prophetess-for-disaster-reporting-and-forecasting/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Ji END:VEVENT END:VCALENDAR