BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21259@www.clsp.jhu.edu
DTSTAMP:20240328T214154Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nNatural language processing has been revolutionized by neural networks\, which perform impressively well in applications such as machine translation and question answering. Despite their success\, neural networks still have some substantial shortcomings: their internal workings are poorly understood\, and they are notoriously brittle\, failing on example types that are rare in their training data. In this talk\, I will use the unifying thread of hierarchical syntactic structure to discuss approaches for addressing these shortcomings. First\, I will argue for a new evaluation paradigm based on targeted\, hypothesis-driven tests that better illuminate what models have learned\; using this paradigm\, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g.\, concluding that “The book on the table is blue” implies “The table is blue”). Second\, I will show how these behavioral failings can be explained through analysis of models’ inductive biases and internal representations\, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning\, structured architectures\, and data augmentation.
\nBiography
\nTom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. As an undergraduate\, he studied computational linguistics at Yale. His research combines natural language processing\, cognitive science\, and machine learning to study how we can achieve robust generalization in models of language\, as this remains one of the main areas where current AI systems fall short. In particular\, he focuses on inductive biases and representations of linguistic structure\, since these are two of the major components that determine how learners generalize to novel types of input.
DTSTART;TZID=America/New_York:20220131T120000
DTEND;TZID=America/New_York:20220131T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Tom McCoy (Johns Hopkins University) “Opening the Black Box of Deep Learning: Representations\, Inductive Biases\, and Robustness”
URL:https://www.clsp.jhu.edu/events/tom-mccoy-johns-hopkins-university-opening-the-black-box-of-deep-learning-representations-inductive-biases-and-robustness/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,January\,McCoy
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21267@www.clsp.jhu.edu
DTSTAMP:20240328T214154Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nIn this talk\, I present a multipronged strategy for zero-shot cross-lingual Information Extraction\, that is\, the construction of an IE model for some target language given existing annotations exclusively in some other language. This work is part of the JHU team’s effort under the IARPA BETTER program. I explore data augmentation techniques including data projection and self-training\, and how different pretrained encoders impact them. We find through extensive experiments and extension of techniques that a combination of approaches\, both new and old\, leads to better performance than any one cross-lingual strategy in particular.
\nBiography
\nAbstract
\nNon-invasive neural interfaces have the potential to transform human-computer interaction by providing users with low-friction\, information-rich\, always-available inputs. Reality Labs at Meta is developing such an interface for the control of augmented reality devices based on electromyographic (EMG) signals captured at the wrist. Speech and audio technologies turn out to be especially well suited to unlocking the full potential of these signals and interactions\, and this talk will present several specific problems and the speech and audio approaches that have advanced us towards this ultimate goal of effortless and joyful interfaces. We will provide the necessary neuroscientific background to understand these signals\, describe automatic speech recognition-inspired interfaces for generating text and beamforming-inspired interfaces for identifying individual neurons\, and then explain how they connect with egocentric machine intelligence tasks that might reside on these devices.
\nBiography
\nMichael I Mandel is a Research Scientist in Reality Labs at Meta. Previously\, he was an Associate Professor of Computer and Information Science at Brooklyn College and the CUNY Graduate Center\, working at the intersection of machine learning\, signal processing\, and psychoacoustics. He earned his BSc in Computer Science from the Massachusetts Institute of Technology and his MS and PhD with distinction in Electrical Engineering from Columbia University as a Fu Foundation Presidential Scholar. He was an FQRNT Postdoctoral Research Fellow in the Machine Learning laboratory (LISA/MILA) at the Université de Montréal\, an Algorithm Developer at Audience Inc.\, and a Research Scientist in Computer Science and Engineering at the Ohio State University. His work has been supported by the National Science Foundation\, including via a CAREER award\, the Alfred P. Sloan Foundation\, and Google\, Inc.
DTSTART;TZID=America/New_York:20240129T120000
DTEND;TZID=America/New_York:20240129T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Michael I Mandel (Meta) “Speech and Audio Processing in Non-Invasive Brain-Computer Interfaces at Meta”
URL:https://www.clsp.jhu.edu/events/michael-i-mandel-cuny/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,January\,Mandel
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24507@www.clsp.jhu.edu
DTSTAMP:20240328T214154Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them\, or ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine Crisis\, require a time-sensitive comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts when preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness. In this talk I will present SmartBook\, a novel framework addressing a task that cannot be solved by large language models alone: consuming large volumes of multimodal\, multilingual news data to produce a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection\, and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess. Given “what-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events and automatically generate a complex event graph along with a timeline of news articles that describe new simulated events and character-centric stories\, based on a new Λ-shaped attention mask that can generate text with infinite length.
By effectively simulating disaster scenarios in both event graph and natural language format\, we expect SmartBook will greatly assist humanitarian workers and policymakers in exercising reality checks\, and thus in better preventing and responding to future disasters.
\nBio
\nHeng Ji is a professor in the Computer Science Department\, and an affiliated faculty member in the Electrical and Computer Engineering Department and the Coordinated Science Laboratory\, of the University of Illinois Urbana-Champaign. She is an Amazon Scholar. She is the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing\, especially Multimedia Multilingual Information Extraction\, Knowledge-enhanced Large Language Models\, Knowledge-driven Generation\, and Conversational AI. She was selected as a Young Scientist to attend the 6th World Laureates Association Forum\, and selected to participate in DARPA AI Forward in 2023. She was selected as a “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. The awards she has received include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, the “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, an NSF CAREER award in 2009\, the PACLIC2012 Best Paper runner-up\, the “Best of ICDM2013” paper award\, the “Best of SDM2013” paper award\, an ACL2018 Best Demo Paper nomination\, the ACL2020 Best Demo Paper Award\, the NAACL2021 Best Demo Paper Award\, Google Research Awards in 2009 and 2014\, IBM Watson Faculty Awards in 2012 and 2014\, and Bosch Research Awards in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023.
She is the lead of many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge network construction\, the DARPA ECOLE MIRACLE team\, the DARPA KAIROS RESIN team\, and the DARPA DEFT Tinker Bell team. She coordinated the NIST TAC Knowledge Base Population task in 2010-2022. She was an associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as Program Committee Co-Chair of many conferences\, including NAACL-HLT2018 and AACL-IJCNLP2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney).
DTSTART;TZID=America/New_York:20240405T120000
DTEND;TZID=America/New_York:20240405T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, Maryland 21218
SEQUENCE:0
SUMMARY:Heng Ji (University of Illinois Urbana-Champaign) “SmartBook: an AI Prophetess for Disaster Reporting and Forecasting”
URL:https://www.clsp.jhu.edu/events/heng-ji-university-of-illinois-urbana-champaign-smartbook-an-ai-prophetess-for-disaster-reporting-and-forecasting/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,April\,Ji
END:VEVENT
END:VCALENDAR