BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-21259@www.clsp.jhu.edu DTSTAMP:20240329T064504Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nNatural language processing has been revolutionized by neural networks\, which perform impressively well in applications such as machine translation and question answering. Despite their success\, neural networks still have some substantial shortcomings: their internal workings are poorly understood\, and they are notoriously brittle\, failing on example types that are rare in their training data. In this talk\, I will use the unifying thread of hierarchical syntactic structure to discuss approaches for addressing these shortcomings. First\, I will argue for a new evaluation paradigm based on targeted\, hypothesis-driven tests that better illuminate what models have learned\; using this paradigm\, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g.\, to conclude that “The book on the table is blue” implies “The table is blue”). Second\, I will show how these behavioral failings can be explained through analysis of models’ inductive biases and internal representations\, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning\, structured architectures\, and data augmentation.
\nBiography
\nTom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. As an undergraduate\, he studied computational linguistics at Yale. His research combines natural language processing\, cognitive science\, and machine learning to study how we can achieve robust generalization in models of language\, as this remains one of the main areas where current AI systems fall short. In particular\, he focuses on inductive biases and representations of linguistic structure\, since these are two of the major components that determine how learners generalize to novel types of input.
DTSTART;TZID=America/New_York:20220131T120000 DTEND;TZID=America/New_York:20220131T131500 LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Tom McCoy (Johns Hopkins University) “Opening the Black Box of Deep Learning: Representations\, Inductive Biases\, and Robustness” URL:https://www.clsp.jhu.edu/events/tom-mccoy-johns-hopkins-university-opening-the-black-box-of-deep-learning-representations-inductive-biases-and-robustness/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,January\,McCoy END:VEVENT BEGIN:VEVENT UID:ai1ec-21270@www.clsp.jhu.edu DTSTAMP:20240329T064504Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have questioned the robustness of longitudinal analyses based on statistical methods due to issues of temporal bias and semantic shift. To what extent are changes in semantics over time affecting the reliability of longitudinal analyses? We examine this question through a case study: understanding shifts in mental health during the course of the COVID-19 pandemic. We demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and improve predictive generalization over time. Ultimately\, we find that these analyses are critical to producing accurate longitudinal studies of social media.
DTSTART;TZID=America/New_York:20220207T120000 DTEND;TZID=America/New_York:20220207T131500 LOCATION:In Person or Virtual Option @ https://wse.zoom.us/j/96735183473 @ 234 Ames Hall\, 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Keith Harrigian “The Problem of Semantic Shift in Longitudinal Monitoring of Social Media: A Case Study on Mental Health during the COVID-19 Pandemic” URL:https://www.clsp.jhu.edu/events/student-seminar-keith-harrigian-the-problem-of-semantic-shift-in-longitudinal-monitoring-of-social-media-a-case-study-on-mental-health-during-the-covid-19-pandemic/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,February\,Harrigian END:VEVENT BEGIN:VEVENT UID:ai1ec-21616@www.clsp.jhu.edu DTSTAMP:20240329T064504Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have shown that the absence of appropriate tuning\, specifically in the presence of semantic shift\, can hinder the robustness of the underlying methods. However\, little is known about the practical effect this sensitivity may have on downstream longitudinal analyses. We explore this gap in the literature through a timely case study: understanding shifts in depression during the course of the COVID-19 pandemic. We find that inclusion of only a small number of semantically-unstable features can promote significant changes in longitudinal estimates of our target outcome. At the same time\, we demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and\, in turn\, improve predictive generalization.
DTSTART;TZID=America/New_York:20220318T120000 DTEND;TZID=America/New_York:20220318T131500 LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Keith Harrigian “The Problem of Semantic Shift in Longitudinal Monitoring of Social Media” URL:https://www.clsp.jhu.edu/events/student-seminar-keith-harrigian-the-problem-of-semantic-shift-in-longitudinal-monitoring-of-social-media/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Harrigian\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24457@www.clsp.jhu.edu DTSTAMP:20240329T064504Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nAs artificial intelligence (AI) continues to rapidly expand into existing healthcare infrastructure – e.g.\, clinical decision support\, administrative tasks\, and public health surveillance – it is perhaps more important than ever to reflect on the broader purpose of such systems. While much focus has been on the potential for this technology to improve general health outcomes\, there also exists a significant\, but understated\, opportunity to use this technology to address health-related disparities. Accomplishing the latter depends not only on our ability to effectively identify addressable areas of systemic inequality and translate them into tasks that are machine learnable\, but also on our ability to measure\, interpret\, and counteract barriers in training data that may inhibit robustness to distribution shift upon deployment (i.e.\, new populations\, temporal dynamics). In this talk\, we will discuss progress made along both of these dimensions. We will begin by providing background on the state of AI for promoting health equity. Then\, we will present results from a recent clinical phenotyping project and discuss their implications for prevailing views regarding language model robustness in clinical applications. Finally\, we will showcase ongoing efforts to proactively address systemic inequality in healthcare by identifying and characterizing stigmatizing language in medical records.
DTSTART;TZID=America/New_York:20240226T120000 DTEND;TZID=America/New_York:20240226T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Keith Harrigian (JHU) “Fighting Bias From Bias: Robust Natural Language Processing Techniques to Promote Health Equity” URL:https://www.clsp.jhu.edu/events/keith-harrigian-jhu-fighting-bias-from-bias-robust-natural-language-processing-techniques-to-promote-health-equity/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,February\,Harrigian END:VEVENT END:VCALENDAR