BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-21041@www.clsp.jhu.edu DTSTAMP:20240329T141916Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nNarration is a universal human practice that serves as a key site of education\, collective memory\, fostering social belief systems\, and furthering human creativity. Recent studies in economics (Shiller\, 2020)\, climate science (Bushell et al.\, 2017)\, political polarization (Kubin et al.\, 2021)\, and mental health (Adler et al.\, 2016) suggest an emerging interdisciplinary consensus that narrative is a central concept for understanding human behavior and beliefs. For close to half a century\, the field of narratology has developed a rich set of theoretical frameworks for understanding narrative. And yet these theories have largely gone untested on large\, heterogeneous collections of texts. Scholars continue to generate schemas by extrapolating from small numbers of manually observed documents. In this talk\, I will discuss how we can use machine learning to develop data-driven theories of narration to better understand what Labov and Waletzky called “the simplest and most fundamental narrative structures.” How can machine learning help us approach what we might call a minimal theory of narrativity?
\nAndrew Piper is Professor and William Dawson Scholar in the Department of Languages\, Literatures\, and Cultures at McGill University. He is the director of .txtlab\, a laboratory for cultural analytics\, and editor of the /Journal of Cultural Analytics/\, an open-access journal dedicated to the computational study of culture. He is the author of numerous books and articles on the relationship of technology and reading\, including /Book Was There: Reading in Electronic Times/ (Chicago 2012)\, /Enumerations: Data and Literary Study/ (Chicago 2018)\, and most recently\, /Can We Be Wrong? The Problem of Textual Evidence in a Time of Data/ (Cambridge 2020).
DTSTART;TZID=America/New_York:20211112T120000 DTEND;TZID=America/New_York:20211112T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Andrew Piper (McGill University) “How can we use machine learning to understand narration?” URL:https://www.clsp.jhu.edu/events/andrew-piper-mcgill-university-how-can-we-use-machine-learning-to-understand-narration/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,November\,Piper END:VEVENT BEGIN:VEVENT UID:ai1ec-23316@www.clsp.jhu.edu DTSTAMP:20240329T141916Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nUnderstanding the implications underlying a text is critical to assessing its impact\, in particular the social dynamics that may result from a reading of the text. This requires endowing artificial intelligence (AI) systems with pragmatic reasoning\, for example to correctly conclude that the statement “Epidemics and cases of disease in the 21st century are ‘staged’” relates to unfounded conspiracy theories. In this talk\, I discuss how shortcomings in the ability of current AI systems to reason about pragmatics present challenges to equitable detection of false or harmful language. I demonstrate how these shortcomings can be addressed by imposing human-interpretable structure on deep learning architectures using insights from linguistics.
\nIn the first part of the talk\, I describe how adversarial text generation algorithms can be used to improve robustness of content moderation systems. I then introduce a pragmatic formalism for reasoning about harmful implications conveyed by social media text. I show how this pragmatic approach can be combined with generative neural language models to uncover implications of news headlines. I also address the bottleneck to progress in text generation posed by gaps in evaluation of factuality. I conclude by showing how context-aware content moderation can be used to ensure safe interactions with conversational agents. \nBiography
\nSaadia Gabriel is a PhD candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington\, advised by Prof. Yejin Choi and Prof. Franziska Roesner. Her research revolves around natural language processing and machine learning\, with a particular focus on building systems for understanding how social commonsense manifests in text (i.e. how people typically behave in social scenarios)\, as well as mitigating the spread of false or harmful text (e.g. Covid-19 misinformation). Her work has been covered by a wide range of media outlets like Forbes and TechCrunch. It has also received a 2019 ACL best short paper nomination\, a 2019 IROS RoboCup best paper nomination\, and won a best paper award at the 2020 WeCNLP summit. Prior to her PhD\, Saadia received a BA summa cum laude from Mount Holyoke College in Computer Science and Mathematics.
\nDTSTART;TZID=America/New_York:20230227T120000 DTEND;TZID=America/New_York:20230227T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Saadia Gabriel (University of Washington) “Socially Responsible and Factual Reasoning for Equitable AI Systems” URL:https://www.clsp.jhu.edu/events/saadia-gabriel-university-of-washington-socially-responsible-and-factual-reasoning-for-equitable-ai-systems/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,February\,Gabriel END:VEVENT BEGIN:VEVENT UID:ai1ec-24157@www.clsp.jhu.edu DTSTAMP:20240329T141916Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nIn this talk\, I will present a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE\, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio\, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens\, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder\, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically\, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks\, outperforming other recent models that use external supervised pre-training.
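The high-ratio masking and mask-token padding steps in the abstract can be sketched in a few lines. This is a minimal NumPy illustration with names of my own choosing, not code from the Audio-MAE release; a real model would produce the visible-patch embeddings with a Transformer encoder, which is omitted here:

```python
import numpy as np

def mask_patches(patches, mask_ratio=0.8, seed=0):
    """MAE-style random masking: keep only a small fraction of the
    flattened spectrogram patches (shape (N, D)) for the encoder."""
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_keep = int(round(n * (1 - mask_ratio)))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])   # patches fed to the encoder
    mask_idx = np.sort(perm[n_keep:])   # patches to be reconstructed
    return patches[keep_idx], keep_idx, mask_idx

def pad_with_mask_tokens(encoded_visible, keep_idx, mask_idx, mask_token):
    """Decoder input: re-insert a (learned) mask token at every masked
    position so the sequence is restored to its original patch order."""
    n = len(keep_idx) + len(mask_idx)
    full = np.tile(mask_token, (n, 1))
    full[keep_idx] = encoded_visible
    return full

# Toy example: a 20-patch "spectrogram" with 4-dim patch embeddings.
patches = np.arange(80, dtype=float).reshape(20, 4)
visible, keep_idx, mask_idx = mask_patches(patches, mask_ratio=0.8)
decoder_in = pad_with_mask_tokens(visible, keep_idx, mask_idx,
                                  mask_token=np.zeros((1, 4)))
# With an 80% masking ratio, only 4 of 20 patches reach the encoder;
# the decoder sees all 20 positions and reconstructs the masked ones.
```

The reconstruction loss (mean squared error on the masked patches only) and the local window attention mentioned in the abstract are left out for brevity.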
\nBio\nFlorian Metze is a Research Scientist Manager at Meta AI in New York\, supporting a team of researchers and engineers working on multi-modal (image\, video\, audio\, text) content understanding for Meta’s Family of Apps (Instagram\, Threads\, Facebook\, WhatsApp). He used to be an Associate Research Professor at Carnegie Mellon University\, in the School of Computer Science’s Language Technologies Institute\, where he is still an Adjunct Professor. He is also a co-founder of Abridge\, a company working on extracting information from doctor-patient conversations. His work covers many areas of speech recognition and multi-media analysis with a focus on end-to-end deep learning. Currently\, he focuses on multi-modal processing of videos\, and using that information to recommend unconnected content. In the past\, he has worked on low-resource and multi-lingual speech processing\, speech recognition with articulatory features\, large-scale multi-media retrieval and summarization\, information extraction from medical interviews\, and recognition of personality or similar meta-data from speech.
\nFor more information\, please see http://www.cs.cmu.edu/directory/fmetze
\nDTSTART;TZID=America/New_York:20231110T120000 DTEND;TZID=America/New_York:20231110T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Florian Metze (CMU) “Masked Autoencoders that Listen” URL:https://www.clsp.jhu.edu/events/florian-metze-cmu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Metze\,November END:VEVENT END:VCALENDAR