BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-22374@www.clsp.jhu.edu
DTSTAMP:20240328T235808Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nIn recent years\, the field of Natural Language Processing has seen a profusion of tasks\, datasets\, and systems that facilitate reasoning about real-world situations through language (e.g.\, RTE\, MNLI\, COMET). Such systems might\, for example\, be trained to consider a situation where “somebody dropped a glass on the floor\,” and conclude it is likely that “the glass shattered” as a result. In this talk\, I will discuss three pieces of work that revisit assumptions made by or about these systems. In the first work\, I develop a Defeasible Inference task\, which enables a system to recognize when a prior assumption it has made may no longer be true in light of new evidence it receives. The second work I will discuss revisits partial-input baselines\, which have highlighted issues of spurious correlations in natural language reasoning datasets and led to unfavorable assumptions about models’ reasoning abilities. In particular\, I will discuss experiments that show models may still learn to reason in the presence of spurious dataset artifacts. Finally\, I will touch on work analyzing harmful assumptions made by reasoning models in the form of social stereotypes\, particularly in the case of free-form generative reasoning models.
\nBiography
\nRachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland\, College Park. She holds joint appointments in the Department of Linguistics and the Institute for Advanced Computer Studies (UMIACS). In 2019\, Rachel completed her Ph.D. in Computer Science at Johns Hopkins University in the Center for Language and Speech Processing. From 2019-2020\, she was a Young Investigator at the Allen Institute for AI in Seattle\, and a visiting researcher at the University of Washington. Her research interests include computational semantics\, common-sense reasoning\, and issues of social bias and fairness in NLP.
DTSTART;TZID=America/New_York:20220916T120000
DTEND;TZID=America/New_York:20220916T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Rachel Rudinger (University of Maryland\, College Park) “Not So Fast!: Revisiting Assumptions in (and about) Natural Language Reasoning”
URL:https://www.clsp.jhu.edu/events/rachel-rudinger-university-of-maryland-college-park-not-so-fast-revisiting-assumptions-in-and-about-natural-language-reasoning/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Rudinger\,September
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23894@www.clsp.jhu.edu
DTSTAMP:20240328T235808Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nThe use of NLP in the realm of financial technology is broad and complex\, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks\; however\, no LLM specialized for the financial domain has been reported in the literature. In this work\, we present BloombergGPT\, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources\, perhaps the largest domain-specific dataset yet\, augmented with 345 billion tokens from general-purpose datasets. We validate BloombergGPT on standard LLM benchmarks\, open financial benchmarks\, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally\, we explain our modeling choices\, training process\, and evaluation methodology.
\nBiography
Mark Dredze is the John C. Malone Professor of Computer Science at Johns Hopkins University and the Director of Research (Foundations of AI) for the JHU AI-X Foundry. He develops Artificial Intelligence Systems based on natural language processing and explores applications to public health and medicine.
\nProf. Dredze is affiliated with the Malone Center for Engineering in Healthcare\, the Center for Language and Speech Processing\, among others. He holds a joint appointment in the Biomedical Informatics & Data Science Section (BIDS)\, under the Department of Medicine (DOM)\, Division of General Internal Medicine (GIM) in the School of Medicine. He obtained his PhD from the University of Pennsylvania in 2009.
DTSTART;TZID=America/New_York:20230918T120000
DTEND;TZID=America/New_York:20230918T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Dredze (Johns Hopkins University) “BloombergGPT: A Large Language Model for Finance”
URL:https://www.clsp.jhu.edu/events/mark-dredze-johns-hopkins-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Dredze\,September
END:VEVENT
END:VCALENDAR