BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-23894@www.clsp.jhu.edu
DTSTAMP:20240329T071829Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nThe use of NLP in the realm of financial technology is broad and complex\, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks\; however\, no LLM specialized for the financial domain has been reported in the literature. In this work\, we present BloombergGPT\, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources\, perhaps the largest domain-specific dataset yet\, augmented with 345 billion tokens from general-purpose datasets. We validate BloombergGPT on standard LLM benchmarks\, open financial benchmarks\, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally\, we explain our modeling choices\, training process\, and evaluation methodology.
\nBiography
Mark Dredze is the John C. Malone Professor of Computer Science at Johns Hopkins University and the Director of Research (Foundations of AI) for the JHU AI-X Foundry. He develops artificial intelligence systems based on natural language processing and explores applications to public health and medicine.
\nProf. Dredze is affiliated with the Malone Center for Engineering in Healthcare and the Center for Language and Speech Processing\, among others. He holds a joint appointment in the Biomedical Informatics & Data Science Section (BIDS)\, under the Department of Medicine (DOM)\, Division of General Internal Medicine (GIM)\, in the School of Medicine. He obtained his PhD from the University of Pennsylvania in 2009.
DTSTART;TZID=America/New_York:20230918T120000
DTEND;TZID=America/New_York:20230918T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Dredze (Johns Hopkins University) “BloombergGPT: A Large Language Model for Finance”
URL:https://www.clsp.jhu.edu/events/mark-dredze-johns-hopkins-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Dredze\,September
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24481@www.clsp.jhu.edu
DTSTAMP:20240329T071829Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nNatural language provides an intuitive and powerful interface to access knowledge at scale. Modern language systems draw information from two rich knowledge sources: (1) information stored in their parameters during massive pretraining and (2) documents retrieved at inference time. Yet\, we are far from building systems that can reliably provide information from such knowledge sources. In this talk\, I will discuss paths toward more robust systems. In the first part of the talk\, I will present a module for scaling retrieval-based knowledge augmentation. We learn a compressor that maps retrieved documents into textual summaries prior to in-context integration. This not only reduces the computational costs but also filters irrelevant or incorrect information. In the second half of the talk\, I will discuss the challenges of updating knowledge stored in model parameters and propose a method to prevent models from reciting outdated information by identifying facts that are prone to rapid change. I will conclude my talk by proposing an interactive system that can elicit information from users when needed.
\nBiography
\nEunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin. Prior to UT\, she spent a year at Google AI as a visiting researcher. Her research area spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in a dynamic real-world context. She is a recipient of a Facebook research fellowship\, a Google faculty research award\, a Sony faculty award\, and an outstanding paper award at EMNLP. She received a Ph.D. in computer science and engineering from the University of Washington and a B.A. in mathematics and computer science from Cornell University.
DTSTART;TZID=America/New_York:20240315T120000
DTEND;TZID=America/New_York:20240315T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Eunsol Choi (University of Texas at Austin) “Knowledge-Rich Language Systems in a Dynamic World”
URL:https://www.clsp.jhu.edu/events/eunsol-choi-university-of-texas-at-austin-knowledge-rich-language-systems-in-a-dynamic-world/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,Choi\,March
END:VEVENT
END:VCALENDAR