BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20117@www.clsp.jhu.edu
DTSTAMP:20240328T141530Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nNeural sequence generation systems oftentimes generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence\, i.e. the mode\, under such a model must also be the best sequence it has to offer (often in a given context\, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence oftentimes is empty under many state-of-the-art NMT models. This follows a large list of other pathologies and biases observed in NMT and other sequence generation models: a length bias\, larger beams degrading performance\, exposure bias\, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search\, e.g. beam search\, that introduces many of these pathologies and biases\, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribution and not just its mode. We provide one example of such a decision rule\, and show that this is a fruitful research direction.\nBiography\nI am an assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Examples of specific problems I am interested in include language modelling\, machine translation\, syntactic parsing\, textual entailment\, text classification\, and question answering.\nI also develop techniques to approach general machine learning problems such as probabilistic inference\, gradient and density estimation.\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.
DTSTART;TZID=America/New_York:20210419T120000
DTEND;TZID=America/New_York:20210419T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation”
URL:https://www.clsp.jhu.edu/events/wilker-aziz-university-of-amsterdam/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,April\,Aziz
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23894@www.clsp.jhu.edu
DTSTAMP:20240328T141530Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nThe use of NLP in the realm of financial technology is broad and complex\, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks\; however\, no LLM specialized for the financial domain has been reported in the literature. In this work\, we present BloombergGPT\, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources\, perhaps the largest domain-specific dataset yet\, augmented with 345 billion tokens from general-purpose datasets. We validate BloombergGPT on standard LLM benchmarks\, open financial benchmarks\, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally\, we explain our modeling choices\, training process\, and evaluation methodology.\nBiography\nMark Dredze is the John C Malone Professor of Computer Science at Johns Hopkins University and the Director of Research (Foundations of AI) for the JHU AI-X Foundry. He develops artificial intelligence systems based on natural language processing and explores applications to public health and medicine.\nProf. Dredze is affiliated with the Malone Center for Engineering in Healthcare and the Center for Language and Speech Processing\, among others. He holds a joint appointment in the Biomedical Informatics & Data Science Section (BIDS)\, under the Department of Medicine (DOM)\, Division of General Internal Medicine (GIM) in the School of Medicine. He obtained his PhD from the University of Pennsylvania in 2009.
DTSTART;TZID=America/New_York:20230918T120000
DTEND;TZID=America/New_York:20230918T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Dredze (Johns Hopkins University) “BloombergGPT: A Large Language Model for Finance”
URL:https://www.clsp.jhu.edu/events/mark-dredze-johns-hopkins-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Dredze\,September
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24481@www.clsp.jhu.edu
DTSTAMP:20240328T141530Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nNatural language provides an intuitive and powerful interface to access knowledge at scale. Modern language systems draw information from two rich knowledge sources: (1) information stored in their parameters during massive pretraining and (2) documents retrieved at inference time. Yet\, we are far from building systems that can reliably provide information from such knowledge sources. In this talk\, I will discuss paths for more robust systems. In the first part of the talk\, I will present a module for scaling retrieval-based knowledge augmentation. We learn a compressor that maps retrieved documents into textual summaries prior to in-context integration. This not only reduces the computational costs but also filters irrelevant or incorrect information. In the second half of the talk\, I will discuss the challenges of updating knowledge stored in model parameters and propose a method to prevent models from reciting outdated information by identifying facts that are prone to rapid change. I will conclude my talk by proposing an interactive system that can elicit information from users when needed.\nBiography\nEunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin. Prior to UT\, she spent a year at Google AI as a visiting researcher. Her research area spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in a dynamic real-world context. She is a recipient of a Facebook research fellowship\, Google faculty research award\, Sony faculty award\, and an outstanding paper award at EMNLP. She received a Ph.D. in computer science and engineering from the University of Washington and a B.A. in mathematics and computer science from Cornell University.
DTSTART;TZID=America/New_York:20240315T120000
DTEND;TZID=America/New_York:20240315T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Eunsol Choi (University of Texas at Austin) “Knowledge-Rich Language Systems in a Dynamic World”
URL:https://www.clsp.jhu.edu/events/eunsol-choi-university-of-texas-at-austin-knowledge-rich-language-systems-in-a-dynamic-world/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,Choi\,March
END:VEVENT
END:VCALENDAR