BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-22422@www.clsp.jhu.edu
DTSTAMP:20240329T064212Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nZipf’s law is commonly glossed by the aphorism “infrequent words are frequent\,” but in practice\, it has often meant that there are three types of words: frequent\, infrequent\, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words\, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language\, for example\, is 1.44 billion over N to the 1.09. In languages with sufficient data\, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages\, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data\, unsupervised ASR methods can be proven to converge\, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability\, for example\, can cause speech patterns that were never seen in the training database\, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data\, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.\nBiography\nMark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign. He has published research in speech production and perception\, source separation\, voice conversion\, and low-resource automatic speech recognition.
DTSTART;TZID=America/New_York:20221209T120000
DTEND;TZID=America/New_York:20221209T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Hasegawa-Johnson (University of Illinois Urbana-Champaign) “Zipf’s Law Suggests a Three-Pronged Approach to Inclusive Speech Recognition”
URL:https://www.clsp.jhu.edu/events/mark-hasegawa-johnson-university-of-illinois-urbana-champaign/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,December\,Hasegawa-Johnson
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23894@www.clsp.jhu.edu
DTSTAMP:20240329T064212Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nThe use of NLP in the realm of financial technology is broad and complex\, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks\; however\, no LLM specialized for the financial domain has been reported in the literature. In this work\, we present BloombergGPT\, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources\, perhaps the largest domain-specific dataset yet\, augmented with 345 billion tokens from general-purpose datasets. We validate BloombergGPT on standard LLM benchmarks\, open financial benchmarks\, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally\, we explain our modeling choices\, training process\, and evaluation methodology.\nBiography\nMark Dredze is the John C Malone Professor of Computer Science at Johns Hopkins University and the Director of Research (Foundations of AI) for the JHU AI-X Foundry. He develops Artificial Intelligence Systems based on natural language processing and explores applications to public health and medicine.\nProf. Dredze is affiliated with the Malone Center for Engineering in Healthcare\, the Center for Language and Speech Processing\, among others. He holds a joint appointment in the Biomedical Informatics & Data Science Section (BIDS)\, under the Department of Medicine (DOM)\, Division of General Internal Medicine (GIM) in the School of Medicine. He obtained his PhD from the University of Pennsylvania in 2009.
DTSTART;TZID=America/New_York:20230918T120000
DTEND;TZID=America/New_York:20230918T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Dredze (Johns Hopkins University) “BloombergGPT: A Large Language Model for Finance”
URL:https://www.clsp.jhu.edu/events/mark-dredze-johns-hopkins-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Dredze\,September
END:VEVENT
END:VCALENDAR