BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20117@www.clsp.jhu.edu
DTSTAMP:20240328T224428Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nNeural sequence generation systems oftentimes generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence\, i.e. the mode\, under such a model must also be the best sequence it has to offer (often in a given context\, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence oftentimes is empty under many state-of-the-art NMT models. This follows a large list of other pathologies and biases observed in NMT and other sequence generation models: a length bias\, larger beams degrading performance\, exposure bias\, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search\, e.g. beam search\, that introduces many of these pathologies and biases\, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribution and not just its mode. We provide one example of such a decision rule\, and show that this is a fruitful research direction.\nBiography\nI am an assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Examples of specific problems I am interested in include language modelling\, machine translation\, syntactic parsing\, textual entailment\, text classification\, and question answering.\nI also develop techniques to approach general machine learning problems such as probabilistic inference\, gradient and density estimation.\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.\n \n
DTSTART;TZID=America/New_York:20210419T120000
DTEND;TZID=America/New_York:20210419T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation”
URL:https://www.clsp.jhu.edu/events/wilker-aziz-university-of-amsterdam/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nNeural sequence generation systems oftentimes generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence\, i.e. the mode\, under such a model must also be the best sequence it has to offer (often in a given context\, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence oftentimes is empty under many state-of-the-art NMT models. This follows a large list of other pathologies and biases observed in NMT and other sequence generation models: a length bias\, larger beams degrading performance\, exposure bias\, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search\, e.g. beam search\, that introduces many of these pathologies and biases\, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribution and not just its mode. We provide one example of such a decision rule\, and show that this is a fruitful research direction.
\nBiography
\nI am an assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.
\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Examples of specific problems I am interested in include language modelling\, machine translation\, syntactic parsing\, textual entailment\, text classification\, and question answering.
\nI also develop techniques to approach general machine learning problems such as probabilistic inference\, gradient and density estimation.
\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.
\n\n \n
X-TAGS;LANGUAGE=en-US:2021\,April\,Aziz
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-20120@www.clsp.jhu.edu
DTSTAMP:20240328T224428Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real-world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.\nBiography\nCarolina Parada is a Senior Engineering Manager at Google Robotics. She leads the robot-mobility group\, which focuses on improving robot motion planning\, navigation\, and locomotion\, using reinforcement learning. Prior to that\, she led the camera perception team for self-driving cars at Nvidia for 2 years. She was also a lead with Speech @ Google for 7 years\, where she drove multiple research and engineering efforts that enabled Ok Google\, the Google Assistant\, and Voice-Search. Carolina grew up in Venezuela and moved to the US to pursue a B.S. and M.S. degree in Electrical Engineering at University of Washington and her PhD at Johns Hopkins University at the Center for Language and Speech Processing (CLSP).
DTSTART;TZID=America/New_York:20210423T120000
DTEND;TZID=America/New_York:20210423T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Carolina Parada (Google AI) “State of Robotics @ Google”
URL:https://www.clsp.jhu.edu/events/carolina-parada-google-ai/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Abstract
\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real-world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.
\nBiography
\nCarolina Parada is a Senior Engineering Manager at Google Robotics. She leads the robot-mobility group\, which focuses on improving robot motion planning\, navigation\, and locomotion\, using reinforcement learning. Prior to that\, she led the camera perception team for self-driving cars at Nvidia for 2 years. She was also a lead with Speech @ Google for 7 years\, where she drove multiple research and engineering efforts that enabled Ok Google\, the Google Assistant\, and Voice-Search. Carolina grew up in Venezuela and moved to the US to pursue a B.S. and M.S. degree in Electrical Engineering at University of Washington and her PhD at Johns Hopkins University at the Center for Language and Speech Processing (CLSP).
\n
X-TAGS;LANGUAGE=en-US:2021\,April\,Parada
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-20716@www.clsp.jhu.edu
DTSTAMP:20240328T224428Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nOver the last few years\, deep neural models have taken over the field of natural language processing (NLP)\, brandishing great improvements on many of its sequence-level tasks. But the end-to-end nature of these models makes it hard to figure out whether the way they represent individual words aligns with how language builds itself from the bottom up\, or how lexical changes in register and domain can affect the untested aspects of such representations.\nIn this talk\, I will present NYTWIT\, a dataset created to challenge large language models at the lexical level\, tasking them with identification of processes leading to the formation of novel English words\, as well as with segmentation and recovery of the specific subclass of novel blends. I will then present XRayEmb\, a method which alleviates the hardships of processing these novelties by fitting a character-level encoder to the existing models’ subword tokenizers\; and conclude with a discussion of the drawbacks of current tokenizers’ vocabulary creation schemes.\nBiography\nYuval Pinter is a Senior Lecturer in the Department of Computer Science at Ben-Gurion University of the Negev\, focusing on natural language processing. Yuval got his PhD at the Georgia Institute of Technology School of Interactive Computing as a Bloomberg Data Science PhD Fellow. Before that\, he worked as a Research Engineer at Yahoo Labs and as a Computational Linguist at Ginger Software\, and obtained an MA in Linguistics and a BSc in CS and Mathematics\, both from Tel Aviv University. Yuval blogs (in Hebrew) about language matters on Dagesh Kal.
DTSTART;TZID=America/New_York:20210910T120000
DTEND;TZID=America/New_York:20210910T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD
SEQUENCE:0
SUMMARY:Yuval Pinter (Ben-Gurion University – Virtual Visit) “Challenging and Adapting NLP Models to Lexical Phenomena”
URL:https://www.clsp.jhu.edu/events/yuval-pinter/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nOver the last few years\, deep neural models have taken over the field of natural language processing (NLP)\, brandishing great improvements on many of its sequence-level tasks. But the end-to-end nature of these models makes it hard to figure out whether the way they represent individual words aligns with how language builds itself from the bottom up\, or how lexical changes in register and domain can affect the untested aspects of such representations.
\nIn this talk\, I will present NYTWIT\, a dataset created to challenge large language models at the lexical level\, tasking them with identification of processes leading to the formation of novel English words\, as well as with segmentation and recovery of the specific subclass of novel blends. I will then present XRayEmb\, a method which alleviates the hardships of processing these novelties by fitting a character-level encoder to the existing models’ subword tokenizers\; and conclude with a discussion of the drawbacks of current tokenizers’ vocabulary creation schemes.
\nBiography
\nYuval Pinter is a Senior Lecturer in the Department of Computer Science at Ben-Gurion University of the Negev\, focusing on natural language processing. Yuval got his PhD at the Georgia Institute of Technology School of Interactive Computing as a Bloomberg Data Science PhD Fellow. Before that\, he worked as a Research Engineer at Yahoo Labs and as a Computational Linguist at Ginger Software\, and obtained an MA in Linguistics and a BSc in CS and Mathematics\, both from Tel Aviv University. Yuval blogs (in Hebrew) about language matters on Dagesh Kal.
Abstract
\nText simplification aims to help audiences read and understand a piece of text through lexical\, syntactic\, and discourse modifications\, while remaining faithful to its central idea and meaning. Thanks to large-scale parallel corpora derived from Wikipedia and News\, much of modern-day text simplification research focuses on sentence simplification\, transforming original\, more complex sentences into simplified versions. In this talk\, I present new frontiers that focus on discourse operations. First\, we consider the challenging task of simplifying highly technical language\, in our case\, medical texts. We introduce a new corpus of parallel texts in English comprising technical and lay summaries of all published evidence pertaining to different clinical topics. We then propose a new metric to quantify stylistic differences between the two\, and models for paragraph-level simplification. Second\, we present the first data-driven study of inserting elaborations and explanations during simplification\, and illustrate the richness and complexities of this phenomenon.
\n
Biography
\nAbstract
\nRaytheon BBN participated in the IARPA MATERIAL program\, whose objective is to enable rapid development of language-independent methods for cross-lingual information retrieval (CLIR). The challenging CLIR task of retrieving documents written (or spoken) in one language so that they satisfy an information need expressed in a different language is exacerbated by unique challenges posed by the MATERIAL program: limited training data for automatic speech recognition and machine translation\, scant lexical resources\, non-standardized orthography\, etc. Furthermore\, the format of the queries and the “Query-Weighted Value” performance measure are non-standard and not previously studied in the IR community. In this talk\, we will describe the Raytheon BBN CLIR system\, which was successful at addressing the above challenges and unique characteristics of the program.
\nBiography
\nDamianos Karakos has been at Raytheon BBN for the past nine years\, where he is currently a Senior Principal Engineer\, Research. Before that\, he was research faculty at Johns Hopkins University. He has worked on several Government projects (e.g.\, DARPA GALE\, DARPA RATS\, IARPA BABEL\, IARPA MATERIAL\, IARPA BETTER) and on a variety of HLT-related topics (e.g.\, speech recognition\, speech activity detection\, keyword search\, information retrieval). He has published more than 60 peer-reviewed papers. His research interests lie at the intersection of human language technology and machine learning\, with an emphasis on statistical methods. He obtained a PhD in Electrical Engineering from the University of Maryland\, College Park\, in 2002.
\n\n
Abstract
\nIn recent years\, the field of Natural Language Processing has seen a profusion of tasks\, datasets\, and systems that facilitate reasoning about real-world situations through language (e.g.\, RTE\, MNLI\, COMET). Such systems might\, for example\, be trained to consider a situation where “somebody dropped a glass on the floor\,” and conclude it is likely that “the glass shattered” as a result. In this talk\, I will discuss three pieces of work that revisit assumptions made by or about these systems. In the first work\, I develop a Defeasible Inference task\, which enables a system to recognize when a prior assumption it has made may no longer be true in light of new evidence it receives. The second work I will discuss revisits partial-input baselines\, which have highlighted issues of spurious correlations in natural language reasoning datasets and led to unfavorable assumptions about models’ reasoning abilities. In particular\, I will discuss experiments that show models may still learn to reason in the presence of spurious dataset artifacts. Finally\, I will touch on work analyzing harmful assumptions made by reasoning models in the form of social stereotypes\, particularly in the case of free-form generative reasoning models.
\nBiography
\nRachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland\, College Park. She holds joint appointments in the Department of Linguistics and the Institute for Advanced Computer Studies (UMIACS). In 2019\, Rachel completed her Ph.D. in Computer Science at Johns Hopkins University in the Center for Language and Speech Processing. From 2019-2020\, she was a Young Investigator at the Allen Institute for AI in Seattle\, and a visiting researcher at the University of Washington. Her research interests include computational semantics\, common-sense reasoning\, and issues of social bias and fairness in NLP.
\n
X-TAGS;LANGUAGE=en-US:2022\,Rudinger\,September
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22375@www.clsp.jhu.edu
DTSTAMP:20240328T224428Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nI will present our work on data augmentation using style transfer as a way to improve domain adaptation in sequence labeling tasks. The target domain is social media data\, and the task is named entity recognition (NER). The premise is that we can transform the labelled out of domain data into something that stylistically is more closely related to the target data. Then we can train a model on a combination of the generated data and the smaller amount of in domain data to improve NER prediction performance. I will show recent empirical results on these efforts.\nIf time allows\, I will also give an overview of other research projects I’m currently leading at RiTUAL (Research in Text Understanding and Analysis of Language) lab. The common thread among all these research problems is the scarcity of labeled data.\nBiography\nThamar Solorio is a Professor of Computer Science at the University of Houston (UH). She holds graduate degrees in Computer Science from the Instituto Nacional de Astrofísica\, Óptica y Electrónica\, in Puebla\, Mexico. Her research interests include information extraction from social media data\, enabling technology for code-switched data\, stylistic modeling of text\, and more recently multimodal approaches for online content understanding. She is the director and founder of the RiTUAL Lab at UH. She is the recipient of an NSF CAREER award for her work on authorship attribution\, and recipient of the 2014 Emerging Leader ABIE Award in Honor of Denice Denton. She is currently serving a second term as an elected board member of the North American Chapter of the Association of Computational Linguistics and was PC co-chair for NAACL 2019. She recently joined the team of Editors in Chief for the ACL Rolling Review (ARR) system. Her research is currently funded by the NSF and by ADOBE.
DTSTART;TZID=America/New_York:20220923T120000
DTEND;TZID=America/New_York:20220923T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Thamar Solorio (University of Houston) “Style Transfer for Data Augmentation in Sequence Labeling Tasks”
URL:https://www.clsp.jhu.edu/events/thamar-solorio-university-of-houston-style-transfer-for-data-augmentation-in-sequence-labeling-tasks/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nI will present our work on data augmentation using style transfer as a way to improve domain adaptation in sequence labeling tasks. The target domain is social media data\, and the task is named entity recognition (NER). The premise is that we can transform the labelled out of domain data into something that stylistically is more closely related to the target data. Then we can train a model on a combination of the generated data and the smaller amount of in domain data to improve NER prediction performance. I will show recent empirical results on these efforts.
\nIf time allows\, I will also give an overview of other research projects I’m currently leading at RiTUAL (Research in Text Understanding and Analysis of Language) lab. The common thread among all these research problems is the scarcity of labeled data.
\nBiography
\nThamar Solorio is a Professor of Computer Science at the University of Houston (UH). She holds graduate degrees in Computer Science from the Instituto Nacional de Astrofísica\, Óptica y Electrónica\, in Puebla\, Mexico. Her research interests include information extraction from social media data\, enabling technology for code-switched data\, stylistic modeling of text\, and more recently multimodal approaches for online content understanding. She is the director and founder of the RiTUAL Lab at UH. She is the recipient of an NSF CAREER award for her work on authorship attribution\, and recipient of the 2014 Emerging Leader ABIE Award in Honor of Denice Denton. She is currently serving a second term as an elected board member of the North American Chapter of the Association of Computational Linguistics and was PC co-chair for NAACL 2019. She recently joined the team of Editors in Chief for the ACL Rolling Review (ARR) system. Her research is currently funded by the NSF and by ADOBE.
\n
X-TAGS;LANGUAGE=en-US:2022\,September\,Solorio
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22380@www.clsp.jhu.edu
DTSTAMP:20240328T224428Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nThe availability of large multilingual pre-trained language models has opened up exciting pathways for developing NLP technologies for languages with scarce resources. In this talk I will advocate for the need to go beyond the most common languages in multilingual evaluation\, and on the challenges of handling new\, unseen-during-training languages and varieties. I will also share some of my experiences with working with indigenous and other endangered language communities and activists.\nBiography\n\nAntonios Anastasopoulos is an Assistant Professor in Computer Science at George Mason University. In 2019\, Antonis received his PhD in Computer Science from the University of Notre Dame and then worked as a postdoctoral researcher at the Language Technologies Institute at Carnegie Mellon University. His research interests revolve around computational linguistics and natural language processing with a focus on low-resource settings\, endangered languages\, and cross-lingual learning.\n\n\n
DTSTART;TZID=America/New_York:20220930T120000
DTEND;TZID=America/New_York:20220930T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Antonios Anastasopoulos (George Mason University) “NLP Beyond the Top-100 Languages”
URL:https://www.clsp.jhu.edu/events/antonis-anastasopoulos-george-mason-university/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nThe availability of large multilingual pre-trained language models has opened up exciting pathways for developing NLP technologies for languages with scarce resources. In this talk I will advocate for the need to go beyond the most common languages in multilingual evaluation\, and on the challenges of handling new\, unseen-during-training languages and varieties. I will also share some of my experiences with working with indigenous and other endangered language communities and activists.
\nBiography
\nAntonios Anastasopoulos is an Assistant Professor in Computer Science at George Mason University. In 2019\, Antonis received his PhD in Computer Science from the University of Notre Dame and then worked as a postdoctoral researcher at the Language Technologies Institute at Carnegie Mellon University. His research interests revolve around computational linguistics and natural language processing with a focus on low-resource settings\, endangered languages\, and cross-lingual learning.
\n\n
X-TAGS;LANGUAGE=en-US:2022\,Anastasopoulos\,September
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22400@www.clsp.jhu.edu
DTSTAMP:20240328T224428Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of texts into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing and accessing knowledge in a dedicated memory component.\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU) and our other collaborators in Google Research.\nBiography\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics and recently weather forecast and climate modeling. He has a PhD (2007) in Computer and Information Science from U. of Pennsylvania and B.Sc and M.Sc in Biomedical Engineering from Southeast University (Nanjing\, China).
DTSTART;TZID=America/New_York:20221024T120000
DTEND;TZID=America/New_York:20221024T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Fei Sha (University of Southern California) “Extracting Information from Text into Memory for Knowledge-Intensive Tasks”
URL:https://www.clsp.jhu.edu/events/fei-sha-university-of-southern-california/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of texts into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing and accessing knowledge in a dedicated memory component.
\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU) and our other collaborators in Google Research.
\nBiography
\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics and recently weather forecast and climate modeling. He has a PhD (2007) in Computer and Information Science from U. of Pennsylvania and B.Sc and M.Sc in Biomedical Engineering from Southeast University (Nanjing\, China).
\n
X-TAGS;LANGUAGE=en-US:2022\,October\,Sha
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23515@www.clsp.jhu.edu
DTSTAMP:20240328T224428Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\n\n\n\nHow important are different temporal speech modulations for speech recognition? We answer this question from two complementary perspectives. Firstly\, we quantify the amount of phonetic information in the modulation spectrum of speech by computing the mutual information between temporal modulations and frame-wise phoneme labels. Looking from another perspective\, we ask – which speech modulations an Automatic Speech Recognition (ASR) system prefers for its operation. Data-driven weights are learned over the modulation spectrum and optimized for an end-to-end ASR task. Both methods unanimously agree that speech information is mostly contained in slow modulation. Maximum mutual information occurs around 3-6 Hz which also happens to be the range of modulations most preferred by the ASR. In addition\, we show that the incorporation of this knowledge into ASRs significantly reduces their dependency on the amount of training data.\n
DTSTART;TZID=America/New_York:20230403T120000
DTEND;TZID=America/New_York:20230403T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Samik Sadhu (JHU) “Importance of Different Temporal Modulations of Speech: A Tale of Two Perspectives”
URL:https://www.clsp.jhu.edu/events/student-seminar-samik-sadhu/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nHow important are different temporal speech modulations for speech recognition? We answer this question from two complementary perspectives. Firstly\, we quantify the amount of phonetic information in the modulation spectrum of speech by computing the mutual information between temporal modulations and frame-wise phoneme labels. Looking from another perspective\, we ask – which speech modulations an Automatic Speech Recognition (ASR) system prefers for its operation. Data-driven weights are learned over the modulation spectrum and optimized for an end-to-end ASR task. Both methods unanimously agree that speech information is mostly contained in slow modulation. Maximum mutual information occurs around 3-6 Hz which also happens to be the range of modulations most preferred by the ASR. In addition\, we show that the incorporation of this knowledge into ASRs significantly reduces their dependency on the amount of training data.
\n \nLearning How to Play With The Machines: Taking Stock of Where the Collaboration Between Computational and Social Science Stands
\n\n
Speakers: Jeff Gill\, Ernesto Calvo\, Hale Sirin and Antonios Anastasopoulos
\n X-TAGS;LANGUAGE=en-US:2023\,April\,APSA Roundtable END:VEVENT BEGIN:VEVENT UID:ai1ec-23586@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20230410T120000 DTEND;TZID=America/New_York:20230410T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Ruizhe Huang URL:https://www.clsp.jhu.edu/events/student-seminar-ruizhe-huang/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Huang END:VEVENT BEGIN:VEVENT UID:ai1ec-23588@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nAdvances in open domain Large Language Models (LLMs) starting with BERT and more recently with GPT-4\, PaLM\, and LLaMA have fa cilitated dramatic improvements in conversational systems. These improveme nts include an unprecedented breadth of conversational interactions betwee n humans and machines while maintaining and sometimes surpassing the accur acy of systems trained specifically for known\, closed domains. However\, many applications still require higher levels of accuracy than pre-trained LLMs can provide. There are many studies underway to accomplish this. Bro adly speaking\, the methods assume the pre-trained models are fixed (due t o cost/time)\, and instead look to various augmentation methods including prompting strategies and model adaptation/fine-tuning.\nOne augmentation s trategy leverages the context of the conversation. For example\, who are t he participants and what is known about these individuals (personal contex t)\, what was just said (dialogue context)\, where is the conversation tak ing place (geo context)\, what time of day and season is it (time context) \, etc. A powerful form of context is the shared visual setting of the co nversation between the human(s) and machine. 
The shared visual scene may be from a device (phone\, smart glasses) or represented on a screen (browser\, maps\, etc.). The elements in the visual context can be exploited by grounding the natural language conversational interaction\, thereby changing the priors of certain concepts and increasing the accuracy of the system. In this talk\, I will present some of my historical work in this area as well as my recent work in the AI Virtual Assistant (AVA) Lab at Georgia Tech.\nBio\nDr. Larry Heck is a Professor with a joint appointment in the School of Electrical and Computer Engineering and the School of Interactive Computing at the Georgia Institute of Technology. He holds the Rhesa S. Farmer Distinguished Chair of Advanced Computing Concepts and is a Georgia Research Alliance Eminent Scholar. He received the BSEE from Texas Tech University (1986)\, and the MSEE and PhD EE from the Georgia Institute of Technology (1989\, 1991). He is a Fellow of the IEEE\, was inducted into the Academy of Distinguished Engineering Alumni at Georgia Tech\, and received the Distinguished Engineer Award from the Texas Tech University Whitacre College of Engineering. He was a Senior Research Engineer with SRI (1992-98)\, Vice President of R&D at Nuance (1998-2005)\, Vice President of Search and Advertising Sciences at Yahoo! (2005-2009)\, Chief Scientist of the Microsoft Speech products and Distinguished Engineer in Microsoft Research (2009-2014)\, Principal Scientist with Google Research (2014-2017)\, and CEO of Viv Labs and SVP at Samsung (2017-2021).\n\n DTSTART;TZID=America/New_York:20230414T120000 DTEND;TZID=America/New_York:20230414T131500 LOCATION:Hackerman Hall B17 @ 3400 N.
Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Larry Heck (Georgia Institute of Technology) “The AVA Digital Human: Improving Conversational Interactions through Visually Situated Context” URL:https://www.clsp.jhu.edu/events/larry-heck-georgia-institute-of-technology-the-ava-digital-human-improving-conversational-interactions-through-visually-situated-context/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nAdvances in open domain Large Lan guage Models (LLMs) starting with BERT and more recently with GPT-4\, PaLM \, and LLaMA have facilitated dramatic improvements in conversational syst ems. These improvements include an unprecedented breadth of conversational interactions between humans and machines while maintaining and sometimes surpassing the accuracy of systems trained specifically for known\, closed domains. However\, many applications still require higher levels of accur acy than pre-trained LLMs can provide. There are many studies underway to accomplish this. Broadly speaking\, the methods assume the pre-trained mod els are fixed (due to cost/time)\, and instead look to various augmentatio n methods including prompting strategies and model adaptation/fine-tuning.
\nOne augmentation strategy leverages the conte xt of the conversation. For example\, who are the participants and what is known about these individuals (personal context)\, what was just said (di alogue context)\, where is the conversation taking place (geo context)\, w hat time of day and season is it (time context)\, etc. A powerful form of context is the shared visual setting of the conversation between the huma n(s) and machine. The shared visual scene may be from a device (phone\, sm art glasses) or represented on a screen (browser\, maps\, etc.) The elemen ts in the visual context can be exploited by grounding the natural languag e conversational interaction\, thereby changing the priors of certain conc epts and increasing the accuracy of the system. In this talk\, I will pres ent some of my historical work in this area as well as my recent work in t he AI Virtual Assistant (AVA) Lab at Georgia Tech.
\nBio
\nDr. Larry Heck is a Professor with a joint appointment in the School of Electrical and Computer Engineering and the School of Interactive Computing at the Georgia Institute of Technology. He holds the Rhesa S. Farmer Distinguished Chair of Advanced Computing Concepts and is a Georgia Research Alliance Eminent Scholar. He received the BSEE from Texas Tech University (1986)\, and the MSEE and PhD EE from the Georgia Institute of Technology (1989\, 1991). He is a Fellow of the IEEE\, was inducted into the Academy of Distinguished Engineering Alumni at Georgia Tech\, and received the Distinguished Engineer Award from the Texas Tech University Whitacre College of Engineering. He was a Senior Research Engineer with SRI (1992-98)\, Vice President of R&D at Nuance (1998-2005)\, Vice President of Search and Advertising Sciences at Yahoo! (2005-2009)\, Chief Scientist of the Microsoft Speech products and Distinguished Engineer in Microsoft Research (2009-2014)\, Principal Scientist with Google Research (2014-2017)\, and CEO of Viv Labs and SVP at Samsung (2017-2021).
\n\n
\n X-TAGS;LANGUAGE=en-US:2023\,April\,Heck END:VEVENT BEGIN:VEVENT UID:ai1ec-23590@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nMachine Translation has the ultimate goal of eliminating language barriers. However\, the area has focused mainly on a few languages\, leaving many low-resource languages without support. In this talk\, I will discuss the challenges of bringing translation support for 200 written languages and beyond.\n\nFirst\, I will talk about the No Language Left Behind Project\, where we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. We proposed multiple architectural and training improvements to counteract over-fitting while training on thousands of language-pairs/tasks. We evaluated the performance of over 40\,000 different translation directions.\n\nAfterwards\, I’ll discuss the challenges of pushing translation performance beyond text for languages that don’t have written standards like Hokkien.\nOur models achieve state-of-the-art performance and lay important groundwork towards realizing a universal translation system. At the same time\, we keep making open-source contributions for everyone to keep advancing the research for the languages they care about.\nBio\nPaco is a Research Scientist Manager supporting translation teams in Meta AI (FAIR). He works in the field of machine translation with a focus on low-resource translation (e.g. NLLB\, FLORES) and the aim to break language barriers. He joined Meta in 2016. His research has been published in top-tier NLP venues like ACL and EMNLP. He was the Research co-chair at AMTA (2020-2022).
He has organized several research competitions focused on low-resource translation and data filtering. Paco obtained his PhD from the ITESM in Mexico\, was a visiting scholar at the LTI-CMU from 2008-2009\, and participated in DARPA’s GALE evaluation program. Paco was a post-doc and scientist at the Qatar Computing Research Institute in Qatar in 2012-2016. DTSTART;TZID=America/New_York:20230417T120000 DTEND;TZID=America/New_York:20230417T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Paco Guzman (Meta AI) “Building a Universal Translation System to Break Down Language Barriers” URL:https://www.clsp.jhu.edu/events/paco-guzman-meta-ai/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nOur models achieve state-of-the-art performance and lay important groundwork towards realizing a universal translation system. At the same time\, we keep maki ng open-source contributions for everyone to keep advancing the research f or the languages they care about.
\nBio
\nPaco is a Research Scientist Manager supporting translation teams in Meta AI (FAIR). He works in the field of machine translation with a focus on low-resource translation (e.g. NLLB\, FLORES) and the aim to break language barriers. He joined Meta in 2016. His research has been published in top-tier NLP venues like ACL and EMNLP. He was the Research co-chair at AMTA (2020-2022). He has organized several research competitions focused on low-resource translation and data filtering. Paco obtained his PhD from the ITESM in Mexico\, was a visiting scholar at the LTI-CMU from 2008-2009\, and participated in DARPA’s GALE evaluation program. Paco was a post-doc and scientist at the Qatar Computing Research Institute in Qatar in 2012-2016.
\n X-TAGS;LANGUAGE=en-US:2023\,April\,Guzman END:VEVENT BEGIN:VEVENT UID:ai1ec-23592@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nLarge language models (LLMs) have ushered in exciting capabilities in language understanding and text generation\, with systems like ChatGPT holding fluent dialogs with users and being almost indisting uishable from humans. While this has obviously raised conversational syste ms and chatbots to a new level\, it also presents exciting new opportuniti es for building artificial agents with improved decision making capabiliti es. Specifically\, the ability to reason with language can allow us to bui ld agents that can 1) execute complex action sequences to effect change in the world\, 2) learn new skills by ‘reading’ in addition to ‘doing’\, and 3) allow for easier personalization and control over their behavior. In t his talk\, I will demonstrate how we can build such language-enabled agent s that exhibit the above traits across various use cases such as multi-hop question answering\, web interaction\, and robotic tool manipulation. In the end\, I will also discuss some dangers of using these LLM-based system s and some challenges that lie ahead in ensuring their safe use.\nBiograph y\nKarthik Narasimhan is an assistant professor in the Computer Science de partment at Princeton University and a co-Director of the Princeton NLP gr oup. His research spans the areas of natural language processing and reinf orcement learning\, with the goal of building intelligent agents that lear n to operate in the world through both their own experience (”doing things ”) and leveraging existing human knowledge (”reading about things”). Karth ik received his PhD from MIT in 2017\, and spent a year as a visiting rese arch scientist at OpenAI contributing to the GPT language model\, prior to joining Princeton in 2018. 
His research has been recognized by an NSF CAREER award\, a Google Research Scholar Award\, an Amazon Research Award (2019)\, a Bell Labs runner-up prize\, and outstanding paper awards at EMNLP (2015\, 2016) and NeurIPS (2022). DTSTART;TZID=America/New_York:20230421T120000 DTEND;TZID=America/New_York:20230421T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Karthik Narasimhan (Princeton University) “Towards General-Purpose Language-Enabled Agents: Machines that can Read\, Think and Act” URL:https://www.clsp.jhu.edu/events/karthik-narasimhan-princeton-university/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nLarge language models (LLMs) have ushered in exciting capabilities in language understanding and text generation\, with systems like ChatGPT holding fluent dialogs with users and being almost indisting uishable from humans. While this has obviously raised conversational syste ms and chatbots to a new level\, it also presents exciting new opportuniti es for building artificial agents with improved decision making capabiliti es. Specifically\, the ability to reason with language can allow us to bui ld agents that can 1) execute complex action sequences to effect change in the world\, 2) learn new skills by ‘reading’ in addition to ‘doing’\, and 3) allow for easier personalization and control over their behavior. In t his talk\, I will demonstrate how we can build such language-enabled agent s that exhibit the above traits across various use cases such as multi-hop question answering\, web interaction\, and robotic tool manipulation. In the end\, I will also discuss some dangers of using these LLM-based system s and some challenges that lie ahead in ensuring their safe use.
\n< strong>Biography
\nKarthik Narasimhan is an assistant professor in the Computer Science department at Princeton University and a co-Director of the Princeton NLP group. His research spans the areas of natural language processing and reinforcement learning\, with the goal of building intelligent agents that learn to operate in the world through both their own experience (“doing things”) and leveraging existing human knowledge (“reading about things”). Karthik received his PhD from MIT in 2017\, and spent a year as a visiting research scientist at OpenAI contributing to the GPT language model\, prior to joining Princeton in 2018. His research has been recognized by an NSF CAREER award\, a Google Research Scholar Award\, an Amazon Research Award (2019)\, a Bell Labs runner-up prize\, and outstanding paper awards at EMNLP (2015\, 2016) and NeurIPS (2022).
\n X-TAGS;LANGUAGE=en-US:2023\,April\,Narasimhan END:VEVENT BEGIN:VEVENT UID:ai1ec-23606@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20230424T120000 DTEND;TZID=America/New_York:20230424T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Brian Lu URL:https://www.clsp.jhu.edu/events/student-seminar-brian-lu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Lu END:VEVENT BEGIN:VEVENT UID:ai1ec-23608@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nAutomated analysis of student writing has the potenti al to provide alternatives to selected-response questions such as multiple choice\, and to enable teachers and instructors to assess students’ reaso ning skills based on their long-form writing. Further\, automated support to assess both short answers and long passages could provide students with a smoother trajectory towards mastery of written communication. Our meth ods focus on the specific ideas students express to support formative asse ssment through different kinds of feedback\, which aims to scaffold their abilities to reason and communicate. In this talk I review our work in the PSU NLP lab on methods for automated assessment of different forms of stu dent writing\, from younger and older students. I will briefly illustrate highly curated datasets created in collaboration with researchers in STEM education\, results from deployment of an older content analysis tool on middle school physics essays\, and very preliminary results on assessment of college students’ physics lab reports. 
I will also present our current work on short answer assessment using a novel recurrent relation network that incorporates contrastive learning.\nBio\nBecky Passonneau has been a Professor in the Department of Computer Science and Engineering at Penn State University since 2016\, when she joined as the first NLP researcher. Since that time the NLP faculty has grown to include Rui Zhang and Wenpeng Yin. Becky’s research in natural language processing addresses computational pragmatics\, meaning the investigation of language as a system of interactive behavior that serves a wide range of purposes. She received her PhD in Linguistics from the University of Chicago in 1985\, and worked at several academic and industry research labs before joining Penn State. Her work is reported in over 140 publications in journals and refereed conference proceedings\, and has been funded through 27 sponsored projects from 16 sources\, including government agencies\, corporate sponsors\, corporate gifts\, and foundations. DTSTART;TZID=America/New_York:20230428T120000 DTEND;TZID=America/New_York:20230428T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Becky Passonneau (Penn State University) “Automated Support to Scaffold Students’ Short- and Long-form STEM Writing” URL:https://www.clsp.jhu.edu/events/becky-passonneau-penn-state-university-automated-support-to-scaffold-students-short-and-long-form-stem-writing/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nAutomated analysis of student writing has the potenti al to provide alternatives to selected-response questions such as multiple choice\, and to enable teachers and instructors to assess students’ reaso ning skills based on their long-form writing. Further\, automated support to assess both short answers and long passages could provide students with a smoother trajectory towards mastery of written communication. Our meth ods focus on the specific ideas students express to support formative asse ssment through different kinds of feedback\, which aims to scaffold their abilities to reason and communicate. In this talk I review our work in the PSU NLP lab on methods for automated assessment of different forms of stu dent writing\, from younger and older students. I will briefly illustrate highly curated datasets created in collaboration with researchers in STEM education\, results from deployment of an older content analysis tool on middle school physics essays\, and very preliminary results on assessment of college students’ physics lab reports. I will also present our current work on short answer assessment using a novel recurrent relation network that incorporates contrastive learning.
\nBio
\nBecky Passonneau has been a Professor in the Department of Computer Science and Engineering at Penn State University since 2016\, when she joined as the first NLP researcher. Since that time the NLP faculty has grown to include Rui Zhang and Wenpeng Yin. Becky’s research in natural language processing addresses computational pragmatics\, meaning the investigation of language as a system of interactive behavior that serves a wide range of purposes. She received her PhD in Linguistics from the University of Chicago in 1985\, and worked at several academic and industry research labs before joining Penn State. Her work is reported in over 140 publications in journals and refereed conference proceedings\, and has been funded through 27 sponsored projects from 16 sources\, including government agencies\, corporate sponsors\, corporate gifts\, and foundations.
\n X-TAGS;LANGUAGE=en-US:2023\,April\,Passonneau END:VEVENT BEGIN:VEVENT UID:ai1ec-23882@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nLarge language models (LLMs) have demonstrated incredible power\, but they also possess vulnerabilities that can lead to misuse and potential attacks. In this presentation\, we will address two fundamental questions regarding the responsible utilization of LLMs: (1) How can we accurately identify AI-generated text? (2) What measures can safeguard the intellectual property of LLMs? We will introduce two recent watermarking techniques designed for text and models\, respectively. Our discussion will encompass the theoretical underpinnings that ensure the correctness of watermark detection\, along with robustness against evasion attacks. Furthermore\, we will showcase empirical evidence validating their effectiveness. These findings establish a solid technical groundwork for policymakers\, legal professionals\, and generative AI practitioners alike.\nBiography\nLei Li is an Assistant Professor in the Language Technology Institute at Carnegie Mellon University. He received his Ph.D. from the Carnegie Mellon University School of Computer Science. He is a recipient of the ACL 2021 Best Paper Award\, the CCF Young Elite Award in 2019\, CCF distinguished speaker in 2017\, the Wu Wen-tsün AI prize in 2017\, and the 2012 ACM SIGKDD dissertation award (runner-up)\, and was recognized as a Notable Area Chair of ICLR 2023. Previously\, he was a faculty member at UC Santa Barbara. Prior to that\, he founded ByteDance AI Lab in 2016 and led its research in NLP\, ML\, Robotics\, and Drug Discovery. He launched ByteDance’s machine translation system VolcTrans and AI writing system Xiaomingbot\, serving one billion users. DTSTART;TZID=America/New_York:20230901T120000 DTEND;TZID=America/New_York:20230901T131500 LOCATION:Hackerman Hall B17 @ 3400 N.
Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Lei Li (Carnegie Mellon University) “Empowering Responsible Use of Large Language Models” URL:https://www.clsp.jhu.edu/events/lei-li-carnegie-mellon-university-empow ering-responsible-use-of-large-language-models/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nLarge language models (LLMs) have demonstrated incred ible power\, but they also possess vulnerabilities that can lead to misuse and potential attacks. In this presentation\, we will address two fundame ntal questions regarding the responsible utilization of LLMs: (1) How can we accurately identify AI-generated text? (2) What measures can safeguard the intellectual property of LLMs? We will introduce two recent watermarki ng techniques designed for text and models\, respectively. Our discussion will encompass the theoretical underpinnings that ensure the correctness o f watermark detection\, along with robustness against evasion attacks. Fur thermore\, we will showcase empirical evidence validating their effectiven ess. These findings establish a solid technical groundwork for policymaker s\, legal professionals\, and generative AI practitioners alike.
\n< strong>Biography
\nLei Li is an Assistant Professor in the Language Technology Institute at Carnegie Mellon University. He received his Ph.D. from the Carnegie Mellon University School of Computer Science. He is a recipient of the ACL 2021 Best Paper Award\, the CCF Young Elite Award in 2019\, CCF distinguished speaker in 2017\, the Wu Wen-tsün AI prize in 2017\, and the 2012 ACM SIGKDD dissertation award (runner-up)\, and was recognized as a Notable Area Chair of ICLR 2023. Previously\, he was a faculty member at UC Santa Barbara. Prior to that\, he founded ByteDance AI Lab in 2016 and led its research in NLP\, ML\, Robotics\, and Drug Discovery. He launched ByteDance’s machine translation system VolcTrans and AI writing system Xiaomingbot\, serving one billion users.
\n X-TAGS;LANGUAGE=en-US:2023\,Li\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-23886@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nThe arms race to build increasingly larger\, powerful language models (LMs) in the past year has been remarkable. Yet incorporating LMs effectively into practical applications that facilitate manual workflows remains challenging. I will discuss LMs’ limiting factors and our efforts to overcome them. I will start with challenges surrounding efficient and robust LM alignment. I will share insights from our recent paper “Self-Instruct” (ACL 2023)\, where we used a vanilla (unaligned) LM to align itself\, an approach that has yielded some success. Then\, I will move on to the challenge of tracing the output of LMs to reliable sources\, a weakness that makes them prone to hallucinations. I will discuss our recent approach of ‘according-to’ prompting\, which steers LMs to quote directly from sources observed in their pre-training. If time permits\, I will discuss our ongoing project to adapt LMs to interact with web pages. Throughout the presentation\, I will highlight our progress\, and end with questions about our future progress.\nBiography\nDaniel Khashabi is an assistant professor in computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is interested in building reasoning-driven modular NLP systems that are robust\, transparent\, and communicative\, particularly those that use natural language as the communication medium. Khashabi has published over 40 papers on natural language processing and AI in top-tier venues. His research has won the ACL 2023 Outstanding Paper Award\, the NAACL 2022 Best Paper Award\, research gifts from the Allen Institute for AI\, and an Amazon Research Award (2023).
Before joining Hopkins\, he was a postdoctoral f ellow at the Allen Institute for AI (2019-2022) and obtained a Ph.D. from the University of Pennsylvania in 2019. DTSTART;TZID=America/New_York:20230908T120000 DTEND;TZID=America/New_York:20230908T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Daniel Khashabi (Johns Hopkins University) “Building More Helpful L anguage Models” URL:https://www.clsp.jhu.edu/events/daniel-khashabi-johns-hopkins-universit y/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nThe arms race to build increasingly larger\, powerful language models (LMs) in the past year has been remarkable. Yet incorporating LMs effectively into practical applications that facilitate manual workflows remains challenging. I will discuss LMs’ limiting factors and our efforts to overcome them. I will start with challenges surrounding efficient and robust LM alignment. I will share insights from our recent paper “Self-Instruct” (ACL 2023)\, where we used a vanilla (unaligned) LM to align itself\, an approach that has yielded some success. Then\, I will move on to the challenge of tracing the output of LMs to reliable sources\, a weakness that makes them prone to hallucinations. I will discuss our recent approach of ‘according-to’ prompting\, which steers LMs to quote directly from sources observed in their pre-training. If time permits\, I will discuss our ongoing project to adapt LMs to interact with web pages. Throughout the presentation\, I will highlight our progress\, and end with questions about our future progress.
\n<strong>Biography</strong>
\nDaniel Khashabi is an assistant professor in computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is interested in building reasoning-driven modular NLP systems that are robust\, transparent\, and communicative\, particularly those that use natural language as the communication medium. Khashabi has published over 40 papers on natural language processing and AI in top-tier venues. His research has won the ACL 2023 Outstanding Paper Award\, the NAACL 2022 Best Paper Award\, research gifts from the Allen Institute for AI\, and an Amazon Research Award (2023). Before joining Hopkins\, he was a postdoctoral fellow at the Allen Institute for AI (2019-2022) and obtained a Ph.D. from the University of Pennsylvania in 2019.
\n X-TAGS;LANGUAGE=en-US:2023\,Khashabi\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-23888@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract\nEmbedding text sequences is a widespread requirement in modern language understanding. Existing approaches focus largely on con stant-size representations. This is problematic\, as the amount of informa tion contained in text often varies with the length of the input. We propo se a solution called Nugget\, which encodes language into a representation based on a dynamically selected subset of input tokens. These nuggets are learned through tasks like autoencoding and machine translation\, and int uitively segment language into meaningful units. We demonstrate Nugget out performs related approaches in tasks involving semantic comparison. Finall y\, we illustrate these compact units allow for expanding the contextual w indow of a language model (LM)\, suggesting new future LMs that can condit ion on significantly larger amounts of content. DTSTART;TZID=America/New_York:20230911T120000 DTEND;TZID=America/New_York:20230911T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Guanghui Qin “Nugget: Neural Agglomerative Embedd ings of Text (ICML 2023)” URL:https://www.clsp.jhu.edu/events/student-seminar-guanghui-qin/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nEmbedding text sequ ences is a widespread requirement in modern language understanding. Existi ng approaches focus largely on constant-size representations. This is prob lematic\, as the amount of information contained in text often varies with the length of the input. We propose a solution called Nugget\, which enco des language into a representation based on a dynamically selected subset of input tokens. These nuggets are learned through tasks like autoencoding and machine translation\, and intuitively segment language into meaningfu l units. We demonstrate Nugget outperforms related approaches in tasks inv olving semantic comparison. Finally\, we illustrate these compact units al low for expanding the contextual window of a language model (LM)\, suggest ing new future LMs that can condition on significantly larger amounts of c ontent.
\n X-TAGS;LANGUAGE=en-US:2023\,Qin\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-23892@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nThe growing power in computing and AI promises a near -term future of human-machine teamwork. In this talk\, I will present my r esearch group’s efforts in understanding the complex dynamics of human-mac hine interaction and designing intelligent machines aimed to assist and co llaborate with people. I will focus on 1) tools for onboarding machine tea mmates and authoring machine assistance\, 2) methods for detecting\, and b roadly managing\, errors in collaboration\, and 3) building blocks of know ledge needed to enable ad hoc human-machine teamwork. I will also highligh t our recent work on designing assistive\, collaborative machines to suppo rt older adults aging in place.\nBiography\nChien-Ming Huang is the John C . Malone Assistant Professor in the Department of Computer Science at the Johns Hopkins University. His research focuses on designing interactive AI aimed to assist and collaborate with people. He publishes in top-tier ven ues in HRI\, HCI\, and robotics including Science Robotics\, HRI\, CHI\, a nd CSCW. His research has received media coverage from MIT Technology Revi ew\, Tech Insider\, and Science Nation. Huang completed his postdoctoral t raining at Yale University and received his Ph.D. in Computer Science at t he University of Wisconsin–Madison. He is a recipient of the NSF CAREER aw ard. https://www.cs.jhu.edu/~cmhuang/ DTSTART;TZID=America/New_York:20230915T120000 DTEND;TZID=America/New_York:20230915T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Chien-Ming Huang (Johns Hopkins University) “Becoming Teammates: De signing Assistive\, Collaborative Machines” URL:https://www.clsp.jhu.edu/events/chien-ming-huang-johns-hopkins-universi ty/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr act
\nThe growing power in computing and AI promises a near -term future of human-machine teamwork. In this talk\, I will present my r esearch group’s efforts in understanding the complex dynamics of human-mac hine interaction and designing intelligent machines aimed to assist and co llaborate with people. I will focus on 1) tools for onboarding machine tea mmates and authoring machine assistance\, 2) methods for detecting\, and b roadly managing\, errors in collaboration\, and 3) building blocks of know ledge needed to enable ad hoc human-machine teamwork. I will also highligh t our recent work on designing assistive\, collaborative machines to suppo rt older adults aging in place.
\nBiography
\nChien-Ming Huang is the John C. Malone Assistant Professor in the Department of Computer Science at the Johns Hopkins University. His research focuses on designing interactive AI aimed to assist and collaborate with people. He publishes in top-tier venues in HRI\, HCI\, and robotics including Science Robotics\, HRI\, CHI\, and CSCW. His research has received media coverage from MIT Technology Review\, Tech Insider\, and Science Nation. Huang completed his postdoctoral training at Yale University and received his Ph.D. in Computer Science at the University of Wisconsin–Madison. He is a recipient of the NSF CAREER award. https://www.cs.jhu.edu/~cmhuang/
\n X-TAGS;LANGUAGE=en-US:2023\,Huang\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-23894@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nThe use of NLP in the realm of financial technology is broad and complex\, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks\; however\, no LLM specialized for the financial domain has been reported in the literature. In this work\, we present BloombergGPT\, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources\, perhaps the largest domain-specific dataset yet\, augmented with 345 billion tokens from general-purpose datasets. We validate BloombergGPT on standard LLM benchmarks\, open financial benchmarks\, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally\, we explain our modeling choices\, training process\, and evaluation methodology.\nBiography\nMark Dredze is the John C. Malone Professor of Computer Science at Johns Hopkins University and the Director of Research (Foundations of AI) for the JHU AI-X Foundry. He develops Artificial Intelligence Systems based on natural language processing and explores applications to public health and medicine.\nProf. Dredze is affiliated with the Malone Center for Engineering in Healthcare\, the Center for Language and Speech Processing\, among others.
He holds a joint appointment in the Biomedical Informatics & Data Science Section (BIDS)\, under the Department of Medicine (DOM)\, Division of General Internal Medicine (GIM) in the School of Medicine. He obtained his PhD from the University of Pennsylvania in 2009. DTSTART;TZID=America/New_York:20230918T120000 DTEND;TZID=America/New_York:20230918T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Mark Dredze (Johns Hopkins University) “BloombergGPT: A Large Language Model for Finance” URL:https://www.clsp.jhu.edu/events/mark-dredze-johns-hopkins-university/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nThe use of NLP in the realm of financial technology is broad and complex\, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks\; however\, no LLM specialized for the financial domain has been reported in the literature. In this work\, we present BloombergGPT\, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources\, perhaps the largest domain-specific dataset yet\, augmented with 345 billion tokens from general-purpose datasets. We validate BloombergGPT on standard LLM benchmarks\, open financial benchmarks\, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally\, we explain our modeling choices\, training process\, and evaluation methodology.
\nBiography
\nMark Dredze is the John C. Malone Professor of Computer Science at Johns Hopkins University and the Director of Research (Foundations of AI) for the JHU AI-X Foundry. He develops Artificial Intelligence Systems based on natural language processing and explores applications to public health and medicine.
\nProf. Dredze is affiliated with the Malone Center for Engineering in Healthcare\, the Center for Language and Speech Processing\, among others. He holds a joint appointment in the Biomedical Informatics & Data Science Section (BIDS)\, under the Department of Medicine (DOM)\, Division of General Internal Medicine (GIM) in the School of Medicine. He obtained his PhD from the University of Pennsylvania in 2009.
\n X-TAGS;LANGUAGE=en-US:2023\,Dredze\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-23983@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nVisually rich documents (scanned or digital) remain important for many consumer and business use cases. During this talk we will share recent work from our team in the Document Intelligence Lab of Adobe Research to understand\, create\, and interact with these documents. First\, we’ll share a series of work on building models to decompose and understand the structure of documents to support use cases around document analysis and accessibility. Next\, we’ll explore document semantic understanding for a project where we convert natural language contract clauses to code to support business automation. Finally\, we’ll discuss DocEdit\, a model and dataset that enables editing structured documents from natural language.\nBIOS:\nRajiv Jain is a Senior Research Scientist in the Document Intelligence Lab in Adobe Research\, where his research focuses on understanding the layout\, content\, and interaction with documents. Prior to joining Adobe\, Rajiv was a consultant at DARPA\, where he worked on the Media Forensics Program to secure digital imagery. He previously served for 10 years as a researcher for the Department of Defense where he worked on projects around large scale systems\, computer vision\, and network security. He received his PhD in computer science from the University of Maryland\, College Park working in the field of document image analysis and retrieval.\nChris Tensmeyer primarily focuses on multi-modal document layout and content understanding as a Research Scientist in the Document Intelligence Lab of Adobe Research. Since joining Adobe 5 years ago\, his work has directly impacted popular Adobe features such as mobile Acrobat Liquid Mode\, PDF table extraction\, handwriting recognition\, and scanned document detection.
Other research interests include general Computer Vision and Deep Learning. He received his PhD in Computer Science from Brigham Young University on the topic of Deep Learning for Document Image Analysis. DTSTART;TZID=America/New_York:20230922T120000 DTEND;TZID=America/New_York:20230922T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Rajiv Jain and Chris Tensmeyer (Adobe) “Document Intelligence at Adobe Research” URL:https://www.clsp.jhu.edu/events/rajiv-jain-and-chris-tensmeyer-adobe-document-intelligence-at-adobe-research/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nVisually rich documents (scanned or digital) remain important for many consumer and business use cases. During this talk we will share recent work from our team in the Document Intelligence Lab of Adobe Research to understand\, create\, and interact with these documents. First\, we’ll share a series of work on building models to decompose and understand the structure of documents to support use cases around document analysis and accessibility. Next\, we’ll explore document semantic understanding for a project where we convert natural language contract clauses to code to support business automation. Finally\, we’ll discuss DocEdit\, a model and dataset that enables editing structured documents from natural language.
\nBIOS:
\nRajiv Jain is a Senior Research Scientist in the Document Intelligence Lab in Adobe Research\, where his research focuses on understanding the layout\, content\, and interaction with documents. Prior to joining Adobe\, Rajiv was a consultant at DARPA\, where he worked on the Media Forensics Program to secure digital imagery. He previously served for 10 years as a researcher for the Department of Defense where he worked on projects around large scale systems\, computer vision\, and network security. He received his PhD in computer science from the University of Maryland\, College Park working in the field of document image analysis and retrieval.
\nChris Tensmeyer primarily focuses on multi-modal document layout and content understanding as a Research Scientist in the Document Intelligence Lab of Adobe Research. Since joining Adobe 5 years ago\, his work has directly impacted popular Adobe features such as mobile Acrobat Liquid Mode\, PDF table extraction\, handwriting recognition\, and scanned document detection. Other research interests include general Computer Vision and Deep Learning. He received his PhD in Computer Science from Brigham Young University on the topic of Deep Learning for Document Image Analysis.
\n X-TAGS;LANGUAGE=en-US:2023\,Jain and Tensmeyer\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-23896@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nThe field of NLP is in the midst of a disruptive shift\, fueled most recently by the advent of large language models (LLMs)\, with impacts on our methodologies\, funding and public perception. While the core technologies and scope of real-world impact of our field may be changing (everything is different!)\, many of the same key challenges faced since the inception of our field remain (nothing has changed). In this talk I’ll describe recent work characterizing and tackling some of these challenges\, notably: data-efficient domain adaptation and lifelong learning. I will also anchor discussion of cycles and shifts in the field by describing findings from a qualitative study of factors shaping the community over time\, including culture\, incentives\, and infrastructure. Through these complementary lenses into the past\, present and future\, I aim to inspire shared hope\, excitement and discussion.\nBio\nEmma Strubell is the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University\, and a Visiting Scientist at the Allen Institute for Artificial Intelligence. Previously she held research scientist roles at Google and FAIR after earning her doctoral degree in 2019 from the University of Massachusetts Amherst. Her research lies at the intersection of natural language processing and machine learning\, with a focus on providing pragmatic solutions to practitioners who wish to gain insights from natural language text via computation- and data-efficient AI. Her work has been recognized with a Madrona AI Impact Award\, best paper awards at ACL and EMNLP\, and cited in news outlets including the New York Times and Wall Street Journal.
DTSTART;TZID=America/New_York:20230925T120000 DTEND;TZID=America/New_York:20230925T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Emma Strubell (Carnegie Mellon University) “Large Language Models: Everything’s Different and Nothing Has Changed” URL:https://www.clsp.jhu.edu/events/emma-strubell-carnegie-mellon-university/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nThe field of NLP is in the midst of a disruptive shift\, fueled most recently by the advent of large language models (LLMs)\, with impacts on our methodologies\, funding and public perception. While the core technologies and scope of real-world impact of our field may be changing (everything is different!)\, many of the same key challenges faced since the inception of our field remain (nothing has changed). In this talk I’ll describe recent work characterizing and tackling some of these challenges\, notably: data-efficient domain adaptation and lifelong learning. I will also anchor discussion of cycles and shifts in the field by describing findings from a qualitative study of factors shaping the community over time\, including culture\, incentives\, and infrastructure. Through these complementary lenses into the past\, present and future\, I aim to inspire shared hope\, excitement and discussion.
\nBio
\nEmma Strubell is the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University\, and a Visiting Scientist at the Allen Institute for Artificial Intelligence. Previously she held research scientist roles at Google and FAIR after earning her doctoral degree in 2019 from the University of Massachusetts Amherst. Her research lies at the intersection of natural language processing and machine learning\, with a focus on providing pragmatic solutions to practitioners who wish to gain insights from natural language text via computation- and data-efficient AI. Her work has been recognized with a Madrona AI Impact Award\, best paper awards at ACL and EMNLP\, and cited in news outlets including the New York Times and Wall Street Journal.
\n X-TAGS;LANGUAGE=en-US:2023\,September\,Strubell END:VEVENT BEGIN:VEVENT UID:ai1ec-23898@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract\nAny valuable NLP dataset has traditionally been shipped with crowdsourced categorical labels. Instructions for collecting these labels are easy to communicate and the labels themselves are easy to annotate. However\, as self-supervision based methods are getting better at basically everything\, human annotations may need to provide more nuanced supervision or enable more detailed evaluation in order to be worth further collecting. One natural extension to existing categorical annotation schemes is to obtain uncertainty information beyond a single hard label. In this talk\, I will discuss my recent efforts on introducing scalar labels in place of categorical labels as a form of uncertainty annotation. We demonstrate that\, compared to other more obvious annotation schemes for eliciting uncertainty information\, scalar labels are significantly more cost-effective to annotate\, provide reliable evaluation\, and have a theoretical connection to existing predictive uncertainty metrics. In particular\, they motivate using other losses as surrogates for calibration evaluation. DTSTART;TZID=America/New_York:20230929T120000 DTEND;TZID=America/New_York:20230929T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:CLSP Student Seminar – Zhengping Jiang “Scalar Labels for Capturing Human Uncertainty” URL:https://www.clsp.jhu.edu/events/clsp-student-seminar-zhengping-jiang/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nAny valuable NLP dataset has traditionally been shipped with crowdsourced categorical labels. Instructions for collecting these labels are easy to communicate and the labels themselves are easy to annotate. However\, as self-supervision based methods are getting better at basically everything\, human annotations may need to provide more nuanced supervision or enable more detailed evaluation in order to be worth further collecting. One natural extension to existing categorical annotation schemes is to obtain uncertainty information beyond a single hard label. In this talk\, I will discuss my recent efforts on introducing scalar labels in place of categorical labels as a form of uncertainty annotation. We demonstrate that\, compared to other more obvious annotation schemes for eliciting uncertainty information\, scalar labels are significantly more cost-effective to annotate\, provide reliable evaluation\, and have a theoretical connection to existing predictive uncertainty metrics. In particular\, they motivate using other losses as surrogates for calibration evaluation.
\n X-TAGS;LANGUAGE=en-US:2023\,Jiang\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-24491@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240401T120000 DTEND;TZID=America/New_York:20240401T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Yuan Gong URL:https://www.clsp.jhu.edu/events/yuan-gong/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Gong END:VEVENT BEGIN:VEVENT UID:ai1ec-24507@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them\, or ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine Crisis\, require a time-sensitive comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts when preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness.
In this talk I will present SmartBook\, a novel framework that cannot be solved by large language models alone\, to consume large volumes of multimodal multilingual news data and produce a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess. Given “What-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events\, and automatically generate a complex event graph along with a timeline of news articles that describe new simulated events and character-centric stories based on a new Λ-shaped attention mask that can generate text with infinite length. By effectively simulating disaster scenarios in both event graph and natural language format\, we expect SmartBook will greatly assist humanitarian workers and policymakers to exercise reality checks\, and thus better prevent and respond to future disasters.\nBio\nHeng Ji is a professor at the Computer Science Department\, and an affiliated faculty member at the Electrical and Computer Engineering Department and Coordinated Science Laboratory of the University of Illinois Urbana-Champaign. She is an Amazon Scholar. She is the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing\, especially on Multimedia Multilingual Information Extraction\, Knowledge-enhanced Large Language Models\, Knowledge-driven Generation and Conversational AI.
She was selected as a Young Scientist to attend the 6th World Laureates Association Forum\, and selected to participate in DARPA AI Forward in 2023. She was selected as “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. The awards she received include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, NSF CAREER award in 2009\, PACLIC2012 Best paper runner-up\, “Best of ICDM2013” paper award\, “Best of SDM2013” paper award\, ACL2018 Best Demo paper nomination\, ACL2020 Best Demo Paper Award\, NAACL2021 Best Demo Paper Award\, Google Research Award in 2009 and 2014\, IBM Watson Faculty Award in 2012 and 2014 and Bosch Research Award in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023. She is the lead of many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge networks construction\, DARPA ECOLE MIRACLE team\, DARPA KAIROS RESIN team and DARPA DEFT Tinker Bell team. She has coordinated the NIST TAC Knowledge Base Population task 2010-2022. She was the associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as the Program Committee Co-Chair of many conferences including NAACL-HLT2018 and AACL-IJCNLP2022. She is elected as the North American Chapter of the Association for Computational Linguistics (NAACL) secretary 2020-2023. Her research has been widely supported by the U.S.
government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney). DTSTART;TZID=America/New_York:20240405T120000 DTEND;TZID=America/New_York:20240405T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, Maryland 21218 SEQUENCE:0 SUMMARY:Heng Ji (University of Illinois Urbana-Champaign) “SmartBook: an AI Prophetess for Disaster Reporting and Forecasting” URL:https://www.clsp.jhu.edu/events/heng-ji-university-of-illinois-urbana-champaign-smartbook-an-ai-prophetess-for-disaster-reporting-and-forecasting/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them\, or ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine Crisis\, require a time-sensitive comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts when preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness. In this talk I will present SmartBook\, a novel framework that cannot be solved by large language models alone\, to consume large volumes of multimodal multilingual news data and produce a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess. Given “What-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events\, and automatically generate a complex event graph along with a timeline of news articles that describe new simulated events and character-centric stories based on a new Λ-shaped attention mask that can generate text with infinite length.
By effectively simulating disaster scenarios in both event graph and natural language format\, we expect SmartBook will greatly assist humanitarian workers and policymakers to exercise reality checks\, and thus better prevent and respond to future disasters.
\nBio
\nHeng Ji is a professor at the Computer Science Department\, and an affiliated faculty member at the Electrical and Computer Engineering Department and Coordinated Science Laboratory of the University of Illinois Urbana-Champaign. She is an Amazon Scholar. She is the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing\, especially on Multimedia Multilingual Information Extraction\, Knowledge-enhanced Large Language Models\, Knowledge-driven Generation and Conversational AI. She was selected as a Young Scientist to attend the 6th World Laureates Association Forum\, and selected to participate in DARPA AI Forward in 2023. She was selected as “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. The awards she received include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, NSF CAREER award in 2009\, PACLIC2012 Best paper runner-up\, “Best of ICDM2013” paper award\, “Best of SDM2013” paper award\, ACL2018 Best Demo paper nomination\, ACL2020 Best Demo Paper Award\, NAACL2021 Best Demo Paper Award\, Google Research Award in 2009 and 2014\, IBM Watson Faculty Award in 2012 and 2014 and Bosch Research Award in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023.
She is the lead of many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge networks construction\, DARPA ECOLE MIRACLE team\, DARPA KAIROS RESIN team and DARPA DEFT Tinker Bell team. She has coordinated the NIST TAC Knowledge Base Population task 2010-2022. She was the associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as the Program Committee Co-Chair of many conferences including NAACL-HLT2018 and AACL-IJCNLP2022. She is elected as the North American Chapter of the Association for Computational Linguistics (NAACL) secretary 2020-2023. Her research has been widely supported by the U.S. government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney).
\n X-TAGS;LANGUAGE=en-US:2024\,April\,Ji END:VEVENT BEGIN:VEVENT UID:ai1ec-24509@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240408T120000 DTEND;TZID=America/New_York:20240408T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Berrak Sisman URL:https://www.clsp.jhu.edu/events/berrak-sisman/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Sisman END:VEVENT BEGIN:VEVENT UID:ai1ec-24511@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240412T120000 DTEND;TZID=America/New_York:20240412T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Sonal Joshi (JHU) URL:https://www.clsp.jhu.edu/events/sonal-joshi-jhu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Joshi END:VEVENT BEGIN:VEVENT UID:ai1ec-24515@www.clsp.jhu.edu DTSTAMP:20240328T224428Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240415T120000 DTEND;TZID=America/New_York:20240415T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Matthew Wipperman (Regeneron) URL:https://www.clsp.jhu.edu/events/matthew-wipperman-regeneron/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Wipperman END:VEVENT END:VCALENDAR