BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21489@www.clsp.jhu.edu
DTSTAMP:20240328T184632Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nSince it is increasingly hard to opt out of interacting with AI technology\, people demand that AI be capable of maintaining contracts that support the agency and oversight of people who are required to use it or who are affected by it. To help those people build a mental model of how to interact with AI systems\, I extend the underlying models to self-explain: predict the label/answer and explain that prediction. In this talk\, I will present how to generate (1) free-text explanations\, given in plain English\, that immediately tell users the gist of the reasoning\, and (2) contrastive explanations that help users understand how they could change the text to get another label.\nBiography\nAna Marasović is a postdoctoral researcher at the Allen Institute for AI (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research interests lie broadly in natural language processing\, explainable AI\, and vision-and-language learning. Her projects are motivated by a unified goal: improve interaction with and control of NLP systems so that people can make these systems do what they want\, with confidence that they are getting exactly what they need. Prior to joining AI2\, Ana obtained her PhD from Heidelberg University.\nHow to pronounce my name: the first name is Ana as in Spanish\, i.e.\, with a long “a” as in “water”\; as for the last name: “mara” as in actress Mara Wilson + “so” + “veetch”.
DTSTART;TZID=America/New_York:20220228T120000
DTEND;TZID=America/New_York:20220228T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j/96735183473 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Ana Marasović (Allen Institute for AI & University of Washington) “Self-Explaining for Intuitive Interaction with AI”
URL:https://www.clsp.jhu.edu/events/ana-marasovic-allen-institute-for-ai-university-of-washington-self-explaining-for-intuitive-interaction-with-ai/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Marasovic
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22400@www.clsp.jhu.edu
DTSTAMP:20240328T184632Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nModern learning architectures for natural language processing have been very successful at incorporating huge amounts of text into their parameters. By and large\, however\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing\, and accessing knowledge in a dedicated memory component.\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU)\, and our other collaborators at Google Research.\nBiography\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at the University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics\, and\, recently\, weather forecasting and climate modeling. He holds a PhD (2007) in Computer and Information Science from the University of Pennsylvania and a B.Sc. and M.Sc. in Biomedical Engineering from Southeast University (Nanjing\, China).
DTSTART;TZID=America/New_York:20221024T120000
DTEND;TZID=America/New_York:20221024T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Fei Sha (University of Southern California) “Extracting Information from Text into Memory for Knowledge-Intensive Tasks”
URL:https://www.clsp.jhu.edu/events/fei-sha-university-of-southern-california/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,October\,Sha
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24465@www.clsp.jhu.edu
DTSTAMP:20240328T184632Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nLarge Language Models (LLMs) have demonstrated remarkable capabilities across various domains. However\, it is still very challenging to build highly reliable applications with LLMs that support specialized use cases. LLMs trained on web data often excel at capturing general language patterns\, but they can struggle to support specialized domains and personalized user needs. Moreover\, LLMs can produce errors that are deceptively plausible\, making them potentially dangerous in high-trust scenarios. In this talk\, I will discuss some of our recent efforts to address these challenges with data-efficient tuning methods and a novel factuality evaluation framework. Specifically\, my talk will focus on building multilingual applications\, one crucial use case often characterized by limited tuning and evaluation data.\nBio\nXinyi (Cindy) Wang is a research scientist at Google DeepMind working on Large Language Models (LLMs) and their application to generative question answering. She has worked on multilingual instruction tuning for Gemini and on multilingual generative models used in Google Search. Before Google DeepMind\, Cindy Wang obtained her PhD in Language Technologies at Carnegie Mellon University. During her PhD\, she mainly worked on developing data-efficient natural language processing (NLP) systems. She has made several contributions in data selection\, data representation\, and model adaptation for multilingual NLP.
DTSTART;TZID=America/New_York:20240308T120000
DTEND;TZID=America/New_York:20240308T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Cindy Wang (Google DeepMind) “Building Data-Efficient and Reliable Applications with Large Language Models”
URL:https://www.clsp.jhu.edu/events/cindy-wang-google-deepmind-building-data-efficient-and-reliable-applications-with-large-language-models/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,March\,Wang
END:VEVENT
END:VCALENDAR