BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21270@www.clsp.jhu.edu
DTSTAMP:20240328T194221Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have questioned the robustness of longitudinal analyses based on statistical methods due to issues of temporal bias and semantic shift. To what extent are changes in semantics over time affecting the reliability of longitudinal analyses? We examine this question through a case study: understanding shifts in mental health during the course of the COVID-19 pandemic. We demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and improve predictive generalization over time. Ultimately\, we find that these analyses are critical to producing accurate longitudinal studies of social media.
DTSTART;TZID=America/New_York:20220207T120000
DTEND;TZID=America/New_York:20220207T131500
LOCATION:In Person or Virtual Option @ https://wse.zoom.us/j/96735183473 @ 234 Ames Hall\, 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Keith Harrigian “The Problem of Semantic Shift in Longitudinal Monitoring of Social Media: A Case Study on Mental Health during the COVID-19 Pandemic”
URL:https://www.clsp.jhu.edu/events/student-seminar-keith-harrigian-the-problem-of-semantic-shift-in-longitudinal-monitoring-of-social-media-a-case-study-on-mental-health-during-the-covid-19-pandemic/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have questioned the robustness of longitudinal analyses based on statistical methods due to issues of temporal bias and semantic shift. To what extent are changes in semantics over time affecting the reliability of longitudinal analyses? We examine this question through a case study: understanding shifts in mental health during the course of the COVID-19 pandemic. We demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and improve predictive generalization over time. Ultimately\, we find that these analyses are critical to producing accurate longitudinal studies of social media.
\n
X-TAGS;LANGUAGE=en-US:2022\,February\,Harrigian
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21616@www.clsp.jhu.edu
DTSTAMP:20240328T194221Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have shown that the absence of appropriate tuning\, specifically in the presence of semantic shift\, can hinder the robustness of the underlying methods. However\, little is known about the practical effect this sensitivity may have on downstream longitudinal analyses. We explore this gap in the literature through a timely case study: understanding shifts in depression during the course of the COVID-19 pandemic. We find that inclusion of only a small number of semantically-unstable features can promote significant changes in longitudinal estimates of our target outcome. At the same time\, we demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and\, in turn\, improve predictive generalization.
DTSTART;TZID=America/New_York:20220318T120000
DTEND;TZID=America/New_York:20220318T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Keith Harrigian “The Problem of Semantic Shift in Longitudinal Monitoring of Social Media”
URL:https://www.clsp.jhu.edu/events/student-seminar-keith-harrigian-the-problem-of-semantic-shift-in-longitudinal-monitoring-of-social-media/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have shown that the absence of appropriate tuning\, specifically in the presence of semantic shift\, can hinder the robustness of the underlying methods. However\, little is known about the practical effect this sensitivity may have on downstream longitudinal analyses. We explore this gap in the literature through a timely case study: understanding shifts in depression during the course of the COVID-19 pandemic. We find that inclusion of only a small number of semantically-unstable features can promote significant changes in longitudinal estimates of our target outcome. At the same time\, we demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and\, in turn\, improve predictive generalization.
\n
X-TAGS;LANGUAGE=en-US:2022\,Harrigian\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22400@www.clsp.jhu.edu
DTSTAMP:20240328T194221Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of texts into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing and accessing knowledge in a dedicated memory component.\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU) and our other collaborators in Google Research.\nBiography\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics and recently weather forecast and climate modeling. He has a PhD (2007) in Computer and Information Science from U. of Pennsylvania and B.Sc and M.Sc in Biomedical Engineering from Southeast University (Nanjing\, China).
DTSTART;TZID=America/New_York:20221024T120000
DTEND;TZID=America/New_York:20221024T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Fei Sha (University of Southern California) “Extracting Information from Text into Memory for Knowledge-Intensive Tasks”
URL:https://www.clsp.jhu.edu/events/fei-sha-university-of-southern-california/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of texts into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing and accessing knowledge in a dedicated memory component.
\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU) and our other collaborators in Google Research.\n
Biography
\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics and recently weather forecast and climate modeling. He has a PhD (2007) in Computer and Information Science from U. of Pennsylvania and B.Sc and M.Sc in Biomedical Engineering from Southeast University (Nanjing\, China).
\n
X-TAGS;LANGUAGE=en-US:2022\,October\,Sha
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22408@www.clsp.jhu.edu
DTSTAMP:20240328T194221Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nAI-powered applications increasingly adopt Deep Neural Networks (DNNs) for solving many prediction tasks\, leading to more than one DNN running on resource-constrained devices. Supporting many models simultaneously on a device is challenging due to the linearly increased computation\, energy\, and storage costs. An effective approach to address the problem is multi-task learning (MTL) where a set of tasks are learned jointly to allow some parameter sharing among tasks. MTL creates multi-task models based on common DNN architectures and has shown significantly reduced inference costs and improved generalization performance in many machine learning applications. In this talk\, we will introduce our recent efforts on leveraging MTL to improve accuracy and efficiency for edge computing. The talk will introduce multi-task architecture design systems that can automatically identify resource-efficient multi-task models with low inference costs and high task accuracy.\n\nBiography\n\n\nHui Guan is an Assistant Professor in the College of Information and Computer Sciences (CICS) at the University of Massachusetts Amherst\, the flagship campus of the UMass system. She received her Ph.D. in Electrical Engineering from North Carolina State University in 2020. Her research lies in the intersection between machine learning and systems\, with an emphasis on improving the speed\, scalability\, and reliability of machine learning through innovations in algorithms and programming systems. Her current research focuses on both algorithm and system optimizations of deep multi-task learning and graph machine learning.
DTSTART;TZID=America/New_York:20221111T120000
DTEND;TZID=America/New_York:20221111T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Hui Guan (University of Massachusetts Amherst) “Towards Accurate and Efficient Edge Computing Via Multi-Task Learning”
URL:https://www.clsp.jhu.edu/events/hui-guan-university-of-massachusetts-amherst/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nAs artificial intelligence (AI) continues to rapidly expand into existing healthcare infrastructure – e.g.\, clinical decision support\, administrative tasks\, and public health surveillance – it is perhaps more important than ever to reflect on the broader purpose of such systems. While much focus has been on the potential for this technology to improve general health outcomes\, there also exists a significant\, but understated\, opportunity to use this technology to address health-related disparities. Accomplishing the latter depends not only on our ability to effectively identify addressable areas of systemic inequality and translate them into tasks that are machine learnable\, but also our ability to measure\, interpret\, and counteract barriers in training data that may inhibit robustness to distribution shift upon deployment (i.e.\, new populations\, temporal dynamics). In this talk\, we will discuss progress made along both of these dimensions. We will begin by providing background on the state of AI for promoting health equity. Then\, we will present results from a recent clinical phenotyping project and discuss their implication on prevailing views regarding language model robustness in clinical applications. Finally\, we will showcase ongoing efforts to proactively address systemic inequality in healthcare by identifying and characterizing stigmatizing language in medical records.
\n
X-TAGS;LANGUAGE=en-US:2024\,February\,Harrigian
END:VEVENT
END:VCALENDAR