BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-22375@www.clsp.jhu.edu
DTSTAMP:20240328T111137Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nI will present our work on data augmentation using style transfer as a way to improve domain adaptation in sequence labeling tasks. The target domain is social media data\, and the task is named entity recognition (NER). The premise is that we can transform the labelled out-of-domain data into something that is stylistically more closely related to the target data. Then we can train a model on a combination of the generated data and the smaller amount of in-domain data to improve NER prediction performance. I will show recent empirical results on these efforts.
\nIf time allows\, I will also give an overview of other research projects I’m currently leading at the RiTUAL (Research in Text Understanding and Analysis of Language) lab. The common thread among all these research problems is the scarcity of labeled data.
\nBiography
\nThamar Solorio is a Professor of Computer Science at the University of Houston (UH). She holds graduate degrees in Computer Science from the Instituto Nacional de Astrofísica\, Óptica y Electrónica\, in Puebla\, Mexico. Her research interests include information extraction from social media data\, enabling technology for code-switched data\, stylistic modeling of text\, and more recently multimodal approaches for online content understanding. She is the director and founder of the RiTUAL Lab at UH. She is the recipient of an NSF CAREER award for her work on authorship attribution\, and recipient of the 2014 Emerging Leader ABIE Award in Honor of Denice Denton. She is currently serving a second term as an elected board member of the North American Chapter of the Association for Computational Linguistics and was PC co-chair for NAACL 2019. She recently joined the team of Editors in Chief for the ACL Rolling Review (ARR) system. Her research is currently funded by the NSF and by Adobe.
DTSTART;TZID=America/New_York:20220923T120000
DTEND;TZID=America/New_York:20220923T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Thamar Solorio (University of Houston) “Style Transfer for Data Augmentation in Sequence Labeling Tasks”
URL:https://www.clsp.jhu.edu/events/thamar-solorio-university-of-houston-style-transfer-for-data-augmentation-in-sequence-labeling-tasks/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,September\,Solorio
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24465@www.clsp.jhu.edu
DTSTAMP:20240328T111137Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nLarge Language Models (LLMs) have demonstrated remarkable capabilities across various domains. However\, it is still very challenging to build highly reliable applications with LLMs that support specialized use cases. LLMs trained on web data often excel at capturing general language patterns\, but they can struggle to support specialized domains and personalized user needs. Moreover\, LLMs can produce errors that are deceptively plausible\, making them potentially dangerous for high-trust scenarios. In this talk\, I will discuss some of our recent efforts to address these challenges with data-efficient tuning methods and a novel factuality evaluation framework. Specifically\, my talk will focus on building multilingual applications\, one crucial use case often characterized by limited tuning and evaluation data.
\nBio
Xinyi (Cindy) Wang is a research scientist at Google DeepMind working on Large Language Models (LLMs) and their application to generative question answering. She has worked on multilingual instruction tuning for Gemini and multilingual generative models used in Google Search. Before Google DeepMind\, Cindy Wang obtained her PhD in Language Technologies at Carnegie Mellon University. During her PhD\, she mainly worked on developing data-efficient natural language processing (NLP) systems. She has made several contributions in data selection\, data representation\, and model adaptation for multilingual NLP.
DTSTART;TZID=America/New_York:20240308T120000
DTEND;TZID=America/New_York:20240308T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Cindy Wang (Google DeepMind) “Building Data-Efficient and Reliable Applications with Large Language Models”
URL:https://www.clsp.jhu.edu/events/cindy-wang-google-deepmind-building-data-efficient-and-reliable-applications-with-large-language-models/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,March\,Wang
END:VEVENT
END:VCALENDAR