BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21267@www.clsp.jhu.edu
DTSTAMP:20240328T100511Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nIn this talk\, I present a
multipronged strategy for zero-shot cross-lingual Information Extraction\, that is\, the construction of an IE model for some target language\, given existing annotations exclusively in some other language. This work is par
t of the JHU team’s effort under the IARPA BETTER program. I explore data
augmentation techniques including data projection and self-training\, and
how different pretrained encoders impact them. Through extensive experiments and extensions of these techniques\, we find that a combination of approaches\, both new and old\, outperforms any single cross-lingual strategy.
\nBiography
\nMahsa
Yarmohammadi is an assistant research scientist at CLSP\, JHU\, who leads state-of-the-art research in cross-lingual language and speech applications and algorithms. A primary focus of Yarmohammadi’s research is using deep learning techniques to transfer existing resources into other languages and to learn representations of language from multilingual data. She also works in automatic speech recognition and speech translation. Yarmohammadi received her PhD in computer science and engineering from Oregon Health & Science University (2016). She joined CLSP as a post-doctoral fellow in 2
017.
\n
DTSTART;TZID=America/New_York:20220204T120000
DTEND;TZID=America/New_York:20220204T131500
LOCATION:Ames 234 - Presented Virtually via Zoom https://wse.zoom.us/j/96735183473
SEQUENCE:0
SUMMARY:Mahsa Yarmohammadi (Johns Hopkins University) “Data Augmentation fo
r Zero-shot Cross-Lingual Information Extraction”
URL:https://www.clsp.jhu.edu/events/mahsa-yarmohammadi-johns-hopkins-univer
sity-data-augmentation-for-zero-shot-cross-lingual-information-extraction/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Yarmohammadi
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21487@www.clsp.jhu.edu
DTSTAMP:20240328T100511Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nEnormous amounts of ever-changing knowledge are available online in diverse textual styles and formats. Recent adva
nces in deep learning algorithms and large-scale datasets are spurring pro
gress in many Natural Language Processing (NLP) tasks\, including question
answering. Nevertheless\, these models cannot scale up when task-annotate
d training data are scarce. This talk presents my lab’s work toward buildi
ng general-purpose models in NLP and how to systematically evaluate them.
First\, I present a general model for two known question answering tasks in English and multiple languages that is robust to small domain shifts. Then\, I show a meta-training approach that can solve a variety of NLP tasks using only a few examples and introduce a benchmark to evaluate cross-task generalization. Finally\, I discuss neuro-symbolic appr
oaches to address more complex tasks by eliciting knowledge from structure
d data and language models.
\n\nBiography
\n\nHanna Hajishirzi is an Assistant Professor in the Paul G. Allen Schoo
l of Computer Science & Engineering at the University of Washington and a
Senior Research Manager at the Allen Institute for AI. Her research spans
different areas in NLP and AI\, focusing on developing general-purpose mac
hine learning algorithms that can solve many NLP tasks. Applications for t
hese algorithms include question answering\, representation learning\, gre
en AI\, knowledge extraction\, and conversational dialogue. Honors include
the NSF CAREER Award\, Sloan Fellowship\, Allen Distinguished Investigato
r Award\, Intel Rising Star Award\, best paper and honorable mention awards\, and several industry research faculty awards. Hanna received her PhD from the University of Illinois and spent a year as a postdoc at Disney Research and CMU.
DTSTART;TZID=America/New_York:20220225T120000
DTEND;TZID=America/New_York:20220225T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j
/96735183473
SEQUENCE:0
SUMMARY:Hanna Hajishirzi (University of Washington & Allen Institute for AI
) “Toward Robust\, Knowledge-Rich NLP”
URL:https://www.clsp.jhu.edu/events/hanna-hajishirzi-university-of-washingt
on-allen-institute-for-ai-toward-robust-knowledge-rich-nlp/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Hajishirzi
END:VEVENT
END:VCALENDAR