BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21487@www.clsp.jhu.edu
DTSTAMP:20240328T094729Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nEnormous amounts of ever-changing knowledge are a
vailable online in diverse textual styles and diverse formats. Recent adva
nces in deep learning algorithms and large-scale datasets are spurring pro
gress in many Natural Language Processing (NLP) tasks\, including question
answering. Nevertheless\, these models cannot scale up when task-annotate
d training data are scarce. This talk presents my lab’s work toward buildi
ng general-purpose models in NLP and how to systematically evaluate them.
First\, I present a general model for two known question-answering tasks\,
in English and in multiple languages\, that is robust to small domain
shifts. Then\, I show a meta-training approach that can solve a variety of
NLP tasks using only a few examples and introduce a benchmark to evaluate
cross-task generalization. Finally\, I discuss neuro-symbolic appr
oaches to address more complex tasks by eliciting knowledge from structure
d data and language models.
\n\nBiography
\n\nHanna Hajishirzi is an Assistant Professor in the Paul G. Allen Schoo
l of Computer Science & Engineering at the University of Washington and a
Senior Research Manager at the Allen Institute for AI. Her research spans
different areas in NLP and AI\, focusing on developing general-purpose mac
hine learning algorithms that can solve many NLP tasks. Applications for t
hese algorithms include question answering\, representation learning\, gre
en AI\, knowledge extraction\, and conversational dialogue. Honors include
the NSF CAREER Award\, Sloan Fellowship\, Allen Distinguished Investigator
Award\, Intel Rising Star Award\, best paper and honorable mention awards\,
and several industry research faculty awards. Hanna received her PhD from
the University of Illinois and spent a year as a postdoc at Disney Research
and CMU.
DTSTART;TZID=America/New_York:20220225T120000
DTEND;TZID=America/New_York:20220225T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j
/96735183473
SEQUENCE:0
SUMMARY:Hanna Hajishirzi (University of Washington & Allen Institute for AI
) “Toward Robust\, Knowledge-Rich NLP”
URL:https://www.clsp.jhu.edu/events/hanna-hajishirzi-university-of-washingt
on-allen-institute-for-ai-toward-robust-knowledge-rich-nlp/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Hajishirzi
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21615@www.clsp.jhu.edu
DTSTAMP:20240328T094729Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract
\n\n
\n
We conside
r a problem of data collection for semantically rich NLU tasks\, where det
ailed semantics of documents (or utterances) are captured using a complex
meaning representation. Previously\, data collection for such tasks was e
ither handled at the cost of extensive annotator training (e.g. in FrameNe
t or PropBank) or simplified meaning representation (e.g. in QA-SRL or Ove
rnight). In this talk\, we present two systems [1\, 2] that aim to suppor
t fast\, accurate\, and expressive semantic annotations by pairing human w
orkers with a trained model in the loop.
\n
\n
The firs
t system\, called Guided K-best [1]\, is an annotation toolkit for convers
ational semantic parsing. Instead of typing annotations from scratch\, da
ta specialists choose a correct parse from the K-best output of a few-shot
prototyped model. As the K-best list can be large (e.g. K=100)\, we guid
e the annotators’ exploration of the K-best list via explainable hierarchi
cal clustering. In addition\, we experiment with RoBERTa-based reranking
of the K-best list to recalibrate the few-shot model towards Accuracy@K.
The final system allows data to be annotated up to 35% faster than the
standard\, non-guided K-best and improves the few-shot model’s top-1
accuracy by up to 18%. The second system\, called SchemaBlocks [2]\, is an annotation
toolkit for schemas\, or structured descriptions of frequent real-world s
cenarios (e.g.\, cooking a meal). It represents schemas in the annotation
UI as nested blocks. Using a novel Causal ARM model\, we further speed u
p the annotation process and guide data specialists towards expressive and
diverse schemas. As part of this work\, we collect 232 schemas\, evaluat
ing their internal coherence and their coverage on large-scale newswire co
rpora.
\n
\n
\n
DTSTART;TZID=America/New_York:20220311T120000
DTEND;TZID=America/New_York:20220311T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Student Seminar – Anton Belyy “Systems for Human-AI Cooperation on
Collecting Semantic Annotations”
URL:https://www.clsp.jhu.edu/events/student-seminar-anton-belyy-systems-for
-human-ai-cooperation-on-collecting-semantic-annotations/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Belyy\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22423@www.clsp.jhu.edu
DTSTAMP:20240328T094729Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
DTSTART;TZID=America/New_York:20221007T120000
DTEND;TZID=America/New_York:20221007T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Ariya Rastrow (Amazon)
URL:https://www.clsp.jhu.edu/events/ariya-rastrow-amazon-2/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,October\,Rastrow
END:VEVENT
END:VCALENDAR