BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21487@www.clsp.jhu.edu
DTSTAMP:20240329T005613Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
Abstract
\nEnormous amounts of ever-changing knowledge are a
vailable online in diverse textual styles and diverse formats. Recent adva
nces in deep learning algorithms and large-scale datasets are spurring pro
gress in many Natural Language Processing (NLP) tasks\, including question
answering. Nevertheless\, these models cannot scale up when task-annotate
d training data are scarce. This talk presents my lab’s work toward buildi
ng general-purpose models in NLP and how to systematically evaluate them.
First\, I present a general model for two known tasks of question answerin
 g in English and multiple languages that is robust to small domain shifts
. Then\, I show a meta-training approach that can solve a variety of NLP
 tasks using only a few examples and introduce a benchmark to evaluate cro
 ss-task generalization. Finally\, I discuss neuro-symbolic appr
oaches to address more complex tasks by eliciting knowledge from structure
d data and language models.
\n\nBiography
\n\nHanna Hajishirzi is an Assistant Professor in the Paul G. Allen Schoo
l of Computer Science & Engineering at the University of Washington and a
Senior Research Manager at the Allen Institute for AI. Her research spans
different areas in NLP and AI\, focusing on developing general-purpose mac
hine learning algorithms that can solve many NLP tasks. Applications for t
hese algorithms include question answering\, representation learning\, gre
en AI\, knowledge extraction\, and conversational dialogue. Honors include
the NSF CAREER Award\, Sloan Fellowship\, Allen Distinguished Investigato
 r Award\, Intel Rising Star Award\, best paper and honorable mention award
s\, and several industry research faculty awards. Hanna received her PhD f
 rom the University of Illinois and spent a year as a postdoc at Disney Researc
h and CMU.
DTSTART;TZID=America/New_York:20220225T120000
DTEND;TZID=America/New_York:20220225T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j
/96735183473
SEQUENCE:0
SUMMARY:Hanna Hajishirzi (University of Washington & Allen Institute for AI
) “Toward Robust\, Knowledge-Rich NLP”
URL:https://www.clsp.jhu.edu/events/hanna-hajishirzi-university-of-washingt
on-allen-institute-for-ai-toward-robust-knowledge-rich-nlp/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Hajishirzi
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22408@www.clsp.jhu.edu
DTSTAMP:20240329T005613Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nAI-powered appl
ications increasingly adopt Deep Neural Networks (DNNs) for solving many p
 rediction tasks\, leading to more than one DNN running on resource-constr
ained devices. Supporting many models simultaneously on a device is challe
 nging due to linearly increasing computation\, energy\, and storage cos
ts. An effective approach to address the problem is multi-task learning (M
TL) where a set of tasks are learned jointly to allow some parameter shari
ng among tasks. MTL creates multi-task models based on common DNN architec
tures and has shown significantly reduced inference costs and improved gen
eralization performance in many machine learning applications. In this tal
k\, we will introduce our recent efforts on leveraging MTL to improve accu
racy and efficiency for edge computing. The talk will introduce multi-task
architecture design systems that can automatically identify resource-effi
cient multi-task models with low inference costs and high task accuracy.
 \n\nBiography
 \nHui Guan is an Assistant Professor in the College of Information and C
 omputer Sciences (CICS) at the University of Massachusetts Amherst\, th
 e flagship campus of the UMass system. She received her Ph.D. in Electr
 ical Engineering from North Carolina State University in 2020. Her rese
 arch lies in the intersection between mac
hine learning and systems\, with an emphasis on improving the speed\, scal
ability\, and reliability of machine learning through innovations in algor
ithms and programming systems. Her current research focuses on both algori
thm and system optimizations of deep multi-task learning and graph machine
learning.
DTSTART;TZID=America/New_York:20221111T120000
DTEND;TZID=America/New_York:20221111T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Hui Guan (University of Massachusetts Amherst) “Towards Accurate an
d Efficient Edge Computing Via Multi-Task Learning”
URL:https://www.clsp.jhu.edu/events/hui-guan-university-of-massachusetts-am
herst/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Guan\,November
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24481@www.clsp.jhu.edu
DTSTAMP:20240329T005613Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nNatural language provides
an intuitive and powerful interface to access knowledge at scale. Modern l
anguage systems draw information from two rich knowledge sources: (1) info
rmation stored in their parameters during massive pretraining and (2) docu
ments retrieved at inference time. Yet\, we are far from building systems
that can reliably provide information from such knowledge sources. In this
talk\, I will discuss paths for more robust systems. In the first part of
the talk\, I will present a module for scaling retrieval-based knowledge
augmentation. We learn a compressor that maps retrieved documents into tex
tual summaries prior to in-context integration. This not only reduces the
computational costs but also filters irrelevant or incorrect information.
In the second half of the talk\, I will discuss the challenges of updating
knowledge stored in model parameters and propose a method to prevent mode
ls from reciting outdated information by identifying facts that are prone
to rapid change. I will conclude my talk by proposing an interactive syste
m that can elicit information from users when needed.
\nBiog
raphy
\nEunsol Choi is an assistant pro
fessor in the Computer Science department at the University of Texas at Au
stin. Prior to UT\, she spent a year at Google AI as a visiting researcher
. Her research area spans natural language processing and machine learning
. She is particularly interested in interpreting and reasoning about text
in a dynamic real world context. She is a recipient of a Facebook research
fellowship\, Google faculty research award\, Sony faculty award\, and an
outstanding paper award at EMNLP. She received a Ph.D. in computer science
 and engineering from the University of Washington and a B.A. in mathema
 tics and computer science from Cornell University.
\n
DTSTART;TZID=America/New_York:20240315T120000
DTEND;TZID=America/New_York:20240315T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Eunsol Choi (University of Texas at Austin) “Knowledge-Rich Languag
e Systems in a Dynamic World”
URL:https://www.clsp.jhu.edu/events/eunsol-choi-university-of-texas-at-aust
in-knowledge-rich-language-systems-in-a-dynamic-world/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,Choi\,March
END:VEVENT
END:VCALENDAR