BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21267@www.clsp.jhu.edu
DTSTAMP:20240329T012442Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nIn this talk\, I present a multipronged strategy for
zero-shot cross-lingual Information Extraction\, that is\, the construction
of an IE model for some target language\, given existing annotations exclu
sively in some other language. This work is part of the JHU team’s effort
under the IARPA BETTER program. I explore data augmentation techniques inc
luding data projection and self-training\, and how different pretrained en
coders impact them. Through extensive experiments and extensions of these
techniques\, we find that a combination of approaches\, both new and old\,
leads to better performance than any single cross-lingual strategy.\nBi
ography\nMahsa Yarmohammadi is an assistant research scientist at CLSP\, J
HU\, who leads state-of-the-art research in cross-lingual language and spe
ech applications and algorithms. A primary focus of Yarmohammadi’s researc
h is using deep learning techniques to transfer existing resources into ot
her languages and to learn representations of language from multilingual d
ata. She also works in automatic speech recognition and speech translation
. Yarmohammadi received her PhD in computer science and engineering from O
regon Health & Science University (2016). She joined CLSP as a post-doctor
al fellow in 2017.
DTSTART;TZID=America/New_York:20220204T120000
DTEND;TZID=America/New_York:20220204T131500
LOCATION:Ames 234\, Presented Virtually via Zoom: https://wse.zoom.us/j/967351
83473
SEQUENCE:0
SUMMARY:Mahsa Yarmohammadi (Johns Hopkins University) “Data Augmentation fo
r Zero-shot Cross-Lingual Information Extraction”
URL:https://www.clsp.jhu.edu/events/mahsa-yarmohammadi-johns-hopkins-univer
sity-data-augmentation-for-zero-shot-cross-lingual-information-extraction/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Yarmohammadi
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22408@www.clsp.jhu.edu
DTSTAMP:20240329T012442Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nAI-powered applications increasingly adopt Deep Neura
l Networks (DNNs) for solving many prediction tasks\, leading to more than
one DNN running on resource-constrained devices. Supporting many models
simultaneously on a device is challenging due to the linearly increased co
mputation\, energy\, and storage costs. An effective approach to address t
he problem is multi-task learning (MTL)\, where a set of tasks is learned j
ointly to allow some parameter sharing among tasks. MTL creates multi-task
models based on common DNN architectures and has shown significantly redu
ced inference costs and improved generalization performance in many machin
e learning applications. In this talk\, we will introduce our recent effor
ts to leverage MTL to improve accuracy and efficiency for edge computing
. The talk will present multi-task architecture design systems that can
automatically identify resource-efficient multi-task models with low infer
ence costs and high task accuracy.\nBiography\nHui Guan is an Assist
ant Professor in the College of Information and Computer Sciences (CICS) a
t the University of Massachusetts Amherst\, the flagship campus of the UMa
ss system. She received her Ph.D. in Electrical Engineering from North Car
olina State University in 2020. Her research lies at the intersection of m
achine learning and systems\, with an emphasis on improving the speed
\, scalability\, and reliability of machine learning through innovations i
n algorithms and programming systems. Her current research focuses on both
algorithm and system optimizations of deep multi-task learning and graph
machine learning.
DTSTART;TZID=America/New_York:20221111T120000
DTEND;TZID=America/New_York:20221111T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Hui Guan (University of Massachusetts Amherst) “Towards Accurate an
d Efficient Edge Computing Via Multi-Task Learning”
URL:https://www.clsp.jhu.edu/events/hui-guan-university-of-massachusetts-am
herst/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Guan\,November
END:VEVENT
END:VCALENDAR