BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20723@www.clsp.jhu.edu
DTSTAMP:20240329T152449Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nText simplification aims to help audiences read and u
nderstand a piece of text through lexical\, syntactic\, and discourse modi
fications\, while remaining faithful to its central idea and meaning. Than
ks to large-scale parallel corpora derived from Wikipedia and News\, much
of modern-day text simplification research focuses on sentence simplificat
ion\, transforming original\, more complex sentences into simplified versi
ons. In this talk\, I present new frontiers that focus on discourse operat
ions. First\, we consider the challenging task of simplifying highly techn
ical language\, in our case\, medical texts. We introduce a new corpus of
parallel texts in English comprising technical and lay summaries of all pu
blished evidence pertaining to different clinical topics. We then propose
a new metric to quantify stylistic differences between the two\, and mo
dels for paragraph-level simplification. Second\, we present the first dat
a-driven study of inserting elaborations and explanations during simplific
ation\, and illustrate the richness and complexities of this phenomenon.\n
Biography\n\nJessy Li is an assistant professor in the Department of Lingu
istics at UT Austin where she works in computational linguistics and na
tural language processing. Her work focuses on discourse processing\, text
generation\, and language pragmatics in social media. She received her Ph
.D. in 2017 from the University of Pennsylvania. She received an ACM SIGSO
FT Distinguished Paper Award at FSE 2019\, an Area Chair Favorite at COLIN
G 2018\, and a Best Paper nomination at SIGDIAL 2016.\nWeb: https://jessyl
i.com
DTSTART;TZID=America/New_York:20210917T120000
DTEND;TZID=America/New_York:20210917T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Jessy Li (University of Texas at Austin – Virtual Visit) “New Chall
enges in Text Simplification”
URL:https://www.clsp.jhu.edu/events/jessy-li-university-of-texas-at-austin/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:<p><strong>Abstract</strong></p>\n<p>Text sim
plification aims to help audiences read and understand a piece of text thr
ough lexical\, syntactic\, and discourse modifications\, while remaining f
aithful to its central idea and meaning. Thanks to large-scale parallel co
rpora derived from Wikipedia and News\, much of modern-day text simplifica
tion research focuses on sentence simplification\, transforming original\,
 more complex sentences into simplified versions. In this talk\, I presen
t new frontiers that focus on discourse operations. First\, we consider th
e challenging task of simplifying highly technical language\, in our case\
, medical texts. We introduce a new corpus of parallel texts in English co
mprising technical and lay summaries of all published evidence pertaining
to different clinical topics. We then propose a new metric to quantify sty
listic differences between the two\, and models for paragraph-level simpli
fication. Second\, we present the first data-driven study of inserting ela
borations and explanations during simplification\, and illustrate the ric
hness and complexities of this phenomenon.</p>\n<p><strong>Biography</str
ong></p>\n<p>Jessy Li is an assistant professor in the Department of Ling
uistics at UT Austin where she works in computational linguistics and nat
ural language processing. Her work focuses on discourse processing\, text
generation\, and language pragmatics in social media. She received her Ph
.D. in 2017 from the University of Pennsylvania. She received an ACM SIGS
OFT Distinguished Paper Award at FSE 2019\, an Area Chair Favorite at COL
ING 2018\, and a Best Paper nomination at SIGDIAL 2016.</p>
X-TAGS;LANGUAGE=en-US:2021\,Li\,September
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22408@www.clsp.jhu.edu
DTSTAMP:20240329T152449Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nAI-powered applications increasingly adopt Deep Neura
l Networks (DNNs) for solving many prediction tasks\, leading to more than
one DNN running on resource-constrained devices. Supporting many models
simultaneously on a device is challenging due to the linearly increased co
mputation\, energy\, and storage costs. An effective approach to address t
he problem is multi-task learning (MTL) where a set of tasks are learned j
ointly to allow some parameter sharing among tasks. MTL creates multi-task
models based on common DNN architectures and has shown significantly redu
ced inference costs and improved generalization performance in many machin
e learning applications. In this talk\, we will introduce our recent effor
ts on leveraging MTL to improve accuracy and efficiency for edge computing
. The talk will introduce multi-task architecture design systems that can
automatically identify resource-efficient multi-task models with low infer
ence costs and high task accuracy.\n\nBiography\n\n\nHui Guan is an Assist
ant Professor in the College of Information and Computer Sciences (CICS) a
t the University of Massachusetts Amherst\, the flagship campus of the UMa
ss system. She received her Ph.D. in Electrical Engineering from North Car
olina State University in 2020. Her research lies in the intersection betw
een machine learning and systems\, with an emphasis on improving the speed
\, scalability\, and reliability of machine learning through innovations i
n algorithms and programming systems. Her current research focuses on both
algorithm and system optimizations of deep multi-task learning and graph
machine learning.
DTSTART;TZID=America/New_York:20221111T120000
DTEND;TZID=America/New_York:20221111T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Hui Guan (University of Massachusetts Amherst) “Towards Accurate an
d Efficient Edge Computing Via Multi-Task Learning”
URL:https://www.clsp.jhu.edu/events/hui-guan-university-of-massachusetts-am
herst/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:<p><strong>Abstract</strong></p>\n<p>AI-powe
red applications increasingly adopt Deep Neural Networks (DNNs) for solvi
ng many prediction tasks\, leading to more than one DNN running on resour
ce-constrained devices. Supporting many models simultaneously on a device
is challenging due to the linearly increased computation\, energy\, and s
torage costs. An effective approach to address the problem is multi-task
learning (MTL) where a set of tasks are learned jointly to allow some par
ameter sharing among tasks. MTL creates multi-task models based on common
DNN architectures and has shown significantly reduced inference costs an
d improved generalization performance in many machine learning applicatio
ns. In this talk\, we will introduce our recent efforts on leveraging MTL
to improve accuracy and efficiency for edge computing. The talk will intr
oduce multi-task architecture design systems that can automatically ident
ify resource-efficient multi-task models with low inference costs and hig
h task accuracy.</p>\n<p><strong>Biography</strong></p>\n<p>Hui Guan is a
n Assistant Professor in the College of Information and Computer Sciences
 (CICS) at the University of Massachusetts Amherst\, the flagship campus
of the UMass system. She received her Ph.D. in Electrical Engineering fro
m North Carolina State University in 2020. Her research lies in the inter
section between machine learning and systems\, with an emphasis on improv
ing the speed\, scalability\, and reliability of machine learning through
 innovations in algorithms and programming systems. Her current research
focuses on both algorithm and system optimizations of deep multi-task lea
rning and graph machine learning.</p>
X-TAGS;LANGUAGE=en-US:2022\,Guan\,November
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23882@www.clsp.jhu.edu
DTSTAMP:20240329T152449Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nLarge language models (LLMs) have demonstrated incred
ible power\, but they also possess vulnerabilities that can lead to misuse
and potential attacks. In this presentation\, we will address two fundame
ntal questions regarding the responsible utilization of LLMs: (1) How can
we accurately identify AI-generated text? (2) What measures can safeguard
the intellectual property of LLMs? We will introduce two recent watermarki
ng techniques designed for text and models\, respectively. Our discussion
will encompass the theoretical underpinnings that ensure the correctness o
f watermark detection\, along with robustness against evasion attacks. Fur
thermore\, we will showcase empirical evidence validating their effectiven
ess. These findings establish a solid technical groundwork for policymaker
s\, legal professionals\, and generative AI practitioners alike.\nBiograph
y\nLei Li is an Assistant Professor in the Language Technology Institute a
t Carnegie Mellon University. He received his Ph.D. from Carnegie Mellon Universit
y School of Computer Science. He is a recipient of ACL 2021 Best Paper Awa
rd\, CCF Young Elite Award in 2019\, CCF distinguished speaker in 2017\, W
u Wen-tsün AI prize in 2017\, and 2012 ACM SIGKDD dissertation award (runn
er-up)\, and is recognized as Notable Area Chair of ICLR 2023. Previously\
, he was a faculty member at UC Santa Barbara. Prior to that\, he founded
ByteDance AI Lab in 2016 and led its research in NLP\, ML\, Robotics\, an
d Drug Discovery. He launched ByteDance’s machine translation system VolcT
rans and AI writing system Xiaomingbot\, serving one billion users.
DTSTART;TZID=America/New_York:20230901T120000
DTEND;TZID=America/New_York:20230901T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Lei Li (Carnegie Mellon University) “Empowering Responsible Use of
Large Language Models”
URL:https://www.clsp.jhu.edu/events/lei-li-carnegie-mellon-university-empow
ering-responsible-use-of-large-language-models/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:<p><strong>Abstract</strong></p>\n<p>Large l
anguage models (LLMs) have demonstrated incredible power\, but they also
possess vulnerabilities that can lead to misuse and potential attacks. In
 this presentation\, we will address two fundamental questions regarding
the responsible utilization of LLMs: (1) How can we accurately identify A
I-generated text? (2) What measures can safeguard the intellectual proper
ty of LLMs? We will introduce two recent watermarking techniques designed
 for text and models\, respectively. Our discussion will encompass the th
eoretical underpinnings that ensure the correctness of watermark detectio
n\, along with robustness against evasion attacks. Furthermore\, we will
showcase empirical evidence validating their effectiveness. These finding
s establish a solid technical groundwork for policymakers\, legal profess
ionals\, and generative AI practitioners alike.</p>\n<p><strong>Biography
</strong></p>\n<p>Lei Li is an Assistant Professor in the Language Techno
logy Institute at Carnegie Mellon University. He received his Ph.D. from
Carnegie Mellon University School of Computer Science. He is a recipient
of ACL 2021 Best Paper Award\, CCF Young Elite Award in 2019\, CCF distin
guished speaker in 2017\, Wu Wen-tsün AI prize in 2017\, and 2012 ACM SIG
KDD dissertation award (runner-up)\, and is recognized as Notable Area Ch
air of ICLR 2023. Previously\, he was a faculty member at UC Santa Barbar
a. Prior to that\, he founded ByteDance AI Lab in 2016 and led its resear
ch in NLP\, ML\, Robotics\, and Drug Discovery. He launched ByteDance’s m
achine translation system VolcTrans and AI writing system Xiaomingbot\, s
erving one billion users.</p>
X-TAGS;LANGUAGE=en-US:2023\,Li\,September
END:VEVENT
END:VCALENDAR