BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20071104T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20070311T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21487@www.clsp.jhu.edu
DTSTAMP:20240329T142130Z
CATEGORIES;LANGUAGE=en-US:Seminars
DESCRIPTION:Abstract\nEnormous amounts of ever-changing knowledge are avai
lable online in diverse textual styles and diverse formats. Recent advance
s in deep learning algorithms and large-scale datasets are spurring progre
ss in many Natural Language Processing (NLP) tasks\, including question an
swering. Nevertheless\, these models cannot scale up when task-annotated t
raining data are scarce. This talk presents my lab’s work toward building
general-purpose models in NLP and how to systematically evaluate them. Fir
st\, I present a general model for two known question answering tasks in E
nglish and multiple languages that is robust to small domain shifts. Then\,
I show a meta-training approach that can solve a variety of NLP tasks usi
ng only a few examples and introduce a benchmark to evaluate cross-task ge
neralization. Finally\, I discuss neuro-symbolic approaches to
address more complex tasks by eliciting knowledge from structured data and
language models.\n\nBiography\n\nHanna Hajishirzi is an Assistant Profess
or in the Paul G. Allen School of Computer Science & Engineering at the Un
iversity of Washington and a Senior Research Manager at the Allen Institut
e for AI. Her research spans different areas in NLP and AI\, focusing on d
eveloping general-purpose machine learning algorithms that can solve many
NLP tasks. Applications for these algorithms include question answering\,
representation learning\, green AI\, knowledge extraction\, and conversati
onal dialogue. Honors include the NSF CAREER Award\, Sloan Fellowship\, Al
len Distinguished Investigator Award\, Intel rising star award\, best pape
r and honorable mention awards\, and several industry research faculty awa
rds. Hanna received her PhD from the University of Illinois and spent a ye
ar as a postdoc at Disney Research and CMU.
DTSTART;TZID=America/New_York:20220225T120000
DTEND;TZID=America/New_York:20220225T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j
/96735183473
SEQUENCE:0
SUMMARY:Hanna Hajishirzi (University of Washington & Allen Institute for AI
) “Toward Robust\, Knowledge-Rich NLP”
URL:https://www.clsp.jhu.edu/events/hanna-hajishirzi-university-of-washingt
on-allen-institute-for-ai-toward-robust-knowledge-rich-nlp/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Hajishirzi
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22417@www.clsp.jhu.edu
DTSTAMP:20240329T142130Z
CATEGORIES;LANGUAGE=en-US:Seminars
DESCRIPTION:Abstract\nOne of the keys to success in machine learning applic
ations is to improve each user’s personal experience via personalized mode
ls. A personalized model can be a more resource-efficient solution than a
general-purpose model\, too\, because it focuses on a particular sub-probl
em\, for which a smaller model architecture can be good enough. However\,
training a personalized model requires data from the particular test-time
user\, which are not always available due to their private nature and tech
nical challenges. Furthermore\, such data tend to be unlabeled as they can
be collected only at test time\, after the system has been deployed to use
r devices. One could rely on the generalization power of a generic
model\, but such a model can be too computationally/spatially complex for
real-time processing in a resource-constrained device. In this talk\, I w
ill present some techniques to circumvent the lack of labeled personal dat
a in the context of speech enhancement. Our machine learning models will r
equire zero or few data samples from the test-time users\, while they can
still achieve the personalization goal. To this end\, we will investigate
modularized speech enhancement models as well as the potential of self-sup
ervised learning for personalized speech enhancement. Because our research
achieves the personalization goal in a data- and resource-efficient way\,
it is a step towards a more available and affordable AI for society.\nBio
graphy\nMinje Kim is an associate professor in the Dept. of Intelligent Sy
stems Engineering at Indiana University\, where he leads his research grou
p\, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visi
ting Academic\, consulting for Amazon Lab126. At IU\, he is affiliated wit
h various programs and labs such as Data Science\, Cognitive Science\, Dep
t. of Statistics\, and Center for Machine Learning. He earned his Ph.D. in
the Dept. of Computer Science at the University of Illinois at Urbana-Cha
mpaign. Before joining UIUC\, he worked as a researcher at ETRI\, a nation
al lab in Korea\, from 2006 to 2011. Before then\, he received his Master’
s and Bachelor’s degrees in the Dept. of Computer Science and Engineering
at POSTECH (Summa Cum Laude) and in the Division of Information and Comput
er Engineering at Ajou University (with honors) in 2006 and 2004\, respecti
vely. He is a recipient of various awards including the NSF CAREER Award (2021
)\, IU Trustees Teaching Award (2021)\, IEEE SPS Best Paper Award (2020)\,
and Google and Starkey’s grants for outstanding student papers in ICASSP
2013 and 2014\, respectively. He is an IEEE Senior Member and also a membe
r of the IEEE Audio and Acoustic Signal Processing Technical Committee (20
18-2023). He is serving as an Associate Editor for the EURASIP Journal on A
udio\, Speech\, and Music Processing\, and as a Consulting Associate Editor f
or IEEE Open Journal of Signal Processing. He is also a reviewer\, program
committee member\, or area chair for the major machine learning and signa
l processing venues. He has filed more than 50 patent applications as an i
nventor.
DTSTART;TZID=America/New_York:20221202T120000
DTEND;TZID=America/New_York:20221202T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Minje Kim (Indiana University) “Personalized Speech Enhancement: Da
ta- and Resource-Efficient Machine Learning”
URL:https://www.clsp.jhu.edu/events/minje-kim-indiana-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,December\,Kim
END:VEVENT
END:VCALENDAR