BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20730@www.clsp.jhu.edu
DTSTAMP:20240329T145656Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRaytheon BBN participated in the IARPA MATERIAL progr
am\, whose objective is to enable rapid development of language-independen
t methods for cross-lingual information retrieval (CLIR). The challenging
CLIR task of retrieving documents written (or spoken) in one language so t
hat they satisfy an information need expressed in a different language is
exacerbated by unique challenges posed by the MATERIAL program: limited tr
aining data for automatic speech recognition and machine translation\, sca
nt lexical resources\, non-standardized orthography\, etc. Furthermore\, t
he format of the queries and the “Query-Weighted Value” performance measur
e are non-standard and not previously studied in the IR community. In this
talk\, we will describe the Raytheon BBN CLIR system\, which successfu
lly addressed the above challenges and unique characteristics of the pr
ogram.\nBiography\n\nDamianos Karakos has been at Raytheon BBN for the pas
t nine years\, where he is currently a Senior Principal Engineer\, Researc
h. Before that\, he was research faculty at Johns Hopkins University. He h
as worked on several Government projects (e.g.\, DARPA GALE\, DARPA RATS\,
IARPA BABEL\, IARPA MATERIAL\, IARPA BETTER) and on a variety of HLT-rela
ted topics (e.g.\, speech recognition\, speech activity detection\, keywor
d search\, information retrieval). He has published more than 60 peer-revi
ewed papers. His research interests lie at the intersection of human langu
age technology and machine learning\, with an emphasis on statistical meth
ods. He obtained a PhD in Electrical Engineering from the University of Ma
ryland\, College Park\, in 2002.
DTSTART;TZID=America/New_York:20210924T120000
DTEND;TZID=America/New_York:20210924T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Damianos Karakos (Raytheon BBN) “The Raytheon BBN Cross-lingual Inf
ormation Retrieval System developed under the IARPA MATERIAL Program”
URL:https://www.clsp.jhu.edu/events/damianos-karakos/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,Karakos\,September
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22408@www.clsp.jhu.edu
DTSTAMP:20240329T145656Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nAI-powered applications increasingly adopt Deep Neura
l Networks (DNNs) for solving many prediction tasks\, leading to more than
one DNN running on resource-constrained devices. Supporting many models
simultaneously on a device is challenging due to the linearly increased co
mputation\, energy\, and storage costs. An effective approach to address t
he problem is multi-task learning (MTL) where a set of tasks are learned j
ointly to allow some parameter sharing among tasks. MTL creates multi-task
models based on common DNN architectures and has shown significantly redu
ced inference costs and improved generalization performance in many machin
e learning applications. In this talk\, we will introduce our recent effor
ts to leverage MTL to improve accuracy and efficiency for edge computing. T
he talk will present multi-task architecture design systems that can
automatically identify resource-efficient multi-task models with low infer
ence costs and high task accuracy.\n\nBiography\n\n\nHui Guan is an Assist
ant Professor in the College of Information and Computer Sciences (CICS) a
t the University of Massachusetts Amherst\, the flagship campus of the UMa
ss system. She received her Ph.D. in Electrical Engineering from North Car
olina State University in 2020. Her research lies at the intersection of m
achine learning and systems\, with an emphasis on improving the speed
\, scalability\, and reliability of machine learning through innovations i
n algorithms and programming systems. Her current research focuses on both
algorithm and system optimizations of deep multi-task learning and graph
machine learning.
DTSTART;TZID=America/New_York:20221111T120000
DTEND;TZID=America/New_York:20221111T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Hui Guan (University of Massachusetts Amherst) “Towards Accurate an
d Efficient Edge Computing Via Multi-Task Learning”
URL:https://www.clsp.jhu.edu/events/hui-guan-university-of-massachusetts-am
herst/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Guan\,November
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22417@www.clsp.jhu.edu
DTSTAMP:20240329T145656Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nOne of the keys to success in machine learning applic
ations is to improve each user’s personal experience via personalized mode
ls. A personalized model can be a more resource-efficient solution than a
general-purpose model\, too\, because it focuses on a particular sub-probl
em\, for which a smaller model architecture can be good enough. However\,
training a personalized model requires data from the particular test-time
user\, which are not always available due to their private nature and tech
nical challenges. Furthermore\, such data tend to be unlabeled as they can
be collected only at test time\, once the system is deploye
d to user devices. One could rely on the generalization power of a generic
model\, but such a model can be too computationally/spatially complex for
real-time processing in a resource-constrained device. In this talk\, I w
ill present some techniques to circumvent the lack of labeled personal dat
a in the context of speech enhancement. Our machine learning models will r
equire zero or few data samples from the test-time users\, while they can
still achieve the personalization goal. To this end\, we will investigate
modularized speech enhancement models as well as the potential of self-sup
ervised learning for personalized speech enhancement. Because our research
achieves the personalization goal in a data- and resource-efficient way\,
it is a step towards a more available and affordable AI for society.\nBio
graphy\nMinje Kim is an associate professor in the Dept. of Intelligent Sy
stems Engineering at Indiana University\, where he leads his research grou
p\, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visi
ting Academic\, consulting for Amazon Lab126. At IU\, he is affiliated wit
h various programs and labs such as Data Science\, Cognitive Science\, Dep
t. of Statistics\, and Center for Machine Learning. He earned his Ph.D. in
the Dept. of Computer Science at the University of Illinois at Urbana-Cha
mpaign. Before joining UIUC\, he worked as a researcher at ETRI\, a nation
al lab in Korea\, from 2006 to 2011. Before then\, he received his Master’
s and Bachelor’s degrees in the Dept. of Computer Science and Engineering
at POSTECH (Summa Cum Laude) and in the Division of Information and Comput
er Engineering at Ajou University (with honor) in 2006 and 2004\, respecti
vely. He is a recipient of various awards including NSF Career Award (2021
)\, IU Trustees Teaching Award (2021)\, IEEE SPS Best Paper Award (2020)\,
and Google and Starkey’s grants for outstanding student papers in ICASSP
2013 and 2014\, respectively. He is an IEEE Senior Member and also a membe
r of the IEEE Audio and Acoustic Signal Processing Technical Committee (20
18-2023). He is serving as an Associate Editor for EURASIP Journal of Audi
o\, Speech\, and Music Processing\, and as a Consulting Associate Editor f
or IEEE Open Journal of Signal Processing. He is also a reviewer\, program
committee member\, or area chair for the major machine learning and signa
l processing conferences. He has filed more than 50 patent applications as an inventor.
DTSTART;TZID=America/New_York:20221202T120000
DTEND;TZID=America/New_York:20221202T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Minje Kim (Indiana University) “Personalized Speech Enhancement: Da
ta- and Resource-Efficient Machine Learning”
URL:https://www.clsp.jhu.edu/events/minje-kim-indiana-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,December\,Kim
END:VEVENT
END:VCALENDAR