BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21275@www.clsp.jhu.edu
DTSTAMP:20240328T180400Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
DESCRIPTION:Abstract\n\nAutomatic discovery of phone or word-like units
is one of the core objectives in zero-resource speech processing. Recent
attempts employ contrastive predictive coding (CPC)\, where the model lear
ns representations by predicting the next frame given past context. Howeve
r\, CPC only looks at the audio signal’s structure at the frame level. The
speech structure exists beyond the frame level\, i.e.\, at the phone lev
el or even higher. We propose a segmental contrastive predictive coding (
SCPC) framework to learn from the signal structure at both the frame and p
hone level
s.\n\nSCPC is a hierarchical model with three stages trained in an end-to-
end manner. In the first stage\, the model predicts future feature frames
and extracts frame-level representation from the raw waveform. In the seco
nd stage\, a differentiable boundary detector finds variable-length segmen
ts. In the last stage\, the model predicts future segments to learn segmen
t representations. Experiments show that our model outperforms existing ph
one and word segmentation methods on the TIMIT and Buckeye datasets.
DTSTART;TZID=America/New_York:20220211T120000
DTEND;TZID=America/New_York:20220211T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Saurabhchand Bhati “Segmental Contrastive Predict
ive Coding for Unsupervised Acoustic Segmentation”
URL:https://www.clsp.jhu.edu/events/student-seminar-saurabhchand-bhati/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Bhati\,February
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22408@www.clsp.jhu.edu
DTSTAMP:20240328T180400Z
CATEGORIES;LANGUAGE=en-US:Seminars
DESCRIPTION:Abstract\n\nAI-powered applications increasingly adopt Deep Neura
l Networks (DNNs) for solving many prediction tasks\, leading to more than
one DNN running on resource-constrained devices. Supporting many models
simultaneously on a device is challenging due to the linearly increased co
mputation\, energy\, and storage costs. An effective approach to address t
he problem is multi-task learning (MTL)\, where a set of tasks are learned j
ointly to allow some parameter sharing among tasks. MTL creates multi-task
models based on common DNN architectures and has shown significantly redu
ced inference costs and improved generalization performance in many machin
e learning applications. In this talk\, we will introduce our recent effor
ts on leveraging MTL to improve accuracy and efficiency for edge computing
. The talk will introduce multi-task architecture design systems that can
automatically identify resource-efficient multi-task models with low infer
ence costs and high task accuracy.\n\nBiography\n\nHui Guan is an Assist
ant Professor in the College of Information and Computer Sciences (CICS) a
t the University of Massachusetts Amherst\, the flagship campus of the UMa
ss system. She received her Ph.D. in Electrical Engineering from North Car
olina State University in 2020. Her research lies in the intersection betw
een machine learning and systems\, with an emphasis on improving the speed
\, scalability\, and reliability of machine learning through innovations i
n algorithms and programming systems. Her current research focuses on both
algorithm and system optimizations of deep multi-task learning and graph
machine learning.
DTSTART;TZID=America/New_York:20221111T120000
DTEND;TZID=America/New_York:20221111T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Hui Guan (University of Massachusetts Amherst) “Towards Accurate an
d Efficient Edge Computing Via Multi-Task Learning”
URL:https://www.clsp.jhu.edu/events/hui-guan-university-of-massachusetts-am
herst/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Guan\,November
END:VEVENT
END:VCALENDAR