BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21275@www.clsp.jhu.edu
DTSTAMP:20240329T111717Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\n\n
Automatic discovery of phone or word-like units is one of the core
objectives in zero-resource speech
processing. Recent attempts employ contrastive predictive coding (CPC)\,
where the model learns representations by predicting the next frame given
past context. However\, CPC only looks at the audio signal’s structure at
the frame level. Speech structure exists beyond the frame level\, e.g.\, at
the phone level or even higher. We propose a segmental contrastive
predictive coding (SCPC) framework to learn from the signal structure at
both the frame and phone levels.\n
\n
SCPC is a hierarchical model with three stages trained in an end-to-end m
anner. In the first stage\, the model predicts future feature frames and e
xtracts frame-level representation from the raw waveform. In the second st
age\, a differentiable boundary detector finds variable-length segments. I
n the last stage\, the model predicts future segments to learn segment rep
resentations. Experiments show that our model outperforms existing phone
and word segmentation methods on the TIMIT and Buckeye datasets.
DTSTART;TZID=America/New_York:20220211T120000
DTEND;TZID=America/New_York:20220211T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Saurabhchand Bhati “Segmental Contrastive Predict
ive Coding for Unsupervised Acoustic Segmentation”
URL:https://www.clsp.jhu.edu/events/student-seminar-saurabhchand-bhati/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Bhati\,February
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21487@www.clsp.jhu.edu
DTSTAMP:20240329T111717Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nEnormous amounts of ever-changing knowledge are a
vailable online in diverse textual styles and diverse formats. Recent adva
nces in deep learning algorithms and large-scale datasets are spurring pro
gress in many Natural Language Processing (NLP) tasks\, including question
answering. Nevertheless\, these models cannot scale up when task-annotate
d training data are scarce. This talk presents my lab’s work toward buildi
ng general-purpose models in NLP and how to systematically evaluate them.
First\, I present a general model for two known tasks of question
answering in English and multiple languages that is robust to small domain
shifts. Then\, I show a meta-training approach that can solve a variety of
NLP tasks using only a few examples and introduce a benchmark to evaluate
cross-task generalization. Finally\, I discuss neuro-symbolic appr
oaches to address more complex tasks by eliciting knowledge from structure
d data and language models.
\n\nBiography
\n\nHanna Hajishirzi is an Assistant Professor in the Paul G. Allen Schoo
l of Computer Science & Engineering at the University of Washington and a
Senior Research Manager at the Allen Institute for AI. Her research spans
different areas in NLP and AI\, focusing on developing general-purpose mac
hine learning algorithms that can solve many NLP tasks. Applications for t
hese algorithms include question answering\, representation learning\, gre
en AI\, knowledge extraction\, and conversational dialogue. Honors include
the NSF CAREER Award\, a Sloan Fellowship\, the Allen Distinguished
Investigator Award\, the Intel Rising Star Award\, best paper and
honorable mention awards\, and several industry research faculty awards.
Hanna received her PhD from the University of Illinois and spent a year as
a postdoc at Disney Research and CMU.
DTSTART;TZID=America/New_York:20220225T120000
DTEND;TZID=America/New_York:20220225T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j
/96735183473
SEQUENCE:0
SUMMARY:Hanna Hajishirzi (University of Washington & Allen Institute for AI
) “Toward Robust\, Knowledge-Rich NLP”
URL:https://www.clsp.jhu.edu/events/hanna-hajishirzi-university-of-washingt
on-allen-institute-for-ai-toward-robust-knowledge-rich-nlp/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Hajishirzi
END:VEVENT
END:VCALENDAR