BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21072@www.clsp.jhu.edu
DTSTAMP:20240328T100828Z
CATEGORIES;LANGUAGE=en-US:Seminars
DESCRIPTION:Abstract\nEmotion has intrigued researchers for generations. This fascination has permeated the engineering community\, motivating the development of affective computing methods. However\, human emotion remains notoriously difficult to detect accurately. As a result\, emotion classification techniques are not always effective when deployed. This is a problem because we are missing out on the potential that emotion recognition provides: the opportunity to automatically measure an aspect of behavior that provides critical insight into our health and wellbeing\, insight that is not always easily accessible. In this talk\, I will discuss our efforts in developing emotion recognition approaches that are effective in natural environments and demonstrate how these approaches can be used to support mental health.\n\nBiography\n\nEmily Mower Provost is an Associate Professor in Computer Science and Engineering and Toyota Faculty Scholar at the University of Michigan. She received her Ph.D. in Electrical Engineering from the University of Southern California (USC)\, Los Angeles\, CA\, in 2010. She has been awarded a National Science Foundation CAREER Award (2017)\, the Oscar Stern Award for Depression Research (2015)\, and a National Science Foundation Graduate Research Fellowship (2004-2007). She is a co-author of the paper “Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition\,” winner of the Best Student Paper award at ACM Multimedia 2014\, and a co-author of the winning entry in the Classifier Sub-Challenge of the Interspeech 2009 Emotion Challenge. Her research interests are in human-centered speech and video processing\, multimodal interface design\, and speech-based assistive technology. The goals of her research are motivated by the complexities of the perception and expression of human behavior.
DTSTART;TZID=America/New_York:20211206T120000
DTEND;TZID=America/New_York:20211206T131500
LOCATION:Maryland Hall 110 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Emily Mower Provost (University of Michigan) “Automatically Measuring Emotion from Speech: New Methods to Move from the Lab to the Real World”
URL:https://www.clsp.jhu.edu/events/emily-mower-provost-university-of-michi
gan/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,December\,Mower-Provost
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21615@www.clsp.jhu.edu
DTSTAMP:20240328T100828Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
DESCRIPTION:Abstract\n\nWe consider the problem of data collection for semantically rich NLU tasks\, where the detailed semantics of documents (or utterances) are captured using a complex meaning representation. Previously\, data collection for such tasks was handled either at the cost of extensive annotator training (e.g.\, in FrameNet or PropBank) or by simplifying the meaning representation (e.g.\, in QA-SRL or Overnight). In this talk\, we present two systems [1\, 2] that aim to support fast\, accurate\, and expressive semantic annotation by pairing human workers with a trained model in the loop.\n\nThe first system\, called Guided K-best [1]\, is an annotation toolkit for conversational semantic parsing. Instead of typing annotations from scratch\, data specialists choose a correct parse from the K-best output of a few-shot prototyped model. As the K-best list can be large (e.g.\, K=100)\, we guide the annotators’ exploration of the K-best list via explainable hierarchical clustering. In addition\, we experiment with RoBERTa-based reranking of the K-best list to recalibrate the few-shot model towards Accuracy@K. The final system allows annotators to label data up to 35% faster than the standard\, non-guided K-best and improves the few-shot model’s top-1 accuracy by up to 18%. The second system\, called SchemaBlocks [2]\, is an annotation toolkit for schemas\, or structured descriptions of frequent real-world scenarios (e.g.\, cooking a meal). It represents schemas in the annotation UI as nested blocks. Using a novel Causal ARM model\, we further speed up the annotation process and guide data specialists towards expressive and diverse schemas. As part of this work\, we collect 232 schemas\, evaluating their internal coherence and their coverage on large-scale newswire corpora.
DTSTART;TZID=America/New_York:20220311T120000
DTEND;TZID=America/New_York:20220311T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Student Seminar – Anton Belyy “Systems for Human-AI Cooperation on
Collecting Semantic Annotations”
URL:https://www.clsp.jhu.edu/events/student-seminar-anton-belyy-systems-for
-human-ai-cooperation-on-collecting-semantic-annotations/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Belyy\,March
END:VEVENT
END:VCALENDAR