BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21072@www.clsp.jhu.edu
DTSTAMP:20240328T235725Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nEmotion has intrigued researchers for generations. Th
is fascination has permeated the engineering community\, motivating the de
velopment of affective computing methods. However\, human emotion remains
notoriously difficult to accurately detect. As a result\, emotion classifi
cation techniques are not always effective when deployed. This is a probl
em because we are missing out on the potential that emotion recognition pr
ovides: the opportunity to automatically measure an aspect of behavior tha
t provides critical insight into our health and wellbeing\, insight that i
s not always easily accessible. In this talk\, I will discuss our efforts
in developing emotion recognition approaches that are effective in natura
l environments and demonstrate how these approaches can be used to support
mental health.\n\nBiography\n\nEmily Mower Provost is an Associate Profes
sor in Computer Science and Engineering and Toyota Faculty Scholar at the
University of Michigan. She received her Ph.D. in Electrical Engineering f
rom the University of Southern California (USC)\, Los Angeles\, CA in 2010
. She has been awarded a National Science Foundation CAREER Award (2017)\, t
he Oscar Stern Award for Depression Research (2015)\, and a National Scienc
e Foundation Graduate Research Fellowship (2004-2007). She is a co-author o
n the paper\, “Say Cheese vs. Smile: Reducing Speech-Related Variability f
or Facial Emotion Recognition\,” winner of Best Student Paper at ACM Multi
media\, 2014\, and a co-author of the winner of the Classifier Sub-Challen
ge event at the Interspeech 2009 emotion challenge. Her research interests
are in human-centered speech and video processing\, multimodal interface d
esign\, and speech-based assistive technology. The goals of her research
are motivated by the complexities of the perception and expression of hum
an behavior.
DTSTART;TZID=America/New_York:20211206T120000
DTEND;TZID=America/New_York:20211206T131500
LOCATION:Maryland Hall 110 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Emily Mower Provost (University of Michigan) “Automatically Measuri
ng Emotion from Speech: New Methods to Move from the Lab to the Real World
”
URL:https://www.clsp.jhu.edu/events/emily-mower-provost-university-of-michi
gan/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,December\,Mower-Provost
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23304@www.clsp.jhu.edu
DTSTAMP:20240328T235725Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nTransformers are essential to pretraining. As we appr
oach 5 years of BERT\, the connection between attention as architecture an
d transfer learning remains key to this central thread in NLP. Other archi
tectures such as CNNs and RNNs have been used to replicate pretraining res
ults\, but these either fail to reach the same accuracy or require supplem
ental attention layers. This work revisits the seminal BERT result and con
siders pretraining without attention. We consider replacing self-attentio
n layers with recently developed approaches for long-range sequence model
ing and transformer architecture variants. Specifically\, inspired by rec
ent papers like the structured state space sequence model (S4)\, we use s
imple routing layers based on state-space models (SSM) and a bidirection
al model architecture based on multiplicative gating. We discuss the result
s of the proposed Bidirectional Gated SSM (BiGS) and present a range of analys
es of its properties. Results show that architecture does seem to have a no
table impact on downstream performance and a different inductive bias that
is worth exploring further.\nBiography\nAlexander “Sasha” Rush is an Asso
ciate Professor at Cornell Tech. His work is at the intersection of natura
l language processing and generative modeling with applications in text ge
neration\, efficient inference\, and controllability. He has written sever
al popular open-source software projects supporting NLP research and data
science\, and works part-time as a researcher at Hugging Face. He is the s
ecretary of ICLR and developed software used to run virtual conferences du
ring COVID. His work has received paper and demo awards at major NLP\, vis
ualization\, and hardware conferences\, an NSF Career Award\, and a Sloan
Fellowship. He tweets and blogs\, mostly about coding and ML\, at @srush_n
lp.
DTSTART;TZID=America/New_York:20230203T120000
DTEND;TZID=America/New_York:20230203T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Sasha Rush (Cornell University) “Pretraining Without Attention”
URL:https://www.clsp.jhu.edu/events/sasha-rush-cornell-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,February\,Rush
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23308@www.clsp.jhu.edu
DTSTAMP:20240328T235725Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nBiases in datasets\, or unintentionally introduced sp
urious cues\, are a common source of misspecification in machine learning.
Performant models trained on such data can gender stereotype or be brittl
e under distribution shift. In this talk\, we present several results in
multimodal and question answering applications studying sources of dataset
bias\, and several mitigation methods. We propose approaches where known
dimensions of dataset bias are explicitly factored out of a model during
learning\, without needing to modify data. Finally\, we ask whether datase
t biases can be attributed to annotator behavior during annotation. Draw
ing inspiration from work in psychology on cognitive biases\, we show cert
ain behavioral patterns are highly indicative of the creation of problemat
ic (but valid) data instances in question answering. We give evidence that
many existing observations around how dataset bias propagates to models c
an be attributed to data samples created by annotators we identify.\nBiogr
aphy\nMark Yatskar is an Assistant Professor at University of Pennsylvania
in the department of Computer and Information Science. He did his PhD at
University of Washington co-advised by Luke Zettlemoyer and Ali Farhadi. H
e was a Young Investigator at the Allen Institute for Artificial Intellige
nce for several years working with their computer vision team\, Prior. His
work spans Natural Language Processing\, Computer Vision\, and Fairness i
n Machine Learning. He received a Best Paper Award at EMNLP for work on ge
nder bias amplification\, and his work has been featured in Wired and the
New York Times.
DTSTART;TZID=America/New_York:20230210T120000
DTEND;TZID=America/New_York:20230210T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Yatskar (University of Pennsylvania) “Understanding Dataset Bi
ases: Behavioral Indicators During Annotation and Contrastive Mitigations”
URL:https://www.clsp.jhu.edu/events/mark-yatskar-university-of-pennsylvania
/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,February\,Yatskar
END:VEVENT
END:VCALENDAR