BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21072@www.clsp.jhu.edu
DTSTAMP:20240329T013741Z
CATEGORIES;LANGUAGE=en-US:Seminars
DESCRIPTION:Abstract\nEmotion has intrigued researchers for generations. This fascination has permeated the engineering community\, motivating the development of affective computing methods. However\, human emotion remains notoriously difficult to accurately detect. As a result\, emotion classification techniques are not always effective when deployed. This is a problem because we are missing out on the potential that emotion recognition provides: the opportunity to automatically measure an aspect of behavior that provides critical insight into our health and wellbeing\, insight that is not always easily accessible. In this talk\, I will discuss our efforts in developing emotion recognition approaches that are effective in natural environments and demonstrate how these approaches can be used to support mental health.\n\nBiography\n\nEmily Mower Provost is an Associate Professor in Computer Science and Engineering and Toyota Faculty Scholar at the University of Michigan. She received her Ph.D. in Electrical Engineering from the University of Southern California (USC)\, Los Angeles\, CA in 2010. She has been awarded a National Science Foundation CAREER Award (2017)\, the Oscar Stern Award for Depression Research (2015)\, and a National Science Foundation Graduate Research Fellowship (2004-2007). She is a co-author on the paper\, “Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition\,” winner of Best Student Paper at ACM Multimedia\, 2014\, and a co-author of the winner of the Classifier Sub-Challenge event at the Interspeech 2009 emotion challenge. Her research interests are in human-centered speech and video processing\, multimodal interface design\, and speech-based assistive technology. The goals of her research are motivated by the complexities of the perception and expression of human behavior.
DTSTART;TZID=America/New_York:20211206T120000
DTEND;TZID=America/New_York:20211206T131500
LOCATION:Maryland Hall 110 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Emily Mower Provost (University of Michigan) “Automatically Measuring Emotion from Speech: New Methods to Move from the Lab to the Real World”
URL:https://www.clsp.jhu.edu/events/emily-mower-provost-university-of-michi
gan/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,December\,Mower-Provost
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23304@www.clsp.jhu.edu
DTSTAMP:20240329T013741Z
CATEGORIES;LANGUAGE=en-US:Seminars
DESCRIPTION:Abstract\nTransformers are essential to pretraining. As we approach 5 years of BERT\, the connection between attention as architecture and transfer learning remains key to this central thread in NLP. Other architectures such as CNNs and RNNs have been used to replicate pretraining results\, but these either fail to reach the same accuracy or require supplemental attention layers. This work revisits the seminal BERT result and considers pretraining without attention. We consider replacing self-attention layers with a recently developed approach for long-range sequence modeling and transformer architecture variants. Specifically\, inspired by recent papers like the structured state space sequence model (S4)\, we use simple routing layers based on state-space models (SSM) and a bidirectional model architecture based on multiplicative gating. We discuss the results of the proposed Bidirectional Gated SSM (BiGS) and present a range of analyses of its properties. Results show that architecture does seem to have a notable impact on downstream performance and a different inductive bias that is worth exploring further.\nBiography\nAlexander “Sasha” Rush is an Associate Professor at Cornell Tech. His work is at the intersection of natural language processing and generative modeling with applications in text generation\, efficient inference\, and controllability. He has written several popular open-source software projects supporting NLP research and data science\, and works part-time as a researcher at Hugging Face. He is the secretary of ICLR and developed software used to run virtual conferences during COVID. His work has received paper and demo awards at major NLP\, visualization\, and hardware conferences\, an NSF CAREER Award\, and a Sloan Fellowship. He tweets and blogs\, mostly about coding and ML\, at @srush_nlp.
DTSTART;TZID=America/New_York:20230203T120000
DTEND;TZID=America/New_York:20230203T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Sasha Rush (Cornell University) “Pretraining Without Attention”
URL:https://www.clsp.jhu.edu/events/sasha-rush-cornell-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,February\,Rush
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23306@www.clsp.jhu.edu
DTSTAMP:20240329T013741Z
CATEGORIES;LANGUAGE=en-US:Seminars
DESCRIPTION:Abstract\nWhile large language models have advanced the state-of-the-art in natural language processing\, these models are trained on large-scale datasets\, which may include harmful information. Studies have shown that as a result\, the models exhibit social biases and generate misinformation after training. In this talk\, I will discuss my work on analyzing and interpreting the risks of large language models across the areas of fairness\, trustworthiness\, and safety. I will first describe my research in the detection of dialect bias between African American English (AAE) and Standard American English (SAE). The second part investigates the trustworthiness of models through the memorization and subsequent generation of conspiracy theories. I will end my talk with recent work in AI safety regarding text that may lead to physical harm.\nBiography\nSharon is a 5th-year Ph.D. candidate at the University of California\, Santa Barbara\, where she is advised by Professor William Wang. Her research interests lie in natural language processing\, with a focus on Responsible AI. Sharon’s research spans the subareas of fairness\, trustworthiness\, and safety\, with publications in ACL\, EMNLP\, WWW\, and LREC. She has spent summers interning at AWS\, Meta\, and Pinterest. Sharon is a 2022 EECS Rising Star and a current recipient of the Amazon Alexa AI Fellowship for Responsible AI.
DTSTART;TZID=America/New_York:20230206T120000
DTEND;TZID=America/New_York:20230206T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Sharon Levy (University of California\, Santa Barbara) “Responsible
AI via Responsible Large Language Models”
URL:https://www.clsp.jhu.edu/events/sharon-levy-university-of-california-sa
nta-barbara-responsible-ai-via-responsible-large-language-models/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,February\,Levy
END:VEVENT
END:VCALENDAR