BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-22394@www.clsp.jhu.edu
DTSTAMP:20240329T112435Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\n\nModel robustness and spurious correlations have rec
eived increasing attention in the NLP community\, both in methods and eval
uation. The term “spurious correlation” is overloaded though and can refer
to any undesirable shortcuts learned by the model\, as judged by domain e
xperts.\n\n\nWhen designing mitigation algorithms\, we often (implicitly)
assume that a spurious feature is irrelevant for prediction. However\, man
y features in NLP (e.g. word overlap and negation) are not spurious in the
sense that the background is spurious for classifying objects in an image
. In contrast\, they carry important information that’s needed to make pre
dictions by humans. In this talk\, we argue that it is more productive to
characterize features in terms of their necessity and sufficiency for pred
iction. We then discuss the implications of this categorization in represe
ntation\, learning\, and evaluation.\nBiography\nHe He is an Assistant Pro
fessor in the Department of Computer Science and the Center for Data Scien
ce at New York University. She obtained her PhD in Computer Science at the
University of Maryland\, College Park. Before joining NYU\, she spent a y
ear at AWS AI and was a post-doc at Stanford University before that. She i
s interested in building robust and trustworthy NLP systems in human-cente
red settings. Her recent research focus includes robust language understan
ding\, collaborative text generation\, and understanding capabilities and
issues of large language models.
DTSTART;TZID=America/New_York:20221014T120000
DTEND;TZID=America/New_York:20221014T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:He He (New York University) “What We Talk about When We Talk about
Spurious Correlations in NLP”
URL:https://www.clsp.jhu.edu/events/he-he-new-york-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,He\,October
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22395@www.clsp.jhu.edu
DTSTAMP:20240329T112435Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRecursive calls over recursive data are widely useful
for generating probability distributions\, and probabilistic programming
allows computations over these distributions to be expressed in a modular
and intuitive way. Exact inference is also useful\, but unfortunately\, ex
isting probabilistic programming languages do not perform exact inference
on recursive calls over recursive data\, forcing programmers to code many
applications manually. We introduce a probabilistic language in which a wi
de variety of recursion can be expressed naturally\, and inference carried
out exactly. For instance\, probabilistic pushdown automata and their gen
eralizations are easy to express\, and polynomial-time parsing algorithms
for them are derived automatically. We eliminate recursive data types usin
g program transformations related to defunctionalization and refunctionali
zation. These transformations are assured correct by a linear type system\
, and a successful choice of transformations\, if there is one\, is guaran
teed to be found by a greedy algorithm. I will also describe the implement
ation of this language in two phases: first\, compilation to a factor grap
h grammar\, and second\, computing the sum-product of the factor graph gra
mmar.\n\nBiography\nDavid Chiang (PhD\, University of Pennsylvania\, 2004)
is an associate professor in the Department of Computer Science and Engin
eering at the University of Notre Dame. His research is on computational m
odels for learning human languages\, particularly how to translate from on
e language to another. His work on applying formal grammars and machine le
arning to translation has been recognized with two best paper awards (at A
CL 2005 and NAACL HLT 2009). He has received research grants from DARPA\,
NSF\, Google\, and Amazon\, has served on the executive board of NAACL and
the editorial board of Computational Linguistics and JAIR\, and is curren
tly on the editorial board of Transactions of the ACL.
DTSTART;TZID=America/New_York:20221017T120000
DTEND;TZID=America/New_York:20221017T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:David Chiang (University of Notre Dame) “Exact Recursive Probabilis
tic Programming with Colin McDonald\, Darcey Riley\, Kenneth Sible (Notre
Dame) and Chung-chieh Shan (Indiana)”
URL:https://www.clsp.jhu.edu/events/david-chiang-university-of-notre-dame/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Chiang\,October
END:VEVENT
END:VCALENDAR