**Abstract**

Never before was it so easy to write a powerful NLP system\, and never before did it have such a potential impact. However\, these systems are now increasingly used in applications they were not intended for\, by people who treat them as interchangeable black boxes. The results can be simple performance drops\, but also systematic biases against various user groups.

In this talk\, I will discuss several types of biases that affect NLP models (based on Shah et al. 2020 and Hovy & Spruit\, 2016)\, what their sources are\, and potential countermeasures:

– bias stemming from data\, i.e.\, selection bias (if our texts do not adequately reflect the population we want to study)\, label bias (if the labels we use are skewed)\, and semantic bias (the latent stereotypes encoded in embeddings).

– bias deriving from the models themselves\, i.e.\, their tendency to amplify any imbalances that are present in the data.

– design bias\, i.e.\, the biases arising from our (the practitioners') decisions about which topics to explore\, which data sets to use\, and what to do with them.

For each bias\, I will provide real examples and discuss the possible ramifications for a wide range of applications\, as well as the various ways to address and counteract these biases\, ranging from simple labeling considerations to new types of models.

As a consequence\, we as NLP practitioners suddenly have a new role\, in addition to researcher and developer: considering the ethical implications of our systems\, and educating the public about the possibilities and limitations of our work. The time of academic innocence is over\, and we need to address this newfound responsibility as a community.

I conclude with some provocations for future directions.
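The model-derived amplification bias mentioned above can be seen even in a deliberately simple\, hypothetical setup: a majority-class baseline (used here purely as an illustration\, not a method from the talk) turns a 60/40 skew in its training labels into a 100/0 skew in its predictions.

```python
from collections import Counter

# Toy, hypothetical data: a 60/40 label imbalance in the training set.
train_labels = ["pos"] * 60 + ["neg"] * 40

# A majority-class baseline ignores the input entirely and always
# predicts the most frequent training label.
majority_label = Counter(train_labels).most_common(1)[0][0]
predictions = [majority_label] * 100

# The moderate imbalance in the data becomes total in the output:
print(Counter(train_labels))  # Counter({'pos': 60, 'neg': 40})
print(Counter(predictions))   # Counter({'pos': 100})
```

Real classifiers are rarely this extreme\, but the same pressure toward the majority class operates whenever a model is optimized for raw accuracy on skewed data.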

References:

– Deven Shah\, H. Andrew Schwartz\, & Dirk Hovy. 2020. Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview. In Proceedings of ACL. [https://www.aclweb.org/anthology/2020.acl-main.468/]

– Dirk Hovy & Shannon L. Spruit. 2016. The Social Impact of Natural Language Processing. [https://www.aclweb.org/anthology/P16-2096.pdf]

**Biography**

Dirk Hovy is associate professor of computer science at Bocconi University in Milan\, Italy. Before that\, he was faculty and a postdoc in Copenhagen\, got a PhD from USC\, and a master's in linguistics in Germany. He is interested in the interaction between language\, society\, and machine learning: what language can tell us about society\, and what computers can tell us about language. He has authored over 50 articles on these topics\, winning 3 best paper awards. He has organized one conference and several workshops (on abusive language\, ethics in NLP\, and computational social science). Outside of work\, Dirk enjoys cooking\, running\, and leather-crafting. For updated information\, see http://www.dirkhovy.com
X-TAGS;LANGUAGE=en-US:2020\,Hovy\,November
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22395@www.clsp.jhu.edu
DTSTAMP:20230329T124723Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRecursive calls over recursive data are widely useful
for generating probability distributions\, and probabilistic programming
allows computations over these distributions to be expressed in a modular
and intuitive way. Exact inference is also useful\, but unfortunately\, ex
isting probabilistic programming languages do not perform exact inference
on recursive calls over recursive data\, forcing programmers to code many
applications manually. We introduce a probabilistic language in which a wi
de variety of recursion can be expressed naturally\, and inference carried
out exactly. For instance\, probabilistic pushdown automata and their gen
eralizations are easy to express\, and polynomial-time parsing algorithms
for them are derived automatically. We eliminate recursive data types usin
g program transformations related to defunctionalization and refunctionali
zation. These transformations are assured correct by a linear type system\
, and a successful choice of transformations\, if there is one\, is guaran
teed to be found by a greedy algorithm. I will also describe the implement
ation of this language in two phases: first\, compilation to a factor grap
h grammar\, and second\, computing the sum-product of the factor graph gra
mmar.\n\nBiography\nDavid Chiang (PhD\, University of Pennsylvania\, 2004)
is an associate professor in the Department of Computer Science and Engin
eering at the University of Notre Dame. His research is on computational m
odels for learning human languages\, particularly how to translate from on
e language to another. His work on applying formal grammars and machine le
arning to translation has been recognized with two best paper awards (at A
CL 2005 and NAACL HLT 2009). He has received research grants from DARPA\,
NSF\, Google\, and Amazon\, has served on the executive board of NAACL and
the editorial board of Computational Linguistics and JAIR\, and is curren
tly on the editorial board of Transactions of the ACL.
DTSTART;TZID=America/New_York:20221017T120000
DTEND;TZID=America/New_York:20221017T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:David Chiang (University of Notre Dame) “Exact Recursive Probabilis
tic Programming with Colin McDonald\, Darcey Riley\, Kenneth Sible (Notre
Dame) and Chung-chieh Shan (Indiana)”
URL:https://www.clsp.jhu.edu/events/david-chiang-university-of-notre-dame/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:**Abstract**

Recursive calls over recursive data are w
idely useful for generating probability distributions\, and probabilistic
programming allows computations over these distributions to be expressed i
n a modular and intuitive way. Exact inference is also useful\, but unfort
unately\, existing probabilistic programming languages do not perform exac
t inference on recursive calls over recursive data\, forcing programmers t
o code many applications manually. We introduce a probabilistic language i
n which a wide variety of recursion can be expressed naturally\, and infer
ence carried out exactly. For instance\, probabilistic pushdown automata a
nd their generalizations are easy to express\, and polynomial-time parsing
algorithms for them are derived automatically. We eliminate recursive dat
a types using program transformations related to defunctionalization and r
efunctionalization. These transformations are assured correct by a linear
type system\, and a successful choice of transformations\, if there is one
\, is guaranteed to be found by a greedy algorithm. I will also describe t
he implementation of this language in two phases: first\, compilation to a
factor graph grammar\, and second\, computing the sum-product of the fact
or graph grammar.
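As a toy sketch (in Python\, not the talk's own language) of why exact inference over unbounded recursion is possible: consider a hypothetical recursive program that halts with probability q and otherwise spawns two recursive calls. Its halting probability p satisfies the polynomial equation p = q + (1 - q) * p^2\, so enumerating the infinitely many runs is unnecessary: solving for the least fixed point gives the exact answer.

```python
# Hypothetical recursive probabilistic program: halt with probability q,
# otherwise make two independent recursive calls. The halting probability
# p is the least non-negative solution of p = q + (1 - q) * p**2,
# i.e. of (1 - q) * p**2 - p + q = 0, whose roots are 1 and q / (1 - q).
def halting_probability(q: float) -> float:
    if q >= 0.5:
        # Expected number of recursive calls per step is 2 * (1 - q) <= 1,
        # so the process halts almost surely.
        return 1.0
    return q / (1.0 - q)

print(halting_probability(0.25))  # 0.3333333333333333
```

This is only an analogy for the simplest case: the language described in the talk handles such polynomial systems (and far richer recursion) automatically\, via compilation to factor graph grammars and sum-product computation.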

**Biography**

David Chiang (PhD\,
University of Pennsylvania\, 2004) is an associate professor in the Depart
ment of Computer Science and Engineering at the University of Notre Dame.
His research is on computational models for learning human languages\, par
ticularly how to translate from one language to another. His work on apply
ing formal grammars and machine learning to translation has been recognize
d with two best paper awards (at ACL 2005 and NAACL HLT 2009). He has rece
ived research grants from DARPA\, NSF\, Google\, and Amazon\, has served o
n the executive board of NAACL and the editorial board of Computational Li
nguistics and JAIR\, and is currently on the editorial board of Transactio
ns of the ACL.

X-TAGS;LANGUAGE=en-US:2022\,Chiang\,October
END:VEVENT
END:VCALENDAR