Jerry Zhu (University of Wisconsin – Madison) “Adversarial Machine Learning: Beyond Manipulating Pixels and Words”

When:
October 28, 2019 @ 12:00 pm – 1:15 pm
Where:
Hackerman Hall B17
3400 N. Charles Street
Baltimore
MD 21218
Cost:
Free

Abstract

Adversarial machine learning research has been nearly obsessed with test-time attacks on image (and, to a lesser degree, text) classification tasks. This talk examines two directions that broaden the anticipated threats. First, I discuss vulnerabilities in sequential machine learning such as multi-armed bandits and reinforcement learning. We will see how an attacker can manipulate the environment and force the learner to learn any target policy. The attacker can optimize such attacks by solving a control problem or, equivalently, a higher-level reinforcement learning problem. Second, I revisit attacks on image classification and disprove a key assumption: that a small pixel p-norm manipulation implies a humanly imperceptible attack. I will describe a human behavioral study demonstrating that pixel p-norm, earth mover’s distance, structural similarity index, and deep net embedding do not match human perception.
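To make the environment-manipulation idea concrete, the sketch below is a minimal illustration (not the construction from the talk, which optimizes the attack as a control problem): an attacker with a per-round perturbation budget poisons observed rewards so that an ε-greedy bandit learner converges on an attacker-chosen target arm. All names, parameters, and the greedy perturbation rule are hypothetical.

```python
import numpy as np

# Hypothetical sketch: reward poisoning against an epsilon-greedy bandit learner.
# The attacker subtracts a bounded amount from non-target arms' rewards so the
# target arm's empirical estimate ends up highest. The talk's actual attacks are
# optimized as a control / higher-level reinforcement learning problem.

rng = np.random.default_rng(0)

K = 3                                   # number of arms
true_means = np.array([0.6, 0.5, 0.3])  # assumed true mean rewards
target_arm = 2                          # arm the attacker wants the learner to prefer
epsilon = 0.1                           # learner's exploration rate
attack_budget = 0.5                     # assumed max perturbation per round

counts = np.zeros(K)
estimates = np.zeros(K)

for t in range(5000):
    # epsilon-greedy learner chooses an arm
    if rng.random() < epsilon:
        arm = int(rng.integers(K))
    else:
        arm = int(np.argmax(estimates))

    reward = true_means[arm] + 0.1 * rng.standard_normal()

    # attacker: push down rewards of every non-target arm, within the budget
    if arm != target_arm:
        reward -= attack_budget

    # learner updates its running mean estimate from the (poisoned) reward
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("poisoned estimates:", np.round(estimates, 3))
print("learner's preferred arm:", int(np.argmax(estimates)))  # typically the target arm
```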

Biography

Jerry Zhu is the Sheldon & Marianne Lubar Professor in the Department of Computer Sciences at the University of Wisconsin-Madison. Jerry received his Ph.D. from Carnegie Mellon University in 2005. His research interests are in machine learning, particularly machine teaching, adversarial learning, active learning, and semi-supervised learning. He serves or has served in the following roles: conference chair for AISTATS and CogSci, action editor of the Machine Learning journal, and member of the DARPA ISAT advisory group. He is a recipient of an NSF CAREER Award and the winner of multiple best paper awards, including an ICML classic paper prize in 2013.

Center for Language and Speech Processing