Student Seminar – Sonal Joshi, “Classify and Detect Adversarial Attacks Against Speaker and Speech Recognition Systems”

When:
March 4, 2022 @ 12:00 pm – 1:15 pm
Where:
Ames Hall 234
3400 N. Charles Street
Baltimore, MD 21218
Cost:
Free

Abstract

Adversarial attacks deceive neural network systems by adding carefully crafted perturbations to benign signals. Because these perturbations are almost imperceptible to humans, such attacks pose a severe security threat to state-of-the-art speech and speaker recognition systems, making countermeasures vital. In this talk, we focus on 1) classifying a given adversarial attack by attack algorithm, threat model, and signal-to-adversarial-noise ratio, and 2) developing a novel speech denoising solution that further improves this classification performance.
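
As an illustration only (not part of the talk), the following is a minimal PyTorch sketch of one such crafted perturbation, the Fast Gradient Sign Method (FGSM) mentioned in the abstract; model, waveform, and label are hypothetical placeholders for a classifier mapping a waveform to class logits:

    # Minimal FGSM sketch; `model`, `waveform`, and `label` are hypothetical
    # placeholders, not the speaker/speech recognition systems from the talk.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, waveform, label, epsilon=1e-3):
        """Return an adversarial example: waveform plus a small signed-gradient step."""
        waveform = waveform.clone().detach().requires_grad_(True)
        logits = model(waveform)              # e.g. speaker-ID posteriors
        loss = F.cross_entropy(logits, label)
        loss.backward()
        # For small epsilon the perturbation is almost imperceptible to humans.
        perturbation = epsilon * waveform.grad.sign()
        return (waveform + perturbation).detach()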

Our proposed approach uses an x-vector network as a signature extractor: the embeddings it produces, which we call signatures, contain information about the attack and can be used to classify the attack algorithm, threat model, and signal-to-adversarial-noise ratio. We demonstrate that such signatures transfer to other tasks; in particular, a signature extractor trained to classify attacks against speaker identification can also classify attacks against speaker verification and speech recognition. We also show that signatures can be used to detect unknown attacks, i.e., attacks not included during training. Lastly, we improve the signature extractor by making its job easier: we remove the clean signal from the adversarial example (which consists of the clean signal plus the perturbation), training the signature extractor on adversarial perturbations and, at inference time, using a time-domain denoiser to recover the perturbation from the adversarial example. Using this improved approach, we show that common attacks in the literature (Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Carlini-Wagner (CW)) can be classified with accuracy as high as 96%. We also detect unknown attacks with an equal error rate (EER) of about 9%, which is very promising.
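
To make the inference-time flow described above concrete, here is a minimal sketch assuming three hypothetical pretrained modules, denoiser (time-domain), signature_extractor (an x-vector-style network trained on adversarial perturbations), and an attack-type classifier head; the names and interfaces are illustrative assumptions, not the actual system from the talk:

    # Sketch of the described inference pipeline; all module names are
    # hypothetical stand-ins for pretrained components.
    import torch

    @torch.no_grad()
    def classify_attack(adversarial, denoiser, signature_extractor, classifier):
        clean_estimate = denoiser(adversarial)         # estimate the benign signal
        perturbation = adversarial - clean_estimate    # isolate the adversarial noise
        signature = signature_extractor(perturbation)  # x-vector-style embedding
        return classifier(signature)                   # e.g. FGSM / PGD / CW scores

Subtracting the denoised estimate before extracting the signature reflects the design choice described above: the extractor sees (an estimate of) the perturbation alone rather than the clean signal plus perturbation.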

Center for Language and Speech Processing