BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-21494@www.clsp.jhu.edu DTSTAMP:20240328T131705Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:
Abstract
\nAdversarial attacks deceive neural network systems by adding carefully crafted perturbations to benign signals. Being almost imperceptible to humans\, these attacks pose a severe security threat to state-of-the-art speech and speaker recognition systems\, making it vital to develop countermeasures against them. In this talk\, we focus on 1) classifying a given adversarial attack by attack algorithm\, threat model\, and signal-to-adversarial-noise ratio\, and 2) developing a novel speech denoising solution to further improve classification performance.
\nOur proposed approach uses an x-vector network as a signature extractor to obtain embeddings\, which we call signatures. These signatures contain information about the attack and can help classify different attack algorithms\, threat models\, and signal-to-adversarial-noise ratios. We demonstrate the transferability of such signatures to other tasks. In particular\, a signature extractor trained to classify attacks against speaker identification can also be used to classify attacks against speaker verification and speech recognition. We also show that signatures can be used to detect unknown attacks\, i.e.\, attacks not included during training. Lastly\, we propose to improve the signature extractor by simplifying its task: we remove the clean signal from the adversarial example (which consists of the clean signal plus the perturbation). We train our signature extractor on adversarial perturbations\, and at inference time we use a time-domain denoiser to recover the adversarial perturbation from the adversarial example. Using our improved approach\, we show that common attacks in the literature (Fast Gradient Sign Method (FGSM)\, Projected Gradient Descent (PGD)\, and Carlini-Wagner (CW)) can be classified with accuracy as high as 96%. We also detect unknown attacks with an equal error rate (EER) of about 9%\, which is very promising.
DTSTART;TZID=America/New_York:20220304T120000 DTEND;TZID=America/New_York:20220304T131500 LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Sonal Joshi “Classify and Detect Adversarial Attacks Against Speaker and Speech Recognition Systems” URL:https://www.clsp.jhu.edu/events/student-seminar-sonal-joshi/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Joshi\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-22394@www.clsp.jhu.edu DTSTAMP:20240328T131705Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nModel robustness and spurious correlations have received increasing attention in the NLP community\, both in methods and in evaluation. The term “spurious correlation” is overloaded\, though\, and can refer to any undesirable shortcut learned by the model\, as judged by domain experts.
\nWhen designing mitigation algorithms\, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However\, many features in NLP (e.g.\, word overlap and negation) are not spurious in the sense that the background of an image is spurious for classifying the objects in it. In contrast\, they carry important information that humans need to make predictions. In this talk\, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization for representation\, learning\, and evaluation.
\nBiography
\nHe He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland\, College Park. Before joining NYU\, she spent a year at AWS AI\, and before that she was a post-doc at Stanford University. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research interests include robust language understanding\, collaborative text generation\, and understanding the capabilities and issues of large language models.
\n DTSTART;TZID=America/New_York:20221014T120000 DTEND;TZID=America/New_York:20221014T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:He He (New York University) “What We Talk about When We Talk about Spurious Correlations in NLP” URL:https://www.clsp.jhu.edu/events/he-he-new-york-university/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,He\,October END:VEVENT END:VCALENDAR