BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21494@www.clsp.jhu.edu
DTSTAMP:20240328T091405Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nAdversarial attacks deceive neural network systems by adding carefully crafted perturbations to benign signals. Being almost imperceptible to humans\, these attacks pose a severe security threat to state-of-the-art speech and speaker recognition systems\, making it vital to propose countermeasures against them. In this talk\, we focus on 1) classifying a given adversarial attack by attack algorithm\, threat model\, and signal-to-adversarial-noise ratio\, and 2) developing a novel speech denoising solution to further improve classification performance.\nOur approach uses an x-vector network as a signature extractor to obtain embeddings\, which we call signatures. These signatures contain information about the attack and can help classify different attack algorithms\, threat models\, and signal-to-adversarial-noise ratios. We demonstrate the transferability of such signatures to other tasks. In particular\, a signature extractor trained to classify attacks against speaker identification can also be used to classify attacks against speaker verification and speech recognition. We also show that signatures can be used to detect unknown attacks\, i.e.\, attacks not seen during training. Lastly\, we propose to make the signature extractor's job easier by removing the clean signal from the adversarial example (which consists of clean signal + perturbation). We train our signature extractor on adversarial perturbations. At inference time\, we use a time-domain denoiser to recover the adversarial perturbation from the adversarial example. Using this improved approach\, we show that common attacks in the literature (Fast Gradient Sign Method (FGSM)\, Projected Gradient Descent (PGD)\, and Carlini-Wagner (CW)) can be classified with accuracy as high as 96%. We also detect unknown attacks with an equal error rate (EER) of about 9%\, which is very promising.
DTSTART;TZID=America/New_York:20220304T120000
DTEND;TZID=America/New_York:20220304T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Sonal Joshi “Classify and Detect Adversarial Attacks Against Speaker and Speech Recognition Systems”
URL:https://www.clsp.jhu.edu/events/student-seminar-sonal-joshi/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Joshi\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21615@www.clsp.jhu.edu
DTSTAMP:20240328T091405Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\n\n\nWe consider the problem of data collection for semantically rich NLU tasks\, where detailed semantics of documents (or utterances) are captured using a complex meaning representation. Previously\, data collection for such tasks was handled either at the cost of extensive annotator training (e.g. in FrameNet or PropBank) or of a simplified meaning representation (e.g. in QA-SRL or Overnight). In this talk\, we present two systems [1\, 2] that aim to support fast\, accurate\, and expressive semantic annotation by pairing human workers with a trained model in the loop.\n\nThe first system\, called Guided K-best [1]\, is an annotation toolkit for conversational semantic parsing. Instead of typing annotations from scratch\, data specialists choose a correct parse from the K-best output of a few-shot prototyped model. As the K-best list can be large (e.g. K=100)\, we guide the annotators’ exploration of the K-best list via explainable hierarchical clustering. In addition\, we experiment with RoBERTa-based reranking of the K-best list to recalibrate the few-shot model towards Accuracy@K. The final system allows annotating data up to 35% faster than standard\, non-guided K-best and improves the few-shot model’s top-1 accuracy by up to 18%. The second system\, called SchemaBlocks [2]\, is an annotation toolkit for schemas\, or structured descriptions of frequent real-world scenarios (e.g.\, cooking a meal). It represents schemas in the annotation UI as nested blocks. Using a novel Causal ARM model\, we further speed up the annotation process and guide data specialists towards expressive and diverse schemas. As part of this work\, we collect 232 schemas\, evaluating their internal coherence and their coverage on large-scale newswire corpora.
DTSTART;TZID=America/New_York:20220311T120000
DTEND;TZID=America/New_York:20220311T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Student Seminar – Anton Belyy “Systems for Human-AI Cooperation on Collecting Semantic Annotations”
URL:https://www.clsp.jhu.edu/events/student-seminar-anton-belyy-systems-for-human-ai-cooperation-on-collecting-semantic-annotations/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Belyy\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21616@www.clsp.jhu.edu
DTSTAMP:20240328T091405Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nSocial media allows researchers to track societal and cultural changes over time with language analysis tools. Many of these tools rely on statistical algorithms that need to be tuned to specific types of language. Recent studies have shown that the absence of appropriate tuning\, specifically in the presence of semantic shift\, can hinder the robustness of the underlying methods. However\, little is known about the practical effect this sensitivity may have on downstream longitudinal analyses. We explore this gap in the literature through a timely case study: understanding shifts in depression during the course of the COVID-19 pandemic. We find that the inclusion of only a small number of semantically unstable features can promote significant changes in longitudinal estimates of our target outcome. At the same time\, we demonstrate that a recently introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and\, in turn\, improve predictive generalization.
DTSTART;TZID=America/New_York:20220318T120000
DTEND;TZID=America/New_York:20220318T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Keith Harrigian “The Problem of Semantic Shift in Longitudinal Monitoring of Social Media”
URL:https://www.clsp.jhu.edu/events/student-seminar-keith-harrigian-the-problem-of-semantic-shift-in-longitudinal-monitoring-of-social-media/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Harrigian\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23555@www.clsp.jhu.edu
DTSTAMP:20240328T091405Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:
DTSTART;TZID=America/New_York:20230327T120000
DTEND;TZID=America/New_York:20230327T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Desh Raj
URL:https://www.clsp.jhu.edu/events/student-seminar-desh-raj-2/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,March\,Raj
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24461@www.clsp.jhu.edu
DTSTAMP:20240328T091405Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nMost machine translation systems operate at the sentence level\, while humans write and translate within a given context. Operating on individual sentences forces error-prone sentence segmentation into the machine translation pipeline. This limits the upper-bound performance of these systems by creating noisy training bitext. Further\, many grammatical features require inter-sentential context to translate\, which makes perfect sentence-level machine translation an impossible task. In this talk\, we will cover the inherent limits of sentence-level machine translation. Following this\, we will explore a key obstacle in the way of true context-aware machine translation: an abject lack of data. Finally\, we will cover recent work that provides (1) a new evaluation dataset that specifically addresses the translation of context-dependent discourse phenomena and (2) reconstructed documents from large-scale sentence-level bitext that can be used to improve performance when translating these types of phenomena.
DTSTART;TZID=America/New_York:20240304T120000
DTEND;TZID=America/New_York:20240304T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Rachel Wicks (JHU) “To Sentences and Beyond: Paving the Way for Context-Aware Machine Translation”
URL:https://www.clsp.jhu.edu/events/rachel-wicks-jhu/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,March\,Wicks
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24479@www.clsp.jhu.edu
DTSTAMP:20240328T091405Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nThe speech field is evolving to solve more challenging scenarios\, such as multi-channel recordings with multiple simultaneous talkers. Given the many types of microphone setups in use\, we present the UniX-Encoder\, a universal encoder designed for multiple tasks that works with any microphone array\, in both single- and multi-talker environments. Our research advances previous multi-channel speech processing efforts in four key areas: 1) Adaptability: in contrast to traditional models constrained to specific microphone array configurations\, our encoder is universally compatible. 2) Multi-task capability: beyond the single-task focus of previous systems\, UniX-Encoder acts as a robust upstream model\, adeptly extracting features for diverse tasks including ASR and speaker recognition. 3) Self-supervised training: the encoder is trained without requiring labeled multi-channel data. 4) End-to-end integration: in contrast to models that first beamform and then process single channels\, our encoder offers an end-to-end solution\, bypassing explicit beamforming or separation. To validate its effectiveness\, we tested the UniX-Encoder on a synthetic multi-channel dataset derived from the LibriSpeech corpus. Across tasks like speech recognition and speaker diarization\, our encoder consistently outperformed combinations like the WavLM model with the BeamformIt frontend.
DTSTART;TZID=America/New_York:20240311T200500
DTEND;TZID=America/New_York:20240311T210500
SEQUENCE:0
SUMMARY:Zili Huang (JHU) “UniX-Encoder: A Universal X-Channel Speech Encoder for Ad-Hoc Microphone Array Speech Processing”
URL:https://www.clsp.jhu.edu/events/zili-huang-jhu-unix-encoder-a-universal-x-channel-speech-encoder-for-ad-hoc-microphone-array-speech-processing/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,Huang\,March
END:VEVENT
END:VCALENDAR