BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21275@www.clsp.jhu.edu
DTSTAMP:20240328T165134Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\n\n\n\nAutomatic discovery of phone or word-like units is one of the core objectives in zero-resource speech processing. Recent attempts employ contrastive predictive coding (CPC)\, where the model learns representations by predicting the next frame given past context. However\, CPC only looks at the audio signal’s structure at the frame level. The speech structure exists beyond frame-level\, i.e.\, at phone level or even higher. We propose a segmental contrastive predictive coding (SCPC) framework to learn from the signal structure at both the frame and phone levels.\n\nSCPC is a hierarchical model with three stages trained in an end-to-end manner. In the first stage\, the model predicts future feature frames and extracts frame-level representation from the raw waveform. In the second stage\, a differentiable boundary detector finds variable-length segments. In the last stage\, the model predicts future segments to learn segment representations. Experiments show that our model outperforms existing phone and word segmentation methods on TIMIT and Buckeye datasets.
DTSTART;TZID=America/New_York:20220211T120000
DTEND;TZID=America/New_York:20220211T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Saurabhchand Bhati “Segmental Contrastive Predictive Coding for Unsupervised Acoustic Segmentation”
URL:https://www.clsp.jhu.edu/events/student-seminar-saurabhchand-bhati/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract\n\n\n\n\nAutomatic discovery of phone or word-like units is one of the core objectives in zero-resource speech processing. Recent attempts employ contrastive predictive coding (CPC)\, where the model learns representations by predicting the next frame given past context. However\, CPC only looks at the audio signal’s structure at the frame level. The speech structure exists beyond frame-level\, i.e.\, at phone level or even higher. We propose a segmental contrastive predictive coding (SCPC) framework to learn from the signal structure at both the frame and phone levels.\n\n\nSCPC is a hierarchical model with three stages trained in an end-to-end manner. In the first stage\, the model predicts future feature frames and extracts frame-level representation from the raw waveform. In the second stage\, a differentiable boundary detector finds variable-length segments. In the last stage\, the model predicts future segments to learn segment representations. Experiments show that our model outperforms existing phone and word segmentation methods on TIMIT and Buckeye datasets.
Abstract
\nIn recent years\, the field of Natural Language Processing has seen a profusion of tasks\, datasets\, and systems that facilitate reasoning about real-world situations through language (e.g.\, RTE\, MNLI\, COMET). Such systems might\, for example\, be trained to consider a situation where “somebody dropped a glass on the floor\,” and conclude it is likely that “the glass shattered” as a result. In this talk\, I will discuss three pieces of work that revisit assumptions made by or about these systems. In the first work\, I develop a Defeasible Inference task\, which enables a system to recognize when a prior assumption it has made may no longer be true in light of new evidence it receives. The second work I will discuss revisits partial-input baselines\, which have highlighted issues of spurious correlations in natural language reasoning datasets and led to unfavorable assumptions about models’ reasoning abilities. In particular\, I will discuss experiments that show models may still learn to reason in the presence of spurious dataset artifacts. Finally\, I will touch on work analyzing harmful assumptions made by reasoning models in the form of social stereotypes\, particularly in the case of free-form generative reasoning models.
\nBiography
\nRachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland\, College Park. She holds joint appointments in the Department of Linguistics and the Institute for Advanced Computer Studies (UMIACS). In 2019\, Rachel completed her Ph.D. in Computer Science at Johns Hopkins University in the Center for Language and Speech Processing. From 2019-2020\, she was a Young Investigator at the Allen Institute for AI in Seattle\, and a visiting researcher at the University of Washington. Her research interests include computational semantics\, common-sense reasoning\, and issues of social bias and fairness in NLP.
X-TAGS;LANGUAGE=en-US:2022\,Rudinger\,September
END:VEVENT
END:VCALENDAR