BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21275@www.clsp.jhu.edu
DTSTAMP:20240329T020249Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\n\n\n\nAutomatic discovery of phone or word-like units is one of the core objectives in zero-resource speech processing. Recent attempts employ contrastive predictive coding (CPC)\, where the model learns representations by predicting the next frame given past context. However\, CPC only looks at the audio signal’s structure at the frame level. The speech structure exists beyond the frame level\, i.e.\, at the phone level or even higher. We propose a segmental contrastive predictive coding (SCPC) framework to learn from the signal structure at both the frame and phone levels.\n\nSCPC is a hierarchical model with three stages trained in an end-to-end manner. In the first stage\, the model predicts future feature frames and extracts frame-level representations from the raw waveform. In the second stage\, a differentiable boundary detector finds variable-length segments. In the last stage\, the model predicts future segments to learn segment representations. Experiments show that our model outperforms existing phone and word segmentation methods on the TIMIT and Buckeye datasets.
DTSTART;TZID=America/New_York:20220211T120000
DTEND;TZID=America/New_York:20220211T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Saurabhchand Bhati “Segmental Contrastive Predictive Coding for Unsupervised Acoustic Segmentation”
URL:https://www.clsp.jhu.edu/events/student-seminar-saurabhchand-bhati/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\n\n\n\n\nAutomatic discovery of phone or word-like units is one of the core objectives in zero-resource speech processing. Recent attempts employ contrastive predictive coding (CPC)\, where the model learns representations by predicting the next frame given past context. However\, CPC only looks at the audio signal’s structure at the frame level. The speech structure exists beyond the frame level\, i.e.\, at the phone level or even higher. We propose a segmental contrastive predictive coding (SCPC) framework to learn from the signal structure at both the frame and phone levels.\n\n\nSCPC is a hierarchical model with three stages trained in an end-to-end manner. In the first stage\, the model predicts future feature frames and extracts frame-level representations from the raw waveform. In the second stage\, a differentiable boundary detector finds variable-length segments. In the last stage\, the model predicts future segments to learn segment representations. Experiments show that our model outperforms existing phone and word segmentation methods on the TIMIT and Buckeye datasets.
Abstract
\nWhile the “deep learning tsunami” continues to define the state of the art in speech and language processing\, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial\, highly multilingual settings\, particularly for symbolic\, “front-end” speech applications. In this talk\, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I will then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors which lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion\, loanword detection\, abbreviation expansion\, and back-transliteration\, among other tasks.
\nBiography
\nKyle Gorman is an assistant professor of linguistics at the Graduate Center\, City University of New York\, and director of the master’s program in computational linguistics\; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat\, he is the coauthor of Finite-State Text Processing (Morgan & Claypool\, 2021) and the creator of Pynini\, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models\, text normalization\, grapheme-to-phoneme conversion\, and morphological analysis\, as well as many topics in linguistic theory.
\n
X-TAGS;LANGUAGE=en-US:2022\,Gorman\,March
END:VEVENT
END:VCALENDAR