All lectures will be held in Hackerman Hall, room B17. All labs will be held in Malone 122.
The morning lectures are open to the public. If you plan to attend, please let us know at least one day in advance by e-mail ([email protected]).
Participation in the afternoon laboratories may be limited by the number of available workstations, and attendance is by permission only; contact us if interested.
Abstract
Automatic discovery of phone or word-like units is one of the core objectives in zero-resource speech processing. Recent attempts employ contrastive predictive coding (CPC), where the model learns representations by predicting the next frame given the past context. However, CPC looks at the audio signal's structure only at the frame level, whereas speech is also structured at the phone level and above. We propose a segmental contrastive predictive coding (SCPC) framework that learns from the signal structure at both the frame and phone levels. SCPC is a hierarchical model with three stages trained in an end-to-end manner. In the first stage, the model extracts frame-level representations from the raw waveform and predicts future feature frames. In the second stage, a differentiable boundary detector finds variable-length segments. In the last stage, the model predicts future segments to learn segment representations. Experiments show that our model outperforms existing phone and word segmentation methods on the TIMIT and Buckeye datasets.
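To make the frame-level stage concrete, below is a minimal PyTorch sketch of a CPC-style contrastive objective of the kind the abstract describes: a convolutional encoder maps the raw waveform to frame features, an autoregressive network summarizes past frames into a context vector, and an InfoNCE loss asks the model to pick the true future frame out of negatives drawn from the same utterance. All module names, layer sizes, and the negative-sampling scheme are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of frame-level contrastive predictive coding (CPC).
# Hypothetical architecture choices; not the SCPC authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameCPC(nn.Module):
    def __init__(self, feat_dim=64, context_dim=128, n_negatives=10, pred_steps=1):
        super().__init__()
        # Convolutional encoder: raw waveform -> frame-level features z_t
        self.encoder = nn.Sequential(
            nn.Conv1d(1, feat_dim, kernel_size=10, stride=5, padding=3), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # Autoregressive context network: z_1..z_t -> c_t
        self.context_rnn = nn.GRU(feat_dim, context_dim, batch_first=True)
        # One linear predictor per future step k
        self.predictors = nn.ModuleList(
            [nn.Linear(context_dim, feat_dim) for _ in range(pred_steps)]
        )
        self.n_negatives = n_negatives
        self.pred_steps = pred_steps

    def forward(self, wav):                       # wav: (batch, samples)
        z = self.encoder(wav.unsqueeze(1))        # (batch, feat_dim, frames)
        z = z.transpose(1, 2)                     # (batch, frames, feat_dim)
        c, _ = self.context_rnn(z)                # (batch, frames, context_dim)

        total_loss = 0.0
        b, t, d = z.shape
        for k, predictor in enumerate(self.predictors, start=1):
            pred = predictor(c[:, : t - k])       # predicted z_{t+k}: (b, t-k, d)
            target = z[:, k:]                     # true future frames: (b, t-k, d)

            # Positive score: dot product with the true future frame.
            pos = (pred * target).sum(-1, keepdim=True)              # (b, t-k, 1)

            # Negatives: frames sampled uniformly from the same utterance.
            neg_idx = torch.randint(0, t, (b, t - k, self.n_negatives), device=z.device)
            negs = torch.gather(
                z.unsqueeze(1).expand(b, t - k, t, d), 2,
                neg_idx.unsqueeze(-1).expand(-1, -1, -1, d),
            )                                                        # (b, t-k, n_neg, d)
            neg = (pred.unsqueeze(2) * negs).sum(-1)                 # (b, t-k, n_neg)

            # InfoNCE: the positive (index 0) must beat all negatives.
            logits = torch.cat([pos, neg], dim=-1)                   # (b, t-k, 1+n_neg)
            labels = torch.zeros(b, t - k, dtype=torch.long, device=z.device)
            total_loss = total_loss + F.cross_entropy(
                logits.reshape(-1, logits.size(-1)), labels.reshape(-1)
            )
        return total_loss / self.pred_steps
```

A typical call would be `loss = FrameCPC()(torch.randn(4, 16000))` for a batch of four one-second waveforms at 16 kHz. In the full SCPC model, the later stages would pool these frame features into variable-length segments found by the boundary detector and apply an analogous contrastive loss at the segment level.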