Discriminative Language Modeling for LVCSR – Murat Saraclar (AT&T Labs – Research)

When:
April 13, 2004 all-day

Abstract
This talk describes a discriminative language modeling technique for large vocabulary speech recognition. We contrast two parameter estimation methods: the perceptron algorithm and a method based on conditional random fields (CRFs). The models are encoded as deterministic weighted finite-state automata and are applied by intersecting the automata with word lattices output by a baseline recognizer. The perceptron algorithm has the benefit of automatically selecting a relatively small feature set in just a couple of passes over the training data. We present results for various perceptron training scenarios on the Switchboard task, including using n-gram features of different orders and performing n-best extraction versus using full word lattices. Using the feature set selected by the perceptron algorithm, CRF training provides an additional 0.5 percent reduction in word error rate, for a total absolute WER reduction of 1.8 percent from the baseline of 39.2 percent.
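The perceptron approach sketched in the abstract can be illustrated with a toy reranker. This is a minimal sketch, not the talk's actual implementation: it assumes n-best lists rather than full lattices, invents the helper names (`ngram_features`, `perceptron_rerank_train`), and omits the finite-state encoding and averaging that a real system would use. The core idea is the standard structured perceptron update: score each candidate with a weight vector over n-gram features, and when the top-scoring candidate differs from the oracle (lowest-error) hypothesis, move the weights toward the oracle's features and away from the mistake's.

```python
from collections import Counter

def ngram_features(words, n=2):
    """Counts of all word n-grams up to order n (hypothetical feature map)."""
    feats = Counter()
    for order in range(1, n + 1):
        for i in range(len(words) - order + 1):
            feats[tuple(words[i:i + order])] += 1
    return feats

def score(weights, feats):
    """Linear score: dot product of the weight vector with the feature counts."""
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def perceptron_rerank_train(nbest_lists, oracles, passes=2, n=2):
    """Structured perceptron over n-best lists.

    For each utterance, pick the candidate with the highest current score;
    if it is not the oracle hypothesis, add the oracle's feature counts to
    the weights and subtract the mistaken candidate's. Features never seen
    in an update keep weight zero, so the model implicitly selects a small
    feature set, as the abstract notes.
    """
    weights = Counter()
    for _ in range(passes):
        for cands, oracle in zip(nbest_lists, oracles):
            feats = [ngram_features(c, n) for c in cands]
            best = max(range(len(cands)),
                       key=lambda i: score(weights, feats[i]))
            if cands[best] != oracle:
                weights.update(ngram_features(oracle, n))  # toward oracle
                weights.subtract(feats[best])              # away from error
    return weights
```

A usage sketch: given two recognizer hypotheses for one utterance, training on the oracle transcript shifts weight onto the n-grams that distinguish it, so reranking then prefers the oracle.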

Johns Hopkins University, Whiting School of Engineering

Center for Language and Speech Processing
Hackerman 226
3400 North Charles Street, Baltimore, MD 21218-2680