Discriminative Language Modeling for LVCSR – Murat Saraclar (AT&T Labs – Research)
This talk describes a discriminative language modeling technique for large vocabulary speech recognition. We contrast two parameter estimation methods: the perceptron algorithm, and a method based on conditional random fields (CRFs). The models are encoded as deterministic weighted finite-state automata, and are applied by intersecting the automata with word lattices output by a baseline recognizer. The perceptron algorithm has the benefit of automatically selecting a relatively small feature set in only a few passes over the training data. We present results for various perceptron training scenarios on the Switchboard task, including the use of n-gram features of different orders, and n-best extraction versus full word lattices. Using the feature set selected by the perceptron algorithm, CRF training provides an additional 0.5 percent absolute reduction in word error rate, for a total absolute WER reduction of 1.8 percent from the baseline of 39.2 percent.
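The core perceptron idea the abstract refers to can be sketched as n-best reranking: score each hypothesis by its baseline recognizer score plus learned n-gram feature weights, and when the model's top hypothesis differs from the oracle (the hypothesis with the fewest word errors), promote the oracle's features and demote the model's pick. The sketch below is an illustrative simplification under assumed data structures (lists of `(words, baseline_score)` pairs), not the authors' lattice-based, finite-state implementation.

```python
from collections import defaultdict

def edit_distance(a, b):
    # Standard Levenshtein distance between two word sequences,
    # used to find the oracle (minimum-word-error) hypothesis.
    d = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, wb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (wa != wb))
    return d[len(b)]

def ngram_features(words, n):
    # Count n-gram features of all orders 1..n for one hypothesis.
    feats = defaultdict(int)
    for order in range(1, n + 1):
        for i in range(len(words) - order + 1):
            feats[tuple(words[i:i + order])] += 1
    return feats

def model_score(weights, words, base_score, n):
    # Baseline recognizer score plus the discriminative model's contribution.
    return base_score + sum(weights[f] * c
                            for f, c in ngram_features(words, n).items())

def perceptron_train(data, n=2, epochs=2):
    """data: list of (nbest, reference) pairs, where nbest is a list of
    (hypothesis_words, baseline_score). Returns learned feature weights.
    (A plain perceptron; the averaged variant is a common refinement.)"""
    weights = defaultdict(float)
    for _ in range(epochs):
        for nbest, ref in data:
            # Oracle: hypothesis with the fewest word errors against the reference.
            oracle = min(nbest, key=lambda h: edit_distance(h[0], ref))[0]
            # Current model's top-scoring hypothesis.
            best = max(nbest, key=lambda h: model_score(weights, h[0], h[1], n))[0]
            if best != oracle:
                # Perceptron update: promote oracle features, demote the model's pick.
                for f, c in ngram_features(oracle, n).items():
                    weights[f] += c
                for f, c in ngram_features(best, n).items():
                    weights[f] -= c
    return weights
```

Only features seen in an update ever receive nonzero weight, which is why the perceptron implicitly selects a small feature set; that selected set can then be handed to a CRF trainer for further refinement, as the talk reports.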