Confusion-based Statistical Language Modeling for Machine Translation and Speech Recognition
How can we decide that one sentence is more likely in a language than another sentence, especially if those sentences have never been seen before in their entirety? And why would we want to? The answer to the second question is that many natural language applications -- machine translation, automatic speech recognition -- produce a multitude of possible sentences as the output (of translation or recognition), and the likelihood of the resulting sentences in the language is a key way to choose between them. New methods for answering the first question are the topic of this summer workshop project.

For the same "true" output, the set of competing outputs ('confusions') depends on the application: in speech recognition, the confusions typically sound similar (such as 'their' and 'there'), while in machine translation, the confusions depend on ambiguities that arise in the translation process for a particular language pair (different for, say, Chinese and German when translating into English). In this project, we will investigate techniques to automatically generate possible confusions for a particular task and to learn statistical models of language from such confusions. These models can then be used to do a better job of choosing which of the alternative outputs of a particular system is best. This project is a chance to work on cutting-edge speech and natural language applications, and to get your hands dirty under the hood of state-of-the-art systems while trying to make them better.
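To make the idea concrete, here is a minimal sketch (not the workshop's actual models) of how a statistical language model can choose between confusable outputs: a toy add-one-smoothed bigram model, trained on a few illustrative sentences, reranks two hypotheses that a recognizer might find acoustically indistinguishable. The corpus, hypotheses, and function names are all invented for illustration.

```python
import math
from collections import Counter

# Toy training corpus; a real system would train on large amounts of text.
corpus = [
    "they left their dog at home",
    "their dog barked all night",
    "there is a dog outside",
    "the dog is over there",
]

tokens = []
for sent in corpus:
    tokens.extend(["<s>"] + sent.split() + ["</s>"])

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
vocab_size = len(unigrams)

def log_prob(sentence):
    """Add-one smoothed bigram log-probability of a sentence."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    lp = 0.0
    for prev, cur in zip(words, words[1:]):
        lp += math.log((bigrams[(prev, cur)] + 1)
                       / (unigrams[prev] + vocab_size))
    return lp

# A confusion set: two hypotheses that sound identical to a recognizer.
hypotheses = ["their dog barked all night", "there dog barked all night"]
best = max(hypotheses, key=log_prob)
print(best)  # prints "their dog barked all night"
```

The model prefers 'their' here only because the bigram 'their dog' was seen in training while 'there dog' was not; the workshop's goal is to learn such models directly from automatically generated confusion sets rather than from plain text alone.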
Nathan Glenn, Brigham Young University
Darcey Riley, University of Rochester