Forest-based Search Algorithms in Parsing and Machine Translation – Liang Huang (University of Pennsylvania)

April 29, 2008 all-day

Many problems in Natural Language Processing (NLP) involve an efficient search for the best derivation over (exponentially) many candidates, especially in parsing and machine translation. In these cases, the concept of a “packed forest” provides a compact representation of the huge search space, over which efficient inference algorithms based on Dynamic Programming (DP) are possible. In this talk we address two important problems within this framework: exact k-best inference, which is widely used in NLP pipelines such as parse reranking and MT rescoring, and approximate inference, for when the search space is too big for exact search. We first present a series of fast and exact k-best algorithms on forests, which are orders of magnitude faster than previously used methods on state-of-the-art parsers such as Collins (1999). We then extend these algorithms to approximate search when the forests are too big for exact inference. We discuss two particular instances of this new method: forest rescoring for MT decoding with integrated language models, and forest reranking for discriminative parsing. In the former, our methods run orders of magnitude faster than conventional beam search on both state-of-the-art phrase-based and syntax-based systems, with the same level of search error or translation quality. In the latter, faster search also leads to better learning: our approximate decoding makes whole-Treebank discriminative training practical and yields the best accuracy to date for parsers trained on the Treebank. This talk includes joint work with David Chiang (USC Information Sciences Institute).
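To make the central objects concrete: a packed forest can be viewed as a hypergraph in which each node packs together competing sub-derivations, and each hyperedge combines the scores of its tail nodes with its own weight. The sketch below shows 1-best Viterbi DP and a naive exact k-best over such a forest; it is only an illustrative toy (class and node names are invented here), not the fast lazy algorithms presented in the talk, which compute the same k-best lists far more efficiently.

```python
import heapq
from collections import defaultdict
from itertools import product

class Forest:
    """A toy packed forest: a hypergraph of nodes and weighted hyperedges.
    (Illustrative sketch only; not the talk's actual implementation.)"""

    def __init__(self):
        # head node -> list of (weight, [tail nodes]) hyperedges
        self.edges = defaultdict(list)

    def add_edge(self, head, weight, tails):
        self.edges[head].append((weight, tails))

    def viterbi(self, node, memo=None):
        """Best (max) derivation score for `node`, by DP over the forest.
        Scores are in the log domain, so they add along a derivation."""
        if memo is None:
            memo = {}
        if node in memo:
            return memo[node]
        if not self.edges[node]:      # leaf node: contributes score 0
            memo[node] = 0.0
            return 0.0
        memo[node] = max(w + sum(self.viterbi(t, memo) for t in tails)
                         for w, tails in self.edges[node])
        return memo[node]

    def kbest(self, node, k, memo=None):
        """Exact k-best derivation scores for `node` (naive enumeration
        plus merge; exponentially wasteful compared to lazy k-best,
        but it defines the result those algorithms must reproduce)."""
        if memo is None:
            memo = {}
        if node in memo:
            return memo[node]
        if not self.edges[node]:
            memo[node] = [0.0]
            return memo[node]
        cands = []
        for w, tails in self.edges[node]:
            tail_lists = [self.kbest(t, k, memo) for t in tails]
            # Combine one score from each tail's k-best list.
            for combo in product(*tail_lists):
                cands.append(w + sum(combo))
        memo[node] = heapq.nlargest(k, cands)
        return memo[node]
```

For example, a node S with two incoming hyperedges of weights 1.0 and 0.5 over leaf nodes has Viterbi score 1.0 and 2-best list [1.0, 0.5]; the key point is that the k-best list falls out of the same DP structure as the 1-best score.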

Liang Huang will shortly finish his PhD at Penn and is looking for a postdoctoral position.

Center for Language and Speech Processing