Imagine the following situation. Your favorite search engine finds out that in addition to the 200 documents in English that match your query, another 200 are also relevant, but they are all in Chinese. Suppose also that your Chinese is not all that good.
Imagine now that you have a sophisticated search engine that automatically extracts the most useful information from all the Chinese (as well as English) documents and summarizes it for you in English, so that you don’t have to read the full documents! The goal of this six-week project is to study this integration of cross-lingual information retrieval and subsequent multi-document summarization.
To conduct a scientific and thorough study, we will build, in the months before the workshop, a parallel corpus of Chinese and English news stories consisting of:
- 9,000 pairs of news articles that are manual translations of one another
- 30-50 queries in both languages
- Both manually and automatically translated document relevance judgments for the queries (i.e., an answer key listing the documents that are truly relevant to each query)
- Sentence relevance judgments for the top-ranked documents for each query (i.e., the sentences in a document that best summarize the contents of the entire document)
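The document relevance judgments above serve as the answer key against which a retrieval run can be scored. As a minimal sketch (the document IDs below are hypothetical, and this is not the workshop's own scoring code), precision and recall for one query could be computed like this:

```python
def precision_recall(retrieved, answer_key):
    """Score a retrieval run against an answer key of truly relevant documents.

    retrieved:  list of document IDs returned for a query
    answer_key: set of document IDs judged relevant for that query
    """
    relevant_retrieved = set(retrieved) & set(answer_key)
    precision = len(relevant_retrieved) / len(retrieved)   # fraction of returned docs that are relevant
    recall = len(relevant_retrieved) / len(answer_key)     # fraction of relevant docs that were returned
    return precision, recall

# Hypothetical IDs for illustration only.
retrieved = ["d3", "d7", "d1", "d9"]
answer_key = {"d1", "d2", "d3"}
p, r = precision_recall(retrieved, answer_key)
# p = 2/4 = 0.5, r = 2/3
```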
In such an environment, a user can type a query in English and obtain ranked English summaries of potentially relevant documents written in either English or Chinese.
We will investigate the quality of cross-language retrieval and summarization using a rank correlation metric, and correlate that evaluation technique with other evaluation methods such as precision/recall and relative utility.
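One standard rank correlation metric is Kendall's tau, which compares how often two rankings of the same documents agree on the relative order of a pair. The sketch below (a plain implementation without tie handling, not the project's own metric code) could compare, say, a monolingual ranking against a cross-lingual one:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two rankings of the same set of documents.

    rank_a, rank_b: lists containing the same document IDs, each in ranked order.
    Returns a value in [-1, 1]: 1 for identical rankings, -1 for reversed ones.
    """
    pos_a = {doc: i for i, doc in enumerate(rank_a)}
    pos_b = {doc: i for i, doc in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        # A pair is concordant when both rankings order x and y the same way.
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    n_pairs = len(rank_a) * (len(rank_a) - 1) / 2
    return (concordant - discordant) / n_pairs

# Hypothetical rankings: one swapped pair out of six.
english_rank = ["d1", "d2", "d3", "d4"]
crosslingual_rank = ["d2", "d1", "d3", "d4"]
tau = kendall_tau(english_rank, crosslingual_rank)
# tau = (5 - 1) / 6 ≈ 0.667
```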
At the end of the summer, we will release a publicly available toolkit for cross-lingual summarization and evaluation. The toolkit will implement (at arbitrary compression rates) multiple summarization algorithms, such as position-based, TF*IDF, longest common subsequence, and keywords. The methods for evaluating the quality of the summaries will be both intrinsic (such as percent agreement, precision/recall, and relative utility) and extrinsic (document rank).
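To illustrate one of the listed techniques, here is a minimal sketch of TF*IDF-based extractive summarization at a given compression rate: each sentence is scored by the average TF*IDF of its words, and the top-scoring sentences are kept in original order. This is an assumption-laden toy, not the workshop toolkit itself:

```python
import math
import re
from collections import Counter

def summarize(documents, doc_index, rate=0.2):
    """Extractive TF*IDF summary of documents[doc_index].

    Scores each sentence by the average TF*IDF of its words (IDF computed
    over the whole document set), then keeps the top-scoring sentences,
    up to the given compression rate, in their original order.
    """
    tokenize = lambda text: re.findall(r"[a-z]+", text.lower())

    # Document frequency of each word across the collection.
    df = Counter()
    for doc in documents:
        df.update(set(tokenize(doc)))
    n_docs = len(documents)

    # Naive sentence splitter: break after ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", documents[doc_index].strip())

    def score(sentence):
        words = tokenize(sentence)
        if not words:
            return 0.0
        tf = Counter(words)
        return sum(tf[w] * math.log(n_docs / df[w]) for w in tf) / len(words)

    n_keep = max(1, round(rate * len(sentences)))
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
    kept = sorted(ranked[:n_keep])  # restore original sentence order
    return " ".join(sentences[i] for i in kept)

# Toy collection; a 0.2 compression rate keeps 1 of the 5 sentences.
docs = [
    "Floods hit the region. Rescue teams arrived quickly. The weather improved later. "
    "Officials praised the response. Markets reopened on Monday.",
    "Rescue operations continued downstream.",
    "The weather service issued flood warnings.",
]
summary = summarize(docs, 0, rate=0.2)
```

A position-based method would instead weight sentences by where they occur (e.g., favoring article-initial sentences), and the two scores can be combined.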
| Wai Lam | Chinese University of Hong Kong |
| Dragomir Radev | University of Michigan |
| Horacio Saggion | University of Sheffield |
| Simone Teufel | Columbia University |
| Danyu Liu | Chinese University of Hong Kong |
| Hong Qi | University of Michigan |
| John Blitzer | Cornell University |
| Arda Celebi | Bilkent University |