An Exploration of How to Learn from Visually Descriptive Text

This workshop will involve learning to identify visually descriptive text, parsing this text to extract statistical models, and using these models to 1) learn how people describe the world and 2) build more relevant recognition systems in computer vision. It should be an exciting opportunity to work with large-scale text and image data, be exposed to cutting-edge techniques in computer vision, and interactively develop new strategies at the boundary between NLP and computer vision. Specific types of work will include data collection, parsing, using Amazon's Mechanical Turk, building and using probabilistic models, and work on applications including image parsing, retrieval, and automatic sentence generation from images.
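As a small illustration of what extracting a statistical model from visually descriptive text can look like, the sketch below (not the workshop's actual pipeline; the captions, library choice, and helper names are assumptions for illustration) counts adjective-noun pairs in image captions with NLTK and estimates how often each adjective describes each noun:

    # Minimal sketch: a toy statistical model of how people describe
    # objects, estimated from adjective-noun pairs in captions.
    # Assumes NLTK is installed; the caption strings are placeholders.
    from collections import Counter

    import nltk

    # Tokenizer and tagger models (resource names may differ on newer
    # NLTK releases, e.g. "punkt_tab").
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    captions = [
        "A small brown dog runs on the green grass.",
        "Two brown dogs play near a red ball.",
    ]

    pair_counts = Counter()   # (adjective, noun) -> count
    noun_counts = Counter()   # noun -> count of modified occurrences

    for caption in captions:
        tagged = nltk.pos_tag(nltk.word_tokenize(caption.lower()))
        # Count adjacent adjective-noun pairs, e.g. ("brown", "dog").
        for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
            if t1.startswith("JJ") and t2.startswith("NN"):
                pair_counts[(w1, w2)] += 1
                noun_counts[w2] += 1

    # Estimate P(adjective | noun is modified) from raw counts.
    for (adj, noun), c in pair_counts.most_common():
        print(f"P({adj} | {noun}) = {c / noun_counts[noun]:.2f}")

Scaled up to millions of captions, even counts this simple begin to capture which visual attributes people actually mention, which is one ingredient in both recognition and sentence-generation applications.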


Team Members
Senior Members
Alexander Berg, Stony Brook University
Tamara Berg, Stony Brook University
Hal Daumé III, University of Maryland
Graduate Students
Amit Goyal, University of Maryland
Xufeng Han, Stony Brook University
Margaret Mitchell, University of Aberdeen
Karl Stratos, Columbia University
Kota Yamaguchi, Stony Brook University
Undergraduate Students
Jesse Dodge, University of Washington
Alyssa Mensch, Massachusetts Institute of Technology
Affiliate Members
Yejin Choi, Stony Brook University
Julia Hockenmaier, University of Illinois at Urbana-Champaign
Erik Learned-Miller, University of Massachusetts, Amherst
Alan Qi, Purdue University