Assistive Technology for the Deaf: American Sign Language Machine Translation – Matt Huenerfauth (University of Pennsylvania)

When:
April 11, 2006 all-day

Abstract
A majority of deaf high school graduates in the United States have an English reading level comparable to that of a 10-year-old hearing student, so machine translation (MT) software that translates English text into American Sign Language (ASL) animations can significantly improve these individuals' access to information, communication, and services. This talk will trace the development of an English-to-ASL MT system that prioritizes the translation of texts important for literacy and user-interface applications. These texts include a difficult-to-translate class of ASL phenomena called classifier predicates, which previous ASL MT projects have ignored. During classifier predicates, signers use special hand movements to indicate the location and movement of invisible objects, representing entities under discussion, in the space around their bodies. Classifier predicates are frequent in ASL and are necessary for conveying many concepts. This talk will describe several new technologies that facilitate the creation of machine translation software for ASL and are compatible with recent linguistic analyses of the language. These technologies include: a multi-path machine translation architecture, a 3D visualization of the arrangement of objects under discussion, a planning-based animation generator, and a multi-channel representation of the structure of the ASL animation performance. While these design features were prompted by the unique requirements of generating a sign language, they also have applications for the machine translation of written languages, the representation of other multimodal language signals, and the production of meaningful gestures by other animated virtual human characters.
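To make the "multi-channel representation" idea concrete: unlike written language, an ASL performance coordinates several articulators (each hand, eye gaze, facial expression) in parallel, so its structure resembles a musical score with one timeline per channel rather than a single stream of tokens. The sketch below is a minimal, hypothetical illustration of such a score in Python; the channel names, event labels, and API are invented for this example and are not the representation used in the actual system described in the talk.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A timed value on one articulator channel (times in seconds)."""
    start: float
    end: float
    value: str  # e.g. a handshape, movement, or gaze label

class MultiChannelScore:
    """Parallel timelines, one per articulator (hypothetical sketch)."""

    def __init__(self, channel_names):
        self.channels = {name: [] for name in channel_names}

    def add(self, channel, start, end, value):
        self.channels[channel].append(Event(start, end, value))

    def at(self, t):
        """Return what each articulator is doing at time t (None if idle)."""
        return {
            name: next((e.value for e in events if e.start <= t < e.end), None)
            for name, events in self.channels.items()
        }

# Example: a classifier predicate where the dominant hand traces a
# vehicle's path while the other hand and the eyes play supporting roles.
score = MultiChannelScore(["dominant_hand", "non_dominant_hand", "eye_gaze"])
score.add("dominant_hand", 0.0, 1.2, "CL:3-vehicle-moves-left")
score.add("non_dominant_hand", 0.0, 2.0, "CL:B-flat-surface-hold")
score.add("eye_gaze", 0.2, 1.2, "track-dominant-hand")
```

Querying `score.at(0.5)` shows all three channels active at once, which a single-stream (word-by-word) representation could not capture; this parallelism is why sign language generation motivates a multi-channel structure.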

Center for Language and Speech Processing