Florian Metze (Carnegie Mellon University) “Visual Features for Context-Aware Speech Recognition”

When:
October 14, 2016 @ 12:00 pm – 1:15 pm
Where:
Hackerman Hall B17
3400 N Charles St
Baltimore, MD 21218
USA
Cost:
Free
Contact:
Center for Language and Speech Processing

Abstract

Transcribing consumer-generated multi-media content such as “YouTube” videos remains one of the hardest tasks for automatic speech recognition, with correspondingly high error rates. Such data typically spans a very broad domain, has been recorded under challenging conditions with cheap hardware and a focus on the visual modality, and may have been post-processed or edited.

In this talk, we extend our earlier work on adapting the acoustic model of a DNN-based speech recognition system to the RNN language model, and show how both can be adapted to the objects and scenes that are automatically detected in the video. We work with a corpus of “how-to” videos from the web; the idea is that an object that can be seen (“car”) or a scene that is detected (“kitchen”) can be used to condition both models on the “context” of the recording, thereby reducing perplexity and improving transcription. We achieve good improvements in both cases, and compare and analyze the respective reductions in word error rate.
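As a rough sketch of this kind of conditioning (not the speaker’s actual system), a vector of object/scene detector posteriors can simply be concatenated to every input of an RNN language model, so the recurrent state is conditioned on what the video shows. All module names and dimensions below are hypothetical, and PyTorch is assumed:

    # Hypothetical sketch of a visually conditioned RNN language model.
    # Not the authors' code; layer names and sizes are illustrative only.
    import torch
    import torch.nn as nn

    class ContextRNNLM(nn.Module):
        def __init__(self, vocab_size, embed_dim=256, context_dim=100, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # The visual context vector is concatenated to each word
            # embedding, conditioning the LSTM on the video content.
            self.rnn = nn.LSTM(embed_dim + context_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, words, context):
            # words:   (batch, seq_len) word indices
            # context: (batch, context_dim) detector posteriors for the video
            emb = self.embed(words)                                 # (B, T, E)
            ctx = context.unsqueeze(1).expand(-1, emb.size(1), -1)  # (B, T, C)
            hidden, _ = self.rnn(torch.cat([emb, ctx], dim=-1))
            return self.out(hidden)                                 # (B, T, V) logits

    # Toy usage: 3 sentences of length 5 over a 1000-word vocabulary,
    # each paired with a 100-dimensional visual context vector.
    model = ContextRNNLM(vocab_size=1000)
    logits = model(torch.randint(0, 1000, (3, 5)), torch.rand(3, 100))

In the same spirit, the acoustic model could be conditioned by appending the context vector to the acoustic features fed to the DNN.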

We expect that our results can be useful for any type of speech processing in which “context” information is available, for example in robotics, human-machine interaction, or the indexing of large audio-visual archives, and that they should ultimately help bring together the “video-to-text” and “speech-to-text” communities.

Biography

Florian Metze is an Associate Research Professor at Carnegie Mellon University, in the School of Computer Science’s Language Technologies Institute. His work covers many areas of speech recognition and multi-media analysis, with a recent focus on end-to-end deep learning. He has also worked on low-resource and multilingual speech processing, speech recognition with articulatory features, large-scale multi-media retrieval and summarization, and the recognition of personality and similar metadata from speech.

He is the founder of the “Speech Recognition Virtual Kitchen” project, which strives to make state-of-the-art speech processing techniques usable by non-experts in the field, and started the “Query by Example Search on Speech” task at MediaEval. He was Co-PI and PI of the CMU team in the IARPA Aladdin and Babel projects. Most recently, his group released the “Eesen” toolkit for end-to-end speech recognition using recurrent neural networks and connectionist temporal classification.

He received his PhD from the Universität Karlsruhe (TH) in 2005 for a thesis on “Articulatory Features for Conversational Speech Recognition.” He worked at Deutsche Telekom Laboratories (T-Labs) from 2006 to 2009, where he led research and development projects involving language technologies in the customer care and mobile services area. In 2009, he joined Carnegie Mellon University, where he is also the associate director of the InterACT center. He has served on the committees of multiple conferences and journals, and has been an elected member of the IEEE Speech and Language Technical Committee since 2011.

For more information, please see http://www.cs.cmu.edu/directory/fmetze 
