Florian Metze (CMU) “Masked Autoencoders that Listen” @ Hackerman Hall B17
Nov 10 @ 12:00 pm – 1:15 pm


In this talk, I will present a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio, feeding only the non-masked tokens through the encoder layers. The decoder then re-orders and decodes the encoded context, padded with mask tokens, to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically, Audio-MAE sets a new state of the art on six audio and speech classification tasks, outperforming other recent models that use external supervised pre-training.
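The masking step described above can be sketched roughly as follows. This is a minimal, illustrative sketch in NumPy, not the authors' implementation; the patch size, masking ratio, and function names are assumptions for the example. A spectrogram is split into non-overlapping patches, a high fraction of the patches is randomly masked, and only the visible patches would be passed through the encoder, with the kept indices retained so the decoder can restore the original order.

```python
import numpy as np

def patchify(spec, patch_size=16):
    """Split a (freq, time) spectrogram into flattened square patches."""
    f, t = spec.shape
    assert f % patch_size == 0 and t % patch_size == 0
    return (spec.reshape(f // patch_size, patch_size, t // patch_size, patch_size)
                .transpose(0, 2, 1, 3)
                .reshape(-1, patch_size * patch_size))

def random_mask(patches, mask_ratio=0.8, rng=None):
    """Keep a random subset of patches; also return their original indices,
    which the decoder needs in order to re-insert mask tokens in place."""
    rng = rng or np.random.default_rng(0)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    keep_idx = np.sort(rng.permutation(n)[:n_keep])  # positions of visible patches
    return patches[keep_idx], keep_idx

# Hypothetical input: 128 mel bins x 1024 time frames
spec = np.random.randn(128, 1024)
patches = patchify(spec)                              # (512, 256)
visible, keep_idx = random_mask(patches, mask_ratio=0.8)
print(patches.shape, visible.shape)                   # (512, 256) (102, 256)
```

In the actual model, only `visible` would be fed through the encoder; the decoder would scatter the encoded tokens back to positions `keep_idx`, fill the remaining positions with a learned mask token, and reconstruct the full spectrogram.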


Florian Metze is a Research Scientist Manager at Meta AI in New York, supporting a team of researchers and engineers working on multi-modal (image, video, audio, text) content understanding for Meta’s Family of Apps (Instagram, Threads, Facebook, WhatsApp). He was previously an Associate Research Professor at Carnegie Mellon University, in the School of Computer Science’s Language Technologies Institute, where he remains an Adjunct Professor. He is also a co-founder of Abridge, a company working on extracting information from doctor-patient conversations. His work covers many areas of speech recognition and multi-media analysis, with a focus on end-to-end deep learning. Currently, he focuses on multi-modal processing of videos and on using that information to recommend unconnected content. In the past, he has worked on low-resource and multi-lingual speech processing, speech recognition with articulatory features, large-scale multi-media retrieval and summarization, information extraction from medical interviews, and recognition of personality and similar meta-data from speech.

For more information, please see the Center for Language and Speech Processing.