Michael I Mandel (Meta) “Speech and Audio Processing in Non-Invasive Brain-Computer Interfaces at Meta”

When:
January 29, 2024 @ 12:00 pm – 1:15 pm
Where:
Hackerman Hall B17
3400 N. Charles Street
Baltimore, MD 21218
Cost:
Free

Abstract

Non-invasive neural interfaces have the potential to transform human-computer interaction by providing users with low-friction, information-rich, always-available inputs. Reality Labs at Meta is developing such an interface for the control of augmented reality devices based on electromyographic (EMG) signals captured at the wrist. Speech and audio technologies turn out to be especially well suited to unlocking the full potential of these signals and interactions, and this talk will present several specific problems along with the speech and audio approaches that have advanced us toward the ultimate goal of effortless and joyful interfaces. We will provide the neuroscientific background necessary to understand these signals, describe automatic speech recognition-inspired interfaces for generating text and beamforming-inspired interfaces for identifying individual neurons, and then explain how they connect with egocentric machine intelligence tasks that might reside on these devices.

Biography

Michael I Mandel is a Research Scientist in Reality Labs at Meta. Previously, he was an Associate Professor of Computer and Information Science at Brooklyn College and the CUNY Graduate Center, working at the intersection of machine learning, signal processing, and psychoacoustics. He earned his BSc in Computer Science from the Massachusetts Institute of Technology and his MS and PhD with distinction in Electrical Engineering from Columbia University as a Fu Foundation Presidential Scholar. He was an FQRNT Postdoctoral Research Fellow in the Machine Learning laboratory (LISA/MILA) at the Université de Montréal, an Algorithm Developer at Audience, Inc., and a Research Scientist in Computer Science and Engineering at the Ohio State University. His work has been supported by the National Science Foundation (including a CAREER award), the Alfred P. Sloan Foundation, and Google, Inc.

Center for Language and Speech Processing