Monitoring Movement through Sound

June 20, 2015


Children and adults hitting virtual baseballs and throwing punches at TV screens have become a common sight, thanks to video games with motion sensors.

Today’s motion-sensing devices track actions using images made by cameras and infrared sensors. “But using camera images raises privacy issues,” says Andreas Andreou, PhD ’86, professor of electrical and computer engineering.

Andreou and his graduate student Thomas Murray have devised a human action-recognition technology that could be much less intrusive and expensive. The technology identifies actions using sound. “It’s really a breakthrough that we can accomplish action recognition without a camera,” Andreou says.

He started working on the system in 2008 for gait detection. The device, made from parts found at Radio Shack, relies on the Doppler effect, the same principle bats use to navigate in the dark. It detects sound waves bouncing off a moving object and converts the signals into a colorful graph representing body movement.
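The physics behind the sensor can be sketched in a few lines. The article does not give the device's operating frequency, so the example below assumes a 40 kHz ultrasonic emitter (a common hobbyist part) purely for illustration; the key relationship is that a reflector moving at speed v shifts the echo frequency by roughly 2vf/c.

```python
# Hedged sketch: frequency shift of an echo off a moving body part.
# The 40 kHz carrier is an assumption, not the device's actual frequency.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C


def doppler_shift(carrier_hz, velocity_ms):
    """Approximate shift of a tone reflected off a target moving at
    velocity_ms toward (+) or away from (-) the sensor.
    For v much smaller than c, the shift is about 2*v*f/c."""
    return 2.0 * velocity_ms * carrier_hz / SPEED_OF_SOUND


# A torso walking at 1.4 m/s toward a hypothetical 40 kHz emitter:
print(round(doppler_shift(40_000, 1.4), 1))  # ~326.5 Hz
```

Different body parts (torso, arms, legs) move at different speeds during an action, so each produces its own shift; plotting those shifts over time yields the colorful graph the article describes.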

Murray has since improved the hardware design and developed sophisticated algorithms to process the signals from the micro-Doppler sensors. The device can now accurately infer whether a person is walking, jumping, dancing, or performing one of many other moves.

To develop the system, Murray collected data by having 10 test subjects, including himself, perform 21 different actions multiple times in front of three sensors placed at three different spots. He used machine-learning techniques to analyze the signals generated by the device and decipher the unique patterns associated with various actions.
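The article does not name the machine-learning algorithms Murray used, so the following is only a stand-in to illustrate the general idea: summarize each recording as a feature vector (say, energy in a few frequency bands), average the training vectors per action, and label a new sample by its nearest centroid. The feature values and action names here are made up.

```python
# Hedged sketch of action classification from micro-Doppler features,
# using a simple nearest-centroid rule in place of the (unnamed)
# algorithms from the actual system. All data below is illustrative.

import math


def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def classify(sample, centroids):
    """Return the action label whose training centroid is nearest."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))


# Toy training set: per-action feature vectors (e.g. band energies).
train = {
    "walking": [[1.0, 0.2], [0.9, 0.3]],
    "jumping": [[0.2, 1.1], [0.3, 0.9]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}
print(classify([0.95, 0.25], centroids))  # "walking"
```

In the real system, each of the 21 actions would get its own model learned from the repeated recordings across the 10 subjects and three sensor positions.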

“Think of this technology as a poor man’s Kinect,” Andreou says, referring to the popular laptop-sized motion-sensing device made by Microsoft, which sells for $150. The micro-Doppler gadget could cost as little as $5, he says, and be made small enough to fit in a pocket.

Aside from gaming, the device could be set up in living rooms or nursing homes to monitor people with disabilities and injuries, he adds. “It could detect whether they fell or if their gait has changed because of a small, unnoticed stroke.”

This article originally appeared in the Summer 2015 issue of Johns Hopkins Engineering magazine.
