Sensing and Computation for Autonomous Surveillance Systems – Andreas Andreou (Johns Hopkins University)

Abstract
Sensing and decoding human motor activity into a symbolic representation yields information about the state of mind of individuals and their future actions and interactions within their environment. Automatic speech recognition by machines, the decoding of the signals produced by the human vocal apparatus, is one such task, long recognized as a fundamental problem of interest to the Department of Defense and the commercial sector. In this talk, I will discuss research in my lab on sensing and computational systems capable of solving this problem while operating truly autonomously on energy-aware hardware, with the aim of matching or exceeding the cognitive capabilities of humans. I make the discussion concrete by presenting the architecture of the JHU-ASU (Johns Hopkins University Autonomous/Acoustic Surveillance Unit), a prototype hardware platform developed to explore algorithms and architectures, as well as to test custom hardware systems, for autonomous surveillance sensing and computation. The surveillance problem is solved hierarchically by answering three questions: Is there anything interesting in the environment? Where is it? What is it? To answer them, we exploit a multitude of passive and active sensors, including a continuous-wave acoustic radar, in a distributed network.
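The hierarchical structure of such a pipeline can be made concrete with a minimal sketch. The sketch below is purely illustrative and is not the JHU-ASU implementation: it assumes a hypothetical two-microphone sensor node and uses toy stand-ins for the three stages (an energy detector for "is there anything?", a cross-correlation time-difference-of-arrival bearing estimate for "where is it?", and a spectral-centroid rule for "what is it?"); the thresholds and parameters are placeholders.

```python
import numpy as np

# Hypothetical parameters for illustration only.
ENERGY_THRESHOLD = 1e-3   # detection gate on mean frame energy
SPEED_OF_SOUND = 343.0    # m/s
MIC_DISTANCE = 0.2        # m between the two assumed microphones

def detect(frame):
    """Stage 1: is there anything interesting? Simple energy detector."""
    return np.mean(frame ** 2) > ENERGY_THRESHOLD

def localize(left, right, fs):
    """Stage 2: where is it? Bearing from the time difference of
    arrival between two microphones, estimated by cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # samples of delay
    tdoa = lag / fs                            # seconds of delay
    # Clamp to the physically valid range before taking arcsin.
    sin_theta = np.clip(tdoa * SPEED_OF_SOUND / MIC_DISTANCE, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

def classify(frame, fs):
    """Stage 3: what is it? Toy spectral-centroid rule standing in
    for a real classifier."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return "speech-like" if 100.0 < centroid < 4000.0 else "other"

def surveillance_step(left, right, fs):
    """Run the hierarchy: later, costlier stages execute only when the
    cheap detection gate fires, which is what makes the organization
    attractive for energy-aware sensor hardware."""
    if not detect(left):
        return None
    return {"bearing_deg": localize(left, right, fs),
            "label": classify(left, fs)}
```

The design point the sketch tries to capture is that the cheapest question is asked first, so an idle node spends almost all of its energy on the detection gate and wakes the more expensive localization and classification stages only on demand.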

Center for Language and Speech Processing