Efficient Coding of Natural Sounds
Michael S. Lewicki, Center for the Neural Basis of Cognition, Carnegie Mellon University
September 26, 2000
Much is known about how the brain encodes sensory information, but why it has evolved to use the particular coding strategies it does has been the subject of long-standing interest and debate. The responses of auditory nerve fibers share some filtering properties with Fourier and wavelet transforms, which are widely used in speech and signal processing; but because these transforms were derived largely from intuitions about signal structure, they provide only limited theoretical justification for auditory sensory coding. Theories of sensory coding based on the idea of maximizing information transmission and eliminating statistical redundancy from the raw sensory signal have been successful in explaining several properties of neural responses in the visual system, including the population of receptive fields in visual cortex, but it is not known whether these theories can also explain sensory coding in the auditory system. In this talk I will show how efficient coding of natural sounds yields filtering properties similar to those of auditory nerve fibers, and can resemble both wavelet and Fourier transforms depending on the class of sounds for which the derived sensory code is optimized. These results provide evidence that the neural coding of auditory signals approaches an information-theoretic optimum, and further support the hypothesis that efficient coding could provide a general principle of sensory coding.
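The redundancy-reduction idea behind this line of work can be sketched in code. A minimal illustration, not the author's actual method: independent component analysis (ICA) is one standard way to derive an efficient code, and the rows of its learned unmixing matrix act as filters adapted to the statistics of the input. The sketch below uses scikit-learn's FastICA on short windows of a synthetic signal; the synthetic "sound" (decaying sinusoids at random times and frequencies) and all parameter choices are assumptions for illustration only, standing in for real natural-sound recordings.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-in for a natural sound: a sum of decaying sinusoids
# at random onset times and frequencies (real work would use recordings).
rng = np.random.default_rng(0)
fs = 8000                                  # sample rate in Hz (assumed)
signal = np.zeros(fs * 4)
for _ in range(200):
    t0 = rng.integers(0, len(signal) - 256)
    f = rng.uniform(100.0, 2000.0)
    t = np.arange(256) / fs
    signal[t0:t0 + 256] += np.sin(2 * np.pi * f * t) * np.exp(-50.0 * t)

# Slice the signal into overlapping 64-sample windows: the "raw
# sensory signal" to which the code is adapted.
win = 64
X = np.array([signal[i:i + win]
              for i in range(0, len(signal) - win, win // 2)])

# ICA seeks statistically independent components; the learned unmixing
# rows play the role of filters derived by efficient coding.
ica = FastICA(n_components=32, whiten="unit-variance",
              random_state=0, max_iter=500)
ica.fit(X)
filters = ica.components_                  # shape: (32, 64)
print(filters.shape)
```

For sounds dominated by transients, filters learned this way tend to be localized in time (wavelet-like); for harmonic, stationary sounds they tend toward sinusoidal, Fourier-like shapes, which is the dependence on sound class the abstract describes.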
Michael S. Lewicki received his BS degree in mathematics and cognitive science in 1989 from Carnegie Mellon University. He received his PhD degree in computation and neural systems from the California Institute of Technology in 1996. From 1996 to 1998, he was a postdoctoral fellow in the Computational Neurobiology Laboratory at the Salk Institute.
Dr. Lewicki is currently an assistant professor in the Computer Science Department at Carnegie Mellon University and in the CMU-University of Pittsburgh Center for the Neural Basis of Cognition. His research involves the study and development of computational approaches to the representation, processing, and learning of pattern structure in natural visual and acoustic environments.