April 5, 2013
Interschool Lab, 750 CEPSR
Hosted by: Prof. Aurel Lazar
Speaker: Dr. Nima Mesgarani, Postdoctoral Scholar (University of California, San Francisco)
Abstract: The brain empowers humans and other animals with remarkable abilities to sense and perceive their acoustic environment in highly degraded conditions. These seemingly trivial tasks for humans have proven extremely difficult to model and implement in machines. One crucial limiting factor has been the need for a deep interaction between two very different disciplines: neuroscience and engineering. In this talk, I will present results of an interdisciplinary research effort to address the following fundamental questions: 1) What computation is performed in the brain when we listen to complex sounds? 2) How could this computation be modeled and implemented in computational systems? 3) How could one build an interface to connect brain signals to machines? I will present results from recent invasive neural recordings in human auditory cortex that show a distributed representation of speech in auditory cortical areas. This representation remains unchanged even when an interfering speaker is added, as if the second voice is filtered out by the brain. In addition, I will show how this knowledge has been successfully incorporated into novel automatic speech processing applications, which have been used by DARPA and other agencies for their superior performance. Finally, I will demonstrate how speech can be read directly from the brain, which may eventually allow people who have lost the ability to speak to communicate. This integrated research approach leads to better scientific understanding of the brain, innovative computational algorithms, and a new generation of Brain-Machine Interfaces.
Speaker Bio: Nima Mesgarani is a postdoctoral scholar in the neurosurgery department at UC San Francisco. He received his Ph.D. in Electrical Engineering from the University of Maryland, College Park, and was a postdoctoral scholar at Johns Hopkins University prior to joining UCSF. His research interests are in human-like information processing of acoustic signals at the interface of engineering and brain science. His goal is to develop an interdisciplinary research program that bridges the gap between these two very different disciplines by reverse engineering the signal processing in the brain, which in turn inspires novel approaches to emulating human abilities in machines. This integrated research approach leads to better scientific understanding of the brain, novel speech processing algorithms for automated systems, and a new generation of Brain-Machine Interfaces and neural prostheses.