Leveraging Neuroscience to Give Machines Human-Like Information Processing Capability

By
Amy Biemiller
February 26, 2014

The human brain remains a more remarkable machine than any man-made computer, particularly in its ability to process sensory input efficiently under challenging conditions. Because of this remarkable computing capacity, humans can build an accurate auditory picture of their surroundings by separating simultaneous acoustic signals, such as a car radio announcer's voice, the ambient sounds of traffic, and an emergency vehicle's siren, and can make split-second decisions based on that input.
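
This separation task, often called the cocktail-party problem, can be illustrated in machine terms with a toy time-frequency masking sketch. The example below is only an illustration with invented signal parameters, not a description of Mesgarani's algorithms: it separates two synthetic tones standing in for a "voice" and a "siren" by assigning each spectrogram bin to one source.

```python
# Toy illustration (not Mesgarani's method): separating two overlapping
# signals with a time-frequency mask, a machine-side analogue of the
# auditory stream segregation described above. All signal parameters
# here are invented for the example.
import numpy as np
from scipy.signal import stft, istft

fs = 16000                     # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)  # one second of audio

low = np.sin(2 * np.pi * 220 * t)    # low tone, stand-in for a voice
high = np.sin(2 * np.pi * 3000 * t)  # high tone, stand-in for a siren
mixture = low + high                 # what the "ear" receives

# Move to the time-frequency domain, where the two sources barely overlap.
f, frames, Z = stft(mixture, fs=fs, nperseg=512)

# Crude binary mask: assign each frequency bin to one source.
mask = (f[:, None] < 1000)           # True -> low source
_, est_low = istft(np.where(mask, Z, 0), fs=fs, nperseg=512)
_, est_high = istft(np.where(mask, 0, Z), fs=fs, nperseg=512)

# Rough check of how much of each original signal was recovered.
n = min(len(est_low), len(low))
for name, est, ref in [("low", est_low, low), ("high", est_high, high)]:
    err = np.mean((est[:n] - ref[:n]) ** 2) / np.mean(ref[:n] ** 2)
    print(f"{name}: relative reconstruction error = {err:.3f}")
```

Real speech mixed with traffic noise overlaps far more heavily in time and frequency than these two tones, which is part of why the problem remains hard for machines.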

While seemingly trivial for humans, the ability to sense and perceive the acoustic environment under a multitude of conditions has proved extremely difficult to model and implement in machines. With extensive training in engineering and neuroscience, Nima Mesgarani, assistant professor of electrical engineering, is uniquely qualified to investigate how these processes work in the brain and apply the findings to a wide range of speech processing algorithms.

“I want to understand what computation is done in the brain when we listen to speech, and how to formulate mathematical models to explain the brain signals,” he says. “Answers to these questions will inform work in building an interface to connect brain signals to machines. Although challenging, this hybrid approach affords us novel perspectives and allows us to apply a broad range of expertise to the challenge.”
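One common way to "formulate mathematical models to explain the brain signals" is an encoding model that predicts a recorded neural response from the spectrogram of the sound being heard. The sketch below uses simulated data and ridge-regularized linear regression; it is a generic illustration of that modeling family, with invented parameters, and not the specific models developed in this research.

```python
# Minimal sketch of a linear "encoding model" relating a sound's spectrogram
# to a brain signal. Data are simulated; the model form (ridge regression over
# lagged spectrogram frames) is one standard choice, offered here only as an
# illustration of the general approach.
import numpy as np

rng = np.random.default_rng(0)

n_time, n_freq, n_lags = 2000, 32, 10   # time bins, frequency channels, history length
spec = rng.random((n_time, n_freq))     # stand-in stimulus spectrogram

# Design matrix of lagged spectrogram frames (the model's spectrotemporal window).
X = np.hstack([np.roll(spec, lag, axis=0) for lag in range(n_lags)])

# Hidden spectrotemporal filter used to generate the simulated "neural" response.
true_filter = rng.normal(size=X.shape[1])
response = X @ true_filter + rng.normal(scale=0.5, size=n_time)

# Ridge regression: w = (X^T X + lambda * I)^(-1) X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ response)

# How well the fitted model explains the signal.
pred = X @ w
r = np.corrcoef(pred, response)[0, 1]
print(f"prediction-response correlation: {r:.3f}")
```

Fitting such a model in the forward direction characterizes what acoustic features a neural signal encodes; running the mapping in reverse is one route toward the brain-machine interfaces the quote alludes to.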

Mesgarani’s interdisciplinary approach to the development of human-like information processing applies engineering and neuroscience to reverse-engineer how the brain perceives and processes complex sounds in order to emulate human ability in machines. He expects his research, which has been published in journals such as Nature and Science, to lead to novel speech processing algorithms for automated systems, and innovative methods for brain-machine interfaces and neural prosthetics.

“Our research answers fundamental questions at the juncture of these disciplines and provides insight into the possible mechanisms underlying various speech disorders,” he explains. “These findings could result in prosthetic devices that can restore speech in people who have lost their ability to speak.”

Mesgarani joined Columbia Engineering in 2013 after earning his PhD in electrical engineering from the University of Maryland. He completed his postdoctoral research at the Center for Language and Speech Processing (CLSP) at Johns Hopkins University, and in the Department of Neurological Surgery at the University of California, San Francisco. He is a member of the honor society of Phi Kappa Phi, the Society for Neuroscience, the Association for Research in Otolaryngology, and the Institute of Electrical and Electronics Engineers.

BSc, Sharif University of Technology, Tehran, Iran, 1999; PhD, University of Maryland, 2008