Research

Many insects, including the fruit fly Drosophila melanogaster, have developed highly sensitive olfactory systems that allow them to avoid predators, find food, and select and localize mating partners. To perform such tasks in an intrinsically dynamic world, the fruit fly must carry out computations on time-varying olfactory signals in real time. Although the anatomical structure of Drosophila's olfactory system is well understood, the precise signal-processing role of the associated neural circuits remains to be elucidated. There is thus a fundamental need to reverse engineer the early olfactory system of Drosophila.

In the course of our experimental and theoretical work, my collaborators and I have identified and sought to resolve the following three major obstacles in reverse engineering the olfactory system of Drosophila: (i) the lack of a precise and reproducible odor delivery system (the input); (ii) the inability to record in-vivo from a large population of neurons in a single animal in response to the same stimulus (the output); and (iii) the lack of a strong theoretical foundation for identifying dendritic processing (the reverse-engineering algorithm).

Identifying Dendritic Processing in Spiking Neural Circuits

We pioneered a novel theoretical approach for estimating receptive fields in neural circuit models that incorporate biophysical spike-generating mechanisms (e.g., the Hodgkin-Huxley neuron) and admit both continuous sensory signals and multidimensional spike trains as input stimuli. Our methodology explicitly takes into account the highly nonlinear nature of spike generation, which has been shown to result in significant interactions between stimulus features and to fundamentally affect the estimation of receptive fields. Furthermore, in contrast to many existing methods, our approach estimates receptive fields directly from the spike times produced by a neuron, thereby obviating the need to repeat experiments in order to compute the neuron's instantaneous rate of response (e.g., the PSTH). The test signals we employ belong to spaces of bandlimited functions and bridge the gap between identification using synthetic stimuli and identification using naturalistic stimuli. This makes our methodology particularly attractive in sensory modalities (most notably olfaction) where it is difficult to produce stimuli that are white or have a particular distribution or other attributes.
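As a minimal illustration of this style of identification, the sketch below simulates an ideal integrate-and-fire neuron driven by a bandlimited test signal passed through a hypothetical dendritic kernel, and recovers that kernel directly from a single spike train via the neuron's t-transform (the integral of the filter output over each interspike interval equals the threshold minus the bias contribution). All parameter values, the gamma-shaped kernel, and the sinc-basis representation are illustrative assumptions, not the published setup:

```python
import numpy as np

# Illustrative parameters only; not the published experimental values.
rng = np.random.default_rng(0)
dt, T = 1e-4, 4.0
Omega = 2 * np.pi * 100.0        # stimulus bandwidth (rad/s)

# Bandlimited test stimulus: a random sum of sinusoids below Omega.
M = 20
freqs = rng.uniform(1.0, 95.0, M) * 2 * np.pi
phases = rng.uniform(0, 2 * np.pi, M)
amps = rng.normal(size=M) / np.sqrt(M)

def u(t):                        # stimulus, evaluable at any time
    return np.cos(np.multiply.outer(np.asarray(t, float), freqs) + phases) @ amps

def U(t):                        # antiderivative of u (exact)
    return (np.sin(np.multiply.outer(np.asarray(t, float), freqs) + phases) / freqs) @ amps

# "Dendritic" filter to be identified: causal gamma-like kernel on [0, S].
tau, S = 0.015, 0.1
t_h = np.arange(int(round(S / dt))) * dt
h = (t_h / tau**2) * np.exp(-t_h / tau)

# Filter output v = u * h, then ideal IAF encoding (reset by subtraction).
n_h, n_T = len(t_h), int(round(T / dt))
u_ext = u(np.arange(-n_h, n_T) * dt)
v = np.convolve(u_ext, h)[n_h:n_h + n_T] * dt

b, delta = 1.5, 0.01             # bias and threshold (capacitance C = 1)
y, spikes = 0.0, []
for j in range(n_T):
    y_new = y + (b + v[j]) * dt
    if y_new >= delta:           # interpolate the threshold-crossing time
        spikes.append((j + (delta - y) / (y_new - y)) * dt)
        y_new -= delta
    y = y_new
spikes = np.array(spikes)

# t-transform: the integral of v over each interspike interval is known.
t0, t1 = spikes[:-1], spikes[1:]
q = delta - b * (t1 - t0)

# Represent the (projected) filter in a sinc basis at Nyquist spacing and
# solve the resulting linear system for the basis coefficients.
Ts = np.pi / Omega
s = np.arange(int(round(S / Ts)) + 1) * Ts
Phi = U(t1[:, None] - s[None, :]) - U(t0[:, None] - s[None, :])
c, *_ = np.linalg.lstsq(Phi, q, rcond=None)

h_rec = ((Omega / np.pi) * np.sinc(Omega * (t_h[:, None] - s[None, :]) / np.pi)) @ c
rel_err = np.linalg.norm(h_rec - h) / np.linalg.norm(h)
```

Because every measurement comes from a single spike train, no repeated trials or PSTH estimation are needed; the recovery converges to the projection of the kernel onto the space of test stimuli.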

[1] A. A. Lazar, Y. B. Slutskiy and Y. Zhou, Massively Parallel Neural Circuits for Stereoscopic Color Vision: Encoding, Decoding and Identification, Neural Networks: Mathematical and Computational Analysis, in press
[2] A. A. Lazar and Y. B. Slutskiy, Spiking Neural Circuits with Dendritic Stimulus Processors: Encoding, Decoding, and Identification in Reproducing Kernel Hilbert Spaces, Journal of Computational Neuroscience, July 2014
[3] A. A. Lazar and Y. B. Slutskiy, Multisensory Encoding, Decoding, and Identification, Advances in Neural Information Processing Systems 26, pp. 3208–3216, December 2013
[4] A. A. Lazar and Y. B. Slutskiy, Functional Identification of Spike-Processing Neural Circuits, Neural Computation, MIT Press, August 2013
[5] A. A. Lazar and Y. B. Slutskiy, Channel Identification Machines, Journal of Computational Intelligence and Neuroscience, Volume 2012, pp. 1–20, July 2012
[6] A. A. Lazar and Y. B. Slutskiy, Identifying Dendritic Processing, Advances in Neural Information Processing Systems 23, pp. 1261–1269, December 2010

Using a channel identification machine to identify a moving spatiotemporal receptive field from spike times produced by an integrate-and-fire neuron.

Using a channel identification machine to identify a rotating spatiotemporal receptive field from spike times produced by an integrate-and-fire neuron.

A multisensory time encoding machine (mTEM) consisting of 9,000 neurons was used to encode simultaneous streams of natural audio and video into a common pool of spikes.

Each neuron was modeled as an integrate-and-fire neuron with two receptive fields: a non-separable spatiotemporal receptive field for video stimuli and a temporal receptive field for audio stimuli. Thus, two distinct stimuli with different dimensions (three for video, one for audio) and dynamics (2–4 cycles vs. 4,000 cycles in each direction) were multiplexed at the level of every spiking neuron and encoded into a single unlabeled sequence of spikes.
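The multiplexing principle can be sketched for a single neuron with two purely temporal receptive fields (a deliberate simplification: the actual mTEM neurons use a non-separable spatiotemporal receptive field for video). All signals, kernels, and parameters below are illustrative assumptions:

```python
import numpy as np

# Illustrative single-neuron sketch of multisensory multiplexing;
# both receptive fields are reduced to 1-D temporal kernels.
rng = np.random.default_rng(1)
dt, T = 1e-4, 1.0
n = int(round(T / dt))
t = np.arange(n) * dt

# Two stimuli with very different dynamics: a slow "video"-like signal
# (a few cycles) and a fast "audio"-like signal (hundreds of cycles).
video = np.cos(2 * np.pi * 3 * t)
audio = np.cos(2 * np.pi * 400 * t + rng.uniform(0, 2 * np.pi))

# Two temporal receptive fields feeding the same neuron.
tk = np.arange(int(round(0.05 / dt))) * dt
h_vid = np.exp(-tk / 0.02) / 0.02                                   # slow, integrating
h_aud = np.exp(-tk / 0.002) / 0.002 - np.exp(-tk / 0.004) / 0.004   # fast, biphasic

# The two filter outputs are summed into a single membrane current.
drive = (np.convolve(video, h_vid)[:n] + np.convolve(audio, h_aud)[:n]) * dt

# Ideal IAF encoder: both streams are multiplexed into one spike train.
b, delta = 2.0, 0.005
y, spikes = 0.0, []
for j in range(n):
    y += (b + drive[j]) * dt
    if y >= delta:
        spikes.append(t[j])
        y -= delta
spikes = np.array(spikes)
```

The resulting spike train is unlabeled: no individual spike can be attributed to the audio or the video stream alone, which is exactly why decoding must treat the two stimuli jointly.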

The mTEM produced a total of 360,000 spikes in response to a 6-second-long video of Albert Einstein explaining the mass-energy equivalence formula E=mc^2: "... [a] very small amount of mass may be converted into a very large amount of energy." A multisensory time decoding machine (mTDM) was then used to reconstruct the video and audio stimuli from the produced set of spikes. The first and third columns in this demo show the original (top row) and recovered (middle row) video and audio, respectively, together with the absolute error between them (bottom row). The second and fourth columns show the corresponding amplitude spectra of all signals.
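A single-channel version of such a decoder can be sketched as follows: an ideal IAF neuron encodes a bandlimited signal, and the signal is then reconstructed from the spike times alone by solving a linear system built from sinc kernels centered on the interspike intervals. Parameters and kernel choices are illustrative, not those of the mTDM demo:

```python
import numpy as np

# Illustrative single-channel time decoding machine (TDM) sketch.
rng = np.random.default_rng(2)
Omega = 2 * np.pi * 50.0          # stimulus bandwidth (rad/s)
T, dt = 1.0, 1e-4

# Bandlimited stimulus: random sum of sinusoids below Omega.
M = 10
fr = rng.uniform(1.0, 45.0, M) * 2 * np.pi
ph = rng.uniform(0, 2 * np.pi, M)
am = rng.normal(size=M) / np.sqrt(M)
def u(t):
    return np.cos(np.multiply.outer(np.asarray(t, float), fr) + ph) @ am

# Ideal IAF encoding with interpolated threshold crossings.
b, delta = 2.5, 0.008
t_grid = np.arange(int(round(T / dt))) * dt
ug = u(t_grid)
y, spikes = 0.0, []
for j in range(len(t_grid)):
    y_new = y + (b + ug[j]) * dt
    if y_new >= delta:
        spikes.append((j + (delta - y) / (y_new - y)) * dt)
        y_new -= delta
    y = y_new
spikes = np.array(spikes)

# t-transform: q_k = integral of u over the k-th interspike interval.
q = delta - b * np.diff(spikes)
mid = 0.5 * (spikes[:-1] + spikes[1:])        # sinc-kernel centres

g = lambda t: (Omega / np.pi) * np.sinc(Omega * np.asarray(t) / np.pi)

# G[j, k] = integral_{t_j}^{t_{j+1}} g(t - mid_k) dt (trapezoid rule).
K = len(mid)
G = np.empty((K, K))
for j in range(K):
    pts = np.linspace(spikes[j], spikes[j + 1], 20)
    vals = g(pts[:, None] - mid[None, :])
    G[j] = ((vals[1:] + vals[:-1]) * 0.5 * np.diff(pts)[:, None]).sum(axis=0)

c, *_ = np.linalg.lstsq(G, q, rcond=1e-8)
u_rec = g(t_grid[:, None] - mid[None, :]) @ c

# Compare away from the window edges (finite-window sinc artifacts).
mask = (t_grid > 0.2) & (t_grid < 0.8)
rel_err = np.linalg.norm(u_rec[mask] - ug[mask]) / np.linalg.norm(ug[mask])
```

Reconstruction is possible because the average spike rate here exceeds the Nyquist rate of the stimulus; with fewer spikes, the linear system becomes under-determined and the recovered signal degrades, consistent with the dependence on spike count noted below.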


This video demonstrates that evaluation of an identification algorithm can be performed in the stimulus space. The quality of stimulus reconstruction depends on the number of spikes used in identifying the circuit parameters.

A massively parallel neural circuit consisting of 30,000 integrate-and-fire (IAF) neurons with color receptive fields was used to encode color video covering a screen size of 160×90 pixels. The underlying spatiotemporal receptive fields were non-separable in each of their color components and resembled 2D Gabor filters rotating in time. All receptive fields were functionally identified using the same natural video with a duration of 200 seconds.

A novel stimulus (the bee video shown here) encoded by the underlying circuit was recovered using the identified circuit parameters and the set of spikes generated by the underlying circuit. The decoding quality depends on how well the circuit is identified. As identification quality increases (more spikes are used), the quality of reconstruction converges to that of reconstruction using the underlying circuit parameters (known receptive fields/filters).

Signal Processing in the Early Olfactory System of Drosophila

The lack of precise stimulus delivery and measurement systems has fundamentally limited the progress of reverse engineering neural circuits in olfaction. To address this limitation, we have developed an extremely precise and versatile odor delivery system with the capability to measure odorant concentration in real time. The odor delivery system allows one to design arbitrary odorant concentration profiles and to control airborne odorants in a precise and reproducible fashion (1% trial-to-trial tolerance). We are currently using this groundbreaking experimental setup to study the processing of time-varying olfactory signals in the early olfactory system of Drosophila [1-3]. This research is performed in collaboration with Dr. Richard Axel in the Axel Laboratory at the College of Physicians and Surgeons, Columbia University.

[1] A. J. Kim, A. A. Lazar and Y. B. Slutskiy, System Identification of Drosophila Olfactory Sensory Neurons, Journal of Computational Neuroscience, Vol. 30, No. 1, pp. 143–161, 2011, Special Issue on Methods of Information Theory
[2] A. J. Kim, A. A. Lazar and Y. B. Slutskiy, Drosophila Projection Neurons Encode the Acceleration of Time-Varying Odor Waveforms, Computational and Systems Neuroscience Meeting (COSYNE 2011), Salt Lake City, UT, February 2011, oral presentation
[3] A. J. Kim, A. A. Lazar and Y. B. Slutskiy, 2D Encoding of Concentration and Concentration Gradient in Drosophila ORNs, Computational and Systems Neuroscience Meeting (COSYNE 2010), Salt Lake City, UT, February 2010

Simultaneous in-vivo recordings of the responses of Or59b olfactory sensory neurons to a time-varying odorant waveform.

In-vivo recording of the response of a DM4 projection neuron to a staircase odorant waveform.

Nanophotonic Brain Machine Interfaces

The aforementioned advances in theory and experimentation demonstrated that the early olfactory system of Drosophila is much more sophisticated than previously thought, underscoring the need to record from a large population of neurons simultaneously and in-vivo. In collaboration with Dr. Dirk R. Englund and the Laboratory for Quantum Photonics, we are developing nanophotonic brain machine interfaces that will pave the way for neural sensing and recording from a large population of neurons simultaneously. This will radically change our ability to characterize and model neural circuits in fruit flies.