CNS*2008 Workshop on
Methods of Information Theory in Computational Neuroscience
Wednesday and Thursday, July 23-24, 2008
Methods originally developed in information theory have found wide applicability in
computational neuroscience. Beyond these original methods, there is a need to
develop novel tools and approaches driven by problems arising in neuroscience.
A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.
The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience, to exchange ideas, and to present their latest work.
The workshop is targeted toward computational and systems neuroscientists with an interest in methods of information theory, as well as information and communication theorists with an interest in neuroscience.
Aurel A. Lazar, Department of Electrical Engineering, Columbia University
Alex Dimitrov, Center for Computational Biology, Montana State University.
Wednesday (2:00 PM - 5:00 PM), July 23, 2008
Afternoon Session (2:00 PM - 5:00 PM)
Information Representation and Neural Coding
Chair: Paul Sajda
2:00 PM - 2:50 PM
Information Theory and Neuroscience
Don H. Johnson and Ilan N. Goodman Department of Electrical Engineering, Rice University.
Information-theoretic methods offer insight into the coding-fidelity capabilities of simple neural populations. Using rate-distortion theory, we show how well populations can represent information. We also analyze the effect of spike-sorting errors on measures of population activity. Beyond theoretical predictions, new developments in information theory offer ways of analyzing data to discover network connectivity. We review these new techniques and indicate how they might be used to study population data.
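The rate-distortion quantities invoked here can be computed numerically with the classical Blahut-Arimoto algorithm. The sketch below is a generic illustration, not the authors' analysis: it evaluates one point on R(D) for a binary symmetric source under Hamming distortion, a case where the analytic answer R = 1 - H2(D) is available as a check.

```python
import math

def blahut_arimoto_rd(p_x, dist, beta, n_iter=200):
    """Blahut-Arimoto iteration for the rate-distortion function.

    p_x  : source distribution, list of floats
    dist : distortion matrix dist[x][y]
    beta : Lagrange multiplier trading rate against distortion
    Returns (rate in bits, expected distortion) at the slope set by beta.
    """
    nx, ny = len(p_x), len(dist[0])
    q_y = [1.0 / ny] * ny                      # reproduction marginal
    for _ in range(n_iter):
        # optimal test channel q(y|x) for the current marginal
        q_yx = [[q_y[y] * math.exp(-beta * dist[x][y]) for y in range(ny)]
                for x in range(nx)]
        for x in range(nx):
            z = sum(q_yx[x])
            q_yx[x] = [v / z for v in q_yx[x]]
        # re-estimate the reproduction marginal
        q_y = [sum(p_x[x] * q_yx[x][y] for x in range(nx)) for y in range(ny)]
    rate = sum(p_x[x] * q_yx[x][y] * math.log2(q_yx[x][y] / q_y[y])
               for x in range(nx) for y in range(ny) if q_yx[x][y] > 0)
    distortion = sum(p_x[x] * q_yx[x][y] * dist[x][y]
                     for x in range(nx) for y in range(ny))
    return rate, distortion

# Binary symmetric source, Hamming distortion; beta = ln 9 places the
# operating point at D = 0.1, where R = 1 - H2(0.1), about 0.531 bits.
R, D = blahut_arimoto_rd([0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]], math.log(9.0))
```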
2:50 PM - 3:40 PM
Temporally Diverse Firing Patterns in Olfactory Receptor Neurons Underlie Spatio-Temporal Neural Codes for Odors
Baranidharan Raman, National Institute of Child Health and Human Development, NIH, Bethesda, MD, and NIST.
Odorants are represented as spatio-temporal patterns of spiking in the antennal lobe (AL, insects) and the olfactory bulb (OB, mammals). We combined electrophysiological recordings in the locust with well-constrained computational models to examine how these neural codes for odors are generated. Extracellular recordings from the olfactory receptor neurons (ORNs) that provide input to the AL showed that the ORNs themselves can respond to odorants with reliable spiking patterns that vary both in strength (firing rate) and time course. A single ORN could respond with diverse firing patterns to different odors, and a single odorant could evoke differently structured responses in multiple ORNs. Further, odors could elicit responses in some ORNs that greatly outlasted the stimulus duration; other ORNs showed enduring inhibitory responses that fell well below baseline activity levels, or reliable sequences of inhibition and excitation. Thus, output from ORNs contains temporal structures that vary with the odor. The heterogeneous firing patterns of sensory neurons may, to a greater extent than presently understood, contribute to the production of complex temporal odor coding structures in the AL.
Our computational model of the first two stages of the olfactory system revealed that several well-described properties of odor codes previously believed to originate within the circuitry of the AL (odor-elicited spatio-temporal patterning of projection neuron (PN) activity, decoupling of odor identity from intensity, formation of fixed-point attractors for long odor pulses) appear to arise within the ORNs. To evaluate the contributions of the AL circuits, we examined subsequent processing of the ORN responses with a model of the AL network. The AL circuitry enabled the transient oscillatory synchronization of groups of PNs. Further, we found that the AL transformed information contained in the temporal dynamics of the ORN response into patterns that were more broadly distributed across groups of PNs, and more temporally complex because of GABAergic inhibition from local neurons. Because of this inhibition, and unlike odor responses in groups of ORNs, responses in groups of PNs decorrelated over time, allowing better use of the AL coding space. Thus, the principal role of the AL appears to be transforming spatio-temporal patterns in the ORNs into a new coding format, possibly to decouple otherwise conflicting odor classification and identification tasks.
Acknowledgements: Barani Raman is supported by a joint NIH-NIST postdoctoral fellowship award from the National Research Council. This is a joint work with Joby Joseph (equal contributor), Jeff Tang and Mark Stopfer (NICHD, NIH).
3:40 PM - 4:10 PM
4:10 PM - 5:00 PM
We investigate an architecture for the encoding, processing and decoding of sensory stimuli such as odors, natural and synthetic video streams (movies, animation), and sounds and speech. The stimuli are encoded with a population of spiking neurons, processed in the spike domain and finally decoded. The population of spiking neurons includes level-crossing as well as integrate-and-fire neuron models with feedback. A number of spike-domain processing algorithms are demonstrated, including faithful stimulus recovery, as well as simple operations on the original visual stimulus such as translations, rotations and zooming. All these operations are executed in the spike domain. Finally, the processed spike trains are decoded for the faithful recovery of the stimulus and its transformations.
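Published recovery algorithms for such time-encoding circuits reconstruct bandlimited stimuli exactly from the spike times; the toy sketch below illustrates only the weaker, easily verified fact that each interspike interval of an ideal integrate-and-fire encoder reveals the average of the stimulus over that interval. The bias, threshold and test stimulus are illustrative choices, not values from the talk.

```python
import math

def iaf_encode(u, t_end, dt=1e-5, bias=1.0, thresh=0.02):
    """Ideal integrate-and-fire encoder: integrate bias + u(t) and emit a
    spike (with reset preserving overshoot) at each threshold crossing."""
    spikes, y, t = [], 0.0, 0.0
    while t < t_end:
        y += (bias + u(t)) * dt
        if y >= thresh:
            spikes.append(t)
            y -= thresh
        t += dt
    return spikes

def iaf_interval_means(spikes, bias=1.0, thresh=0.02):
    """Between consecutive spikes the integral of bias + u(t) equals the
    threshold, so each interspike interval reveals the mean of u over it."""
    return [(thresh / (t2 - t1) - bias, 0.5 * (t1 + t2))
            for t1, t2 in zip(spikes, spikes[1:])]

u = lambda t: 0.5 * math.sin(2 * math.pi * t)   # toy slowly varying stimulus
spikes = iaf_encode(u, 1.0)
recon = iaf_interval_means(spikes)
# compare each recovered interval mean to the stimulus at the interval midpoint
err = max(abs(val - u(tm)) for val, tm in recon)
```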
Thursday (9:00 AM - 5:00 PM), July 24, 2008
Morning Session (9:00 AM - 12:00 noon)
Chair: W.B. Levy
9:00 AM - 9:50 AM
Computation by Neural and Cortical Systems
Robert L. Fry, System Engineering Group, Inc. and Johns Hopkins University, Columbia, MD 21046.
A general theory of computation is summarized based on the premise that both information and control are computationally defined and manipulated within the subjective frame of an intelligent system. It is then used to show how one can reverse engineer pyramidal neurons and cortical systems. In this view, pyramidal neurons must adapt and behave intelligently by optimizing their throughput by computationally matching their information acquisition and output decision rates. This has been shown to be accomplished through the information-theoretic construct of dual-matching.
The new aspect of the computational theory summarized and exploited in this paper is that information theory, thermodynamics, and a theory of intelligence have a common grounding in this general theory of computation. The engineering consequence is that constructs and concepts used in one domain can be exported and used in the others, including the Carnot cycle of thermodynamics and the principle of dual-matching in information theory.
This paper derives and captures the dynamics of pyramidal neurons as a Carnot cycle operating in refrigeration mode. Such systems reduce their entropy by acquiring information and then expend it when they make decisions. The neural Carnot cycle is best represented as a rectangle in temperature-entropy (T-S) space, which succinctly summarizes the computational dynamics of the neuron.
9:50 AM - 10:40 AM
Using Feedback Information Theory for Closed-Loop Neural Control in Brain-Machine Interfaces
Cyrus Omar, Miles Johnson, Tim Bretl and Todd P. Coleman, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.
We propose a complementary approach to the design of neural prosthetic interfaces that goes beyond the standard approach of estimating desired control signals from neural activity. We exploit the fact that, for a user's intended application, the dynamics of the prosthetic in fact affect subsequent desired control inputs. This closed-loop approach uses principles from stochastic control and feedback information theory. We illustrate its effectiveness both theoretically and experimentally, in terms of spelling words from a menu of characters with a non-invasive brain-computer interface.
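One way to picture such a closed loop is as posterior matching over a character menu: each query highlights roughly half the current posterior mass, the user's binary answer passes through a noisy (binary symmetric) channel, and a Bayesian update refines the belief. The sketch below is a hypothetical illustration of that principle, not the authors' experimental system; the alphabet, error rate and query budget are invented.

```python
import random

def spell_one_character(chars, target, eps=0.1, n_queries=40, seed=1):
    """Noisy bisection over a character menu with Bayesian belief updates.
    eps is the probability that the user's yes/no answer is flipped."""
    rng = random.Random(seed)
    post = {c: 1.0 / len(chars) for c in chars}
    for _ in range(n_queries):
        # greedily pick a subset covering about half the posterior mass
        subset, mass = set(), 0.0
        for c in sorted(post, key=post.get, reverse=True):
            if mass >= 0.5:
                break
            subset.add(c)
            mass += post[c]
        intended = target in subset                  # user's true answer
        observed = intended != (rng.random() < eps)  # channel may flip it
        for c in post:                               # Bayes update
            match = (c in subset) == observed
            post[c] *= (1 - eps) if match else eps
        z = sum(post.values())
        post = {c: p / z for c, p in post.items()}
    return max(post, key=post.get), post

chars = "abcdefghijklmnopqrstuvwxyz_"
guess, post = spell_one_character(chars, "d")
```

Even with a 10% flip rate, a few dozen queries concentrate the posterior almost entirely on the intended character.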
10:40 AM - 11:10 AM
11:10 AM - 12:00 noon
NeuroXidence: Reliable and Efficient Analysis of an Excess or Deficiency of Joint-Spike Events
Gordon Pipa, MIT and Massachusetts General Hospital, and Diek W. Wheeler, Wolf Singer and Danko Nikolić, Frankfurt Institute for Advanced Studies and Department of Neurophysiology, Max Planck Institute for Brain Research, Frankfurt am Main.
We present a non-parametric and computationally efficient method named NeuroXidence that detects coordinated firing of two or more neurons and tests whether the observed level of coordinated firing is significantly different from that expected by chance. The method considers the full auto-structure of the data, including the changes in the rate responses and the history dependencies in the spiking activity. Also, the method accounts for trial-by-trial variability in the dataset, such as the variability of the rate responses and their latencies. NeuroXidence can be applied to short data windows lasting only tens of milliseconds, which enables the tracking of transient neuronal states correlated to information processing. We demonstrate, on both simulated data and single-unit activity recorded in cat visual cortex, that NeuroXidence discriminates reliably between significant and spurious events that occur by chance.
Acknowledgements: The authors wish to thank Sonja Grün and Emery Brown for fruitful discussions on this project. Gordon Pipa would also like to thank his wife, Gabriela Pipa, and her family for their great support. This study was partially supported by the Hertie Foundation.
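The core logic of a surrogate test for excess joint-spike events can be sketched in a few lines. This is a simplified stand-in for NeuroXidence, which additionally handles multiple units, trial-by-trial variability and the full auto-structure of the data; here, small random displacements of one spike train preserve slow rate modulations while destroying millisecond-scale coordination, and the surrogate distribution of coincidence counts yields a p-value. All rates, window widths and jitter scales below are illustrative.

```python
import bisect
import random

def count_coincidences(a, b, delta=0.002):
    """Number of spikes in train a with at least one spike of b within delta s."""
    b = sorted(b)
    n = 0
    for t in a:
        i = bisect.bisect_left(b, t - delta)
        if i < len(b) and b[i] <= t + delta:
            n += 1
    return n

def jitter_pvalue(a, b, delta=0.002, jitter=0.02, n_surr=200, seed=0):
    """Jitter-surrogate significance test for excess joint-spike events."""
    rng = random.Random(seed)
    observed = count_coincidences(a, b, delta)
    exceed = 0
    for _ in range(n_surr):
        surr = [t + rng.uniform(-jitter, jitter) for t in b]
        if count_coincidences(a, surr, delta) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_surr + 1)

# two 10 s trains sharing 50 injected joint spikes plus independent background
rng = random.Random(42)
T = 10.0
shared = [rng.uniform(0, T) for _ in range(50)]
train1 = sorted(shared + [rng.uniform(0, T) for _ in range(50)])
train2 = sorted(shared + [rng.uniform(0, T) for _ in range(50)])
obs, p = jitter_pvalue(train1, train2)
```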
12:00 PM - 2:00 PM
Afternoon Session (2:00 PM - 5:00 PM)
Chair: Baranidharan Raman
2:00 PM - 2:50 PM
Searching for Information/Energy Optimal Codes
William B. Levy, Laboratory for Systems Neurodynamics, University of Virginia.
Probability distributions with sufficient statistics would seem to characterize any optimal distribution of signaling between neurons of a system, regardless of the neurally plausible code being used; here, neurally plausible means instantly decodable by adding up inputs. We consider interpulse interval codes. The importance of a sufficient statistic will be described and one form discussed. Energy optimizations will then be considered. Regarding the distributions produced by the two approaches, some overlap is required (i.e., a distribution that fits both optimization schemes). A set of distributions, or even a single distribution, that fits experimental data remains elusive.
2:50 PM - 3:40 PM
Optimal Computation with Probabilistic Population Codes
Jeff Beck, Computational Cognitive Neuroscience Laboratory, University of Rochester.
Human behavior has been shown to be optimal in a Bayesian/Laplacian (1) sense. This kind of optimality requires a neural code which represents probability distributions in a way which allows the operations of probabilistic inference to be implemented via biologically plausible operations. Within the Probabilistic Population Coding (PPC) framework, it will first be argued that optimal neural computation implies a strong relationship between the neural operations which implement probabilistic computation and the statistics of neural activity. As an example, it will then be shown that when the statistics of stimulus-conditioned neural activity are Poisson-like, a recurrent neural network which can implement linear combinations of neural activity, quadratic non-linearities (and/or coincidence detection), and divisive normalization is sufficient to implement the three basic operations of probabilistic inference: evidence integration, marginalization of nuisance parameters, and parameter estimation/action selection in a wide variety of behaviorally relevant paradigms. As a concrete example, I will present a spike-based neural code which tracks the posterior distribution of a particle in Brownian motion in a quadratic potential (i.e., implements a Kalman filter) and then optimally generates motor commands for smooth pursuit.
(1) Though Bayes is widely credited with the discovery of the rule that bears his name, no direct reference to that rule can be found in his work. Indeed, there is evidence that Bayes was more concerned with reward-maximizing decision making, and that the form of probabilistic inference currently labeled Bayesian was best (if not first) elucidated by Laplace in his Philosophical Essay on Probabilities: http://ba.stat.cmu.edu/journal/2006/vol01/issue01/fienberg.pdf
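The claim that Poisson-like statistics make evidence integration linear can be checked directly in a toy setting: for independent Poisson neurons with shared Gaussian tuning curves and a flat prior, summing the spike counts of two populations yields exactly the product of their individual posteriors. The tuning parameters and spike counts below are arbitrary choices for illustration, not values from the talk.

```python
import math

def log_post(r, centers, s_grid, gain=2.0, width=1.0):
    """Log posterior over a stimulus grid for independent Poisson neurons
    with Gaussian tuning curves and a flat prior."""
    out = []
    for s in s_grid:
        f = [gain * math.exp(-(s - c) ** 2 / (2 * width ** 2)) for c in centers]
        out.append(sum(ri * math.log(fi) - fi for ri, fi in zip(r, f)))
    return out

def normalize(logp):
    m = max(logp)
    w = [math.exp(v - m) for v in logp]
    z = sum(w)
    return [v / z for v in w]

centers = [i * 0.5 for i in range(-8, 9)]        # preferred stimuli
s_grid = [i * 0.05 for i in range(-60, 61)]
r1 = [0, 0, 0, 1, 0, 2, 3, 1, 4, 2, 1, 0, 1, 0, 0, 0, 0]  # counts, pop. 1
r2 = [0, 0, 1, 0, 2, 1, 2, 3, 2, 3, 0, 1, 0, 0, 0, 0, 0]  # counts, pop. 2

p1 = log_post(r1, centers, s_grid)
p2 = log_post(r2, centers, s_grid)
# summed counts, with doubled gain to account for observing two populations
combined = normalize(log_post([a + b for a, b in zip(r1, r2)],
                              centers, s_grid, gain=4.0))
product = normalize([a + b for a, b in zip(p1, p2)])
err = max(abs(x - y) for x, y in zip(combined, product))
```

The two posteriors agree to numerical precision, which is the linear-combination property the PPC argument relies on.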
3:40 PM - 4:10 PM
4:10 PM - 5:00 PM
Perceptual Decision Making via Sparse Decoding of Neural Activity from a Spiking Neuron Model of V1
Paul Sajda, Department of Biomedical Engineering, Columbia University.
Recent empirical evidence supports the hypothesis that invariant visual object recognition might result from non-linear encoding of the visual input followed by linear decoding. This hypothesis has received theoretical support through the development of neural network architectures which are based on a non-linear encoding of the input via recurrent network dynamics followed by a linear decoder.
In this talk we will consider such an architecture in which the visual input is non-linearly encoded by a biologically realistic spiking model of V1 and mapped to a perceptual decision via a sparse linear decoder. What is novel is that we 1) utilize a large-scale conductance-based spiking neuron model of V1 which has been well characterized in terms of classical and extra-classical response properties, and 2) use the model to investigate decoding over a large population of neurons (>1,000) under diverse biological constraints (e.g. magnocellular vs. parvocellular architectures). We compare the decoding performance of the model system to human performance by comparing neurometric and psychometric curves. We find that a recurrently connected V1-type encoding followed by a sparse linear decoder can exceed human behavioral performance in decoding accuracy.
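A sparse linear decoder of the kind described is commonly fit with the lasso objective. The sketch below uses plain proximal gradient descent (ISTA) on synthetic "population activity" rather than output of the V1 model, so all dimensions, coefficients and the regularization weight are illustrative.

```python
import random

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return x - t if x > t else (x + t if x < -t else 0.0)

def sparse_decode(X, y, lam=0.1, n_iter=1000):
    """ISTA for the lasso: minimize 0.5*||Xw - y||^2 + lam*||w||_1."""
    n, d = len(X), len(X[0])
    step = 1.0 / sum(v * v for row in X for v in row)  # safe 1/L bound
    w = [0.0] * d
    for _ in range(n_iter):
        resid = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) for j in range(d)]
        w = [soft(w[j] - step * grad[j], step * lam) for j in range(d)]
    return w

# synthetic "population activity": 60 trials, 20 neurons, 2 informative ones
rng = random.Random(7)
n, d = 60, 20
X = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
w_true = [0.0] * d
w_true[3], w_true[11] = 2.0, -1.5
y = [sum(X[i][j] * w_true[j] for j in range(d)) for i in range(n)]
w_hat = sparse_decode(X, y)
```

The decoder recovers the two informative units and drives most other weights to exactly zero, which is the sparsity property the talk exploits.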