Wednesday, October 19, 2011

Boltzmann Machines Allow A Generalized Entropy Calculation

Information theory provides a way to understand neural activity patterns (spike trains). Most information-theoretic analyses hinge on calculating the Shannon entropy of a sequence of action potentials. (Note: I will follow the convention of calling Shannon entropy just entropy, and of using the word entropy interchangeably with information.) The catch is that one can't, in general, calculate entropy for spike trains; noise confounds the calculation. You can average out the noise by repeating the experiment, but only if you know how to repetitively stimulate that neuron. We can do this for only a few cases, like the early visual system, inner ear, and touch receptors. Those nerves all have something in common: they respond to specific things. This narrow focus makes it easy to recreate the stimuli to which those neurons respond.
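To make the repeat-the-experiment idea concrete, here is a minimal sketch of how one estimates entropy from repeated presentations: bin each response into a binary "word," tally how often each word occurs across trials, and plug the empirical probabilities into Shannon's formula. The function name and the toy data are my own illustration, not from any particular paper.

```python
import numpy as np

def empirical_entropy(spike_words):
    """Shannon entropy (bits) of the empirical distribution over spike patterns.

    spike_words: 2D array, one row per repeated trial, each row a binary
    "word" (one 0/1 entry per time bin).
    """
    # Count how often each distinct pattern occurred across repeats
    _, counts = np.unique(spike_words, axis=0, return_counts=True)
    p = counts / counts.sum()
    # H = -sum p * log2(p)
    return float(-np.sum(p * np.log2(p)))

# Toy example: four repeats of a 3-bin response
trials = np.array([(1, 0, 1), (1, 0, 1), (0, 0, 1), (1, 0, 1)])
H = empirical_entropy(trials)  # two patterns with p = 0.75 and 0.25, ≈ 0.811 bits
```

Of course, this naive estimator is badly biased when the number of trials is small relative to the number of possible patterns, which is exactly why the approach only works for neurons we can drive repeatably.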

Most of the brain, unlike those early sensory neurons, receives thousands of poorly defined inputs. And we have no idea how to realistically stimulate them.

Enter the Boltzmann machine. To calculate a firing pattern's entropy, all you need to know is how likely that pattern is to occur. Put simply, if we can calculate the chance that some activity pattern occurs, we can then calculate entropy.

Luckily, the Boltzmann machine has a readily calculable probability distribution, so you can calculate the entropy of a firing pattern from the system's Boltzmann machine representation. There's a leap of faith here: you can't observe all the activity patterns, so you won't know whether the Boltzmann machine representation of your network accurately estimates the chance of a pattern occurring that you've never seen before. I'm working right now to see how outlandish that assumption is.
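For a small, fully visible Boltzmann machine the distribution really is directly calculable: P(s) ∝ exp(b·s + ½ s·W·s) over binary states s, so for modest n you can enumerate all 2^n states, normalize, and compute the entropy exactly. The sketch below (my own parameterization and function name, not from a specific reference) does just that:

```python
import itertools
import numpy as np

def boltzmann_entropy(b, W):
    """Exact Shannon entropy (bits) of a fully visible Boltzmann machine.

    b: bias vector (n,); W: symmetric coupling matrix (n, n), zero diagonal.
    P(s) is proportional to exp(b.s + 0.5 * s.W.s) over s in {0,1}^n.
    Only feasible for small n: this enumerates all 2^n states.
    """
    n = len(b)
    states = np.array(list(itertools.product([0, 1], repeat=n)))
    # (Negative) energy of every state
    energies = states @ b + 0.5 * np.einsum('si,ij,sj->s', states, W, states)
    # log P(s) via a numerically stable log-sum-exp normalization
    logp = energies - np.logaddexp.reduce(energies)
    p = np.exp(logp)
    return float(-np.sum(p * logp) / np.log(2))
```

As a sanity check, with b = 0 and W = 0 every state is equally likely, so n independent fair-coin neurons give exactly n bits.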

Monday, October 17, 2011

The Logistic Model in Firing Rate Models of Neural Population Behavior

Bruno Averbeck visited the Kaplan lab last Thursday. In addition to a nice lunch at Peri Ela, we spent an hour discussing the merits of different quantitative definitions of complexity as applied to the nervous system. For my simulations I prefer Poisson neurons modified to have an absolute and relative refractory period. He suggested that I try his favored model: the logistic equation. I first encountered this in a Paninski lab paper discussing the utility of generalized linear models for decoding neural stimuli. Setting aside the technical difficulties of estimating the parameters of a generalized linear model from data, one can, in reverse, use the model to simulate the activity of a group of neurons and specify the pairwise correlations therein.
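To illustrate the "run the model in reverse" direction, here is a minimal sketch of simulating a coupled binary population with a logistic link: each neuron spikes in a bin with probability sigmoid(baseline + coupling from the previous bin's spikes). The one-step history, the coupling matrix J, and the baseline h are illustrative simplifications of a full GLM (which would typically include longer spike-history and stimulus filters).

```python
import numpy as np

def simulate_logistic_population(J, h, T, seed=None):
    """Simulate T time bins of n binary neurons with a logistic link.

    At each step, neuron i spikes with probability
    sigmoid(h[i] + sum_j J[i, j] * s_prev[j]),
    where s_prev is the previous bin's spike vector. J sets directed
    pairwise couplings (and hence correlations); h sets baseline rates.
    """
    rng = np.random.default_rng(seed)
    n = len(h)
    spikes = np.zeros((T, n), dtype=int)
    s_prev = np.zeros(n)
    for t in range(T):
        p = 1.0 / (1.0 + np.exp(-(h + J @ s_prev)))  # logistic (sigmoid) link
        spikes[t] = rng.random(n) < p
        s_prev = spikes[t]
    return spikes
```

With J = 0 the neurons are independent Bernoulli processes at rate sigmoid(h); positive off-diagonal entries of J push pairs of neurons toward correlated firing, which is the knob the GLM framing gives you over pairwise structure.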