Monday, September 3, 2012

A Musician on Science

Science isn't nearly as honest as music is. Yes, neither is monolithic. I no more study science now than I studied music almost a decade ago. Even so, my experiences with each are deep enough to justify some simple comparisons.

In neither field is the most esteemed outcome (making a significant new discovery, becoming a rock star) likely to happen.

Music provides alternatives to being a rock star that are based on skill. Becoming a rock star almost always comes down to luck. But you can be a studio musician. You can train in classical or jazz music. Years of practice can develop artistic talent into the ability to bend sound into an expression of yourself.

The best musicians got ahead often enough for it to become a rule of thumb: the best way to make it as a musician was to develop your hands and your mind.

How science determines its losers and runners-up seems arbitrary. I don't know what all the other budding scientists do at night. Some are in the lab. Others aren't, and I've seen no emerging trend that those who work the hardest make it. It's not clear who will succeed or why.

Science has a new crop of believers every year and so it can counter unstable practices with access to new markets and specious war stories.

Thursday, February 16, 2012

Some Thoughts on Animal Experimentation

A recent confluence of events motivated me to think more deeply about the use of animals in my research: an FT article on industrial farming, PETA's campaigning on the issue, and a request from the lab boss.

The FT article reminded me how industrial farming, while commercially beneficial, has a dark side. I don't agree with PETA that all species are equal, but I support its pushing this issue because, in an increasingly competitive economy, the pressure to exploit anything will only mount. Finally, the lab boss's request recalled the similarity between the rationales agribusiness and science both use, namely "the ends justify the means". Justifiable doesn't mean justified. There's always that twinge that gives me pause just before the animal is fully anesthetized, or while euthanizing it (the euphemism is "sacrificing"). Unless we are willing to do more speculative experiments on humans, using animals for research, even when done properly, seems to be a necessary evil, given how little we know about disease.

Modern science is very much a business. Published papers are something like quarterly reports. Curing cancer, however, is different from making the next iPad. The creep of business vocabulary into science is just as disturbing as it was in medicine. Are grad students a waste of money because the cost of training them outweighs the data (widgets) they produce? Why not underspend on equipment for animal experimentation because the animals won't "complain"?

Unlike other blog entries, there's no promise of code here. I'm also sure I'll edit the exposition. Even in this alpha version, I hope I've conveyed that I'm not sure why appealing to "it's for science", especially as that becomes, more and more, code for "it's for the bottom line of my lab", is any different from industry claiming cost-efficiency.

Thursday, February 9, 2012

Javascript Tools for Neuroscience

MATLAB, C, and Python are, in my experience, the programming languages of choice in computational neuroscience. They lack, however, the capability for dynamic visualization that Javascript, Julia, or Mathematica allow. By dynamic visualization I mean the ability to alter the data in real time and see how the results of analyses change. This is useful not only for teaching the methods but also for the kind of interactive simulation that, I feel, does the most for designing experiments. Of those latter three languages, Javascript is the most widely supported and, perhaps, the most familiar.

In the coming weeks you will find versions of neuroTools.js available for download in the code section of my website. In the courses and notes section you will find example visualizations that explain the data analysis methods I use.

I'll begin with the interspike interval distance (ISI-D), Granger causality, Causal State Splitting Reconstruction, and Lempel-Ziv complexity.
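
To give a flavor of what these tools will compute, here is a minimal Python sketch of the last one, using the simple variant of Lempel-Ziv complexity that counts new phrases in an LZ78-style parse of a binarized spike train. This is illustration only, not the neuroTools.js code itself, and the example train is made up.

    def lz_complexity(sequence):
        # Count the number of distinct phrases in an LZ78-style parse:
        # grow the current phrase until it has never been seen before,
        # record it, then start a new phrase.
        phrases, phrase, count = set(), "", 0
        for symbol in sequence:
            phrase += symbol
            if phrase not in phrases:      # first time we see this phrase
                phrases.add(phrase)
                count += 1
                phrase = ""                # start building the next phrase
        return count + (1 if phrase else 0)

    # '1' = spike in a time bin, '0' = no spike (made-up example train).
    print(lz_complexity("0100101101001000110101"))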

I greatly appreciate any constructive feedback.

Wednesday, October 19, 2011

Boltzmann Machines Allow A Generalized Entropy Calculation

Information Theory provides a way to understand neural activity patterns (spike trains). Most information theoretic analyses hinge on calculating the Shannon entropy of a sequence of action potentials. (Note, I will follow the convention of calling Shannon entropy just entropy and of using the word entropy interchangeably with information.) The catch is that one can't, in general, calculate entropy for spike trains. Noise confounds the calculation. You can average out the noise by repeating the experiment, if you know how to repeatedly stimulate that neuron. We can do this for only a few cases, like the early visual system, the inner ear, and touch receptors. All of those neurons have something in common: they respond to specific things. This narrow focus makes it easy to recreate the stimuli to which they respond.
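
For concreteness, here is a minimal Python sketch of the kind of brute-force estimate that repeated stimulation allows: count how often each binary spike "word" appears across trials and plug the empirical probabilities into H = -sum(p * log2(p)). The array shapes, word length, and fake data are stand-in assumptions, not anything from the lab.

    import numpy as np

    def word_entropy(trials, word_len=8):
        # trials: (n_trials, n_bins) array of 0/1 spike counts from
        # repeated presentations of the same stimulus.
        words = {}
        for trial in trials:
            for start in range(0, trial.size - word_len + 1, word_len):
                w = tuple(trial[start:start + word_len])
                words[w] = words.get(w, 0) + 1
        counts = np.array(list(words.values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))      # entropy in bits per word

    # Fake data standing in for repeated, identical stimulation of one neuron.
    rng = np.random.default_rng(0)
    trials = (rng.random((100, 200)) < 0.1).astype(int)
    print(word_entropy(trials))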

Most of the brain, unlike those early sensory neurons, receives thousands of poorly defined inputs. And we have no idea how to realistically stimulate them.

Enter the Boltzmann Machine. To calculate a firing pattern's entropy all you need to know is how likely that pattern is to occur. Put simply, if we can calculate the chance that some activity pattern occurs, we can then calculate entropy.

Luckily, the Boltzmann machine has a readily calculable probability distribution. So you can calculate the entropy of a pattern of firing in the system's Boltzmann machine representation. There's a leap of faith here. You can't observe all the activity patterns, so you won't know whether the Boltzmann machine representation of your network accurately captures the chance of some pattern occurring that you've never seen before. I'm working right now to see how outlandish that assumption is.
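
To make that concrete, here is a toy Python sketch, not a model fitted to any data: a fully visible Boltzmann machine over a handful of binary units with illustrative random couplings and biases. For a network this small you can enumerate every state, get the exact probability the model assigns to each firing pattern (seen or unseen), and hence the exact entropy.

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8                                        # small enough to enumerate
    J = rng.normal(0, 0.3, (n, n))
    J = np.triu(J, 1) + np.triu(J, 1).T          # symmetric couplings, zero diagonal
    h = rng.normal(0, 0.1, n)                    # biases

    # All 2^n patterns of +/-1 units and their Boltzmann probabilities.
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    energy = -0.5 * np.einsum('si,ij,sj->s', states, J, states) - states @ h
    p = np.exp(-energy)
    p /= p.sum()                                 # P(s) = exp(-E(s)) / Z

    entropy_bits = -np.sum(p * np.log2(p))
    print(entropy_bits)

    # The probability the model assigns to one specific firing pattern:
    print(states[0], p[0])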

Monday, October 17, 2011

The Logistic Model in Firing Rate Models of Neural Population Behavior

Bruno Averbeck visited the Kaplan lab last Thursday. In addition to a nice lunch at Peri Ela, we spent an hour discussing the merits of different quantitative definitions of complexity as applied to the nervous system. For my simulations I prefer Poisson neurons modified to have an absolute and a relative refractory period. He suggested that I try his favored model: the logistic equation. I first encountered it in a Paninski lab paper where they discuss the utility of generalized linear models for decoding neural stimuli. Setting aside the technical difficulties of estimating the parameters of a generalized linear model from data, one can, in reverse, use the model to simulate the activity of a group of neurons and specify the pairwise correlations therein.
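
As a rough illustration of that reverse use (with made-up parameters, not Bruno's or the Paninski lab's), here is a short Python sketch of a logistic, Bernoulli-GLM population model: each neuron's spike probability at time t depends on a bias and on the other neurons' spikes at t-1 through a coupling matrix, and that coupling matrix is the knob that sets the pairwise correlations.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(2)
    n_neurons, n_steps = 5, 10_000
    bias = np.full(n_neurons, -2.0)                  # sets the baseline rate
    W = 0.8 * rng.normal(size=(n_neurons, n_neurons))  # illustrative couplings
    np.fill_diagonal(W, -1.0)                        # crude relative refractoriness

    spikes = np.zeros((n_steps, n_neurons))
    for t in range(1, n_steps):
        p = sigmoid(bias + W @ spikes[t - 1])        # logistic link
        spikes[t] = rng.random(n_neurons) < p        # Bernoulli draw per neuron

    print(np.corrcoef(spikes.T))                     # resulting pairwise correlations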