What’s Computational about Computational Neuroscience?


When non-scientists ask me what I studied for my PhD, if I want to put a complete halt to the conversation, I answer ‘computational neuroscience.’

To continue, for those readers who have not already clicked away, the usual response (eyes glazing over) is something like ‘wow,’ followed by a quick deflection to another topic. So I’ve learned instead to start with ‘neuroscience,’ which turns off fewer people (though the number is still substantial). It seems there is something about the word ‘computational’ that pushes the conversation into incomprehensibility for many people.

So what is computational about computational neuroscience? The short answer is that it means thinking about how brains are computers. Notice that I didn’t say brains are ‘like’ computers. Many people assume that when scientists talk about brains as computers they are speaking metaphorically, but that’s not the case. Still, they are not saying that brains are literally ‘silicon-microprocessor-based generic math devices.’ They are saying that brains are literally ‘things that compute.’ And that’s not just a semantic dodge; it comes from Alan Turing, who originally defined a conceptually simple machine that could, in theory, calculate pretty much anything. This led to the concept of computational equivalence: anything (mechanical, electronic, or biological) that can do the elementary steps a Turing machine can do is qualitatively similar; they are all computers. One caveat is that Turing’s machine had infinite memory and an infinite lifespan, so all physical computers are only approximations of the ideal. Actual computers, as opposed to theoretical ones, can be quite different from one another. A particular mathematical operation might be completed by one computer in a nanosecond while requiring another to chug along for a hundred million years, but both are still computers in the Turing sense.
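Turing’s idea is easy to make concrete. Below is a minimal Turing-machine simulator (a sketch for illustration only; the function names and the unary-increment rule table are my own choices, not anything from Turing’s paper). A transition table maps (state, symbol) pairs to a new symbol, a head movement, and a next state; any device that can carry out these elementary steps is, in Turing’s sense, a computer.

```python
# A minimal Turing machine sketch. The transition table maps
# (state, symbol) -> (symbol_to_write, head_move, next_state).
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1))

# Rules for unary increment: scan right past the 1s, then append one more 1.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111"))  # → 1111
```

Swapping in a different rule table makes the same machine compute a different function, which is the flexibility the post is pointing at.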

In other words, brains are computers because they can do elemental mathematical/logical operations. Other systems can be described by equations too, but they are not considered computers because they do not generalize. For example, a falling object can be described by equations, and the path of the object can be considered a solution to those equations. But the falling object can only really solve one problem (perhaps with different initial conditions), while computers have abstract representations (neural codes, in the brain’s case) that can be used for general-purpose problem-solving. The internal states of the computer can stand for different things at different times, providing essentially infinite flexibility.

Saying that brains are computers is not the same thing as saying that brains are only computers (a confusion often found in criticisms of the BRAIN Initiative). Still, this computational view of brains makes some people very uncomfortable. For obvious reasons this discomfort is less acute when we are describing animal brains, but it becomes quite an issue when we start talking about human gray matter. It is related to the discomfort people have with the idea that mental states are in fact brain states, that the mind is actually the brain: the philosophical/theoretical position called materialism (in opposition to the idea that the ‘soul’ or ‘mind’ is somehow distinct from the brain, called dualism). People imagine that talking about brains as computers implies that we are all mindless automatons, subject to the sort of ‘does not compute’ rigidity that Captain Kirk always seemed to exploit to make errant robots self-destruct (puff of smoke out the ears and all). In truth, whatever you may think of the colossal literalness of your computer’s insistence on overwriting the file you intended to save, computational theory explores a much more flexible idea of computers. This model has proven delightfully fruitful for understanding all sorts of neural systems, even those that presume to wrestle with their own existence.

Photo Credit: Photo of a Turing Machine (without the infinite tape memory) created by Mike Davey, CC 3.0 license by Wikipedia user Rocky Acosta (Arttechlaw)


~ by nucamb on July 7, 2013.

11 Responses to “What’s Computational about Computational Neuroscience?”

  1. Quite astute observations. There might be one additional point, though, as to why people quarrel with the term ‘computer’: unlike ‘Turing machine,’ it elicits connotations of the engineered machines in our homes. Brains are computers in exactly the sense you describe, but they are evolved computers, not engineered ones, and we don’t really know how evolved computers work, in contrast to the computers we build. So while ‘computer’ is of course an accurate description, it is probably not a very fortunate word for brains, since they do their computing quite differently from the machines in our homes.

    • Quite true, and I don’t mean to use the term ‘computer’ in pedantic defiance of the conventional (non-specialist) meaning, but more to point out the way in which we use it, and why we use it that way.

  2. Nice post. One thing systems neuroscience has been slow to recognize about any computing the brain does with spikes, particularly their timing and information content, is that neurons don’t transmit information to each other with spikes; as you know, they use vesicles. The important thing to realize is that a neuron uses spikes to communicate with its synapses, not so much with other neurons, and a neuron has lots of synapses. Therefore the large majority of the information transmitted in spikes pertains mainly to the neuron that generated them: it is what that neuron wants to tell its own synapses. So statements like “if you know that one neuron connects with another, pretty much all the information transmitted from the first to the second must be embodied in a simple list of the times of spikes” need to be drastically modified, at least for the general case, to reflect these facts. A neuron, by virtue of the unique “transfer function” ( http://medicalxpress.com/news/2013-06-efficient-transmission-sensory-synapses.html ) at each synapse, sends a different message to each of the thousands of cells it connects to, despite sending roughly the same message to each of its synapses.

    • I’ll agree that it is also necessary to know the transfer function at each synapse to get a complete picture of the information processing going on in a network. But since all of that synaptic information must be integrated at the spike-generating mechanism of any post-synaptic neuron before it can be relayed to the next neuron, I think spikes are still rightly considered a sort of sufficient statistic of neural information processing.
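The integration step in this reply can be sketched with a toy leaky integrate-and-fire model (a standard textbook abstraction, not a claim about any particular circuit; the parameter values here are arbitrary illustrations): many inputs are summed into a single membrane voltage, and only the threshold crossings, the spikes, are relayed onward.

```python
# Toy leaky integrate-and-fire neuron: all synaptic input is integrated into
# one membrane voltage, and only threshold crossings (spikes) are passed on.
def lif_spike_times(input_current, dt=1.0, tau=20.0, threshold=1.0):
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leaky summation of the inputs
        if v >= threshold:            # the spike is all the next neuron sees
            spikes.append(step * dt)
            v = 0.0                   # reset after each spike
    return spikes

print(lif_spike_times([0.1] * 60))   # constant drive yields regular spikes
```

Whatever happens at individual synapses, the downstream neuron receives only this list of spike times, which is the “sufficient statistic” point above.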

  3. So if the axonal bandwidth available to control the output of a thousand synapses by spikes is in fact severely limited, as is the primary dendritic shaft’s capacity to transmit the integrated input from 10,000 input synapses, then I think we either need to posit a new mode of operation for this kind of computer, or intuit additional mechanisms of information transfer through these bottlenecks beyond spikes (bulk cytoplasmic or membrane mechanical oscillations, cytoskeletal, optic, etc.).

    • Think of it differently: synapses are the sites of computation and axons are the sites of transmission, not the other way around. A synapse is not the connection between two computational units; it IS the computational unit. If you want to know the computation, you need to compare input with output and find a model that sufficiently describes the transformation you measured. Focusing on molecular mechanisms within the synapse means investigating the implementation of the computation rather than the computation itself.

      To oversimplify: if you first try to understand smaller and smaller steps in neuronal computation, going deeper and deeper, you will end up solving quantum mechanics before you realize the cell is just adding up all its input in a carefully weighted manner – which you could have known by modelling the input-output transformation mathematically rather than by empirically tracing the movement of every single molecule involved.
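A minimal sketch of that modelling strategy (purely illustrative numbers; the ‘hidden’ transformation here is just a single weight): record input-output pairs from the black box and fit the simplest model that explains them, without ever opening the box.

```python
# Fit a black-box input-output transformation by least squares,
# without inspecting any internal machinery. Hidden rule: one weight.
def fit_weight(xs, ys):
    # one-parameter ordinary least squares: w = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

hidden_weight = 1.7                    # stands in for the molecular details
xs = [0.5, -1.0, 2.0, 3.5, -0.25]      # measured inputs
ys = [hidden_weight * x for x in xs]   # measured outputs
print(fit_weight(xs, ys))              # recovers roughly 1.7
```

The fitted weight matches the hidden one, even though the fitting procedure never looked inside the “synapse” at all.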

      • So synapses, as computational units, couple in locally generated information and add it to the spike train, and a lot of it. This is a different kind of computer than the one systems neuroscience typically models, as I said. If the purpose of the huge, expensive sensory synapses (like those mentioned in my link above for the retina, with multiple elaborate ribbon structures) were just speed, fidelity, and reliability, that would be a tremendous waste of resources; the neurons could simply use gap junctions instead. But they don’t, and the reason is that these synapses in effect enable a much greater volume of the retina to act as computing volume. So there we have it, big man: the whole retina needs to be considered a giant molecular computer.

      • I don’t know that we can rule out any particular loci for computation. Yes, dendrites in some animals in some preparations may be basically reducible to linear integrators; in others, they may be essentially supercomputers at each branch. The same goes for synapses.

        Still, I’m not quite convinced that because natural selection stumbled onto ribbon synapses in the retina (instead of gap junctions), every molecule is necessarily computationally important.

  4. When I get that glazed over eye look, I say “you know, brainy stuff”. Usually that clears the ice 🙂

    • (Michael) Of course we would never say every molecule is computationally important. However, I would point out that the idea that these big synapses (or any synapse, for that matter) have all this protein machinery primarily for the purpose of turning just the spikes into memory, or LTP, or dark adaptation, or whatever, may not hold water when you consider that spikes are but a very tiny portion of the total information, and certainly of the formative mass, that is conveyed down an axon to the synapses. Sure, a lot of action does hinge on spike-induced calcium, but mitochondria are also calcium dispensers and absorbers, and have their own localized agenda to a large extent. Transport alone (in both directions, in cytoplasm and membrane treadmilling), while not independent of spikes, is nonetheless fairly continuous, relatively rapid, and quite massive. These flows ultimately characterize the brain perhaps as much as spikes do. Cytoskeletal transduction of information (not the quantum BS variety, but real physical processes) also may contribute, where it is continuous, as could myelin if it is rigid enough to support mechanical disturbances or other effects.

  5. […] “noise” in the nervous system can be quite controversial (and with important consequences for computational theories). As I said above, variability between trials using the same stimulus, between different neurons, […]
