Timing is Everything: Cracking another Neural Code

Rate codes are among the simplest ways to interpret the activity of neurons as they represent information in brains. Essentially, the stronger the stimulus, the more active the neuron, meaning that it fires more action potentials in a given period of time. But what does that mean exactly? Answering that question requires diving into some cellular biology.

Generally speaking, most cells can be thought of as batteries. Skipping the details, they have to maintain a particular balance of charged particles across their membranes, and that creates a charge difference between the inside and the outside. This difference is measured as a voltage between the inside and the outside of the cell. For most cells in the body, the voltage stays pretty much the same most of the time, but in neurons the voltage (called the membrane potential) can change quite rapidly, which is ultimately what allows neurons to transmit information. In the absence of input from their neighbors, many neurons would just sit idly at a membrane potential of about -65 millivolts (by convention, the potential is measured inside relative to outside, which makes the resting value negative), but that would be a non-starter for any nervous system worth its salt (inside joke). In fact, neurons typically receive many inputs from other neighboring (and even quite distant) neurons. Inputs that tend to spur the neuron to fire more action potentials are called excitatory, and tend to push the membrane potential toward zero. Those that suppress firing are called inhibitory, because they tend to hold the membrane potential at more negative values.

[Figure: the action potential]

Here’s where this little biology lesson detour connects with neural coding. Once the membrane potential of a neuron gets pushed a few millivolts toward zero from its normally negative value, the cell generates a stereotypical waveform, the action potential (or spike). To a first approximation, there is no information in the detailed shape of the action potential (since that is largely invariant); it is an all-or-nothing event that is transmitted essentially unchanged to downstream neurons. This fact is essential to understanding why systems neuroscientists often focus on spikes as the core currency of information processing in brains. While it is true that there are other ways of moving messages around in nervous systems (and other tissues), the ability to generate and transmit action potentials over relatively long distances with speed, reliability and precision has been exquisitely honed by natural selection.
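
To make the threshold idea concrete, here is a minimal sketch of a textbook-style leaky integrate-and-fire neuron in Python (not a model of any particular cell; the parameter values are illustrative assumptions). Excitatory drive pushes the membrane potential up from rest toward zero, and the moment it crosses a threshold the model registers an all-or-nothing spike and resets, recording nothing about the spike but its time:

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch (illustrative, assumed parameters).
dt = 0.1            # time step, ms
tau = 10.0          # membrane time constant, ms
v_rest = -65.0      # resting potential, mV
v_thresh = -50.0    # spike threshold, mV
v_reset = -70.0     # reset potential after a spike, mV
drive = 20.0        # steady excitatory drive, mV

v = v_rest
spike_times = []
for step in range(int(200 / dt)):            # simulate 200 ms
    t = step * dt
    v += dt * (-(v - v_rest) + drive) / tau  # relax toward rest, pushed up by drive
    if v >= v_thresh:                        # threshold crossed: fire
        spike_times.append(t)                # the spike is just an event time
        v = v_reset                          # waveform details are ignored

print(f"{len(spike_times)} spikes at (ms): {np.round(spike_times, 1)}")
```

Running it gives a handful of spike times; raising or lowering the drive changes how often the threshold is reached, which is exactly the rate-code picture from last time.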

In general terms, if the details of the shape of the action potential waveform can be set aside, and if the waveform propagates relatively unchanged to other neurons in the network, then we can ignore a lot of gory details and think of spikes simply as events described only by the time that they occur. In other words, if you know that one neuron connects with another, pretty much all the information transmitted from the first to the second must be embodied in a simple list of the times of spikes.

[Figure: abstraction]

So, like Lenny in the next room with the Geiger counter, if you are a neuron (or an eavesdropping neuroscientist) getting a string of pop…pop…pops from another neuron, you only have the timing of those inputs to divine what that neuron is trying to tell you. As I mentioned last time, one coding strategy is a rate code, in which the exact timing of the spikes is random, but firing rate contains all the relevant information. The problem with this strategy is that you have to listen to the sending neuron for a while before you can get a good sense of the value that is being coded. Let’s say for example that the neuron is firing at an average rate of 5 spikes per second, and you listen for 1/10 of a second. Just by chance you might hear a spike or you might not. If you did, you might be tempted to say the spike rate is 10 spikes/second, but if you didn’t hear a spike you might guess 0 spikes/second. What if you listen a little longer? Even listening for a full second, just by chance you could easily end up only getting 4 spikes or perhaps catching 6. The result is that if you really want a good estimate of the coded value, you have to commit to listening for a while and counting up all the spikes.

[Figure: counting windows]
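
A quick simulation makes the point. The sketch below (my own illustration, assuming perfectly random Poisson spiking at a true rate of 5 spikes/second) estimates the rate from counting windows of different lengths; the spread of the estimates only shrinks as the window gets longer:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 5.0          # spikes per second, the value being "coded"
n_trials = 10_000        # how many times we listen

# For random (Poisson) spiking, the spike count in a window of length T is
# Poisson-distributed with mean true_rate * T. The rate estimate is count / T,
# and its spread shrinks only as the window grows.
for window in (0.1, 1.0, 10.0):   # seconds of listening
    counts = rng.poisson(true_rate * window, size=n_trials)
    estimates = counts / window
    print(f"window {window:5.1f} s: mean estimate {estimates.mean():5.2f} "
          f"spikes/s, std {estimates.std():5.2f}")
```

With a 1/10th-second window the estimates swing wildly between 0 and 10 or more spikes/second; even with a full second you are often off by a couple of spikes/second.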

If you are an eavesdropping neuroscientist, you might be able to sit around all day counting spikes and trying to relate them to the experimental environment, but the nervous system as a whole doesn’t have that luxury. Fast animals are hungry and slow animals are dinner. A more efficient coding system, in which the exact timing of each spike carries the relevant data, could be quite handy for animals preferring not to be meat. To see how that might work, we’ll look back at the rate code in a slightly different way. Instead of thinking about how many spikes happen in a given period of time, we can turn the problem on its head and think about how long we have to wait between spikes. For example, if the spike rate is 10 spikes/second, then on average we have to wait 1/10th of a second between action potentials. It’s only an average, because the exact times are random (like the pop…pop…pop of the Geiger counter), but what if you could enforce a stricter discipline on the radioactive ingot or the neuron? What if a rate of 10 spikes/second were coded by a train of spikes separated by precisely 1/10th of a second? Now that the randomness has been squeezed out of the process, each spike codes the underlying value exactly. A gap of 1/30th of a second from the previous spike to this one? That’s 30 spikes/second. Next spike after 1/5th of a second? 5 spikes/second. This coding strategy is called a spike timing code, for obvious reasons.

[Figure: a spike timing code]
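
Here is the same decoding logic written out as a sketch (again just an illustration with made-up numbers): when the spikes arrive at perfectly regular intervals, a single inter-spike interval gives back the coded rate exactly as one over the interval, whereas a single interval from a random (Poisson-style) train scatters all over the place:

```python
import numpy as np

rng = np.random.default_rng(1)
coded_rate = 10.0    # spikes per second, the value the sender wants to convey

# Timing code: spikes exactly 1/rate apart, so each interval decodes exactly.
regular_isis = np.full(5, 1.0 / coded_rate)       # seconds between spikes
print("timing code decoded per interval:", 1.0 / regular_isis)

# Random (rate-code) timing: intervals are exponentially distributed, so a
# single interval is a very noisy estimate of the underlying rate.
random_isis = rng.exponential(1.0 / coded_rate, size=5)
print("random timing decoded per interval:", np.round(1.0 / random_isis, 1))
```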

Such a system could be extremely efficient in terms of how quickly information could be conveyed, but it relies on the ability of neurons to generate (and interpret) precisely timed action potentials in a reliable way. Even if your only lab experience has been high school biology class, you are likely aware that living systems tend to be messy, random, unpredictable things. It turns out that some neural circuits in some animals exhibit very precise spike timing that could be evidence of such a code, but so far the case is weaker for timing codes in mammalian neocortical circuits that underlie the most adaptive and intelligent behaviors.


~ by nucamb on April 29, 2013.

8 Responses to “Timing is Everything: Cracking another Neural Code”

  1. “any nervous system worth its salt” – electrolyte humor FTW!

  2. How could you ensure that a coding system based on inter-spike intervals would be efficient?

    • We need to be clear that the efficiency of the spike timing code comes from the lack of randomness, not from the organization of the code itself (which is really just an inversion of the rate code), so it raises the question of how precise and reliable the timing of action potentials can be. The more reliable and precise neurons can be, the more efficient (in terms of the number of spikes, and the time needed to deliver a message) the code can be. Of course, the receiving neuron has to be capable of interpreting that precision and using it for its own precise computations.

      • Thanks for your reply. I am involved in modelling the CA3-CA1 tripartite synapse, using the Izhikevich model for the pre-synaptic neuron (CA3) and Traub’s 19-compartment model for the post-synaptic neuron (CA1). I am currently investigating the role of astrocytes, NMDAR current, spine density and the location of synaptic inputs on the spike dynamics of action potentials in CA1. I got versatile bursting characteristics in CA1 by tuning simulation parameters such as spine density, the location of synaptic inputs on the dendritic compartments, the presence/absence of astrocytes and the NMDAR current. I want to decipher the information coding in CA1 in this framework. I have several results with various test cases, and it would be good if you could send me your email address.

  3. […] Nucleus Ambiguous […]

  4. […] My tutorials on neural coding. […]

  5. […] problem (perhaps with different initial conditions), while computers have abstract representations (neural codes) that can be used to do general purpose problem-solving. The internal states of the computer can […]

  6. […] its famously squishy subject matter. Whether you stick an electrode into a neuron and measure its membrane potential or image the combined activity of billions of neurons (and glia, by the way) with an fMRI scanner, […]
