Craigslist Killed the Newspaper, but Science Publishing Thrives (for All the Wrong Reasons)

•May 23, 2013 • 6 Comments

[Image: journal impact factor (JIF) graphic]

The wringers of hands in the scientific community have been busy lately fretting over the current state of affairs in science publishing. Since I’m not really a science historian, I can’t speak to the novelty of these concerns, whether they represent some kind of unprecedented crisis of confidence or simply navel-gazing declinism. But there is ample reason to believe that scientific communication is encountering some of the same structural shifts that have upended the publishing business in general, and print journalism in particular. We’ve all seen newspapers around the country close under pressure from novel forms of media, but those not among the twitterati-blogosophers might be surprised to hear that many scientists now consider the main avenues of science communication hopelessly broken.

Here’s why: Scientific publishing is still largely modeled on the assumptions and economics of the dead-tree publishing era. In those glory days, publishers provided editing, typesetting, printing, marketing and distribution services that were otherwise impractical for scientists to obtain on their own. These days, not so much. While most journals do continue to produce a few paper copies, the associated costs have dropped dramatically (of course, there are now other costs, like hosting websites and archiving materials). You would think that competitive forces would then drive publishers to lower their prices, but you would be wrong. The prices that publishers charge (mainly to academic libraries) for their work have instead increased, along with the profits of those publishers. Of course, moralizing to for-profit companies about making a profit is pointless, so what are the factors that contribute to this lack of market-driven pricing?

One possibility is that the market isn’t fully competitive. In fact, as with publishing in general, the field has become dominated by a few very large publishers, and institutional libraries claim they lack the clout to negotiate against these oligopolies. Another contributing factor is a kind of prestige science culture that has evolved in universities. Scientific journals are rated by what is called an impact factor: the average number of citations each paper in a journal receives over a two-year period. Since scientific papers follow a tradition of citing antecedent work, the number of citations a paper receives is a reasonable measure of scientific influence (though influence is certainly no assurance of quality). Most journals have impact factors less than about 3, but a few journals have very high values. For basic science, the biggies are the multidisciplinary journals Nature, with an impact factor of 36.2, and Science, with 31.2. Publication in either of these journals, or another called Cell, is often considered a must-have for a scientist’s CV. Without at least one of these glamour pubs, a researcher’s career can stall out at one of the less stable, lower-paid rungs of the scientific career ladder. So scientists need to publish in the big journals, and university libraries at research-oriented institutions are likewise essentially required to carry subscriptions to those big journals in order to give students and faculty access to the latest and greatest.
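
For the numerically inclined, here is what that two-year calculation looks like in code; a minimal sketch with invented numbers rather than data from any real journal:

```python
# Hypothetical sketch of the two-year impact factor calculation.
# All numbers are invented for illustration; none come from a real journal.

def two_year_impact_factor(citations_this_year, citable_items_prior_two_years):
    """Citations received this year to articles published in the previous two
    years, divided by the number of citable items published in those two years."""
    return citations_this_year / citable_items_prior_two_years

# A journal that published 150 citable items over the last two years, whose
# articles from those years picked up 450 citations this year:
print(two_year_impact_factor(450, 150))  # -> 3.0, typical of most journals
```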

All this would be somewhat less galling if publishers were still providing a great deal of added value to the scientific process, but as mentioned above, most of the publishing, typesetting and marketing services they provided in days past are now nearly universally available at very low cost. As always, the vast majority of the work of science publishing is actually provided to publishers for free by the scientists themselves, the volunteer editors and peer reviewers who contribute the essential intellectual muscle to the process. To review the accusations against the industry: scientific publishers rely largely on volunteer labor to produce journals based on outdated communication models, for which they charge increasing prices to the institutions that provide that labor (universities) in order to generate high profit margins for themselves. Furthermore, despite the fact that much of the funding that pays for these journals ultimately comes from taxpayers and public-interest foundations, the publishers continue to charge high fees for electronic access to even the oldest articles in their archives.

The other thing that is keeping worriers worrying about the current state of the scientific process is a quality control issue. Some highly publicized examples of scientific fraud, in addition to a more (not-so-)benign neglect of statistical best practices, have led scientists in some quarters to warn of a replication crisis, suggesting that most scientific findings might be just plain wrong. Aside from peer review, which is largely incapable of detecting deliberate fraud, replication of previous results in different labs is an essential element of maintaining the integrity of research. However, since replication studies aren’t sexy, they tend not to be pursued or published, a problem that seems to be exacerbated by the relentless pursuit of the precious impact factor.

Taking these critiques of science publishing and science process at face value, what are the possible solutions? In general, the proposals entail using modern communication technologies and social networks to crowd-source quality control in science while democratizing access to the results. For example, open access journals have become quite popular recently. In this model, authors pay a fee to cover the publishing overhead costs, but the articles are then free for anyone to download. Several of these journals have also sought to bring transparency to the peer review process by opening it up to more democratic participation. Ultimately, the basic publishing overhead is still being borne by the grants that fund the research, but the scientists themselves can at least take comfort in the fact that publishers aren’t wringing profit from their labor while restricting access to their work in perpetuity. Other efforts at creating a Science 2.0 infrastructure have focused on bringing social media elements to the process of reviewing results after they have been published. PubPeer, for example, provides a threaded commenting system for published scientific papers. While it’s fair to say that the site has yet to develop the sort of robust exchange of informed opinion we would all hope for, commenters there recently identified some sketchy image duplication in a hastily published article in Cell. A more radical approach to fixing what ails scientific publishing has been to avoid established dissemination routes altogether, opting instead for self-publication on data-sharing sites like figshare or on personal blogs. Needless to say, considering how entrenched the current prestige system is in academic career advancement, early career scientists are reasonably wary of using this approach exclusively.

Another element of the current science reform movement stems from harsh criticisms that have been leveled against biomedical clinical research. Whether from unintentional bias or more deliberate suppression of unfavorable results, the efficacy of the drugs and medical devices we entrust our lives to has been dangerously inflated in the scientific literature. For this reason, granting agencies and journal publishers began a few years ago to require advance registration of clinical studies, meaning that researchers have to publicly declare (on the web) their intended research aims, hypotheses, methods and outcome measures before beginning a study (there is also a campaign to make these requirements retroactive). This transparency allows peer reviewers to look back at the original design of the study and identify statistical shenanigans that scientists can otherwise use to make a silk purse out of the sow’s ear of negative results. It also prevents research sponsors, like pharmaceutical companies, from disappearing unfavorable studies (yes, they throw the studies out of helicopters over the jungle) to improve the apparent efficacy of their products.

This kind of preregistration has not really caught on in basic science research. Hypotheses and methods are not routinely publicized before studies begin, so the studies are susceptible to the same sorts of biases (and outright fraud) that have been identified in clinical research. Still, Science 2.0 transparency advocates (such as @Neuro_Skeptic) suggest that what’s good for the goose (profiteering drug companies) may also be good for the gander (altruistic humanitarian scientists; sarcastic emphasis added), though some scientists are still hesitant to make that strong commitment to transparency.

Image Credit: This image comes from A Principal Component Analysis of 39 Scientific Impact Measures (Bollen et al. 2009) in the open access journal PLoS ONE. Unlike images from proprietary journal articles (even my own), I’m allowed to use this image (without recourse to fair use claims) because the journal publishes under a Creative Commons license.

Update on the Great BRAINI Debates

•May 12, 2013 • 7 Comments

[Image: raster plot]

The NSF, DARPA and the NIH sponsored a meeting last week that brought together scientists to brainstorm ideas for the new BRAIN Initiative (a proposal I’ve explored many times, starting here, and most recently, here). Organizers seem to have begun to reassure critics that they are developing more inclusive planning procedures and that the funding mechanisms will not siphon off resources from other projects. They still can’t seem to figure out how to get Science to publish their white papers outside the paywall, and there has also been criticism that they are not doing enough to include women scientists in the process. As I’ve mentioned before, I still have my qualms about selling the project to the public based on promises to alleviate clinical disorders that are least likely to be addressed by the proposed methods (as do others).

Still, the silliest critique of the goals of the BRAIN Initiative is that we (meaning systems neuroscientists) wouldn’t know what to do with the data from thousands (or millions) of neurons if we had it. I can assure you that we would, but before I explore that, let’s look at the different facets of this argument. One strain of critique contends that because systems neuroscientists don’t agree on the goal, none exists. This is like saying there is no coherent point to sequencing the genome because cell biologists, evolutionary biologists, translational researchers and clinicians can’t agree on a set of specific aims. I’m willing to bet that the scientists making this argument would be faced with the same heterogeneity in their own disciplines if they were brainstorming a similarly transformative infrastructure project.

Another strain of this argument is that neuroscientists don’t know enough about the basic components of their field to study the emergent properties of large interacting populations. The argument often has the form “How can you study Y when you don’t even know how X works?” where Y is some presumed higher order function (like color perception) and X is some supposed component subsystem (like retinal neurons). In some ways this is really just an element of personal scientific disposition. Some people like to study systems, some like reductionist approaches, some like top-down, some like bottom-up, PO-TAY-TO, PO-TAH-TO. Atomists argue that you can’t possibly understand systems without exhaustive explication of components, while systems people think reductionists fail to see the forest for the trees. My suspicion is that people who make the reductionist argument about the BRAIN Initiative really just don’t believe in systems neuroscience as a productive discipline at all. I’m certainly not going to convince those people in a blog entry. Nonetheless, we shouldn’t forget that all science involves judgments about the right level of analysis, the right amount of abstraction, the right topic, the right experimental model, the right modeling equations or the right techniques. We like to argue that these decisions are empirically founded, but mostly we don’t have enough information to make those claims, so we often default to personal preference. Am I arguing that we should throw up our hands and give scientists money to do whatever the hell they want? No. The proof is still in the pudding. Does the approach/model produce concrete predictions and observable tests of those predictions? That is not a question we can answer simply by saying “but you don’t even know…” Returning to the genome example, we did manage to wring some useful insights from sequencing despite the fact that we still don’t have a general solution to how genes relate to protein form/function.

A related argument contends that neuroscience is too atheoretical to formulate relevant questions on which to spend the kind of money that BRAINI proposes. Again, this argument rests on somewhat idiosyncratic ideas about what a good theory is (as I’m sure philosophers of science can attest). What one scientist sees as a foundational framework, another sees as fuzzy-headed hand waving. Judging the appropriateness of a particular theory is even more fraught than picking an experimental model. Good theories provide a unifying framework to understand disparate data, but just how unifying can we expect neuroscience theories to be? What these critics seem to be asking for is some grand unified theory of human cognition, consciousness and intelligence. That’s a rather high bar. In fact, there are many fruitful neuroscience theories out there for particular animals, systems and circuits: theories of locomotion, vision, navigation, memory, olfaction, learning, rhythm generation, homeostatic regulation, etc. Different neural systems evolved under different constraints and selection pressures, so we would expect a certain level of heterogeneity in the details. Critics again seem to be conflating the lack of a single theory with the lack of any theory.

One critic of the BRAIN Initiative who seems to find systems neuroscientists particularly lacking in creativity or insight is DrugMonkey (and @drugmonkeyblog), who argues that BRAINI proponents are simply trying to recreate some previous era of generous funding for “neuron recording neuroscience.” S/He suggests that the proposals amount to nothing more than an effort to “record some more neurons.” If s/he truly finds our entire field so intellectually sterile, I’m certainly not going to change his/her mind. But I would like to argue that there is a transformative, qualitative difference in the jump from recording tens of cells to recording thousands of cells. This is because you begin to encompass functionally important networks with nearly complete sampling.

For example, what would I do with recordings from thousands of neurons? My dissertation research involved understanding how groups of cells fire together to create the rhythm that drives normal breathing in mice (and presumably, other mammals), so let’s take that work as an example. The cell group that I studied (called the pre-Bötzinger complex) is part of a broader network of circuits that coordinate muscles involved in different phases of breathing under different conditions. These cell groups, located in the brainstem, are relatively hard for experimenters to access, so much of the basic science has been done in brain slice experiments, which necessarily disconnect networks from each other (and from the behavioral context of the living animal). Other researchers have used multicellular optical methods or multielectrode recordings in anesthetized animals, but for the most part, the interactions of different cell groups have been pieced together from separate recordings of single neurons. For our thought experiment, let’s suppose that I had access to the proposed molecular ticker tape technology talked about for BRAINI. What kinds of questions could I answer?

The fact that respiratory neuroscience has not been able to generate much data from intact, awake, behaving animals means that the new technology would immediately provide physiologically relevant tests of theories from more ‘reduced’ experimental models. Where are the neurons that underlie breathing in the adult animal? How do the neurons fire in relation to breathing in or out? How do they fire in relation to different respiratory behaviors, like gasping or sighing or vocalization? How do the different underlying networks interact? Do some drive exhalation and others inhalation? Do different networks come online during exercise or asphyxia? How does the feedback from low blood oxygen or high carbon dioxide drive respiration? How are interactions between respiration and heart rate mediated?

The first month of experiments using BRAINI technology could address these questions in a depth that would replicate 100 years of research in respiratory neurophysiology. What would we do with the second month?  Development. Disease models. Pharmacology. It’s just the beginning.

And that’s just what I can think of in ten minutes. My systems neuroscience colleagues could easily come up with similar lists of questions in their particular subfields, and  the comparative rate of progress would be just as dramatic. Of course, I can’t guarantee that BRAINI technology would actually work, but I can assure you that systems neuroscientists are not at a loss to know what to do with the data if it does.


Update (minutes after posting). I originally assumed DrugMonkey was a ‘he’ purely from discursive style. I actually don’t know one way or another, so I changed the pronoun references.

Image: Visualization of multicellular activity from a simulation of the respiratory network.

Timing is Everything: Cracking another Neural Code

•April 29, 2013 • 8 Comments

Rate codes are among the simplest ways to interpret the activity of neurons as they represent information in brains. Essentially, the stronger the stimulus, the more active the neuron, meaning that it fires more action potentials in a given period of time. But what does that mean exactly? Answering that question requires diving into some cell biology.

Generally speaking, most cells can be thought of as batteries. Skipping the details, they maintain an imbalance of charged particles (ions) between their insides and the surrounding fluid, and that imbalance creates a voltage difference between the inside and the outside of the cell. For most cells in the body, this voltage stays pretty much the same most of the time, but in neurons, the voltage (called the membrane potential) can change quite rapidly, which is ultimately what allows neurons to transmit information. In the absence of input from their neighbors, many neurons would just sit idly with a membrane potential of about 65 millivolts (by convention we measure inside relative to outside, so we call it -65 millivolts), but that would be a non-starter for any nervous system worth its salt (inside joke). In fact, neurons typically receive many inputs from other neighboring (and even quite distant) neurons. Inputs that tend to spur the neuron to fire more action potentials are called excitatory, and tend to push the membrane potential toward zero. Those that suppress firing are called inhibitory, because they tend to hold the membrane potential more negative.
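
For anyone who wants to see those ingredients in action, here is a minimal leaky integrate-and-fire sketch, a textbook caricature rather than a model of any particular neuron, in which excitatory drive pushes the membrane potential up from rest toward a firing threshold while the leak and inhibition pull it back down:

```python
# A minimal leaky integrate-and-fire sketch: a textbook caricature, not a
# biophysical model of any particular neuron. Units are millivolts and ms.
dt = 0.1          # time step (ms)
tau = 20.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # firing threshold (mV)
v_reset = -65.0   # potential right after a spike (mV)

v = v_rest
spike_times = []

for step in range(int(1000 / dt)):                 # one second of simulated time
    t = step * dt
    excitatory_drive = 20.0                        # depolarizing input (mV)
    inhibitory_drive = 2.0                         # hyperpolarizing input (mV)
    # The leak pulls v back toward rest; excitation pushes it toward zero,
    # inhibition holds it more negative.
    v += (-(v - v_rest) + excitatory_drive - inhibitory_drive) / tau * dt
    if v >= v_thresh:                              # the all-or-nothing event
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes in one simulated second")
```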

[Image: action potential waveform]

Here’s where this little biology lesson detour connects with neural coding. Once the membrane potential of a neuron gets pushed a few millivolts toward zero from its normally negative value, the cell generates a stereotypical waveform, the action potential (or spike). To a first approximation, there is no information in the detailed shape of the action potential (since that is largely invariant); it is an all-or-nothing event that is transmitted essentially unchanged to downstream neurons. This fact is essential to understanding why systems neuroscientists often focus on spikes as the core currency of information processing in brains. While it is true that there are other ways of moving messages around in nervous systems (and other tissues), the ability to generate and transmit action potentials over relatively long distances with speed, reliability and precision has been exquisitely honed by natural selection.

In general terms, if the details of the shape of the action potential waveform can be set aside, and if the waveform propagates relatively unchanged to other neurons in the network, then we can ignore a lot of gory details and think of spikes simply as events described only by the time that they occur. In other words, if you know that one neuron connects with another, pretty much all the information transmitted from the first to the second must be embodied in a simple list of the times of spikes.

[Image: abstraction of a spike train as a list of spike times]

So, like Lenny in the next room with the Geiger counter, if you are a neuron (or an eavesdropping neuroscientist) getting a string of pop…pop…pops from another neuron, you only have the timing of those inputs to divine what that neuron is trying to tell you. As I mentioned last time, one coding strategy is a rate code, in which the exact timing of the spikes is random, but firing rate contains all the relevant information. The problem with this strategy is that you have to listen to the sending neuron for a while before you can get a good sense of the value that is being coded. Let’s say for example that the neuron is firing at an average rate of 5 spikes per second, and you listen for 1/10 of a second. Just by chance you might hear a spike or you might not. If you did, you might be tempted to say the spike rate is 10 spikes/second, but if you didn’t hear a spike you might guess 0 spikes/second. What if you listen a little longer? Even listening for a full second, just by chance you could easily end up only getting 4 spikes or perhaps catching 6. The result is that if you really want a good estimate of the coded value, you have to commit to listening for a while and counting up all the spikes.
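
If you would like to convince yourself of this, here is a quick simulation of the listening-window problem, a rough sketch that assumes the spikes arrive randomly (as a Poisson process) at the 5 spikes/second rate from the example above:

```python
import numpy as np

# Sketch of the listening-window problem: estimating a 5 spikes/second rate
# from windows of different lengths, assuming Poisson (random) spike timing.
rng = np.random.default_rng(1)
true_rate = 5.0                      # spikes per second

for window in (0.1, 1.0, 10.0):      # how long you listen, in seconds
    # The spike count in a window is Poisson with mean true_rate * window.
    counts = rng.poisson(true_rate * window, size=10_000)
    estimates = counts / window      # naive rate estimate from each window
    print(f"listen {window:5.1f} s: mean estimate {estimates.mean():4.1f} spikes/s, "
          f"spread (std) {estimates.std():4.1f}")

# A 0.1 s window mostly yields 0 or 10 spikes/s; only the longer windows
# reliably pin the estimate near the true value of 5.
```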

[Image: spike counting windows of different lengths]

If you are an eavesdropping neuroscientist, you might be able to sit around all day counting spikes and trying to relate them to the experimental environment, but the nervous system as a whole doesn’t have that luxury. Fast animals are hungry and slow animals are dinner. A more efficient coding system, where the exact timing of each spike now contains the relevant data, could be quite handy for animals preferring not to be meat. To see how that might work, we’ll look back at the rate code in a slightly different way. Instead of thinking about how many spikes happen in a given period of time, we can turn the problem on its head and think about how long we have to wait between each spike. For example, if the spike rate is 10 spikes/second, then on average, we have to wait 1/10th of a second between action potentials. It’s only an average, because the exact times are random (like the pop…pop…pop of the Geiger counter), but what if you could enforce a stricter discipline on the radioactive ingot or the neuron? What if a 10 spikes/second rate were coded by a train of spikes separated by precisely 1/10th of a second? With the randomness squeezed out of the process, each spike now codes the underlying value exactly. 1/30th of a second from the previous spike to this one: that’s 30 spikes/second. Next spike after 1/5th of a second: 5 spikes/second. This coding strategy is called a spike timing code, for obvious reasons.
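
Here is the same decoding logic under a timing code, sketched with a made-up spike train whose intervals reproduce the numbers above:

```python
# Sketch of an inter-spike-interval (timing) code: every interval, on its own,
# decodes the underlying value exactly. The spike times are made up to match
# the intervals in the example above.
spike_times = [0.0, 0.1, 0.2, 0.2 + 1/30, 0.2 + 2/30, 0.2 + 2/30 + 0.2]  # seconds

for earlier, later in zip(spike_times, spike_times[1:]):
    interval = later - earlier
    print(f"interval {interval:.4f} s -> coded value {1 / interval:.0f} spikes/second")

# -> 10, 10, 30, 30, and finally 5 spikes/second, each known the instant
#    its spike arrives, with no need to count over a long window.
```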

[Image: spike timing code]

Such a system could be extremely efficient in terms of how quickly information could be conveyed, but it relies on the ability of neurons to generate (and interpret) precisely timed action potentials in a reliable way. Even if your only lab experience has been high school biology class, you are likely aware that living systems tend to be messy, random, unpredictable things. It turns out that some neural circuits in some animals exhibit very precise spike timing that could be evidence of such a code, but so far the case is weaker for timing codes in mammalian neocortical circuits that underlie the most adaptive and intelligent behaviors.


There’s No “The” in “Neural Code”

•April 22, 2013 • 9 Comments

[Image: Homer Simpson with the radioactive ingot]

One of the central, and most misunderstood, goals of the BRAIN Initiative is to understand how brains represent information. But, when people talk about cracking “the neural code,” what do they mean? It’s a reasonable metaphor, since it’s clear that brain activity in people (and animals) somehow represents the outside world in some abstract and non-trivial way. But there is really no single neural code. Not for different animals, and not even for different neurons within a single nervous system.

To really understand how different neural codes can be, we have to take a step back and look at how neurons represent their inputs, starting with one of the simplest of codes. Imagine you’ve got that glowing radioactive ingot that Homer Simpson fumbles around with. It’s in a lead-lined room on a table and you stand with your Geiger counter several feet away. What do you hear? Let’s say you are far enough away that you don’t hear anything, though you imagine if you waited around long enough in that position a stray particle might hit the collector and you would hear a single pop. So you move the wand closer and the sensor’s speaker starts to come to life, like the beginning of a hail storm on a tin roof. At some point there is a steady, though not regular, chatter that seems to max out as you touch the wand to the ingot. If you measured the number of pops-per-second from the Geiger counter, and plotted it against the distance of the wand from the ingot, it would probably look something like this:

[Image: sigmoid curve of pop rate versus distance]

Now let’s say Lenny is in the next room (no windows) listening to your Geiger counter output. It’s his job to figure out how far away the wand is from the ingot at any given moment just from listening to the popping sounds on the speaker. You can imagine that if the radioactivity of the ingot is pretty constant, and with a little training, Lenny should be able to figure this out after a while. That is to say, given a certain rate of popping, Lenny can guess the distance reasonably accurately. So the pop-rate of the counter’s speaker is a code for the distance of the wand from the ingot; the counter is transmitting information about an attribute of the world (the distance of the wand from the radioactive source). In this case, the information is contained only in the pop-rate, and not in the individual timing of pops (which are random), which implies that Lenny has to listen to the speaker for a while to get an estimate of the rate (the presence or absence of a pop in a small time window doesn’t tell him much).

What this thought experiment describes is essentially one of the simplest and most common of neural codes, especially in sensory and motor systems. Imagine instead of a Geiger counter, you are listening to the activity of a neuron embedded in the skin that measures pressure. Like the Geiger counter, the information from most neurons is conveyed by discrete all-or-nothing events called action potentials (or spikes), so in this case you’d hear a pop pop pop on your audio monitor and that would tell you how much pressure was being put on the skin around the cell. With the proper experimentation, you’d be able to make a graph relating spike rate (say, in spikes per second) to pressure (maybe in pounds per square inch), and then you could estimate one from observing the other. That’s a neural code. This particular system is called a rate code, for obvious reasons, and because it’s so common in sensory systems and also for certain theoretical reasons, it’s essentially the default model of neural coding in many cases. Importantly, while it is easiest to grasp in terms of sensory coding, rate codes can also be used to describe the output of the nervous system, where motor neurons drive muscles in a process that converts spike rate into muscle contraction.
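
To make that concrete, here is a toy version of that pressure-sensing neuron; the sigmoid tuning curve and its parameters are invented for illustration, not measured from any real receptor:

```python
import numpy as np

# Toy rate code: a made-up sigmoid tuning curve relating skin pressure to
# firing rate, and a decoder that inverts it. Parameters are invented for
# illustration, not measured from any real pressure receptor.
MAX_RATE, MIDPOINT, SLOPE = 100.0, 5.0, 1.5   # spikes/s, psi, psi

def firing_rate(pressure_psi):
    """Tuning curve: firing rate (spikes/s) as a function of pressure (psi)."""
    return MAX_RATE / (1.0 + np.exp(-(pressure_psi - MIDPOINT) / SLOPE))

def decode_pressure(rate_hz):
    """Invert the tuning curve: estimate pressure from an observed rate."""
    return MIDPOINT - SLOPE * np.log(MAX_RATE / rate_hz - 1.0)

true_pressure = 6.0                          # psi
observed = firing_rate(true_pressure)        # what the eavesdropper measures
print(f"{observed:.1f} spikes/s observed -> {decode_pressure(observed):.1f} psi decoded")
```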

Rate codes are likely fundamental to most nervous systems, at least as a default hypothesis. When systems neuroscientists or BRAIN Initiative scientists suggest using techniques like calcium imaging, they are at least tentatively endorsing the centrality of rate codes. Because calcium imaging relies on relatively slow ion dynamics, it provides a reasonable readout of spike rate as a measure of neural activity, but it cannot resolve action potential timing in fine detail. Still, there are many neural subsystems in many different animals that appear to rely on more precise spike timing to relay information, and understanding these requires different techniques. I’ll describe some of these alternative coding strategies in future posts.
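
To see why slow indicator dynamics wash out timing, here is a rough sketch that smears a hypothetical spike train with a decaying kernel; the half-second time constant is an assumption for illustration, not a property of any particular indicator:

```python
import numpy as np

# Rough sketch of why a slow calcium indicator reports rate but blurs timing:
# smear a hypothetical spike train with a decaying kernel whose time constant
# (500 ms here, an assumption for illustration) dwarfs the gaps between spikes.
dt = 1.0                              # ms per sample
t = np.arange(0, 2000, dt)            # two seconds of time
spikes = np.zeros_like(t)
spikes[[200, 210, 220, 1600]] = 1.0   # a 3-spike burst, then a lone spike

tau_ca = 500.0                        # indicator decay time constant (ms)
kernel = np.exp(-np.arange(0, 2000, dt) / tau_ca)
fluorescence = np.convolve(spikes, kernel)[:len(t)]

# The three spikes 10 ms apart fuse into one smooth bump (peak ~2.9) while the
# lone spike produces a small one (~1.2, including leftover decay from the burst):
# the overall amount of activity is recoverable, the individual spike times are not.
print(f"burst peak ~{fluorescence[220]:.2f}, single-spike peak ~{fluorescence[1600]:.2f}")
```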

Photo Credit: The Simpsons opening sequence, under fair use guidelines.

The Moon is not Made of Cheese and Other Hypotheses

•April 8, 2013 • 11 Comments

[Image: barrel map]

I haven’t really spent too much time in these pages taking on the arguments of specific commenters about the BRAIN Initiative. I’ve certainly addressed some of the issues implicitly, and in fact I have agreed with some of the more serious criticisms of the project (such as overselling the connection to clinical medicine). I’m also just not a big fan of blogs as long-form flame wars, where authors try to show their intellectual skills by teasing out logical/factual inconsistencies in other people’s writing, with the inevitable result: thousands of words of snarky semantic quibbling. Nevertheless, a recent post on Live Science by Donald Stein embodies one strain of opposition to the BRAIN proposal that I think is misguided, and while I’ve alluded to some of these issues in previous posts, it seems like a good time to lay out some specific arguments. I mean no specific disrespect to Dr. Stein; rather, I’m using his relatively articulate arguments as a proxy for many others who have staked out similar ground.

The crux of Stein’s argument is that the BRAIN Initiative is too open-ended. This has been a recurring theme for many writers as the plan developed from the early outline as the Brain Activity Map, so I’ll step through his particular arguments, given that they reflect the viewpoints of other commenters as well. First, he argues that, unlike the Human Genome Project for instance, the project has no clear end-point. I think that he, like many others, has mistaken the symbolic milestones of previous big science efforts for the Specific Aims of an NIH grant proposal. Yes, the hallmark event of the lunar exploration program was landing a man on the moon, the goal of the HGP was sequencing an entire individual genome, and presumably characterizing the Higgs boson was the target for the Large Hadron Collider. But the fact that these ‘goals’ were mostly symbolic is proven by the fact that none of these programs was shut down once the goal was attained. It’s not as though the moon shot was driven by the hypothesis that the moon was made of cheese. Putting a man on the moon was simply the publicity stunt meant to get the Cold War era public behind a large-scale investment in research and development of (mostly military) infrastructure. So, I’ll agree that the BRAINI team has not developed a good elevator pitch that will entice an increasingly deficit-shy public into a major brain science investment, but that is not really what this project is about. In fact, as I’ve written before, the attempts to justify the plan as leading to cures for human diseases are the most far-fetched, and I worry that trying to sell basic science as clinical science may sour the public on both pursuits. Nonetheless, the BRAIN Initiative is not (and doesn’t need to be) a hypothesis-driven project; it is a basic science infrastructure development initiative meant to provide tools and technologies not for one Specific Aim in one lab, but for hundreds. In some sense the plan suffers from having too many goals, rather than too few, and in fact it is the charge of the new planning committee to pare down to the essentials in the next year. Perhaps when that process is closer to completion we can begin to ask the tough questions posed by Stein about whether this project is more important than “finding a cure for AIDS” or “beating cancer.” My personal opinion is that in almost any form, it is more important than another B2 Bomber or Joint Strike Fighter, but that’s just me.

The second thrust of Stein’s argument is rather novel among recent criticisms of the BRAINI, though as he mentions, there is some echo of it in Partha Mitra’s qualms about what was then called BAM. Stein asserts that the entire concept of brain mapping is itself so mired in the intellectual baggage of 19th century thinking that it cannot produce an intellectually coherent result. I will concede that popular accounts of fMRI studies have often taken on the ring of phrenology, that prototypical pseudoscience of the Dickens era that claimed to divine “mental faculties” from the size of brain regions (as assessed by the shape of the skull). Most of the time, when science journalists talk about discrete areas of the brain as responsible for higher level functions, like executive planning or craving chocolate cake, they are just being lazy or stupid. But to confuse that sort of nonsense with the kind of mapping that goes on in systems neuroscience labs is either a complete category error or a classic straw man (which, like the Scarecrow, has no brain). I challenge Stein to point out actual evidence of this blinding conceptual bias in the research of scientists like Michael Hasselmo, or György Buzsáki, or Eve Marder, or Terry Sejnowski (just to pick a few).

Stein tries to make this argument more concrete with rather dramatic claims about neuroplasticity, arguing that “it is well established that the connections between dendrites and synapses in the brain are in a state of constant change,” and claiming that the territory of the brain is so dynamic as to make the entire concept of mapping it untenable. Depending on the time scale of ‘constant,’ you could say this statement is superficially true. We’ve all been surprised in the last 10-20 years by the amount of global, brain-level plasticity discovered in human brains, and since Stein studies traumatic brain injury, he no doubt sees this glass as half full. Still, to suggest that brains are such an amorphous ooze as to be unmappable is ridiculous. It’s also contradicted by hundreds of studies coming out every month. Recent discoveries that, for example, adult human brains may grow new brain cells under some very restricted conditions are surprising exactly because they are the exception that proves the rule. Long-term imaging experiments in Karel Svoboda’s lab have shown, for instance, that neocortical dendritic structure is basically stable over weeks, with a subset of synapses forming and disconnecting over shorter time periods. Even this plasticity takes place against a background of basically stable cell types in basically stable arrangements in basically stable cortical layers in basically stable cortical regions subserving basically stable sensory-motor functions. If it weren’t for this stability, structural brain mapping, let alone functional brain mapping, would be impossible. In fact, Partha Mitra’s argument against the BAM/BRAIN proposal is based not on the impossibility of mapping in general, but on the prematurity of doing functional mapping before you have a good structural map. It is precisely this structural map that his Brain Architecture Project is trying to achieve.1

The third leg of Stein’s argument is that neuroscientists lack consensus about what aspect of brain activity warrants mapping. He claims that we don’t know whether we should be looking at biochemical cascades, genetic expression or structural changes, to name a few. It is true that a variety of people with “neuroscientist” on their business card study all these types of brain activity when trying to understand the brain as a general biological system or a locus for pathology. But those that study brains as information processing systems (called systems or computational neuroscientists) generally agree that action potentials are where the money is. Does that mean that gene expression or glial activity or neuromodulators or biochemical cascades are not important for our brains? Absolutely not. But more than 100 years of research suggests that if you want to know how brains encode, transform and compute information, you need to at least start with the electrochemical impulses neurons use to communicate. If you want to understand the internet, it’s best to start with the data packets and not the chemical composition of the paint on the server racks.

Finally, Stein trots out Mitra’s argument that we wouldn’t know what to do with data from a recording of all the action potentials in a particular neural system if we had it. He suggests that we wouldn’t be able to tie that information to the underlying neural structure or to the relevant behavior/perception. Hogwash and balderdash, I say! I can assure Stein that if I had complete action potential data from the brainstem networks that I studied for my Ph.D., I could answer decades of disputed questions with a handful of experiments. This is equally true for my colleagues who study olfactory coding, bird song systems, motor control, cortical rhythms, tactile coding, auditory function, etc. The fact is, systems neuroscientists know how to relate neural activity to behavior, perception, and even to network structure, because they’ve been doing exactly that for years.

1 Not to be accused of erecting my own straw man, I’m aware that there may be a lot of dynamic reconfiguration occurring in neural systems on the substrate of the structural network. How could we investigate that possibility? With exactly the methods proposed by the BRAIN Initiative.

Figure: One example of map organization in rat neocortex. 3D Reconstruction and Standardization of the Rat Vibrissal Cortex for Precise Registration of Single Neuron Morphology. PLOS Computational Biology. December 2012.

BRAIN Initiative (née BAM) Made Officially Palatable

•April 2, 2013 • 8 Comments

[Image: BRAIN Initiative infographic]

If you had any qualms about the media savvy of the proponents of the-project-formerly-known-as-BAM, you can rest assured, because now “There’s an infographic for that.” And with that infographic came a nice White House announcement, in which President Obama said all the right things about our national talent for “ideas [that] power our economy,” and how greater understanding of the brain could help people with Parkinson’s, epilepsy or PTSD (veterans, of course). The project is now called the BRAIN Initiative, a backronym awkwardly spelled out as Brain Research through Advancing Innovative Neurotechnologies, and for the moment it is conceived as a $100M project seemingly guided by the earlier proposals, but employing a dream team of scientists over the next year or so to define appropriate aims. Within the government, funding will come from NIH, DARPA and NSF, with collaborative efforts from private foundations like the Allen Institute, the Howard Hughes Medical Institute, the Kavli Foundation and the Salk Institute. In a live-feed Twitter answer session, NIH head Francis Collins reassured twitterverse interrogators (myself included) that funding would come from discretionary accounts and not from current research. He also emphasized that the BRAIN Initiative would follow the open access model of the Human Genome Project (HGP) by striving to make quality raw data available online to all interested researchers, regardless of intellectual property concerns.

When the original trial balloon was floated, the budget was said to be comparable to the HGP, which was estimated to have cost nearly $3B over ten years, so many critics were deeply skeptical about the possibility of drumming up $300M a year for systems neuroscience without sucking the life out of everything else. With today’s announcement of a more modest budget and a presumptively open/competitive proposal process, some of the open hostility among the science twitterati has calmed, perhaps reflecting the opinion that if it’s a boondoggle, at least it’s only a $100M boondoggle.

Personally, while I still think the timing for this sort of work is good, this funding level is much less likely to produce the sort of transformative innovation in the field that has been suggested by earlier versions of the plan. As I’ve mentioned before, the translational medicine line of justification for the project has always been the most tenuous, and at this funding level, that is even more true. Three billion dollars would not have solved the human brain in a way that would fix diseases like epilepsy, and $100M certainly won’t either. Even more than before, this continued strategy of promoting basic science on the promise of curing human disease runs a very serious risk of alienating the public from both pursuits.


Rafael Yuste: More Bucks for the BAM

•March 14, 2013 • 4 Comments

[Image: Rafael Yuste]

Dr. Rafael Yuste, a well-known researcher at the Howard Hughes Medical Institute (Columbia University), has been at the center of the mega-neuroscience effort to create dynamic maps that would characterize the dance of neuroelectric activity underlying perception and behavior in animals (with aspirations toward human brains). He participated in the Kavli Institute conference from which the plan sprang, and led the authorship of the two papers (one in Neuron and one in Science) that outlined the proposal for the public at large. In corners of the twittersphere/blogoverse the response to this plan has been rather tepid, with many authors worried about the appropriateness and practicality of the aims as well as the perceived antidemocratic nature of the planning. I contacted Yuste by email to ask him about these and other questions. His responses were generally terse and often referred to previous statements in the two papers, but they do provide some additional insight into the process and promise of this presumably multibillion-dollar project.

When asked if he followed the response to the brain activity map (BAM) proposal in social media and given an opportunity to clear up any misconceptions, he suggested that some neuroscientists have criticized the project because they believe it could reduce their own funding. Yuste argues instead that neuroscience funding is not a “zero-sum game,” and that neuroscientists should welcome the project as a catalyst for new revenue. He denied that providing for BAM would necessarily require siphoning money from other research and pointed out that the BAM planners “requested funds come from additional sources, perhaps by an act of Congress.”

As I’ve pointed out in a previous post, the optics of releasing the two BAM proposals behind paywalls (at least initially) seemed designed to raise the hackles of the many open access proponents among the tweeting hordes. Yuste pointed out that the Neuron paper had already been made available and that the BAM planners were working to open access to the Science paper as well. He insisted that their recommendations were for all BAM data to be open and publicly accessible online even before any publications.

Pointing to criticism of the project planning as insular, I asked Yuste how he would reassure the community that the funding process would be open and transparent. He rejected the premise of the question outright, insisting that the BAM proposal was developed over the course of 4 separate workshops involving a group of about 100 scientists, both men and women, who vigorously debated the merits of the proposal against hundreds of other ideas. He pointed out that the BAM plan cleared a final vetting round against a field of about 20 other proposals, with support from NIH, NSF, DARPA and White House representatives. He said it was “a clean shot without any agenda,” and that the team worked very hard to “generate a coherent proposal and integrate everyone’s input.” He contends the process was an excellent example of democracy in action, and an illustration of “grass-roots activism” at work influencing the political agenda.

Another thread in the recent criticism of BAM is that its goals are too vague and lacking in concrete hypotheses. As I have also argued, Yuste contends the project is not meant to be hypothesis-driven, but rather an effort to develop tools that can be used by anyone to test “whatever hypotheses they want” in whatever animals they choose. He also rejects the criticism of Partha Mitra (and others) that brain activity mapping should wait for a more complete understanding of underlying circuit connectivity. He believes that activity mapping and connectomics can proceed in parallel, with tremendous benefit to both endeavors.

All in all, Yuste seems mystified by much of the criticism of the BAM project, which he considers to be based on misconceptions and unfounded fears of scientists who think they will lose their own funding. In fact, in dismissing the opposition, he blithely asserts “It’s hard to argue with building better tools to acquire new knowledge.”