The Neural Signal and the Neural Noise

•November 22, 2014 • 8 Comments

The quantitatively inclined will no doubt recognize my reference to the recent book by Nate Silver about the potential and perils of prediction. While not exactly a reference for high-level predictive techniques in statistics, the book was a good introduction for the general reader from a bright guy who is best known for revealing the uselessness of political pundits during recent election cycles.

And accurate prediction is at the heart of the scientific method; it’s what sets that method apart from other ways of knowing about the world. From the movement of the stars to the constituents of atoms, the true test of any scientific hypothesis is not the elegance of its theory (though that is typically held in high regard as well) but its ability to make concrete (typically quantitative) and accurate predictions about events that have either not been observed or not yet happened at all.

But to paraphrase either Niels Bohr or Yogi Berra (or someone completely different), ‘prediction is difficult, especially about the future.’ No less so in neuroscience, with its famously squishy subject matter. Whether you stick an electrode into a neuron and measure its membrane potential or image the combined activity of billions of neurons (and glia, by the way) with an fMRI scanner, there is a lot of variability in the response that seems to persist no matter how meticulously you control the inputs to the system. The typical approach to deal with this problem is to do your experiments over and over again with the expectation that the “noise” in the system (whatever its source) will eventually average out. So, you present a stimulus to a single cell or a network or a whole brain, measure the result, and maybe on that experimental trial the response is a little stronger. You repeat the stimulus. On the next trial, despite keeping everything as precisely identical as you can, the response is a little weaker. Rinse and repeat.

After a while you’ll have enough examples of the response that you can average all these together and expect the ups and downs not associated with your stimulus to balance each other (overall). It’s exactly the same principle as taking the average of all the scores on a given test for all of the students in a class. You expect that the average will tell you something about the performance of the class as a whole independent of the individual background and daily drama of the particular students within the class.
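If you like to see the arithmetic spelled out, here is a minimal sketch of that trial-averaging logic (entirely made-up numbers, not tied to any particular experiment): the same underlying response plus fresh noise on every trial, with the noise shrinking roughly as the square root of the number of trials once you average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" response to the stimulus: a smooth bump over time.
t = np.linspace(0, 1, 200)
true_response = np.exp(-((t - 0.5) ** 2) / 0.01)

# Each trial is the same underlying response plus independent noise.
n_trials = 100
trials = true_response + rng.normal(scale=1.0, size=(n_trials, len(t)))

# Averaging across trials shrinks the noise roughly by sqrt(n_trials),
# so the mean hugs the true response far more closely than any single trial.
trial_average = trials.mean(axis=0)

print("single-trial error: ", np.abs(trials[0] - true_response).mean())
print("average-of-100 error:", np.abs(trial_average - true_response).mean())
```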

This leads to one of the most important issues with separating the signal from the noise. The difference between the two is mostly dependent on what information you want to extract. It’s like being at a party and trying to watch something on TV. For you, all that chit-chat is noise, a distraction from what you are interested in, while for someone else at the party that damn TV is interfering with her efforts to follow a conversation. Given a set of data about student grades, a teacher may be interested in the variability that relates to teaching methods, while a demographer might be interested in differences associated with socio-economic status, and a policy-maker might be concerned with how differences in funding across schools are reflected in achievement. (Needless to say, any of these people would likely have at least some interest in the other sources of variability as well.)

Still, there are some examples of noise that are not just “shit that doesn’t interest me.” Some of it is “shit I just can’t get my head around.” Imagine a pair of dice, for example. At the macro, everyday, craps table level, they are pretty much unpredictable (random), meaning that all the variability in each throw is unexplained (really no signal there at all, unless you believe you have a “system”). Still you can imagine that if you had enough information about the mass, precise shape, and molecular composition of the dice (and table), and enough control over the throw, that at least in principle you could predict the outcome.

Nonetheless, at the micro (or rather nano, or atto) level, sometimes it’s not even possible in principle to make fully accurate predictions. Quantum theory argues that the very small bits that make up our universe don’t behave in that nice Newtonian billiard ball regime we are so used to. The counter-intuitiveness of that fundamental, intrinsic, elephants-all-the-way-down randomness famously led Einstein to protest that “God doesn’t play dice with the world.” In other words, he thought the indeterminacy of quantum physics reflected “shit I just can’t get my head around” rather than true randomness.

There is one other source of unpredictability: chaos. Chaotic behavior is a feature of some systems that, despite being essentially deterministic, are fundamentally unpredictable, except over very short time horizons. Without going too far into the details, the important point is that the unpredictability of chaotic systems comes not from intrinsic randomness, but from the fact that they can produce wildly erratic behavior from the most infinitesimal differences in starting points.
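For the curious, the classic toy demonstration of this is the logistic map (borrowed here purely for illustration; it is not a model of anything neural). Two trajectories started a hair’s breadth apart remain perfectly deterministic at every step and still end up wildly different:

```python
# Two trajectories of the logistic map x -> r*x*(1-x), a standard chaotic
# system, started with a difference of one part in ten billion.
r = 3.9
x, y = 0.2, 0.2 + 1e-10

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.2e}")

# The gap grows from 1e-10 to order 1 within a few dozen steps, even though
# every single step is computed exactly the same way for both trajectories.
```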

Coming back to neuroscience, it turns out that the sources of “noise” in the nervous system can be quite controversial (with important consequences for computational theories). As I said above, variability between trials using the same stimulus, between different neurons, and between different brains, subjects, or days of the week is all vexingly real in experimental neuroscience. Nonetheless, in many experiments it remains maddeningly unclear whether the variability comes from intrinsic randomness percolating up from the nano-scale fluctuations of individual molecules, from the vast number of unmeasured and uncontrolled variables in any network, or from more strictly defined chaotic dynamics. Kind of like elections. At least we don’t have to worry about the variability caused by the Koch brothers.

Camelot!

•November 30, 2013 • 2 Comments

[Image: the “Camelot” scene from Monty Python and the Holy Grail]

That derisive comeback, throwing cold water on what would otherwise be a profound discovery, is repeated on a constant basis in science. While they are mostly not miniature versions of Arthurian castles designed for cinematic effect, scientists use models all the time, and criticizing other scientists’ models is an essential part of the process.

Most obviously, in experimental biology disciplines like neuroscience, scientists use animal models. That means that the mouse or rat or monkey or what-have-you is meant to be a stand-in for the real system of interest (often human), and it is used to answer those pesky questions that researchers are incapable of investigating in the actual organism. By the way, this is not to dismiss the many neuroscientists and other biologists who are not primarily interested in human applications of their work, but who instead study animals to understand fundamental principles of nervous systems or ecosystems or evolution. That’s science too, but their animal models are more of an exemplar than a substitute. In any case, by way of example, there are several good reasons why neuroscientists do experiments with cute furry creatures. First of all, mice are cheap and easy to raise, and their nervous systems are comparable to ours in many important ways. The most important reason, however, is that they happen to be very easy to manipulate genetically. This makes them our nearest kindred vertebrate in which scientists can exert really precise molecular control over all aspects of development. This means, for example, that neuroscientists can breed mice in which a particular subset of neural types in a particular circuit express a protein that makes them glow for easy identification. Or, they can express another protein that makes the cells activate or inactivate when illuminated with a particular color of light. More speculatively, this genetic control might soon mean the ability to allow millions of neurons to record their own activity patterns or lay out a whole brain circuit diagram in a single experiment. Obviously, this kind of experimental power is extremely attractive for the experimentalist.

Scientists also use more ephemeral models. These can range from rule-of-thumb abstractions (conceptual models) and simple metaphors for unobservable causes and effects to explicit mathematical models with thousands of variables. Whether they admit it or not, all scientists use conceptual models to organize their thinking about observations and experiments, just as all people organize their perceptions around a more or less explicit understanding of how the universe works. Researchers who do mathematical/computational modeling (as I have) are required by the technique to specify all the moving parts in their models and to make explicit exactly how those parts interact. In that way, quantitative modeling disallows certain kinds of hand-waving (“the super-ego keeps the id in check”) while introducing others (“This symbol I wrote down means ‘supply’”).
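To make the “specify all the moving parts” point concrete, here is a minimal sketch of what such an explicit model looks like: a toy leaky integrate-and-fire neuron, the generic textbook example (not a model from any particular paper). Every constant and every update rule has to be written down; there is nowhere for the hand-waving to hide.

```python
# A toy leaky integrate-and-fire neuron: every 'moving part' (time constant,
# resistance, threshold, reset, input current) must be stated explicitly.
dt, T = 0.1, 200.0                                 # time step and duration (ms)
tau_m, R = 10.0, 1.0                               # membrane time constant (ms), resistance
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0    # resting, threshold, reset potentials (mV)
I = 20.0                                           # constant input drive (arbitrary units)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # Leak toward rest plus input drive, integrated with a simple Euler step.
    v += (-(v - v_rest) + R * I) * (dt / tau_m)
    if v >= v_thresh:                              # threshold crossing: spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {T:.0f} ms")
```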

With scientists using this full range of models, from unconscious conceptual schemata to differential equations to living animals, it’s no wonder criticizing models makes up a large part of scientific discourse. So what makes a good model? Trick question. There is no such thing as a good model, at least not outside of the context of a particular scientific question. It all depends. Want to study executive function and decision-making? Cell cultures in a petri dish may not be the right choice. On the other hand, if you are studying the basic molecular functions that all living cells use, that petri dish might be just the thing. Of course this example makes the point trivially obvious, but scientists who use different model systems often speak at cross-purposes. The choice of model system always involves trade-offs, some of which are carefully thought out and some of which are simply a matter of intellectual disposition (to-MAY-to, to-MAH-to). For example, in studying processes underlying human diseases, some people like the precise control of a cell culture system and others like the immediacy of clinical drug trials. The central tension is between simple models, which are more tractable, and complex models, which soon become as difficult to understand as the systems they are trying to explicate.

Just as an aside, some animal rights activists have claimed that cell-cultures and computational models can replace many animal experiments. While I’m absolutely in favor of reducing the suffering we cause animals as part of biomedical research, as someone who has used all three of these techniques, I can say this about people who claim animal research can be supplanted by the other techniques: they know nothing about biology and less about computers. Unless they are ethical absolutists on this question, animal rights activists would be sensible to support techniques (like those proposed by the BRAIN Initiative) that would allow collection of huge amounts of data from individual animals in one experiment.

So you have to be content with the fact that there is no sweet spot. Reductionists will always fault systems scientists for not understanding the underlying mechanisms, and systems scientists will always disparage reductionists for not explaining the emergent phenomena. It’s important to realize that despite the bickering tone this discussion may sometimes take, it is essential to understanding how findings from one model apply (or don’t apply) to others.

Photo Credit: Classic scene from the classic movie Monty Python’s Holy Grail. From a pre-CGI era, when special effects meant building miniature replicas of big things and getting the camera up real close.

Update  12/01/2013: Fixed a couple of minor typos/redundancies.

 

Preregistration, a Boring Ass Word for a very Important Proposal

•July 25, 2013 • 23 Comments


In response to an open letter from more than 80 scientists in the Guardian last month, Sophie Scott (deputy director of the Institute of Cognitive Neuroscience at University College London) has a post in Times Higher Education. She argues that preregistration would threaten scientific progress by limiting exploratory and speculative research, imposing a sort of methodological tyranny (my word, not hers) that requires researchers to adhere to a strict hypothesis-testing format. She also claims that preregistration could lock researchers into publishing in a journal in which they have preregistered a study, even when surprising results might be a better fit for another venue. She closes by suggesting that preregistration could stifle the kind of genius exhibited by scientists like Newton, whose lesser known pursuits included forays into alchemy.

Newton! Galileo! Science just wants to be free! Fight the man, right?

Well, no. The mistake that Scott makes in this column is so common among people who oppose preregistration that it’s frankly getting a little tiresome. Preregistration would not suppress exploratory research. It would not press the jackboot against the throat of scientific serendipity. It doesn’t aim to root out speculation and force all investigations into the mold of radical empiricism. Why? Because preregistration is not exclusive.

Yes, I suppose that some people have talked about journals and funding agencies requiring preregistration in the same way that it is required for clinical trials, but there are several reasons to believe such proposals would not bring the death of creative research. First of all, nothing about these plans demands that you publish only results based on preregistered hypotheses. Spontaneous observations, chance findings, novel analyses all can reasonably appear in a preregistered study. Should I repeat that?  It doesn’t seem to be getting through to the critics. Again, preregistration doesn’t exclude unanticipated results, it simply requires that you label them as such. Also, despite the huge groundswell of public support [crickets], and the hegemonic domination of science publishing by preregistration advocates (really?), there will always be journals that will publish observational and speculative studies. If those studies are good, they will be cited.

Really, what’s mind-boggling about Scott’s defense of the status quo is the failure to even acknowledge what’s wrong with current statistical practices in the sciences. As pointed out skillfully in Slate by Andrew Gelman, researchers are able to instantaneously test huge numbers of hypotheses essentially unconsciously while ‘exploring’ their data, and yet these multiple comparisons remain uncorrected in the final analysis. Instead, the paper is written as if the researchers had the specific comparisons in mind from the beginning: p<0.05, BAM! Science Accomplished!
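If you want to see how cheaply those “discoveries” come, here is a quick simulation (illustrative numbers only): a hundred comparisons run on pure noise, where every null hypothesis is true by construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 100 'exploratory' comparisons on pure noise: two groups of 20 subjects,
# both drawn from the same distribution, so there is nothing to find.
n_tests, n_subjects = 100, 20
significant = 0
for _ in range(n_tests):
    a = rng.normal(size=n_subjects)
    b = rng.normal(size=n_subjects)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        significant += 1

# On average about 5 of the 100 comparisons clear p < 0.05 by chance alone.
# Report only those, and the paper reads like a string of confirmed hypotheses.
print(f"{significant} of {n_tests} null comparisons reached p < 0.05")
```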

Is this just the inevitable Kuhnian lurching of science, as Scott suggests, or is it a systemic problem for which we can create institutional incentives for better practices?  First of all, it’s completely ridiculous to turn the observation that scientific progress has historically been paroxysmal (and politically contentious) into an argument that we don’t need to worry about the way we do hypothesis testing. Preregistration is not about excluding exploratory or speculative work, it is about setting up an infrastructure to reward researchers for using practices that are more likely to generate results which are meaningful, reliable and reproducible. What could be wrong with that?

So maybe the problem is branding. Instead of “preregistration”, perhaps we should call it “honest science labeling.”

Photo Credit: From flickr user judy_breck under Creative Commons.

Update: Minor edit about 15 minutes after posting to clarify Sophie Scott’s affiliation.

What’s Computational about Computational Neuroscience?

•July 7, 2013 • 11 Comments

[Image: Mike Davey’s model of a Turing machine]

When non-scientists ask me what I studied for my PhD, if I want to put a complete halt to the conversation, I answer ‘computational neuroscience.’

To continue, for those readers who have not already clicked away, the usual response (eyes glazing over) is something like ‘wow,’ followed by a quick deflection to another topic. So, I’ve learned instead to start with ‘neuroscience,’ which turns off fewer people (though the number is still substantial). It seems there is something about the word ‘computational’ that just tips the conversation into incomprehensibility for many people.

So what is computational about computational neuroscience? The short answer is that it means thinking about how brains are computers. Notice that I didn’t say brains are ‘like’ computers.  Many people think that when scientists talk about brains as computers they are speaking in a kind of metaphor, but that’s not the case. Still, they are not saying that brains are literally ‘silicon microprocessor based generic math devices.’ They are saying that brains are literally ‘things that compute.’ And that’s not just a semantic dodge; it comes from Alan Turing, who originally defined a conceptually simple machine that could theoretically calculate pretty much anything. This led to the concept of computational equivalence, meaning that anything (mechanical, electronic or biological) that can do the elementary steps that a Turing Machine can do is qualitatively similar: they are all computers. One caveat is that Turing’s machine had an infinite memory and an infinite lifespan, so all physical computers are only approximations of the ideal. Actual computers, as opposed to theoretical ones, can be quite different from one another. A particular mathematical operation might be completed by one computer in a nanosecond while requiring another to chug along for a hundred million years, but they are both still computers in the Turing sense.
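For the concretely minded, here is a toy example in the spirit of Turing’s machine, just to show how little machinery ‘computing’ requires (an illustrative sketch, with a Python dictionary standing in for the infinite tape): a read/write head, a couple of states, and a transition table are enough to increment a binary number.

```python
def run_turing_machine(tape_str):
    tape = dict(enumerate(tape_str))      # cell index -> symbol
    head = len(tape_str) - 1              # start at the rightmost digit
    state = "carry"
    # (state, symbol read) -> (symbol to write, head move, next state)
    rules = {
        ("carry", "1"): ("0", -1, "carry"),   # 1 plus carry -> 0, keep carrying left
        ("carry", "0"): ("1", 0, "halt"),     # absorb the carry and stop
        ("carry", "_"): ("1", 0, "halt"),     # ran off the left edge: write a new digit
    }
    while state != "halt":
        symbol = tape.get(head, "_")          # blank cells read as "_"
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run_turing_machine("1011"))   # 11 in binary; prints 1100, i.e. 12
```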

In other words, brains are computers because they can do elemental mathematical/logical operations. Other systems can be described by equations too, but they are not considered computers because they do not generalize. For example, a falling object can be described by equations, and the path of the object can be considered a solution to those equations. But the falling object can only really solve one problem (perhaps with different initial conditions), while computers have abstract representations (neural codes, in the case of brains) that can be used to do general purpose problem-solving. The internal states of the computer can stand for different things at different times, providing an essentially infinite flexibility.

Saying that brains are computers is not the same thing as saying that brains are only computers (a confusion found often in criticisms of the BRAIN Initiative). Still, this computational view of brains makes some people very uncomfortable. For obvious reasons this discomfort is less acute when we are describing animal brains, but it becomes quite an issue when we start talking about human gray matter. It’s related to the discomfort that people have with the idea that mental states are in fact brain states, that the mind is actually the brain, the philosophical/theoretical position called materialism (in opposition to the idea that the ‘soul’ or ‘mind’ is somehow distinct from the brain, called dualism). People imagine that talking about brains as computers implies that we are all mindless automatons, subject to the sort of ‘does not compute’ rigidity that Captain Kirk always seemed to use to cause errant robots to self-destruct (puff of smoke out the ears and all). In truth, whatever you may think of the colossal literalness of your computer’s insistence on overwriting the file you intended to save, computational theory explores a much more flexible idea of computers. This model has proven delightfully fruitful for understanding all sorts of neural systems, even those that presume to wrestle with their own existence.

Photo Credit: Photo of a Turing Machine (without the infinite tape memory) created by Mike Davey, CC 3.0 license by Wikipedia user Rocky Acosta (Arttechlaw)

How the Scientific Sausage Gets Made: Preregistration Escrow for Basic Science?

•June 2, 2013 • 6 Comments

[Image: a safe]

Scientific research is at the same time highly collaborative and extremely competitive. The overall culture encourages cooperation and sharing of results (and data) with colleagues, but scientists are also extremely sensitive to the possibility of being scooped on a major finding by rivals working in the same area. Everyone wants to be Darwin or Edison, but no one wants to be Wallace or Tesla. So despite the basic collegial ethos, we are often wary of publicizing early hypotheses or experimental approaches for fear that some other enterprising grad student or post doc might be able to muster more extreme sleep deprivation and experimental luck to get a quicker publication of the same findings. That’s why there is always a bit of cat-and-mouse at scientific conferences, where scientists present early results to their most able rivals (and ardent critics).

So it’s not surprising that basic science researchers would be a little wary of proposals to pre-register their hypotheses and methods in a public database. Many regulators and journals require such registration for clinical trials (of drugs and medical devices) as a way of improving the quality of clinical research; preregistration helps prevent statistical (and publishing) practices that can bias results. But you could argue that basic research is by definition much more speculative, and therefore more vulnerable to misappropriation than clinical trials of drugs and devices that have most likely already been protected by patents. In fact, there is already a sort of preregistration in basic science because grant proposals to federal funding agencies contain the most preliminary hypotheses and cutting-edge experimental methods. While these proposals are confidential during review, they can apparently be requested through open government procedures (such as freedom-of-information requests) after they are funded (though still long before the submitter can establish a dominant head start on the research). For this reason, there has been some attempt to limit open government laws from being applied to scientific research (hat tip to @mrgunn).

How can we reconcile the need for quality control that comes with publicizing research aims in advance with some protection for scientific intellectual property (or even just the thrill of being first)? I have an idea. It’s such a simple idea that I’m sure many people must have thought of it before, and it may even be implemented somewhere in PubPeer or figshare or some other open/reproducible science platform. If it is, I’d love to hear about it, and I’ll be glad to publicize it. Here it is: Publishers, journals and granting agencies could require that project proposals (including hypotheses, methods, etc.) be preregistered in an escrow system that remains confidential until publication. During peer review, the database would be opened to reviewers and editors, and since entries would be date-stamped by the data escrow service, reviewers could verify the timeline of the research. Once published, the account could be opened up to public access for post-publication review.
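To make the escrow idea a bit more tangible, here is a rough sketch of how the date-stamping and verification could work (purely illustrative; this is not a description of any existing service or a finished design): the registry holds the full proposal in confidence along with a cryptographic hash and a timestamp, and at publication anyone can check that the revealed proposal matches what was registered, and when.

```python
import hashlib
from datetime import datetime, timezone

def register(proposal_text):
    """What the escrow service might store at submission time (kept confidential)."""
    return {
        "proposal": proposal_text,    # held under embargo until publication
        "sha256": hashlib.sha256(proposal_text.encode("utf-8")).hexdigest(),
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(published_text, record):
    """At publication, reviewers confirm the proposal wasn't quietly rewritten."""
    return hashlib.sha256(published_text.encode("utf-8")).hexdigest() == record["sha256"]

record = register("Hypotheses: ...; Methods: ...; Planned analyses: ...")
print("registered at:", record["registered_at"])
print("matches registered version:",
      verify("Hypotheses: ...; Methods: ...; Planned analyses: ...", record))
```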

As I say, this proposal is so simple that it has most likely been made, argued and abandoned at some point already (perhaps in this excellent blog by @ceptional about open science and the replication crisis). This proposal also fails to address other possible flaws in preregistration schemes, such as that they might suppress speculative research or reward data mining over data collecting (also, see a response here). It may be that some of these issues can be addressed within the scheme I describe, or it may not. I’d be interested to hear what people think either way.

UPDATE (7/26/2013): I should have added this sooner, but as I suspected the concept of escrow has been part of open science discussions, and as pointed out in the comment below, is part of the Open Science Framework.

Image credit: Hatena Fotolife user Tanemori (under Creative Commons Attribution 2.1 Japan).

Other Recent Topics:

My most recent review on the BRAIN Initiative.

My tutorials on neural coding.

Craigslist Killed the Newspaper, but Science Publishing Thrives (for All the Wrong Reasons)

•May 23, 2013 • 6 Comments

[Image: map of scientific impact measures, from Bollen et al. 2009]

The wringers of hands in the scientific community have been busy lately fretting over the current state of affairs in science publishing. Since I’m not really a science historian, I can’t speak to the novelty of these concerns, whether they represent some kind of unprecedented crisis of confidence or simply navel-gazing declinism. But there is ample reason to believe that scientific communication is encountering some of the same structural shifts that have upended the publishing business in general, and print journalism in particular. We’ve all seen newspapers around the country close under pressure from novel forms of media, but those not among the twitterati-blogosophers might be surprised to hear that many scientists now consider the main avenues of science communication hopelessly broken.

Here’s why: Scientific publishing is still largely modeled on the assumptions and economics of the dead-tree publishing era. In those glory days, publishers provided editing, typesetting, printing, marketing and distribution services that were otherwise impractical for scientists to obtain on their own. These days, not so much. While most journals do continue to produce a few paper copies, the associated costs have dropped dramatically (of course, there are now other costs, like hosting websites and archiving materials). You would think that competitive forces would then drive publishers to lower their prices, but you would be wrong. The prices that publishers charge (mainly to academic libraries) for their work have instead increased, along with the profits of those publishers. Of course, moralizing to for-profit companies about making a profit is pointless, so what are the factors that contribute to this lack of market-driven pricing?

One possibility is that the market isn’t fully competitive. In fact, as with publishing in general, the field has become dominated by a few very large publishers, and institutional libraries claim they lack the clout to negotiate against these oligopolies. Another contributing factor is a kind of prestige science culture that has evolved in universities. Scientific journals are rated by what is called an impact factor: the average number of citations each paper in a journal receives over a two-year period. Since scientific papers follow a tradition of citing antecedent work, the number of citations a paper receives is a reasonable measure of scientific influence (though influence is certainly no assurance of quality). Most journals have impact factors less than about 3 or so, but a few journals have very high values. For basic science, the biggies are the multidisciplinary journals Nature, with an impact factor of 36.2, and Science, with 31.2. Publication in either of these journals, or another called Cell, is often considered a must-have for a scientist’s CV. Without at least one of these glamour pubs, a researcher’s career can stall out at one of the less stable, lower-paid rungs of the scientific career ladder. So scientists need to publish in the big journals, and university libraries at research-oriented institutions are likewise essentially required to carry subscriptions to those big journals in order to give students and faculty access to the latest and greatest.
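In case the arithmetic helps, that two-year impact factor boils down to a single division (the numbers below are invented purely for illustration):

```python
# Rough sketch of the two-year impact factor: citations received this year to
# papers published in the previous two years, divided by the number of papers
# published in those two years. All figures here are made up.
citations_this_year_to = {2011: 1800, 2012: 1450}   # citations received in 2013
papers_published = {2011: 820, 2012: 790}           # papers published those years

impact_factor = sum(citations_this_year_to.values()) / sum(papers_published.values())
print(f"impact factor: {impact_factor:.1f}")        # (1800 + 1450) / (820 + 790) ~ 2.0
```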

All this would be somewhat less galling if publishers were still providing a great deal of added value to the scientific process, but as mentioned above, most of the publishing, typesetting and marketing services they provided in days past are now nearly universally available at very low cost. As always, the vast majority of the work of science publishing is actually provided to publishers for free by the scientists themselves, the volunteer editors and peer reviewers who contribute the essential intellectual muscle to the process. To review the accusations against the industry: scientific publishers rely largely on volunteer labor to produce journals based on outdated communication models, for which they charge increasing prices to the institutions that provide that labor (universities) in order to generate high profit margins for themselves. Furthermore, despite the fact that much of the funding that pays for these journals ultimately comes from taxpayers and public-interest foundations, the publishers continue to charge high fees for electronic access to even the oldest articles in their archives.

The other thing that is keeping worriers worrying about the current state of the scientific process is a quality control issue. Some highly publicized examples of scientific fraud, in addition to a more (not-so-)benign neglect of statistical best practices, have led scientists in some quarters to warn of a replication crisis, suggesting that most scientific findings might be just plain wrong. Aside from peer review, which is largely incapable of detecting deliberate fraud, replication of previous results in different labs is an essential element of maintaining the integrity of research. However, since replication studies aren’t sexy they tend not to be pursued or published, a problem that seems to be exacerbated by the relentless pursuit of the precious impact factor.

Taking these critiques of science publishing and science process at face value, what are the possible solutions? In general, the proposals entail using modern communication technologies and social networks to crowd-source quality control in science while democratizing access to the results. For example, open access journals have become quite popular recently. In this model, authors pay a fee to cover the publishing overhead costs, but the articles are then free for anyone to download. Several of these journals have also sought to bring transparency to the peer review process by opening it up to more democratic participation. Ultimately, the basic publishing overhead is still being borne by the grants that fund the research, but the scientists themselves can take comfort at least in the fact that publishers aren’t wringing profit from their labor while restricting access to their work in perpetuity. Other efforts at creating a Science 2.0 infrastructure have focused on bringing social media elements to the process of reviewing results after they have been published. PubPeer, for example, provides a threaded commenting system for published scientific papers. While it’s fair to say that the site has yet to develop the sort of robust exchange of informed opinion we would all hope for, commenters on the site recently identified some sketchy image duplication in a hastily published article in Cell. A more radical approach to fixing what ails scientific publishing has been to avoid established dissemination routes altogether, opting instead for self-publication on data-sharing sites like figshare or on personal blogs. Needless to say, considering how entrenched the current prestige system is in academic career advancement, early career scientists are reasonably wary of using this approach exclusively.

Another element of the current science reform movement stems from harsh criticisms that have been leveled against biomedical clinical research. Whether from unintentional bias or more deliberate suppression of unfavorable results, the efficacy of the drugs and medical devices we entrust our lives to has been dangerously inflated in the scientific literature. For this reason, granting agencies and journal publishers began a few years ago to require advance registration of clinical studies, meaning that researchers have to publicly declare (on the web) their intended research aims, hypotheses, methods and outcome measures before beginning a study. (There is also a campaign to make these requirements retroactive.) This transparency allows peer reviewers to look back at the original design of the study and identify statistical shenanigans that scientists can otherwise use to make a silk purse out of the sow’s ear of negative results. It also prevents research sponsors, like pharmaceutical companies, from disappearing unfavorable studies (yes, they throw the studies out of helicopters over the jungle) to improve the apparent efficacy of their products.

This kind of preregistration has not really caught on in basic science research. Hypotheses and methods are not routinely publicized before studies begin, so the studies are susceptible to the same sorts of biases (and outright fraud) that have been identified in clinical research. Still, Science 2.0 transparency advocates (such as @Neuro_Skeptic) suggest that what’s good for the goose (profiteering drug companies) may also be good for the gander (altruistic humanitarian scientists; sarcastic emphasis added), though some scientists are still hesitant to make that strong commitment to transparency.

Image Credit: This image comes from A Principal Component Analysis of 39 Scientific Impact Measures (Bollen et al. 2009) in the open access journal PLoS ONE. Unlike images from proprietary journal articles (even my own), I’m allowed to use this image (without recourse to fair use claims) because the journal publishes under a Creative Commons license.

Update on the Great BRAINI Debates

•May 12, 2013 • 7 Comments

[Image: raster plot]

The NSF, DARPA and the NIH sponsored a meeting last week that brought together scientists to brainstorm ideas for the new BRAIN Initiative (a proposal I’ve explored many times, starting here, and most recently, here). Organizers seem to have begun to reassure critics that they are developing more inclusive planning procedures and that the funding mechanisms will not siphon off resources from other projects. They still can’t seem to figure out how to get Science to publish their white papers outside the paywall, and there has also been criticism that they are not doing enough to include women scientists in the process. As I’ve mentioned before, I still have my qualms about selling the project to the public based on promises to alleviate clinical disorders that are least likely to be addressed by the proposed methods (as do others).

Still, the silliest critique of the goals of the BRAIN Initiative is that we (meaning systems neuroscientists) wouldn’t know what to do with the data from thousands (or millions) of neurons if we had it. I can assure you that we would, but before I explore that, let’s look at the different facets of this argument. One strain of critique contends that because systems neuroscientists don’t agree on the goal, then none exists. This is like saying there is no coherent point to sequencing the genome because cell biologists, evolutionary biologists, translational researchers and clinicians can’t agree on a set of specific aims. I’m willing to bet that the scientists making this argument would be faced with the same heterogeneity in their own disciplines if they were brainstorming a similarly transformative infrastructure project.

Another strain of this argument is that neuroscientists don’t know enough about the basic components of their field to study the emergent properties of large interacting populations. The argument often has the form “How can you study Y when you don’t even know how X works?” where Y is some presumed higher order function (like color perception) and X is some supposed component subsystem (like retinal neurons). In some ways this is really just an element of personal scientific disposition. Some people like to study systems, some like reductionist approaches, some like top-down, some like bottom-up, PO-TAY-TO, PO-TAH-TO. Atomists argue that you can’t possibly understand systems without exhaustive explication of components, while systems people think reductionists fail to see the forest for the trees. My suspicion is that people who make the reductionist argument about the BRAIN Initiative really just don’t believe in systems neuroscience as a productive discipline at all. I’m certainly not going to convince those people in a blog entry. Nonetheless, we shouldn’t forget that all science involves judgments about the right level of analysis, the right amount of abstraction, the right topic, the right experimental model, the right modeling equations or the right techniques. We like to argue that these decisions are empirically founded, but mostly we don’t have enough information to make those claims, so we often default to personal preference. Am I arguing that we should throw up our hands and give scientists money to do whatever the hell they want? No. The proof is still in the pudding. Does the approach/model produce concrete predictions and observable tests of those predictions? That is not a question we can answer simply by saying “but you don’t even know…” Returning to the genome example, we did manage to wring some useful insights from sequencing despite the fact that we still don’t have a general solution to how genes relate to protein form/function.

A related argument contends that neuroscience is too atheoretical to formulate relevant questions on which to spend the kind of money that BRAINI proposes. Again, this argument rests on somewhat idiosyncratic ideas about what a good theory is (as I’m sure philosophers of science can attest). What one scientist sees as a foundational framework, another sees as fuzzy-headed hand waving. Judging the appropriateness of a particular theory is even more fraught than picking an experimental model. Good theories provide a unifying framework to understand disparate data, but just how unifying can we expect neuroscience theories to be? What these critics seem to be asking for is some grand unified theory of human cognition, consciousness and intelligence. That’s a rather high bar. In fact, there are many fruitful neuroscience theories out there in particular animals, systems and circuits: theories of locomotion, vision, navigation, memory, olfaction, learning, rhythm generation, homeostatic regulation, etc. Different neural systems evolved under different constraints and selection pressures, so we would expect a certain level of heterogeneity in the details. Critics again seem to be conflating the lack of a single theory with the lack of any theory.

One critic of the BRAIN Initiative who seems to find systems neuroscientists particularly lacking in creativity or insight is DrugMonkey (and @drugmonkeyblog), who argues that BRAINI proponents are simply trying to recreate some previous era of generous funding for “neuron recording neuroscience.” S/He suggests that the proposals amount to nothing more than an effort to “record some more neurons.” If s/he truly finds our entire field so intellectually sterile, I’m certainly not going to change his/her mind. But I would like to argue that there is a transformative, qualitative difference in the jump from recording tens of cells to recording thousands of cells. This is because you begin to encompass functionally important networks with nearly complete sampling.

For example, what would I do with recordings from thousands of neurons? My dissertation research involved understanding how groups of cells fire together to create the rhythm that drives normal breathing in mice (and presumably, other mammals), so let’s take that work as an example. The cell group that I studied (called the pre-Bötzinger complex) is part of a broader network of circuits that coordinate muscles involved in different phases of breathing under different conditions. These cell groups, located in the brainstem, are relatively hard for experimenters to access, so much of the basic science has been done in brain slice experiments, which necessarily disconnect networks from each other (and from the behavioral context of the living animal). Other researchers have used multicellular optical methods or multielectrode recordings in anesthetized animals, but for the most part, the interactions of different cell groups have been pieced together from separate recordings of single neurons. For our thought experiment let’s suppose that I had access to the proposed molecular ticker tape technology talked about for BRAINI. What kinds of questions could I answer?

The fact that respiratory neuroscience has not been able to generate much data on intact, awake, behaving animals means that the new technology would immediately provide physiologically relevant tests of theories from more ‘reduced’ experimental models. Where are the neurons that underlie breathing in the adult animal? How do the neurons fire in relation to breathing in or out? How do they fire in relation to different respiratory behaviors, like gasping or sighing or vocalization? How do the different underlying networks interact? Do some drive exhalation and others inhalation? Do different networks come online during exercise or asphyxia? How does the feedback from low blood oxygen or high carbon dioxide drive respiration? How are interactions between respiration and heart rate mediated?

The first month of experiments using BRAINI technology could address these questions in a depth that would replicate 100 years of research in respiratory neurophysiology. What would we do with the second month?  Development. Disease models. Pharmacology. It’s just the beginning.

And that’s just what I can think of in ten minutes. My systems neuroscience colleagues could easily come up with similar lists of questions in their particular subfields, and  the comparative rate of progress would be just as dramatic. Of course, I can’t guarantee that BRAINI technology would actually work, but I can assure you that systems neuroscientists are not at a loss to know what to do with the data if it does.

 

Update (minutes after posting). I originally assumed DrugMonkey was a ‘he’ purely from discursive style. I actually don’t know one way or another, so I changed the pronoun references.

Image: Visualization of multicellular activity from a simulation of the respiratory network.

 