Conspiracy Theories and Incompetence Theories

•January 16, 2015 • 4 Comments

Oliver North (Wikicommons)

 

It’s never wise to click through to the crap that people link to on FB, but in a weak moment, I succumbed to curiosity about a headline that said something about getting rid of your microwave oven. I guess I assumed it would be about artisanal slow-food techniques, but instead I was treated to a ridiculous diatribe about how microwave ovens have been killing us by destroying nutrients and GIVING US ALL CANCER. I should have looked away. Should have let it go. But that looming “Someone is wrong on the Internet” rabbit hole was too strong a pull for this Alice. Instead of just getting a drink and tweeting about the decline of critical thinking skills in America, I thought I would explain a little about why we scientists are such wet blankets all the time.

In fact, these kinds of posts continue to surprise me. They require such a foreign view of how science actually works and of what the incentives in society really are. Let’s think this through: first, these are truly extraordinary claims, so they reasonably require extraordinary evidence.

Yet there is no extraordinary evidence presented on the linked page (and I won’t give them the clicks by posting it here). There is fear-mongering, there are unreviewed studies, there are no actual primary sources, and there are vague claims about a Russian ban on microwave ovens (which apparently never happened: http://skeptoid.com/episodes/4080).

All that aside, believing these claims requires views of scientists and scientific institutions that are complete fantasy. Yes, large corporations can and do fund research in some areas (drug discovery, for example), but for most fields there is no monolithic scientific-industrial complex. Imagine you are a public health researcher or epidemiologist and you discover that a common household appliance can be linked to poor health outcomes (especially the dreaded cancer). It’s not a hard study to do, requiring millions in funding from GE, so you are not likely to be corrupted by some all-too-cozy consulting relationship with a funder. Instead, you have every incentive to make sure you do your science right and publish in the highest profile journal you can. Now, unless you discover some hitherto unknown biochemical mechanism underlying the effect, you might not get a Nobel, but if your findings are real, and are extended and reproduced, you will be a science superstar. You get tenure, grants, adulation; you will be the Jonas Salk of the age. And that is in addition to the pure humanitarian joy of saving millions of people from cancer (or other diseases).

So imagine these health dangers are real, and there are thousands of scientists, mostly independent of corporate funding, coming from diverse backgrounds (some, like me, even trained at liberal hippy treehugger schools), all with huge incentives to find evidence of a massive public health threat. Where’s the evidence? Where are the replications (even in marginal journals wanting to make a name for themselves)? Why can’t the website cite any peer-reviewed studies in real journals? I can think of two possibilities:

Mass cognitive bias. Presumably there could be some sort of fundamental social bias against seeing these effects, like persisting in believing that the earth is flat or that the sun revolves around our planet. Having lived through the introduction of microwave ovens into home kitchens in the 70s, I can attest that if there was any cultural bias, it was toward fear (not safety). Deep in the Cold War, there was a conflation of microwaves with nuclear radiation in the public mind, and I remember the admonitions to always stand more than six feet away from the devices. There were exposés on ovens with leaky seals and on how the safety locks might fail. Not exactly the environment for ignoring the possibility of health harms from these devices.

Massive conspiracy: Again, assuming that the effect exists and that most scientists are highly motivated to find it, the other common scenario in these sorts of alternative health claims is some kind of mass conspiracy of misinformation or soft, co-opting corruption. The main trouble with these mass conspiracy theories is that they are so contrary to what we know about how individuals behave in large, complex institutions. Readers of a certain age will remember the Iran/Contra Affair, in which Oliver North and John Poindexter (working out of one of the most secretive bodies in government, the National Security Council) tried to broker a rapprochement with the Iranians while funneling money to the Contras, a rebel group that the US supported in Nicaragua. The details of the plan are interesting for many historical reasons in how they reflect US foreign policy, but the most telling part of the whole incident is the congressional hearings that followed, where super-spies North and Poindexter recounted a series of bumbling mistakes that eventually led to them being discovered. One precious example was that the committee was able to retrieve email that North thought he had deleted, because it was being dutifully backed up by the IT department. This is simply to say that large organizations have a hard time keeping secrets; they are much more likely to screw things up (what I call “incompetence theory”). And this was an organization with a presumably highly controlled, homogeneous workforce with largely convergent values. Does that describe the thousands of public health researchers in America or around the world?

Logistically, how would this suppression of findings even work? Would agents from GE’s microwave division, maybe Jack Donaghy, run around with bags of cash for scientists to keep them quiet? Maybe they get a cameo on 30 Rock. I digress. How much money would it take to convince someone to keep quiet about the biggest public health breakthrough of the last fifty years? How many scientists would say “No, keep your money, I’d rather cure cancer. Oh yes, and I’ll tell the New York Times what you are trying to do.”

The other logical flaw in these corporate conspiracy theories is that they assume corporate interests are themselves monolithic. Even assuming a vast network of microwave-industrial-complex agents spanning the globe, wouldn’t there be other corporate interests that would be motivated to expose the truth? Remember those giant insurance and health care companies that are supposedly controlling the world and pulling the strings of their own conspiratorial puppet empires? They could expose the truth about microwave ovens and save millions of dollars. Natural gas companies would benefit from a return to gas ranges. Competitors could design safer versions that would highlight the negative effects of the previous generation.

In other words, though it is certainly possible that microwave ovens may cause adverse health effects, the hucksters, quacks and naturopathic scam artists have not made even the most basic case. The fact that people continue to fall for their sales ploys suggests nearly delusional misunderstandings of not only basic science, but sociology and psychology as well.

 

Nine Things A Journalist Should Never Write in a Neuroscience Listicle:

•November 30, 2014 • 2 Comments

1) “Your Brain is Hard-wired for…”

Brains don’t have separate hardware and software; it’s all one squishy mess in which the “wiring” is constantly being changed by the activity itself.

2) “Your Brain Made You Do It.”

Your brain is you and you are your brain. Mind/brain dualism had a certain charm in Descartes’ time, but now it has all the respectability of alchemy.

3) “Left Brain People are from Venus.”

Yes, if you surgically cut the connections between one side of the human brain and the other and do some sophisticated experimental tricks, you can find subtle differences. Does that say anything about everyday personality or learning? No.

4) “The Singularity is Nigh!”

Ray Kurzweil may know about computers, but as far as I can tell he knows fuck all about brains. Projections of that Skynet moment when Moore’s Law pushes machines past the complexity of human brains are based on that age-old scientific method: pulling numbers out of your ass.

5) “Miraculous Human Brain Breakthrough Discovered in Petri Dish.”

Experiments in simple models provide important information about basic neurobiological mechanisms. Don’t oversell them.

6) “This is Your Lizard Brain on Bad Science.”

Like the “ascent of man” trope in evolutionary biology, the ice-cream-cone model of brain evo-devo is wrong-headed.

7) “Because Dopamine.”

Pro-tip: if you can swap in any of the Greek bodily humors (or a zodiac sign) for the neuromodulator in your prose, then you are just using it as a sciency placeholder for “unseen forces.”

8) “Brain Training…”

Yes, I know Lumosity is buying banner ads on your site, but that doesn’t mean it’s real.

9) “Neuro-leadership/marketing/management/paradigm-shifting/disruption/bullshit.”

If you see the word “neuro” tacked onto the front of anything that is not an actual science you are most likely stepping into the world of late night cable TV infomercials.

GIF-credits: The silly GIFs are from giphy.com, with credit to original sources: downheartedness.tumblr.com, electricspacekoolaid.tumblr.com, www.reddit.com, the-science-llama.tumblr.com, dextershowgifs.tumblr.com, www.gifbay.com.

The Neural Signal and the Neural Noise

•November 22, 2014 • 12 Comments

The quantitatively inclined will no doubt recognize my reference to the recent book by Nate Silver about the potential and perils of prediction. While not exactly a reference text for high-level predictive techniques in statistics, the book was a good introduction for the general reader from a bright guy who is best known for revealing the uselessness of political pundits during recent election cycles.

And accurate prediction is at the heart of the scientific method; it’s what sets that method apart from other ways of knowing about the world. From the movement of the stars to the constituents of atoms, the true test of any scientific hypothesis is not the elegance of its theory (though that is typically held in high regard as well) but its ability to make concrete (typically quantitative) and accurate predictions about events that have either not been observed or not yet happened at all.

But to paraphrase either Niels Bohr or Yogi Berra (or someone completely different), ‘prediction is difficult, especially about the future.’ No less so in neuroscience, with its famously squishy subject matter. Whether you stick an electrode into a neuron and measure its membrane potential or image the combined activity of billions of neurons (and glia, by the way) with an fMRI scanner, there is a lot of variability in the response that seems to persist no matter how meticulously you control the inputs to the system. The typical approach to deal with this problem is to do your experiments over and over again with the expectation that the “noise” in the system (whatever its source) will eventually average out. So, you present a stimulus to a single cell or a network or a whole brain, measure the result, and maybe on that experimental trial the response is a little stronger. You repeat the stimulus. On the next trial, despite keeping everything as precisely identical as you can, the response is a little weaker. Rinse and repeat.

After a while you’ll have enough examples of the response that you can average all these together and expect the ups and downs not associated with your stimulus to balance each other (overall). It’s exactly the same principle as taking the average of all the scores on a given test for all of the students in a class. You expect that the average will tell you something about the performance of the class as a whole independent of the individual background and daily drama of the particular students within the class.
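
As a minimal sketch of that trial-averaging logic (the response shape, noise level, and trial count below are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 200               # hypothetical number of repeated stimulus presentations
t = np.linspace(0, 1, 100)   # one second of "recording" per trial

# A made-up underlying response to the stimulus (the signal we want to recover).
signal = np.exp(-((t - 0.3) ** 2) / 0.01)

# Each trial is the same signal plus independent noise.
trials = signal + rng.normal(scale=0.5, size=(n_trials, t.size))

# Averaging across trials lets the trial-to-trial noise cancel while the signal remains.
average = trials.mean(axis=0)

print("mean error of a single trial:   ", np.abs(trials[0] - signal).mean())
print("mean error of the trial average:", np.abs(average - signal).mean())
```

The single-trial error here hovers around 0.4 while the 200-trial average lands near 0.03, roughly the 1/√N improvement you would expect from averaging out independent noise.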

This leads to one of the most important issues with separating the signal from the noise. The difference between the two is mostly dependent on what information you want to extract. It’s like being at a party and trying to watch something on TV. For you, all that chit-chat is noise, a distraction from what you are interested in, while for someone else at the party that damn TV is interfering with her efforts to follow a conversation. Given a set of data about student grades, a teacher may be interested in the variability that relates to teaching methods while a demographer might be interested in differences associated with socio-economic status and a policy-maker might be concerned with how differences in funding in different schools are reflected in achievement (Needless to say, any of these people would likely have at least some interest in the other sources of variability as well).

Still, there are some examples of noise that are not just “shit that doesn’t interest me.” Some of it is “shit I just can’t get my head around.” Imagine a pair of dice, for example. At the macro, everyday, craps table level, they are pretty much unpredictable (random), meaning that all the variability in each throw is unexplained (really no signal there at all, unless you believe you have a “system”). Still you can imagine that if you had enough information about the mass, precise shape, and molecular composition of the dice (and table), and enough control over the throw, that at least in principle you could predict the outcome.

Nonetheless, at the micro (or rather nano, or atto) level, sometimes it’s not even possible in principle to make fully accurate predictions. Quantum theory argues that the very small bits that make up our universe don’t behave in that nice Newtonian billiard-ball regime we are so used to. The counter-intuitiveness of that fundamental, intrinsic, elephants-all-the-way-down randomness famously led Einstein to protest that “God doesn’t play dice with the world.” In other words, he thought the indeterminacy of quantum physics reflected “shit I just can’t get my head around” rather than true randomness.

There is one other source of unpredictability: chaos. Chaotic behavior is a feature of some systems that, despite being essentially deterministic, are fundamentally unpredictable, except over very short time horizons. Without going too far into the details, the important point is that the unpredictability of chaotic systems comes not from intrinsic randomness, but from the fact that they can produce wildly erratic behavior from the most infinitesimal differences in starting points.
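
A minimal illustration of that sensitivity, using the textbook logistic map rather than anything neural (the parameter and starting values are arbitrary):

```python
# Logistic map: x_next = r * x * (1 - x). Fully deterministic, no randomness anywhere.
r = 3.9                       # a parameter value in the chaotic regime
x, y = 0.400000, 0.400001     # two trajectories starting a millionth apart

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")
```

Within a few dozen iterations the two trajectories bear no resemblance to each other, even though nothing random ever happened.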

Coming back to neuroscience, it turns out that the sources of “noise” in the nervous system can be quite controversial (with important consequences for computational theories). As I said above, variability between trials using the same stimulus, between different neurons, and between different brains, subjects, or days of the week is all vexingly real in experimental neuroscience. Nonetheless, in many experiments it remains maddeningly unclear whether the variability comes from intrinsic randomness percolating up from the nano-scale fluctuations of individual molecules, from the vast number of unmeasured and uncontrolled variables in any network, or from more strictly defined chaotic dynamics. Kind of like elections. At least we don’t have to worry about the variability caused by the Koch brothers.

Camelot!

•November 30, 2013 • 3 Comments

Camelot (“It’s only a model”), from Monty Python and the Holy Grail

That derisive comeback, throwing cold water on what would otherwise be a profound discovery, is repeated on a constant basis in science. While they are mostly not miniature versions of Arthurian castles designed for cinematic effect, scientists use models all the time, and criticizing other scientists’ models is an essential part of the process.

Most obviously, in experimental biology disciplines like neuroscience, scientists use animal models. That means that the mouse or rat or monkey or what-have-you is meant to be a stand-in for the real system of interest (often human), and it is used to answer those pesky questions that researchers are incapable of investigating in the actual organism. By the way, this is not to dismiss the many neuroscientists and other biologists who are not primarily interested in human applications of their work, but who instead study animals to understand fundamental principles of nervous systems or ecosystems or evolution. That’s science too, but their animal models are more of an exemplar than a substitute. In any case, by way of example, there are several good reasons why neuroscientists do experiments with cute furry creatures. First of all, mice are cheap and easy to raise, and their nervous systems are comparable to ours in many important ways. The most important reason, however, is that they happen to be very easy to manipulate genetically. This makes them our nearest kindred vertebrate in which scientists can exert really precise molecular control over all aspects of development. It means, for example, that neuroscientists can breed mice in which a particular subset of neural types in a particular circuit expresses a protein that makes them glow for easy identification. Or they can express another protein that makes the cells activate or inactivate when illuminated with a particular color of light. More speculatively, this genetic control might soon mean the ability to allow millions of neurons to record their own activity patterns or to lay out a whole-brain circuit diagram in a single experiment. Obviously, this kind of experimental power is extremely attractive for the experimentalist.

Scientists also use more ephemeral models. These range from rule-of-thumb abstractions (conceptual models) and simple metaphors for unobservable causes and effects to explicit mathematical models with thousands of variables. Whether they admit it or not, all scientists use conceptual models to organize their thinking about observations and experiments, just as all people organize their perceptions around a more or less explicit understanding of how the universe works. Researchers who build mathematical/computational models (as I have) are required by the technique to specify all the moving parts in their models and to make explicit exactly how those parts interact. In that way, quantitative modeling disallows certain kinds of hand-waving (“the super-ego keeps the id in check”) while introducing others (“This symbol I wrote down means ‘supply’”).
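
To make that requirement concrete, here is a toy quantitative model, strictly a sketch: an invented two-population (excitatory/inhibitory) firing-rate model, not any published one, in which every interaction has to be written down as an explicit term with an explicit number attached.

```python
import numpy as np

# Every "moving part" must be named and given a value; no hand-waving allowed.
w_ee, w_ei = 1.5, 1.0    # strength of E->E excitation and I->E inhibition
w_ie, w_ii = 1.2, 0.5    # strength of E->I excitation and I->I inhibition
drive = 1.0              # external input to the excitatory population
tau, dt = 10.0, 0.1      # time constant and integration step (ms)

def f(x):
    """Threshold-linear transfer function: firing rates cannot go negative."""
    return np.maximum(x, 0.0)

E, I = 0.1, 0.1
for _ in range(int(200 / dt)):                        # simulate 200 ms
    dE = (-E + f(w_ee * E - w_ei * I + drive)) / tau  # inhibition is exactly -w_ei * I
    dI = (-I + f(w_ie * E - w_ii * I)) / tau
    E, I = E + dt * dE, I + dt * dI

print(f"steady-state rates: E = {E:.3f}, I = {I:.3f}")
```

There is no room for “the inhibition keeps the excitation in check” in a model like this: the check is exactly the term -w_ei * I, no more and no less.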

With scientists using this whole range of models, from unconscious conceptual schemata to differential equations to living animals, it’s no wonder criticizing models makes up a large part of scientific discourse. So what makes a good model? Trick question. There is no such thing as a good model, at least not outside of the context of a particular scientific question. It all depends. Want to study executive function and decision-making? Cell cultures in a petri dish may not be the right choice. On the other hand, if you are studying the basic molecular functions that all living cells use, that petri dish might be just the thing. Of course this example makes the point trivially obvious, but scientists who use different model systems often speak at cross-purposes. The choice of model systems always involves trade-offs, some of which are carefully thought out and some of which are simply a matter of intellectual disposition (to-MAY-to, to-MAH-to). For example, in studying processes underlying human diseases, some people like the precise control of a cell culture system and others like the immediacy of clinical drug trials. The central tension is between simple models that can be more tractable and complex models that soon become as difficult to understand as the systems they are trying to explicate.

Just as an aside, some animal rights activists have claimed that cell-cultures and computational models can replace many animal experiments. While I’m absolutely in favor of reducing the suffering we cause animals as part of biomedical research, as someone who has used all three of these techniques, I can say this about people who claim animal research can be supplanted by the other techniques: they know nothing about biology and less about computers. Unless they are ethical absolutists on this question, animal rights activists would be sensible to support techniques (like those proposed by the BRAIN Initiative) that would allow collection of huge amounts of data from individual animals in one experiment.

So you have to be content with the fact that there is no sweet spot. Reductionists will always fault systems scientists for not understanding the underlying mechanisms, and systems scientists will always disparage reductionists for not explaining the emergent phenomena. It’s important to realize that despite the bickering tone this discussion may sometimes take, it is essential to understanding how findings from one model apply (or don’t apply) to others.

Photo Credit: Classic scene from the classic movie Monty Python’s Holy Grail. From a pre-CGI era, when special effects meant building miniature replicas of big things and getting the camera up real close.

Update  12/01/2013: Fixed a couple of minor typos/redundancies.

 

Preregistration, a Boring-Ass Word for a Very Important Proposal

•July 25, 2013 • 23 Comments

science

In response to an open letter from more than 80 scientists in the Guardian last month, Sophie Scott (deputy director of the Institute of Cognitive Neuroscience at University College London) has a post in the Times Higher Education. She argues that preregistration would threaten scientific progress by limiting exploratory and speculative research, imposing a sort of methodological tyranny (my word, not hers) that requires researchers to adhere to a strict hypothesis-testing format. She also claims that preregistration could lock researchers into publishing in a journal in which they have preregistered a study, even when surprising results may be a better fit for another venue. She closes by suggesting that preregistration could stifle the kind of genius exhibited by scientists like Newton, whose lesser known pursuits included forays into alchemy.

Newton! Galileo! Science just wants to be free! Fight the man, right?

Well, no. The mistake that Scott makes in this column is so common from people who oppose preregistration, that it’s frankly getting a little tiresome. Preregistration would not suppress exploratory research. It would not press the jackboot against the throat of scientific serendipity. It doesn’t aim to root out speculation and force all investigations into the mold of radical empiricism. Why? Because preregistration is not exclusive.

Yes, I suppose that some people have talked about journals and funding agencies requiring preregistration in the same way that it is required for clinical trials, but there are several reasons to believe such proposals would not bring the death of creative research. First of all, nothing about these plans demands that you publish only results based on preregistered hypotheses. Spontaneous observations, chance findings, novel analyses all can reasonably appear in a preregistered study. Should I repeat that?  It doesn’t seem to be getting through to the critics. Again, preregistration doesn’t exclude unanticipated results, it simply requires that you label them as such. Also, despite the huge groundswell of public support [crickets], and the hegemonic domination of science publishing by preregistration advocates (really?), there will always be journals that will publish observational and speculative studies. If those studies are good, they will be cited.

Really, what’s mind-boggling about Scott’s defense of the status quo is the failure to even acknowledge what’s wrong with current statistical practices in the sciences. As pointed out skillfully in Slate by Andrew Gelman, researchers are able to instantaneously test huge numbers of hypotheses essentially unconsciously while ‘exploring’ their data, and yet these multiple comparisons remain uncorrected in the final analysis. Instead, the paper is written as if the researchers had the specific comparisons in mind from the beginning: p<0.05, BAM! Science Accomplished!
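
A back-of-the-envelope simulation of Gelman’s point (assuming scipy is available; the number of comparisons and the sample sizes are arbitrary): even when there is no real effect anywhere, slicing the data enough different ways will reliably turn up something with p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_comparisons = 20   # hypothetical number of ways the data get sliced while "exploring"
n_per_group = 30

significant = 0
for _ in range(n_comparisons):
    # Two groups drawn from the SAME distribution: by construction, no true effect.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    _, p = stats.ttest_ind(a, b)
    significant += p < 0.05

print(f"{significant} of {n_comparisons} null comparisons reached p < 0.05")
print(f"chance of at least one: {1 - 0.95 ** n_comparisons:.0%}")   # about 64%
```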

Is this just the inevitable Kuhnian lurching of science, as Scott suggests, or is it a systemic problem for which we can create institutional incentives for better practices?  First of all, it’s completely ridiculous to turn the observation that scientific progress has historically been paroxysmal (and politically contentious) into an argument that we don’t need to worry about the way we do hypothesis testing. Preregistration is not about excluding exploratory or speculative work, it is about setting up an infrastructure to reward researchers for using practices that are more likely to generate results which are meaningful, reliable and reproducible. What could be wrong with that?

So maybe the problem is branding. Instead of “preregistration”, perhaps we should call it “honest science labeling.”

Photo Credit: From flickr user judy_breck under Creative Commons.

Update: Minor edit about 15 minutes after posting to clarify Sophie Scott’s affiliation.

What’s Computational about Computational Neuroscience?

•July 7, 2013 • 11 Comments

A Turing machine built by Mike Davey

When non-scientists ask me what I studied for my PhD, if I want to put a complete halt to the conversation, I answer ‘computational neuroscience.’

To continue, for those readers who have not already clicked away: the usual response (eyes glazing over) is something like ‘wow,’ followed by a quick deflection to another topic. So I’ve learned instead to start with ‘neuroscience,’ which turns off fewer people (though the number is still substantial). It seems there is something about the word ‘computational’ that just pushes the conversation into incomprehensibility for many people.

So what is computational about computational neuroscience? The short answer is that it means thinking about how brains are computers. Notice that I didn’t say brains are ‘like’ computers.  Many people think that when scientists talk about brains as computers they are speaking in a kind of metaphor, but that’s not the case. Still, they are not saying that brains are literally ‘silicon microprocessor based generic math devices.’ They are saying that brains are literally ‘things that compute.’ And that’s not just a semantic dodge; it comes from Alan Turing, who originally defined a conceptually simple machine that could theoretically calculate pretty much anything. This led to the concept of computational equivalence, meaning that anything (mechanical, electronic or biological) that can do the elementary steps that a Turing Machine can do is qualitatively similar: they are all computers. One caveat is that Turing’s machine had an infinite memory and an infinite lifespan, so all physical computers are only approximations of the ideal. Actual computers, as opposed to theoretical ones, can be quite different from one another. A particular mathematical operation might be completed by one computer in a nanosecond while requiring another to chug along for a hundred million years, but they are both still computers in the Turing sense.
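
For the curious, here is a toy sketch of such a machine (a binary incrementer, chosen arbitrarily): a finite table of rules for reading and writing symbols on a tape is all the machinery the definition requires.

```python
# A tiny Turing machine that adds one to a binary number written on its tape.
# Each rule: (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("scan", "0"): ("0", +1, "scan"),
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", " "): (" ", -1, "carry"),   # hit the right end: start carrying
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "halt"),
    ("carry", " "): ("1", 0, "halt"),
}

def run(tape_string):
    tape = dict(enumerate(tape_string))  # a finite stand-in for the infinite tape
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = rules[(state, tape.get(head, " "))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip()

print(run("1011"))   # 11 in binary -> "1100", which is 12
```

The point is not that brains shuffle symbols on a tape, but that anything capable of these elementary read/write/branch steps, whether built from silicon, gears or neurons, sits in the same computational class.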

In other words, brains are computers because they can do elemental mathematical/logical operations. Other systems can be described by equations too, but they are not considered computers because they do not generalize. For example, a falling object can be described by equations, and the path of the object can be considered a solution to those equations. But the falling object can only really solve one problem (perhaps with different initial conditions), while computers have abstract representations (neural codes) that can be used to do general-purpose problem solving. The internal states of the computer can stand for different things at different times, providing essentially infinite flexibility.

Saying that brains are computers is not the same thing as saying that brains are only computers (a confusion found often in criticisms of the BRAIN Initiative). Still, this computational view of brains makes some people very uncomfortable. For obvious reasons this discomfort is less acute when we are describing animal brains, but it becomes quite an issue when we start talking about human gray matter. It’s related to the discomfort that people have with the idea that mental states are in fact brain states, that the mind is actually the brain, the philosophical/theoretical position called materialism (in opposition to the idea that the ‘soul’ or ‘mind’ is somehow distinct from the brain, called dualism). People imagine that talking about brains as computers implies that we are all mindless automatons, subject to the sort of ‘does not compute’ rigidity that Captain Kirk always seemed to use to cause errant robots to self-destruct (puff of smoke out the ears and all). In truth, whatever you may think of the colossal literalness of your computer’s insistence on overwriting the file you intended to actually save, computational theory explores a much more flexible idea of computers. This model has proven delightfully fruitful for understanding all sorts of neural systems, even those that presume to wrestle with their own existence.

Photo Credit: Photo of a Turing Machine (without the infinite tape memory) created by Mike Davey, CC 3.0 license by Wikipedia user Rocky Acosta (Arttechlaw)

How the Scientific Sausage Gets Made: Preregistration Escrow for Basic Science?

•June 2, 2013 • 6 Comments

Scientific research is at the same time highly collaborative and extremely competitive. The overall culture encourages cooperation and sharing of results (and data) with colleagues, but scientists are also extremely sensitive to the possibility of being scooped on a major finding by rivals working in the same area. Everyone wants to be Darwin or Edison, but no one wants to be Wallace or Tesla. So despite the basic collegial ethos, we are often wary of publicizing early hypotheses or experimental approaches for fear that some other enterprising grad student or post doc might be able to muster more extreme sleep deprivation and experimental luck to get a quicker publication of the same findings. That’s why there is always a bit of cat-and-mouse at scientific conferences, where scientists present early results to their most able rivals (and ardent critics).

So it’s not surprising that basic science researchers would be a little wary of proposals to pre-register their hypotheses and methods in a public database. Many regulators and journals require such registration for clinical trials (of drugs and medical devices) as a way of improving the quality of clinical research; preregistration helps prevent statistical (and publishing) practices that can bias results. But you could argue that basic research is by definition much more speculative, and therefore more vulnerable to misappropriation than clinical trials of drugs and devices that have most likely already been protected by patents. In fact, there is already a sort of preregistration in basic science because grant proposals to federal funding agencies contain the most preliminary hypotheses and cutting-edge experimental methods. While these proposals are confidential during review, they can apparently be requested through open government procedures (such as freedom-of-information requests) after they are funded (though still long before the submitter can establish a dominant head start on the research). For this reason, there has been some attempt to limit open government laws from being applied to scientific research (hat tip to @mrgunn).

How can we reconcile the need for quality control that comes with publicizing research aims in advance with some protection for scientific intellectual property (or even just the thrill of being first)? I have an idea. It’s such a simple idea that I’m sure many people must have thought of it before, and it may even be implemented somewhere in PubPeer or figshare or some other open/reproducible science platform. If it is, I’d love to hear about it, and I’ll be glad to publicize it. Here it is: Publishers, journals and granting agencies could require that project proposals (including hypotheses, methods, etc.) be preregistered in an escrow system that remains confidential until publication. During peer review, the database would be opened to reviewers and editors, and since entries would be date-stamped by the data escrow service, reviewers could verify the timeline of the research. Once published, the account could be opened up to public access for post-publication review.
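
As a sketch of how lightweight the date-stamping side of this could be (the field names, record format and workflow here are entirely hypothetical, not any existing registry’s API):

```python
import hashlib
import json
from datetime import datetime, timezone

def register(proposal: dict) -> dict:
    """Create a date-stamped, tamper-evident record of a preregistered proposal."""
    payload = json.dumps(proposal, sort_keys=True)
    return {
        "registered_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "proposal": payload,   # held in confidential escrow until publication
    }

record = register({
    "hypotheses": "Population X drives inspiratory rhythm under hypoxia",
    "methods": "chronic multielectrode recording, n = 12 animals",
    "analysis": "pre-specified mixed-effects model",
})
print(record["registered_at"], record["sha256"][:16])
```

Only the hash and the timestamp would need to leave escrow before publication; that is enough for reviewers to verify later that the hypotheses and methods predate the data.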

As I say, this proposal is so simple that it has most likely been made, argued and abandoned at some point already (perhaps in this excellent blog by @ceptional about open science and the replication crisis). This proposal also fails to address other possible flaws in preregistration schemes, such as that they might suppress speculative research or reward data mining over data collecting (also, see a response here). It may be that some of these issues can be addressed within the scheme I describe, or it may not. I’d be interested to hear what people think either way.

UPDATE (7/26/2013): I should have added this sooner, but as I suspected the concept of escrow has been part of open science discussions, and as pointed out in the comment below, is part of the Open Science Framework.

Image credit: Hatena Fotolife user Tanemori (under Creative Commons Attribution 2.1 Japan).

Other Recent Topics:

My most recent review on the BRAIN Initiative.

My tutorials on neural coding.

Craigslist Killed the Newspaper, but Science Publishing Thrives (for All the Wrong Reasons)

•May 23, 2013 • 6 Comments

A principal component analysis of 39 scientific impact measures (Bollen et al. 2009)

The wringers of hands in the scientific community have been busy lately fretting over the current state of affairs in science publishing. Since I’m not really a science historian, I can’t speak to the novelty of these concerns, whether they represent some kind of unprecedented crisis of confidence or simply navel-gazing declinism. But there is ample reason to believe that scientific communication is encountering some of the same structural shifts that have upended the publishing business in general, and print journalism in particular. We’ve all seen newspapers around the country close under pressure from novel forms of media, but those not among the twitterati-blogosophers might be surprised to hear that many scientists now consider the main avenues of science communication hopelessly broken.

Here’s why: scientific publishing is still largely modeled on the assumptions and economics of the dead-tree publishing era. In those glory days, publishers provided editing, typesetting, printing, marketing and distribution services that were otherwise impractical for scientists to obtain on their own. These days, not so much. While most journals do continue to produce a few paper copies, the associated costs of producing those have dropped dramatically (of course, there are now other costs, like hosting websites and archiving materials). You would think that competitive forces would then drive publishers to lower their prices, but you would be wrong. The prices that publishers charge (mainly to academic libraries) for their work have instead increased, along with the profits of those publishers. Of course, moralizing to for-profit companies about making a profit is pointless, so what are the factors that contribute to this lack of market-driven pricing?

One possibility is that the market isn’t fully competitive. In fact, as with publishing in general, the field has become dominated by a few very large publishers, and institutional libraries claim they lack the clout to negotiate against these oligopolies. Another contributing factor is a kind of prestige science culture that has evolved in universities. Scientific journals are rated by what is called an impact factor: the average number of citations each paper in a journal receives (over a two-year period). Since scientific papers follow a tradition of citing antecedent work, the number of citations a paper receives is a reasonable measure of scientific influence (though influence is certainly no assurance of quality). Most journals have impact factors less than about 3 or so, but a few journals have very high values. For basic science, the biggies are the multidisciplinary journals Nature, with an impact factor of 36.2, and Science, with 31.2. Publication in either of these journals, or another called Cell, is often considered a must-have for a scientist’s CV. Without at least one of these glamour pubs, a researcher’s career can stall out at one of the less stable, lower-paid rungs of the scientific career ladder. So scientists need to publish in the big journals, and university libraries at research-oriented institutions are likewise essentially required to carry subscriptions to those big journals in order to give students and faculty access to the latest and greatest.
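
The impact factor itself is nothing exotic, just an average; here is the calculation sketched with made-up numbers for a hypothetical journal’s 2012 figure:

```python
# Impact factor for year Y: citations received in Y to items published in the two
# preceding years, divided by the number of citable items published in those years.
citations_in_2012_to = {"2010": 1200, "2011": 950}   # invented citation counts
citable_items = {"2010": 310, "2011": 290}           # invented article counts

impact_factor = sum(citations_in_2012_to.values()) / sum(citable_items.values())
print(f"2012 impact factor: {impact_factor:.1f}")    # (1200 + 950) / 600 ≈ 3.6
```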

All this would be somewhat less galling if publishers were still providing a great deal of added value to the scientific process, but as mentioned above, most of the publishing, typesetting and marketing services they provided in days past are now nearly universally available at very low cost. As always, the vast majority of the work of science publishing is actually provided to publishers for free by the scientists themselves, the volunteer editors and peer reviewers who contribute the essential intellectual muscle to the process. To review the accusations against the industry: scientific publishers rely largely on volunteer labor to produce journals based on outdated communication models, for which they charge increasing prices to the institutions that provide that labor (universities) in order to generate high profit margins for themselves. Furthermore, despite the fact that much of the funding that pays for these journals ultimately comes from taxpayers and public-interest foundations, the publishers continue to charge high fees for electronic access to even the oldest articles in their archives.

The other thing that is keeping worriers worrying about the current state of the scientific process is a quality control issue. Some highly publicized examples of scientific fraud, in addition to a more (not-so-)benign neglect of statistical best practices, have led scientists in some quarters to warn of a replication crisis, suggesting that most scientific findings might be just plain wrong. Aside from peer review, which is largely incapable of detecting deliberate fraud, replication of previous results in different labs is an essential element of maintaining the integrity of research. However, since replication studies aren’t sexy they tend not to be pursued or published, a problem that seems to be exacerbated by the relentless pursuit of the precious impact factor.

Taking these critiques of science publishing and science process at face value, what are the possible solutions? In general, the proposals entail using modern communication technologies and social networks to crowd-source quality control in science while democratizing access to the results. For example, open access journals have become quite popular recently. In this model, authors pay a fee to cover the publishing overhead costs, but the articles are then free for anyone to download. Several of these journals have also sought to bring transparency to the peer review process by opening it up to more democratic participation. Ultimately, the basic publishing overhead is still being borne by the grants that fund the research, but the scientists themselves can take comfort at least in the fact that publishers aren’t wringing profit from their labor while restricting access to their work in perpetuity. Other efforts at creating a Science 2.0 infrastructure have focused on bringing social media elements to the process of reviewing results after they have been published. PubPeer, for example, provides a threaded commenting system for published scientific papers. While it’s fair to say that the site has yet to develop the sort of robust exchange of informed opinion we would all hope for, commenters there recently identified some sketchy image duplication in a hastily published article in Cell. A more radical approach to fixing what ails scientific publishing has been to avoid established dissemination routes altogether, opting instead for self-publication on data-sharing sites like figshare or on personal blogs. Needless to say, considering how entrenched the current prestige system is in academic career advancement, early career scientists are reasonably wary of using this approach exclusively.

Another element of the current science reform movement stems from harsh criticisms that have been leveled against biomedical clinical research. Whether from unintentional bias or more deliberate suppression of unfavorable results, the efficacy of the drugs and medical devices we entrust our lives to has been dangerously inflated in the scientific literature. For this reason, granting agencies and journal publishers began a few years ago to require advance registration of clinical studies, meaning that researchers have to publicly declare (on the web) their intended research aims, hypotheses, methods and outcome measures before beginning a study (there is also a campaign to make these requirements retroactive). This transparency allows peer reviewers to look back at the original design of the study and identify statistical shenanigans that scientists could otherwise use to make a silk purse out of the sow’s ear of negative results. It also prevents research sponsors, like pharmaceutical companies, from disappearing unfavorable studies (yes, they throw the studies out of helicopters over the jungle) to improve the apparent efficacy of their products.

This kind of preregistration has not really caught on in basic science research. Hypotheses and methods are not routinely publicized before studies begin, so the studies are susceptible to the same sorts of biases (and outright fraud) that have been identified in clinical research. Still Science 2.0 transparency advocates (such as @Neuro_Skeptic) suggest that what’s good for the goose (profiteering drug companies) may also be good for the gander (altruistic humanitarian scientists; sarcastic emphasis added), though some scientists are still hesitant to make that strong commitment to transparency.

Image Credit: This image comes from A Principal Component Analysis of 39 Scientific Impact Measures (Bollen et al. 2009) in the open access journal PLoS ONE. Unlike images from proprietary journal articles (even my own), I’m allowed to use this image (without recourse to fair use claims) because the journal publishes under a Creative Commons license.

Update on the Great BRAINI Debates

•May 12, 2013 • 7 Comments

The NSF, DARPA and the NIH sponsored a meeting last week that brought together scientists to brainstorm ideas for the new BRAIN Initiative (a proposal I’ve explored many times, starting here, and most recently, here). Organizers seem to have begun to reassure critics that they are developing more inclusive planning procedures and that the funding mechanisms will not siphon off resources from other projects. They still can’t seem to figure out how to get Science to publish their white papers outside the paywall, and there has also been criticism that they are not doing enough to include women scientists in the process. As I’ve mentioned before, I still have my qualms about selling the project to the public based on promises to alleviate clinical disorders that are least likely to be addressed by the proposed methods (as do others).

Still, the silliest critique of the goals of the BRAIN Initiative is that we (meaning systems neuroscientists) wouldn’t know what to do with the data from thousands (or millions) of neurons if we had it. I can assure you that we would, but before I explore that, let’s look at the different facets of this argument. One strain of critique contends that because systems neuroscientists don’t agree on the goal, then none exists. This is like saying there is no coherent point to sequencing the genome because cell biologists, evolutionary biologists, translational researchers and clinicians can’t agree on a set of specific aims. I’m willing to bet that the scientists making this argument would be faced with the same heterogeneity in their own disciplines if they were brainstorming a similarly transformative infrastructure project.

Another strain of this argument is that neuroscientists don’t know enough about the basic components of their field to study the emergent properties of large interacting populations. The argument often has the form “How can you study Y when you don’t even know how X works?” where Y is some presumed higher-order function (like color perception) and X is some supposed component subsystem (like retinal neurons). In some ways this is really just an element of personal scientific disposition. Some people like to study systems, some like reductionist approaches, some like top-down, some like bottom-up, PO-TAY-TO, PO-TAH-TO. Atomists argue that you can’t possibly understand systems without exhaustive explication of components, while systems people think reductionists fail to see the forests for the trees. My suspicion is that people who make the reductionist argument about the BRAIN Initiative really just don’t believe in systems neuroscience as a productive discipline at all. I’m certainly not going to convince those people in a blog entry. Nonetheless, we shouldn’t forget that all science involves judgments about the right level of analysis, the right amount of abstraction, the right topic, the right experimental model, the right modeling equations or the right techniques. We like to argue that these decisions are empirically founded, but mostly we don’t have enough information to make those claims, so we often default to personal preference. Am I arguing that we should throw up our hands and give scientists money to do whatever the hell they want? No. The proof is still in the pudding. Does the approach/model produce concrete predictions and observable tests of those predictions? That is not a question we can answer simply by saying “but you don’t even know…” Returning to the genome example, we did manage to wring some useful insights from sequencing despite the fact that we still don’t have a general solution to how genes relate to protein form/function.

A related argument contends that neuroscience is too atheoretical to formulate relevant questions on which to spend the kind of money that BRAINI proposes. Again, this argument rests on somewhat idiosyncratic ideas about what a good theory is (as I’m sure philosophers of science can attest). What one scientist sees as a foundational framework, another sees as fuzzy-headed hand waving. Judging the appropriateness of a particular theory is even more fraught than picking an experimental model. Good theories provide a unifying framework to understand disparate data, but just how unifying can we expect neuroscience theories to be? What these critics seem to be asking for is some grand unified theory of human cognition, consciousness and intelligence. That’s a rather high bar. In fact, there are many fruitful neuroscience theories out there in particular animals, systems and circuits: theories of locomotion, vision, navigation, memory, olfaction, learning, rhythm generation, homeostatic regulation, etc. Different neural systems evolved under different constraints and selection pressures, so we would expect a certain level of heterogeneity in the details. Critics again seem to be conflating the lack of a single theory with the lack of any theory.

One critic of the BRAIN Initiative who seems to find systems neuroscientists particularly lacking in creativity or insight is DrugMonkey (@drugmonkeyblog), who argues that BRAINI proponents are simply trying to recreate some previous era of generous funding for “neuron recording neuroscience.” S/He suggests that the proposals amount to nothing more than an effort to “record some more neurons.” If s/he truly finds our entire field so intellectually sterile, I’m certainly not going to change his/her mind. But I would like to argue that there is a transformative, qualitative difference in the jump from recording tens of cells to recording thousands of cells. This is because you begin to encompass functionally important networks with nearly complete sampling.

For example, what would I do with recordings from thousands of neurons? My dissertation research involved understanding how groups of cells fire together to create the rhythm that drives normal breathing in mice (and presumably other mammals), so let’s take that work as an example. The cell group that I studied (called the pre-Bötzinger complex) is part of a broader network of circuits that coordinate the muscles involved in different phases of breathing under different conditions. These cell groups, located in the brainstem, are relatively hard for experimenters to access, so much of the basic science has been done in brain slice experiments, which necessarily disconnect networks from each other (and from the behavioral context of the living animal). Other researchers have used multicellular optical methods or multielectrode recordings in anesthetized animals, but for the most part, the interactions of different cell groups have been pieced together from separate recordings of single neurons. For our thought experiment, let’s suppose that I had access to the proposed molecular ticker tape technology talked about for BRAINI. What kinds of questions could I answer?

The fact that respiratory neuroscience has not been able to generate much data from intact, awake, behaving animals means that the new technology would immediately provide physiologically relevant tests of theories from more ‘reduced’ experimental models. Where are the neurons that underlie breathing in the adult animal? How do the neurons fire in relation to breathing in or out? How do they fire in relation to different respiratory behaviors, like gasping or sighing or vocalization? How do the different underlying networks interact? Do some drive exhalation and others inhalation? Do different networks come online during exercise or asphyxia? How does the feedback from low blood oxygen or high carbon dioxide drive respiration? How are interactions between respiration and heart rate mediated?

The first month of experiments using BRAINI technology could address these questions in a depth that would replicate 100 years of research in respiratory neurophysiology. What would we do with the second month?  Development. Disease models. Pharmacology. It’s just the beginning.

And that’s just what I can think of in ten minutes. My systems neuroscience colleagues could easily come up with similar lists of questions in their particular subfields, and  the comparative rate of progress would be just as dramatic. Of course, I can’t guarantee that BRAINI technology would actually work, but I can assure you that systems neuroscientists are not at a loss to know what to do with the data if it does.

 

Update (minutes after posting). I originally assumed DrugMonkey was a ‘he’ purely from discursive style. I actually don’t know one way or another, so I changed the pronoun references.

Image: Visualization of multicellular activity from a simulation of the respiratory network.

Timing is Everything: Cracking another Neural Code

•April 29, 2013 • 8 Comments

Rate codes are among the simplest ways to interpret the activity of neurons as they represent information in brains. Essentially, the stronger the stimulus, the more active the neuron, meaning that it fires more action potentials in a given period of time. But what does that mean exactly? Answering that question requires diving into some cellular biology.

Generally speaking, most cells can be thought of as batteries. Skipping the details, they are required to maintain a certain balance of charged particles across their membranes, and that creates a charge difference between the inside and the outside. This difference is measured as a voltage between the inside and outside of the cell. For most cells in the body, the voltage stays pretty much the same most of the time, but in neurons, the voltage (called the membrane potential) can change quite rapidly, which is ultimately what allows neurons to transmit information. Given an absence of input from their neighbors, many neurons would just sit idly with a membrane potential of about 65 millivolts, with the inside more negative than the outside (so, purely by convention, we write it as -65 mV), but that would be a non-starter for any nervous system worth its salt (inside joke). In fact, neurons typically receive many inputs from other neighboring (and even quite distant) neurons. Inputs that tend to spur the neuron to fire more action potentials are called excitatory, and they tend to push the membrane potential toward zero. Those that suppress firing are called inhibitory, because they tend to hold the membrane potential more negative.
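
A minimal leaky integrate-and-fire sketch captures this battery picture (the parameters are illustrative round numbers, not fit to any real neuron): excitatory input pushes the membrane potential up toward a threshold, and crossing that threshold produces a spike.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, T = 0.1, 200.0                                 # time step and duration (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0    # resting, threshold, reset (mV)
tau = 20.0                                         # membrane time constant (ms)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # Net excitatory drive (in mV); inhibition would simply make this negative.
    drive = 18.0 + rng.normal(scale=4.0)
    v += dt * (-(v - v_rest) + drive) / tau        # leaky integration toward rest + drive
    if v >= v_thresh:                              # threshold crossed: fire and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {T:.0f} ms")
```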

The action potential

Here’s where this little biology lesson detour connects with neural coding. Once the membrane potential of a neuron gets pushed a few millivolts toward zero from its normally negative value, the cell generates a stereotypical waveform, the action potential (or spike). To a first approximation, there is no information in the detailed shape of the action potential (since that is largely invariant); it is an all-or-nothing event that is transmitted essentially unchanged to downstream neurons. This fact is essential to understanding why systems neuroscientists often focus on spikes as the core currency of information processing in brains. While it is true that there are other ways of moving messages around in nervous systems (and other tissues), the ability to generate and transmit action potentials over relatively long distances with speed, reliability and precision has been exquisitely honed by natural selection.

In general terms, if the details of the shape of the action potential waveform can be set aside, and if the waveform propagates relatively unchanged to other neurons in the network, then we can ignore a lot of gory details and think of spikes simply as events described only by the time that they occur. In other words, if you know that one neuron connects with another, pretty much all the information transmitted from the first to the second must be embodied in a simple list of the times of spikes.

Abstracting the action potential to a list of spike times

So, like Lenny in the next room with the Geiger counter, if you are a neuron (or an eavesdropping neuroscientist) getting a string of pop…pop…pops from another neuron, you only have the timing of those inputs to divine what that neuron is trying to tell you. As I mentioned last time, one coding strategy is a rate code, in which the exact timing of the spikes is random, but firing rate contains all the relevant information. The problem with this strategy is that you have to listen to the sending neuron for a while before you can get a good sense of the value that is being coded. Let’s say for example that the neuron is firing at an average rate of 5 spikes per second, and you listen for 1/10 of a second. Just by chance you might hear a spike or you might not. If you did, you might be tempted to say the spike rate is 10 spikes/second, but if you didn’t hear a spike you might guess 0 spikes/second. What if you listen a little longer? Even listening for a full second, just by chance you could easily end up only getting 4 spikes or perhaps catching 6. The result is that if you really want a good estimate of the coded value, you have to commit to listening for a while and counting up all the spikes.

Counting spikes in listening windows of different lengths
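
Here is a quick simulation of that estimation problem (using Poisson spiking at an invented true rate of 5 spikes/second): short listening windows give wildly scattered rate estimates, and only long windows settle down near the real value.

```python
import numpy as np

rng = np.random.default_rng(3)
true_rate = 5.0                      # spikes per second (the value being coded)

for window in (0.1, 1.0, 10.0):      # seconds spent "listening"
    # With random (Poisson) spiking, the spike count in any window is itself random.
    counts = rng.poisson(true_rate * window, size=5)
    estimates = counts / window      # convert each count to a rate estimate
    print(f"listening {window:5.1f} s -> rate estimates: {estimates}")
```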

If you are an eavesdropping neuroscientist, you might be able to sit around all day counting spikes and trying to relate them to the experimental environment, but the nervous system as a whole doesn’t have that luxury. Fast animals are hungry and slow animals are dinner. A more efficient coding system, where the exact timing of each spike contains the relevant data, could be quite handy for animals preferring not to be meat. To see how that might work, we’ll look back at the rate code in a slightly different way. Instead of thinking about how many spikes happen in a given period of time, we can turn the problem on its head and think about how long we have to wait between each spike. For example, if the spike rate is 10 spikes/second, then on average we have to wait 1/10th of a second between action potentials. It’s only an average, because the exact times are random (like the pop…pop…pop of the Geiger counter), but what if you could enforce a stricter discipline on the radioactive ingot or the neuron? What if a 10 spikes/second rate were coded by a train of spikes separated precisely by 1/10th of a second? Now that the randomness has been squeezed out of the process, each spike codes the underlying value exactly. 1/30th of a second from the previous spike to this one? That’s 30 spikes/second. Next spike after 1/5th of a second: five spikes/second. This coding strategy is called a spike timing code, for obvious reasons.

A spike timing code
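
Decoding such a timing code is nearly a one-liner (the spike times below are invented to mirror the example in the text): each interspike interval immediately gives you the coded rate, with no long counting window needed.

```python
# Spike times (in seconds) from a hypothetical neuron using a strict timing code.
spike_times = [0.1000, 0.2000, 0.3000, 0.3333, 0.5333]

# Each interval between successive spikes decodes instantly: rate = 1 / interval.
for earlier, later in zip(spike_times, spike_times[1:]):
    interval = later - earlier
    print(f"interval {interval:.4f} s -> {1 / interval:4.0f} spikes/second")
```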

Such a system could be extremely efficient in terms of how quickly information could be conveyed, but it relies on the ability of neurons to generate (and interpret) precisely timed action potentials in a reliable way. Even if your only lab experience has been high school biology class, you are likely aware that living systems tend to be messy, random, unpredictable things. It turns out that some neural circuits in some animals exhibit very precise spike timing that could be evidence of such a code, but so far the case is weaker for timing codes in mammalian neocortical circuits that underlie the most adaptive and intelligent behaviors.