Conspiracy Theories and Incompetence Theories

•January 16, 2015 • 4 Comments

Oliver North (Wikicommons)

 

It’s never wise to click through to the crap that people link to on FB, but in a weak moment, I succumbed to curiosity about a headline that said something about getting rid of your microwave oven. I guess I assumed it would be about artisanal slow-food techniques or something, but instead I was treated to a ridiculous diatribe about how microwave ovens have been killing us by destroying nutrients and GIVING US ALL CANCER. I should have looked away. Should have let it go. But that looming “Someone is wrong on the Internet” rabbit hole was too strong a pull for this Alice. Instead of just getting a drink and tweeting about the decline of critical thinking skills in America, I thought I would explain a little about why we scientists are such wet blankets all the time.

In fact, these kinds of posts continue to surprise me. They require such a foreign view of how science actually works and of what the incentives are in society. Let’s think this through: first, these are truly extraordinary claims, so they reasonably require extraordinary evidence.

Yet no extraordinary evidence is presented on the linked page (and I won’t give them the clicks by posting it here). There is fear-mongering, there are unreviewed studies with no actual primary sources, and there are vague claims about a Russian ban on microwave ovens (which apparently never happened: http://skeptoid.com/episodes/4080).

All that aside, believing these claims requires views of scientists and scientific institutions that are complete fantasy. Yes, large corporations can and do fund research in some areas (drug discovery, for example), but for most fields there is no monolithic scientific-industrial complex. Imagine you are a public health researcher or epidemiologist and you discover that a common household appliance can be linked to poor health outcomes (especially the dreaded cancer). It’s not like it’s a hard study to do, requiring millions in funding from GE, so you are not likely to be corrupted by some oh-too-cozy consulting relationship with a funder. Instead, you have every incentive to make sure you do your science right and publish in the highest-profile journal you can. Now, unless you discover some hitherto unknown biochemical mechanism underlying the effect, you might not get a Nobel, but if your findings are real, and are extended and reproduced, you will be a science superstar. You get tenure, grants, adulation; you will be the Jonas Salk of the age. And that is in addition to the pure humanitarian joy of saving millions of people from cancer (or other diseases).

So imagine these health dangers are real, and there are thousands of scientists, mostly independent of corporate funding, coming from diverse backgrounds (even liberal hippy treehugger schools like the one I went to), all with huge incentives to find evidence of a massive public health threat. Where’s the evidence? Where are the replications (even in marginal journals wanting to make a name for themselves)? Why can’t the website cite any peer-reviewed studies in real journals? I can think of two possibilities:

Mass cognitive bias. Presumably there could be some sort of fundamental social bias against seeing these effects, like persisting in the belief that the earth is flat or that the sun revolves around our planet. Having lived through the introduction of microwave ovens into home kitchens in the 70s, I can attest that if there was any cultural bias, it was toward fear (not safety). Deep in the Cold War, there was a conflation of microwaves with nuclear radiation in the public mind, and I remember the admonitions to always stand more than six feet away from the devices. There were exposés on ovens with leaky seals and on how the safety locks might fail. Not exactly an environment for ignoring the possibility of health harms from these devices.

Massive conspiracy: Again, assuming that the effect exists and that most scientists are highly motivated to find it, the other common scenario in these sorts of alternative health claims is some kind of mass conspiracy of misinformation or soft, co-opting corruption. The main trouble with these mass conspiracy theories is that they are so contrary to what we know about how individuals behave in large, complex institutions. Older readers will remember the Iran/Contra Affair, in which Oliver North and John Poindexter (working out of the National Security Council, one of the most secretive bodies in government) tried to broker a rapprochement with the Iranians while funneling money to the Contras, a rebel group that the US supported in Nicaragua. The details of the plan are interesting for many historical reasons in how they reflect US foreign policy, but the most telling part of the whole incident is the congressional hearings that followed, where super-spies North and Poindexter recounted a series of bumbling mistakes that eventually led to their being discovered. One precious example was that the committee was able to retrieve email that North thought he had deleted, because it was being dutifully backed up by the IT department. This is simply to say that large organizations have a hard time keeping secrets; they are much more likely to screw things up (what I call “incompetence theory”). And this was an organization with a presumably highly controlled, homogeneous workforce with largely convergent values. Does that describe the thousands of public health researchers in America or around the world?

Logistically, how would this suppression of findings even work? Would agents from GE’s microwave division, maybe Jack Donaghy, run around with bags of cash for scientists to keep them quiet? Maybe they get a cameo on 30 Rock. I digress. How much money would it take to convince someone to keep quiet about the biggest public health breakthrough of the last fifty years? How many scientists would say “No, keep your money, I’d rather cure cancer. Oh yes, and I’ll tell the New York Times what you are trying to do.”

The other logical flaw in these corporate conspiracy theories is that they assume corporate interests are themselves monolithic. Even assuming a vast network of microwave-industrial-complex agents spanning the globe, wouldn’t there be other corporate interests that would be motivated to expose the truth? Remember those giant insurance and health care companies that are supposedly controlling the world and pulling the strings of their own conspiratorial puppet empires? They could expose the truth about microwave ovens and save millions of dollars. Natural gas companies would benefit from a return to gas ranges. Competitors could design safer versions that would highlight the negative effects of the previous generation.

In other words, though it is certainly possible that microwave ovens may cause adverse health effects, the hucksters, quacks and naturopathic scam artists have not made even the most basic case. The fact that people continue to fall for their sales ploys suggests nearly delusional misunderstandings of not only basic science, but sociology and psychology as well.

 

Nine Things A Journalist Should Never Write in a Neuroscience Listicle:

•November 30, 2014 • 2 Comments

1) “Your Brain is Hard-wired for…”

Brains don’t have hardware and software; it’s all one squishy mess where the “wiring” is constantly being changed by its own activity.

2) “Your Brain Made You Do It.”

Your brain is you and you are your brain. Mind/brain dualism had a certain charm in Descartes’ time, but now has all the respectability of alchemy.

3) “Left Brain People are from Venus.”

Yes, if you surgically cut the connections between one side of the human brain and the other and do some sophisticated experimental tricks, you can find subtle differences. Does that say anything about everyday personality or learning? No.

4) “The Singularity is Nigh!”

Ray Kurzweil may know about computers, but as far as I can tell he knows fuck all about brains. Projections of that Skynet moment when Moore’s Law pushes machines past the complexity of human brains are based on that age-old scientific method: pulling numbers out of your ass.

5) “Miraculous Human Brain Breakthrough Discovered in Petri Dish.”

Experiments in simple models provide important information about basic neurobiological mechanism. Don’t oversell them.

6) “This is Your Lizard Brain on Bad Science.”

Like the “ascent of man” trope in evolutionary biology, the ice cream cone model of brain evo-devo is wrong-headed.

7) “Because Dopamine.”

Pro-tip: if you can switch any of the Greek bodily humors (or a zodiac sign) for the neuromodulator in your prose then you are just using it as a sciency placeholder for “unseen forces.”

8) “Brain Training…”

Yes, I know Lumosity is buying banner ads on your site, but that doesn’t mean it’s real.

9) “Neuro-leadership/marketing/management/paradigm-shifting/disruption/bullshit.”

If you see the word “neuro” tacked onto the front of anything that is not an actual science you are most likely stepping into the world of late night cable TV infomercials.

GIF credits: The silly GIFs are from giphy.com, with credit to the original sources: downheartedness.tumblr.com, electricspacekoolaid.tumblr.com, www.reddit.com, the-science-llama.tumblr.com, dextershowgifs.tumblr.com, www.gifbay.com.

The Neural Signal and the Neural Noise

•November 22, 2014 • 12 Comments

The quantitatively inclined will no doubt recognize my reference to the recent book by Nate Silver about the potential and perils of prediction. While not exactly a reference for high-level predictive techniques in statistics, the book was a good introduction for the general reader from a bright guy who is best known for revealing the uselessness of political pundits during recent election cycles.

And accurate prediction is at the heart of the scientific method; it’s what sets that method apart from other ways of knowing about the world. From the movement of the stars to the constituents of atoms, the true test of any scientific hypothesis is not the elegance of its theory (though that is typically held in high regard as well) but its ability to make concrete (typically quantitative) and accurate predictions about events that have either not been observed or not yet happened at all.

But to paraphrase either Niels Bohr or Yogi Berra (or someone completely different), ‘prediction is difficult, especially about the future.’ No less so in neuroscience, with its famously squishy subject matter. Whether you stick an electrode into a neuron and measure its membrane potential or image the combined activity of billions of neurons (and glia, by the way) with an fMRI scanner, there is a lot of variability in the response that seems to persist no matter how meticulously you control the inputs to the system. The typical approach to deal with this problem is to do your experiments over and over again with the expectation that the “noise” in the system (whatever its source) will eventually average out. So, you present a stimulus to a single cell or a network or a whole brain, measure the result, and maybe on that experimental trial the response is a little stronger. You repeat the stimulus. On the next trial, despite keeping everything as precisely identical as you can, the response is a little weaker. Rinse and repeat.

After a while you’ll have enough examples of the response that you can average all these together and expect the ups and downs not associated with your stimulus to balance each other (overall). It’s exactly the same principle as taking the average of all the scores on a given test for all of the students in a class. You expect that the average will tell you something about the performance of the class as a whole independent of the individual background and daily drama of the particular students within the class.
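To put a number on how much averaging helps, here is a minimal Python sketch (a toy simulation with made-up values, not data from any real experiment): the same hypothetical stimulus-evoked response is buried in independent noise on each trial, and the error of the trial average shrinks roughly as one over the square root of the number of trials.

```python
# Toy illustration of trial averaging: simulate noisy responses to the same
# stimulus and compare a single trial to the average across many trials.
import numpy as np

rng = np.random.default_rng(0)

time = np.linspace(0, 1, 200)                         # 1 s of "recording"
true_response = np.exp(-((time - 0.3) ** 2) / 0.01)   # hypothetical stimulus-evoked bump

n_trials = 100
noise_sd = 1.0
# Each trial = the same underlying response + independent noise of unknown origin
trials = true_response + rng.normal(0, noise_sd, size=(n_trials, len(time)))

single_trial_error = np.abs(trials[0] - true_response).mean()
average_error = np.abs(trials.mean(axis=0) - true_response).mean()

print(f"mean error, single trial: {single_trial_error:.3f}")
print(f"mean error, average of {n_trials} trials: {average_error:.3f}")
# The averaged error shrinks roughly like noise_sd / sqrt(n_trials).
```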

This leads to one of the most important issues with separating the signal from the noise. The difference between the two is mostly dependent on what information you want to extract. It’s like being at a party and trying to watch something on TV. For you, all that chit-chat is noise, a distraction from what you are interested in, while for someone else at the party that damn TV is interfering with her efforts to follow a conversation. Given a set of data about student grades, a teacher may be interested in the variability that relates to teaching methods while a demographer might be interested in differences associated with socio-economic status and a policy-maker might be concerned with how differences in funding in different schools are reflected in achievement (Needless to say, any of these people would likely have at least some interest in the other sources of variability as well).

Still, there are some examples of noise that are not just “shit that doesn’t interest me.” Some of it is “shit I just can’t get my head around.” Imagine a pair of dice, for example. At the macro, everyday, craps table level, they are pretty much unpredictable (random), meaning that all the variability in each throw is unexplained (really no signal there at all, unless you believe you have a “system”). Still you can imagine that if you had enough information about the mass, precise shape, and molecular composition of the dice (and table), and enough control over the throw, that at least in principle you could predict the outcome.

Nonetheless, at the micro (or rather nano, or atto) level, sometimes it’s not even possible in principle to make fully accurate predictions. Quantum theory argues that the very small bits that make up our universe don’t behave in that nice Newtonian billiard-ball regime we are so used to. The counter-intuitiveness of that fundamental, intrinsic, elephants-all-the-way-down randomness famously led Einstein to protest that “God doesn’t play dice with the world.” In other words, he thought the indeterminacy of quantum physics reflected “shit I just can’t get my head around” rather than true randomness.

There is one other source of unpredictability: chaos. Chaotic behavior is a feature of some systems that, despite being essentially deterministic, are fundamentally unpredictable, except over very short time horizons. Without going too far into the details, the important point is that the unpredictability of chaotic systems comes not from intrinsic randomness, but from the fact that they can produce wildly erratic behavior from the most infinitesimal differences in starting points.
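A quick way to see this is the logistic map, a standard textbook example of chaos (not a neural model): two trajectories that start almost indistinguishably close are, after a few dozen perfectly deterministic steps, completely unrelated.

```python
# Sensitive dependence on initial conditions in the logistic map.
r = 4.0                    # parameter value where the logistic map is chaotic
x, y = 0.2, 0.2 + 1e-10    # two starting points differing by one part in ten billion

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |difference|={abs(x - y):.6f}")
# Despite a fully deterministic update rule, the two trajectories diverge
# completely within a few dozen steps.
```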

Coming back to neuroscience, it turns out that the sources of “noise” in the nervous system can be quite controversial (with important consequences for computational theories). As I said above, variability between trials using the same stimulus, between different neurons, between different brains, subjects, or days of the week is all vexingly real in experimental neuroscience. Nonetheless, in many experiments it remains maddeningly unclear whether the variability comes from intrinsic randomness percolating up from the nano-scale fluctuations of individual molecules, from the vast number of unmeasured and uncontrolled variables in any network, or from more strictly defined chaotic dynamics. Kind of like elections. At least we don’t have to worry about the variability caused by the Koch brothers.

Camelot!

•November 30, 2013 • 3 Comments

[Image: the Camelot model scene from Monty Python and the Holy Grail]

That derisive comeback, throwing cold water on what would otherwise be a profound discovery, is repeated on a constant basis in science. While they are mostly not miniature versions of Arthurian castles designed for cinematic effect, scientists use models all the time, and criticizing other scientists’ models is an essential part of the process.

Most obviously, in experimental biology disciplines like neuroscience, scientists use animal models. That means that the mouse or rat or monkey or what-have-you is meant to be a stand-in for the real system of interest (often human), and it is used to answer those pesky questions that researchers are incapable of investigating in the actual organism. By the way, this is not to dismiss the many neuroscientists and other biologists who are not primarily interested in human applications of their work, but who instead study animals to understand fundamental principles of nervous systems or ecosystems or evolution. That’s science too, but their animal models are more of an exemplar than a substitute. In any case, by way of example, there are several good reasons why neuroscientists do experiments with cute furry creatures. First of all, mice are cheap and easy to raise, and their nervous systems are comparable to ours in many important ways. The most important reason, however, is that they happen to be very easy to manipulate genetically. This makes them our nearest kindred vertebrate in which scientists can exert really precise molecular control over all aspects of development. This means, for example, that neuroscientists can breed mice in which a particular subset of neural types in a particular circuit expresses a protein that makes them glow for easy identification. Or, they can express another protein that makes the cells activate or inactivate when illuminated with a particular color of light. More speculatively, this genetic control might soon mean the ability to allow millions of neurons to record their own activity patterns or to lay out a whole-brain circuit diagram in a single experiment. Obviously, this kind of experimental power is extremely attractive for the experimentalist.

Scientists also use more ephemeral models. These can range from rule-of-thumb abstractions (conceptual models) and simple metaphors for unobservable causes and effects to explicit mathematical models with thousands of variables. Whether they admit it or not, all scientists use conceptual models to organize their thinking about observations and experiments, just as all people organize their perceptions around more or less explicit understandings of how the universe works. Researchers who build mathematical/computational models (as I have) are required by the technique to specify all the moving parts in their models and to make explicit exactly how those parts interact. In that way, quantitative modeling disallows certain kinds of hand-waving (“the super-ego keeps the id in check”) while introducing others (“This symbol I wrote down means ‘supply’”).
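For a flavor of what “specifying all the moving parts” means in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard toy model (the parameter values below are generic textbook-style numbers, not from any particular study): every assumption, from the membrane time constant to the spike threshold, has to be written down before the model will run at all.

```python
# A leaky integrate-and-fire neuron: every "moving part" (time constant, leak,
# threshold, reset, input current) is made explicit.
dt = 0.1          # ms, integration time step
tau = 20.0        # ms, membrane time constant
v_rest = -70.0    # mV, resting potential
v_thresh = -54.0  # mV, spike threshold
v_reset = -80.0   # mV, reset after a spike
r_m = 10.0        # MOhm, membrane resistance
i_inj = 1.8       # nA, constant injected current

v = v_rest
spike_times = []
for step in range(int(500 / dt)):          # simulate 500 ms
    dv = (-(v - v_rest) + r_m * i_inj) / tau
    v += dv * dt
    if v >= v_thresh:                      # threshold crossing counts as a "spike"
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 500 ms")
```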

With scientists using this range of models, from unconscious conceptual schemata to differential equations to living animals, it’s no wonder criticizing models makes up a large part of scientific discourse. So what makes a good model? Trick question. There is no such thing as a good model, at least not outside the context of a particular scientific question. It all depends. Want to study executive function and decision-making? Cell cultures in a petri dish may not be the right choice. On the other hand, if you are studying the basic molecular functions that all living cells use, that petri dish might be just the thing. Of course this example makes the point trivially obvious, but scientists who use different model systems often speak at cross-purposes. The choice of model system always involves trade-offs, some of which are carefully thought out and some of which are simply a matter of intellectual disposition (to-MAY-to, to-MAH-to). For example, in studying processes underlying human diseases, some people like the precise control of a cell culture system and others like the immediacy of clinical drug trials. The central tension is between simple models, which can be more tractable, and more complex models, which soon become as difficult to understand as the systems they are trying to explicate.

Just as an aside, some animal rights activists have claimed that cell-cultures and computational models can replace many animal experiments. While I’m absolutely in favor of reducing the suffering we cause animals as part of biomedical research, as someone who has used all three of these techniques, I can say this about people who claim animal research can be supplanted by the other techniques: they know nothing about biology and less about computers. Unless they are ethical absolutists on this question, animal rights activists would be sensible to support techniques (like those proposed by the BRAIN Initiative) that would allow collection of huge amounts of data from individual animals in one experiment.

So you have to be content with the fact that there is no sweet spot. Reductionists will always fault systems scientists for not understanding the underlying mechanisms, and systems scientists will always disparage reductionists for not explaining the emergent phenomena. It’s important to realize that despite the bickering tone this discussion may sometimes take, it is essential to understanding how findings from one model apply (or don’t apply) to others.

Photo Credit: Classic scene from the classic movie Monty Python and the Holy Grail. From a pre-CGI era, when special effects meant building miniature replicas of big things and getting the camera up real close.

Update  12/01/2013: Fixed a couple of minor typos/redundancies.

 

Preregistration, a Boring Ass Word for a very Important Proposal

•July 25, 2013 • 23 Comments


In response to an open letter from more than 80 scientists in the Guardian last month, Sophie Scott (deputy director of the Institute of Cognitive Neuroscience at University College London) has a post in Times Higher Education. She argues that preregistration would threaten scientific progress by limiting exploratory and speculative research, imposing a sort of methodological tyranny (my word, not hers) that requires researchers to adhere to a strict hypothesis-testing format. She also claims that preregistration could lock researchers into publishing in a journal in which they have preregistered a study, even when surprising results might be a better fit for another venue. She closes by suggesting that preregistration could stifle the kind of genius exhibited by scientists like Newton, whose lesser-known pursuits included forays into alchemy.

Newton! Galileo! Science just wants to be free! Fight the man, right?

Well, no. The mistake that Scott makes in this column is so common from people who oppose preregistration, that it’s frankly getting a little tiresome. Preregistration would not suppress exploratory research. It would not press the jackboot against the throat of scientific serendipity. It doesn’t aim to root out speculation and force all investigations into the mold of radical empiricism. Why? Because preregistration is not exclusive.

Yes, I suppose that some people have talked about journals and funding agencies requiring preregistration in the same way that it is required for clinical trials, but there are several reasons to believe such proposals would not bring the death of creative research. First of all, nothing about these plans demands that you publish only results based on preregistered hypotheses. Spontaneous observations, chance findings, novel analyses all can reasonably appear in a preregistered study. Should I repeat that?  It doesn’t seem to be getting through to the critics. Again, preregistration doesn’t exclude unanticipated results, it simply requires that you label them as such. Also, despite the huge groundswell of public support [crickets], and the hegemonic domination of science publishing by preregistration advocates (really?), there will always be journals that will publish observational and speculative studies. If those studies are good, they will be cited.

Really, what’s mind-boggling about Scott’s defense of the status quo is the failure to even acknowledge what’s wrong with current statistical practices in the sciences. As pointed out skillfully in Slate by Andrew Gelman, researchers are able to instantaneously test huge numbers of hypotheses essentially unconsciously while ‘exploring’ their data, and yet these multiple comparisons remain uncorrected in the final analysis. Instead, the paper is written as if the researchers had the specific comparisons in mind from the beginning: p<0.05, BAM! Science Accomplished!
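If the scale of the problem isn’t obvious, here is a toy simulation (entirely made up, not from Gelman’s piece): run a hundred comparisons on pure noise, report only the ones that clear p &lt; 0.05, and by chance alone roughly five of them will look significant.

```python
# Toy simulation of "exploring" data with no real effects: test many hypotheses
# on pure noise and count how many cross p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_comparisons = 100   # e.g., 100 different measures, subgroups, or time windows
n_subjects = 30

false_positives = 0
for _ in range(n_comparisons):
    group_a = rng.normal(0, 1, n_subjects)   # no true difference between groups
    group_b = rng.normal(0, 1, n_subjects)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_comparisons} null comparisons reached p < 0.05")
# A paper that reports only the "significant" comparisons can look convincing
# while describing nothing but noise.
```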

Is this just the inevitable Kuhnian lurching of science, as Scott suggests, or is it a systemic problem for which we can create institutional incentives for better practices? First of all, it’s completely ridiculous to turn the observation that scientific progress has historically been paroxysmal (and politically contentious) into an argument that we don’t need to worry about the way we do hypothesis testing. Preregistration is not about excluding exploratory or speculative work; it is about setting up an infrastructure that rewards researchers for using practices that are more likely to generate results that are meaningful, reliable, and reproducible. What could be wrong with that?

So maybe the problem is branding. Instead of “preregistration”, perhaps we should call it “honest science labeling.”

Photo Credit: From flickr user judy_breck under Creative Commons.

Update: Minor edit about 15 minutes after posting to clarify Sophie Scott’s affiliation.

What’s Computational about Computational Neuroscience?

•July 7, 2013 • 11 Comments

[Image: a working model of a Turing machine built by Mike Davey]

When non-scientists ask me what I studied for my PhD, if I want to put a complete halt to the conversation, I answer ‘computational neuroscience.’

To continue, for those readers who have not already clicked away, the usual response (eyes glazing over) is something like ‘wow,’ followed by a quick deflection to another topic. So, I’ve learned instead to start with ‘neuroscience,’ which turns off fewer people (though the number is still substantial). It seems there is something about the word ‘computational’ that just pushes the conversation into incomprehensibility for many people.

So what is computational about computational neuroscience? The short answer is that it means thinking about how brains are computers. Notice that I didn’t say brains are ‘like’ computers.  Many people think that when scientists talk about brains as computers they are speaking in a kind of metaphor, but that’s not the case. Still, they are not saying that brains are literally ‘silicon microprocessor based generic math devices.’ They are saying that brains are literally ‘things that compute.’ And that’s not just a semantic dodge; it comes from Alan Turing, who originally defined a conceptually simple machine that could theoretically calculate pretty much anything. This led to the concept of computational equivalence, meaning that anything (mechanical, electronic or biological) that can do the elementary steps that a Turing Machine can do is qualitatively similar: they are all computers. One caveat is that Turing’s machine had an infinite memory and an infinite lifespan, so all physical computers are only approximations of the ideal. Actual computers, as opposed to theoretical ones, can be quite different from one another. A particular mathematical operation might be completed by one computer in a nanosecond while requiring another to chug along for a hundred million years, but they are both still computers in the Turing sense.
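For the curious, here is a minimal sketch of a Turing machine in Python (the program it runs is trivial and purely illustrative): a tape, a read/write head, a current state, and a transition table are all it takes, and anything that can emulate this machinery is, in Turing’s sense, a computer.

```python
# A minimal Turing machine simulator (finite tape in practice, as noted above).
# The example program flips every bit on the tape and then halts: a trivial
# computation, but the machinery (tape, head, state, transition table) is the
# whole definition of "computing" in Turing's sense.

def run_turing_machine(program, tape, state="start", blank="_", max_steps=10_000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = program[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# (current state, symbol read) -> (symbol to write, head move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "0110101"))  # -> 1001010
```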

In other words, brains are computers because they can do elemental mathematical/logical operations. Other systems can be described by equations too, but they are not considered computers because they do not generalize. For example, a falling object can be described by equations, and the path of the object can be considered a solution to those equations. But the falling object can only really solve one problem (perhaps with different initial conditions), while computers have abstract representations (neural codes, in the case of brains) that can be used to do general-purpose problem-solving. The internal states of the computer can stand for different things at different times, providing an essentially infinite flexibility.

Saying that brains are computers is not the same thing as saying that brains are only computers (a confusion found often in criticisms of the BRAIN Initiative). Still, this computational view of brains makes some people very uncomfortable. For obvious reasons this discomfort is less acute when we are describing animal brains, but it becomes quite an issue when we start talking about human gray matter. It’s related to the discomfort that people have with the idea that mental states are in fact brain states, that the mind is actually the brain, the philosophical/theoretical position called materialism (in opposition to the idea that the ‘soul’ or ‘mind’ is somehow distinct from the brain, called dualism). People imagine that talking about brains as computers implies that we are all mindless automatons, subject to the sort of ‘does not compute’ rigidity that Captain Kirk always seemed to use to cause errant robots to self-destruct (puff of smoke out the ears and all). In truth, whatever you may think of the colossal literalness of your computer’s insistence on overwriting the file you intended to actually save, computational theory explores a much more flexible idea of computers. This model has proven delightfully fruitful for understanding all sorts of neural systems, even those that presume to wrestle with their own existence.

Photo Credit: Photo of a Turing Machine (without the infinite tape memory) created by Mike Davey, CC 3.0 license by Wikipedia user Rocky Acosta (Arttechlaw)

How the Scientific Sausage Gets Made: Preregistration Escrow for Basic Science?

•June 2, 2013 • 6 Comments

Scientific research is at the same time highly collaborative and extremely competitive. The overall culture encourages cooperation and sharing of results (and data) with colleagues, but scientists are also extremely sensitive to the possibility of being scooped on a major finding by rivals working in the same area. Everyone wants to be Darwin or Edison, but no one wants to be Wallace or Tesla. So despite the basic collegial ethos, we are often wary of publicizing early hypotheses or experimental approaches for fear that some other enterprising grad student or post doc might be able to muster more extreme sleep deprivation and experimental luck to get a quicker publication of the same findings. That’s why there is always a bit of cat-and-mouse at scientific conferences, where scientists present early results to their most able rivals (and ardent critics).

So it’s not surprising that basic science researchers would be a little wary of proposals to pre-register their hypotheses and methods in a public database. Many regulators and journals require such registration for clinical trials (of drugs and medical devices) as a way of improving the quality of clinical research; preregistration helps prevent statistical (and publishing) practices that can bias results. But you could argue that basic research is by definition much more speculative, and therefore more vulnerable to misappropriation than clinical trials of drugs and devices that have most likely already been protected by patents. In fact, there is already a sort of preregistration in basic science because grant proposals to federal funding agencies contain the most preliminary hypotheses and cutting-edge experimental methods. While these proposals are confidential during review, they can apparently be requested through open government procedures (such as freedom-of-information requests) after they are funded (though still long before the submitter can establish a dominant head start on the research). For this reason, there has been some attempt to limit open government laws from being applied to scientific research (hat tip to @mrgunn).

How can we reconcile the need for quality control that comes with publicizing research aims in advance with some protection for scientific intellectual property (or even just the thrill of being first)? I have an idea. It’s such a simple idea that I’m sure many people must have thought of it before, and it may even be implemented somewhere in PubPeer or figshare or some other open/reproducible science platform. If it is, I’d love to hear about it, and I’ll be glad to publicize it. Here it is: Publishers, journals and granting agencies could require that project proposals (including hypotheses, methods, etc.) be preregistered in an escrow system that remains confidential until publication. During peer review, the database would be opened to reviewers and editors, and since entries would be date-stamped by the data escrow service, reviewers could verify the timeline of the research. Once published, the account could be opened up to public access for post-publication review.
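To make the mechanism concrete, here is a bare-bones sketch of the escrow idea in Python (purely illustrative; no actual service, field names, or APIs are implied): register only a date-stamped cryptographic digest of the preregistration document, then release the document itself at publication, when anyone can verify that it matches what was registered.

```python
# Bare-bones sketch of the escrow idea: store a cryptographic digest of a
# preregistration document plus a timestamp. The digest alone reveals nothing,
# but once the full document is released it proves what was registered and when.
# (Purely illustrative; a real escrow service would be run by a trusted third party.)
import hashlib
import json
from datetime import datetime, timezone

def register(document_text: str) -> dict:
    """Return an escrow record: digest + timestamp, but not the document itself."""
    digest = hashlib.sha256(document_text.encode("utf-8")).hexdigest()
    return {
        "sha256": digest,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(document_text: str, record: dict) -> bool:
    """After publication, anyone can check the released document against the record."""
    return hashlib.sha256(document_text.encode("utf-8")).hexdigest() == record["sha256"]

prereg = "Hypothesis: X increases Y. Methods: n=40, two-sample t-test, alpha=0.05."
record = register(prereg)
print(json.dumps(record, indent=2))
print("matches registered version:", verify(prereg, record))
```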

As I say, this proposal is so simple that it has most likely been made, argued and abandoned at some point already (perhaps in this excellent blog by @ceptional about open science and the replication crisis). This proposal also fails to address other possible flaws in preregistration schemes, such as that they might suppress speculative research or reward data mining over data collecting (also, see a response here). It may be that some of these issues can be addressed within the scheme I describe, or it may not. I’d be interested to hear what people think either way.

UPDATE (7/26/2013): I should have added this sooner, but as I suspected the concept of escrow has been part of open science discussions, and as pointed out in the comment below, is part of the Open Science Framework.

Image credit: Hatena Fotolife user Tanemori (under Creative Commons Attribution 2.1 Japan).

Other Recent Topics:

My most recent review on the BRAIN Initiative.

My tutorials on neural coding.

 