A Jury of our Peers

Despite the old PR saw that there is no such thing as bad publicity, there are two web sites no scientist wants to be seen on. I’m not talking about sites for global warming skeptics, intelligent design aficionados or Mayan apocalypse predictions. I’m talking about Retraction Watch and Science Fraud, two sites that seek to expose the foibles and fabrications of my ‘noble’ profession. Retraction Watch deals mainly with official retractions published in biomedical and basic science journals, with an eye toward encouraging (dare I say shaming) editors, authors and institutions into providing detailed explanations of the reasons for those scientific mea culpas (meae culpae?). You would think that such transparency would be an obvious adjunct to the methodological detail required in scientific papers, but it turns out that people are somewhat hesitant to provide the particulars of their misdeeds (intended or not). Science Fraud is more of a rogues’ gallery of image manipulation in cellular and molecular research. It’s a kind of crowdsourced cheat detector, where contributors with an autistic-spectrum eye for detail dig up examples of scientists who fudge their figures. Both web sites make for very interesting reading, though the attraction derives at least in part from schadenfreude. Still, I wouldn’t want to be given a feature spread in either of them. [Update: As I finished this post I found out that Science Fraud has apparently been intimidated by legal threats and has removed its content. Perhaps someone will host a mirror. Also, surprise, surprise, Retraction Watch reports that a scientist who had been threatening the web site with legal action has had a paper retracted.]

Most people would be amazed at some of the things that show up on these sites, but I think scientists may be a little less shocked, in part because we know we are people with the same capacity for error and fraud as anyone else, and in part because we know that peer review is not quite the pure truth-divining process assumed by the general public. Peer review is one of the main things that separate what we do as scientists from other people who try to write true stuff, like journalists or non-fiction writers. Unfortunately, I think most people (including science journalists) don’t really understand how the process is supposed to work, or more importantly, how it works in practice.

Here’s the ideal. You, the journal editor, gather some fair-minded fellow scientists to have a look at the work and evaluate its merit. For a scientific paper, that usually means two things: judging the technical quality of the underlying scientific work and evaluating the potential impact of the findings on the field. So you need reviewers who are knowledgeable about the specific questions and methods used in the narrow topic area. You also want these reviewers to be as candid as possible, so you don’t include the reviewers’ names as you forward the reviews back to the paper’s authors. Since the ultimate purpose of the process is to improve the research, you give the authors a chance to make changes or even run additional experiments or analyses to address the reviewers’ concerns. You repeat this cycle until you are satisfied that the paper meets the standards of your field, or you conclude that it never will.

Here’s the way it actually works. Scientific research is extremely specialized, so pretty much everyone in a specific subfield of research knows everyone else. This fraternal atmosphere is actually essential to the process since, in addition to being highly competitive, scientific research is also highly collaborative. But when it comes to picking reviewers for a particular paper, finding scientists who are both knowledgeable about the specifics and dispassionate about the results is essentially impossible. In addition, since individual editors often cover wide swaths of topic areas, they don’t necessarily know who is qualified to review a particular paper. So here’s the dirty little secret of peer review: many journals allow the paper’s authors to suggest potential reviewers to the editor. They can also suggest that particular reviewers be excluded. The obvious strategy for an author is to suggest sympathetic reviewers and to name antagonists for exclusion from the process.

Personally, I’ve also nominated presumptively sympathetic reviewers for my (admittedly few) papers, after consultation with my advisers. We also named one potential reviewer to avoid for one of our papers. I didn’t have any particular beef with this researcher myself, but my adviser apparently felt the person would not be fair. That’s the point, really: not to exclude reviewers who would ask tough questions, but to exclude those who are so fundamentally opposed to your basic assumptions that you can’t even have a productive conversation. Ultimately, the editor decides who will review the research, but you have to assume that if they are sympathetic to the basic aims of the work at all, they will assign reviewers who will help make the publication better rather than sink it outright.

This reliance on the good-faith participation of the authors and reviewers partially explains why peer review is fundamentally incapable of detecting deliberate scientific fraud. The rest is logistics. Scientific manuscripts typically present the tip of the analytic pyramid, far removed from the huge base of raw data. The methods and analytic techniques are explained, mostly on the honor system, in enough detail that an interested researcher could replicate the findings, but no one really checks the numbers at a fundamental level. Even if the raw data were available, validation would require almost as much time as the original research. In my experience, no reviewer has ever questioned my actual data as presented, though they have doubted my assumptions, methods, interpretations and conclusions.

So peer review is really just the smell test of quality control in scientific research, a guarantee only that a couple of reasonably knowledgeable people (perhaps with their own axes to grind, perhaps friends of the authors) have looked over the work and provided some corrective feedback to the authors. It’s meant to balance the competitive and collaborative motives of the all-too-human participants, but it provides only weak protection against outright deception, and it remains susceptible to cliquish groupthink. This leads many scientists and editors to consider it (like democracy) the worst possible system that is also the best we have.

~ by nucamb on January 5, 2013.

