The wringers of hands in the scientific community have been busy lately fretting over the current state of affairs in science publishing. Since I’m not really a science historian, I can’t speak to the novelty of these concerns, whether they represent some kind of unprecedented crisis of confidence or simply navel-gazing declinism. But there is ample reason to believe that scientific communication is encountering some of the same structural shifts that have upended the publishing business in general, and print journalism in particular. We’ve all seen newspapers around the country close under pressure from novel forms of media, but those not among the twitterati-blogosophers might be surprised to hear that many scientists now consider the main avenues of science communication hopelessly broken.
Here’s why: Scientific publishing is still largely modeled on the assumptions and economics of the dead-tree publishing era. In those glory days, publishers provided editing, typesetting, printing, marketing and distribution services that were otherwise impractical for scientists to obtain on their own. These days, not so much. While most journals do continue to produce a few paper copies, the associated production costs have dropped dramatically (of course, there are now other costs, like hosting websites and archiving materials). You would think that competitive forces would then drive publishers to lower their prices, but you would be wrong. The prices that publishers charge (mainly to academic libraries) for their work have instead increased, along with the profits of those publishers. Of course, moralizing to for-profit companies about making a profit is pointless, so what are the factors that contribute to this lack of market-driven pricing?
One possibility is that the market isn’t fully competitive. In fact, as with publishing in general, the field has become dominated by a few very large publishers, and institutional libraries claim they lack the clout to negotiate against these oligopolies. Another contributing factor is a kind of prestige science culture that has evolved in universities. Scientific journals are rated by what is called an impact factor. Specifically, a journal’s impact factor for a given year is the average number of citations received that year by the articles it published in the previous two years. Since scientific papers follow a tradition of citing antecedent work, the number of citations a paper receives is a reasonable measure of scientific influence (though influence is certainly no assurance of quality). Most journals have impact factors below about 3, but a few journals have very high values. For basic science, the biggies are the multidisciplinary journals Nature, with an impact factor of 36.2, and Science, with 31.2. Publication in either of these journals, or another called Cell, is often considered a must-have for a scientist’s CV. Without at least one of these glamour pubs, a researcher’s career can stall out at one of the less stable, lower-paid rungs of the scientific career ladder. So scientists need to publish in the big journals, and university libraries at research-oriented institutions are likewise essentially required to carry subscriptions to those big journals in order to give students and faculty access to the latest and greatest.
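For the curious, here is a minimal sketch of that two-year calculation. All of the numbers below are made up purely to illustrate the arithmetic; they don’t describe any real journal:

```python
# Toy calculation of a journal's two-year impact factor.
# Every count here is hypothetical, chosen only to show the formula.

# Citations received in 2013 by articles the journal published in 2011-2012
citations_in_2013 = 1810

# Citable articles the journal published in those two years
articles_2011 = 250
articles_2012 = 270

# Impact factor for 2013: citations per recent article
impact_factor = citations_in_2013 / (articles_2011 + articles_2012)
print(f"2013 impact factor: {impact_factor:.1f}")  # -> 3.5
```

Note that a handful of blockbuster papers can drag this average way up, which is one standing criticism of using the impact factor as a proxy for the quality of any individual article.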
All this would be somewhat less galling if publishers were still providing a great deal of added value to the scientific process, but as mentioned above, most of the publishing, typesetting and marketing services they provided in days past are now nearly universally available at very low cost. As always, the vast majority of the work of science publishing is actually provided to publishers for free by the scientists themselves: the volunteer editors and peer reviewers who contribute the essential intellectual muscle to the process. To review the accusations against the industry: scientific publishers rely largely on volunteer labor to produce journals based on outdated communication models, for which they charge increasing prices to the institutions that provide that labor (universities) in order to generate high profit margins for themselves. Furthermore, despite the fact that much of the funding that pays for these journals ultimately comes from taxpayers and public-interest foundations, the publishers continue to charge high fees for electronic access to even the oldest articles in their archives.
The other thing that is keeping worriers worrying about the current state of the scientific process is a quality control issue. Some highly publicized examples of scientific fraud, in addition to a more (not-so-)benign neglect of statistical best practices, have led scientists in some quarters to warn of a replication crisis, suggesting that most scientific findings might be just plain wrong. Aside from peer review, which is largely incapable of detecting deliberate fraud, replication of previous results in different labs is an essential element of maintaining the integrity of research. However, since replication studies aren’t sexy, they tend not to be pursued or published, a problem that seems to be exacerbated by the relentless pursuit of the precious impact factor.
Taking these critiques of science publishing and science process at face value, what are the possible solutions? In general, the proposals entail using modern communication technologies and social networks to crowd-source quality control in science while democratizing access to the results. For example, open access journals have become quite popular recently. In this model, authors pay a fee to cover the publishing overhead costs, but the articles are then free for anyone to download. Several of these journals have also sought to bring transparency to the peer review process by opening it up to more democratic participation. Ultimately, the basic publishing overhead is still being borne by the grants that fund the research, but the scientists themselves can take comfort at least in the fact that publishers aren’t wringing profit from their labor while restricting access to their work in perpetuity. Other efforts at creating a Science 2.0 infrastructure have focused on bringing social media elements to the process of reviewing results after they have been published. PubPeer, for example, provides a threaded commenting system for published scientific papers. While it’s fair to say that the site has yet to develop the sort of robust exchange of informed opinion we would all hope for, commenters on the site recently identified some sketchy image duplication in a hastily published article in Cell. A more radical approach to fixing what ails scientific publishing has been to avoid established dissemination routes altogether, opting instead for self-publication on data-sharing sites like figshare or on personal blogs. Needless to say, considering how entrenched the current prestige system is in academic career advancement, early career scientists are reasonably wary of relying on this approach exclusively.
Another element of the current science reform movement stems from harsh criticisms that have been leveled against biomedical clinical research. Whether from unintentional bias or more deliberate suppression of unfavorable results, the efficacy of the drugs and medical devices we entrust our lives to has been dangerously inflated in the scientific literature. For these reasons, granting agencies and journal publishers began a few years ago to require advance registration of clinical studies, meaning that researchers have to publicly declare (on the web) their intended research aims, hypotheses, methods and outcome measures before beginning a study (there is also a campaign to make these requirements retroactive). This transparency allows peer reviewers to look back at the original design of the study and identify statistical shenanigans that scientists can otherwise use to make a silk purse out of the sow’s ear of negative results. It also prevents research sponsors, like pharmaceutical companies, from disappearing unfavorable studies (yes, they throw the studies out of helicopters over the jungle) to improve the apparent efficacy of their products.
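To make the idea concrete, here is a toy sketch of the kind of information a preregistration record locks in before data collection begins. The field names and study details are invented for illustration only; they don’t correspond to the actual schema of any real registry like ClinicalTrials.gov:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Preregistration:
    """Toy model of a preregistered study record (hypothetical fields)."""
    title: str
    registered_on: date           # timestamped before any data is collected
    aims: str                     # what the study is trying to find out
    hypotheses: list[str]         # predictions committed to in advance
    methods: str                  # design, sample size, analysis plan
    outcome_measures: list[str]   # the endpoints that count, fixed up front

# A hypothetical entry; reviewers can later compare the published paper
# against this record to spot swapped endpoints or post-hoc hypotheses.
study = Preregistration(
    title="Drug X vs. placebo for hypertension",
    registered_on=date(2014, 1, 15),
    aims="Test whether Drug X lowers systolic blood pressure",
    hypotheses=["Drug X reduces systolic BP more than placebo at 12 weeks"],
    methods="Randomized double-blind trial, n=200, two-sample t-test",
    outcome_measures=["Change in systolic BP from baseline to 12 weeks"],
)
```

The value is entirely in the timestamp: once the aims and outcome measures are on the public record, quietly swapping in a different endpoint after the data come back becomes visible to anyone who checks.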
This kind of preregistration has not really caught on in basic science research. Hypotheses and methods are not routinely publicized before studies begin, so the studies are susceptible to the same sorts of biases (and outright fraud) that have been identified in clinical research. Still, Science 2.0 transparency advocates (such as @Neuro_Skeptic) suggest that what’s good for the goose (profiteering drug companies) may also be good for the gander (altruistic humanitarian scientists; sarcastic emphasis added), though some scientists are still hesitant to make that strong commitment to transparency.
Image Credit: This image comes from A Principal Component Analysis of 39 Scientific Impact Measures (Bollen et al. 2009) in the open access journal PLoS ONE. Unlike images from proprietary journal articles (even my own), I’m allowed to use this image (without recourse to fair use claims) because the journal publishes under a Creative Commons license.