Craigslist Killed the Newspaper, but Science Publishing Thrives (for All the Wrong Reasons)

JIF graphic

The wringers of hands in the scientific community have been busy lately fretting over the current state of affairs in science publishing. Since I’m not really a science historian, I can’t speak to the novelty of these concerns, whether they represent some kind of unprecedented crisis of confidence or simply navel-gazing declinism. But there is ample reason to believe that scientific communication is encountering some of the same structural shifts that have upended the publishing business in general, and print journalism in particular. We’ve all seen newspapers around the country close under pressure from novel forms of media, but those outside the twitterati-blogosphere might be surprised to hear that many scientists now consider the main avenues of science communication hopelessly broken.

Here’s why: Scientific publishing is still largely modeled on the assumptions and economics of the dead-tree publishing era. In those glory days, publishers provided editing, typesetting, printing, marketing and distribution services that were otherwise impractical for scientists to obtain on their own. These days, not so much. While most journals do continue to produce a few paper copies, the associated costs of producing those have dropped dramatically (of course, there are now other costs, like hosting websites and archiving materials). You would think that competitive forces would then drive publishers to lower their prices, but you would be wrong. The prices that publishers charge (mainly to academic libraries) for their work have instead increased, along with the profits of those publishers. Of course, moralizing to for-profit companies about making a profit is pointless, so what are the factors that contribute to this lack of market-driven pricing?

One possibility is that the market isn’t fully competitive. In fact, as with publishing in general, the field has become dominated by a few very large publishers. So institutional libraries claim they lack the clout to negotiate against these oligopolies. Another contributing factor is a kind of prestige science culture that has evolved in universities. Scientific journals are rated by what is called an impact factor. Specifically, it is the average number of citations each paper in a journal receives (over a two-year period). Since scientific papers follow a tradition of citing antecedent work, the number of citations a paper receives is a reasonable measure of scientific influence (though influence is certainly no assurance of quality). Most journals have impact factors below about 3, but a few journals have very high values. For basic science, the biggies are the multidisciplinary journals Nature, with an impact factor of 36.2, and Science, with 31.2. Publication in either of these journals, or another called Cell, is often considered a must-have for a scientist’s CV. Without at least one of these glamour pubs, a researcher’s career can stall out at one of the less stable, lower-paid rungs of the scientific career ladder. So scientists need to publish in the big journals, and university libraries at research-oriented institutions are likewise essentially required to carry subscriptions to those big journals in order to give students and faculty access to the latest and greatest.
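The two-year impact factor is just simple arithmetic. A minimal sketch, with a made-up journal and fabricated citation counts (the figures below are chosen only to reproduce Nature’s 36.2):

```python
def impact_factor(citations_this_year: int, citable_items_prior_two_years: int) -> float:
    """Two-year journal impact factor: citations received this year to
    articles published in the previous two years, divided by the number
    of citable items published in those two years."""
    return citations_this_year / citable_items_prior_two_years

# Hypothetical journal: 200 citable items published in 2011-2012 drew
# 7240 citations during 2013, giving a 2013 impact factor of 36.2.
print(impact_factor(7240, 200))  # 36.2
```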

All this would be somewhat less galling if publishers were still providing a great deal of added value to the scientific process, but as mentioned above, most of the publishing, typesetting and marketing services they provided in days past are now nearly universally available at very low cost. As always, the vast majority of the work of science publishing is actually provided to publishers for free by the scientists themselves, the volunteer editors and peer reviewers who contribute the essential intellectual muscle to the process. To review the accusations against the industry: scientific publishers rely largely on volunteer labor to produce journals based on outdated communication models, for which they charge increasing prices to the institutions that provide that labor (universities) in order to generate high profit margins for themselves. Furthermore, despite the fact that much of the funding that pays for these journals ultimately comes from taxpayers and public-interest foundations, the publishers continue to charge high fees for electronic access to even the oldest articles in their archives.

The other thing that is keeping worriers worrying about the current state of the scientific process is a quality control issue. Some highly publicized examples of scientific fraud, in addition to a more (not-so-)benign neglect of statistical best practices, have led scientists in some quarters to warn of a replication crisis, suggesting that most scientific findings might be just plain wrong. Aside from peer review, which is largely incapable of detecting deliberate fraud, replication of previous results in different labs is an essential element of maintaining the integrity of research. However, since replication studies aren’t sexy they tend not to be pursued or published, a problem that seems to be exacerbated by the relentless pursuit of the precious impact factor.

Taking these critiques of science publishing and science process at face value, what are the possible solutions? In general, the proposals entail using modern communication technologies and social networks to crowd-source quality control in science while democratizing access to the results. For example, open access journals have become quite popular recently. In this model, authors pay a fee to cover the publishing overhead costs, but the articles are then free for anyone to download. Several of these journals have also sought to bring transparency to the peer review process by opening it up to more democratic participation. Ultimately, the basic publishing overhead is still being borne by the grants that fund the research, but the scientists themselves can take comfort at least in the fact that publishers aren’t wringing profit from their labor while restricting access to their work in perpetuity. Other efforts at creating a Science 2.0 infrastructure have focused on bringing social media elements to the process of reviewing results after they have been published. PubPeer, for example, provides a threaded commenting system for published scientific papers. While it’s fair to say that the site has yet to develop the sort of robust exchange of informed opinion we would all hope for, commenters on the site recently identified some sketchy image duplication in a hastily published article in Cell. A more radical approach to fixing what ails scientific publishing has been to avoid established dissemination routes altogether, opting instead for self-publication on data-sharing sites like figshare or on personal blogs. Needless to say, considering how entrenched the current prestige system is in academic career advancement, early career scientists are reasonably wary of using this approach exclusively.

Another element of the current science reform movement stems from harsh criticisms that have been leveled against biomedical clinical research. Whether from unintentional bias, or more deliberate suppression of unfavorable results, the efficacy of the drugs and medical devices we entrust our lives to has been dangerously inflated in the scientific literature. For these reasons, granting agencies and journal publishers began a few years ago to require advance registration of clinical studies, meaning that researchers have to publicly declare (on the web) their intended research aims, hypotheses, methods and outcome measures before beginning a study (there is also a campaign to make these requirements retroactive). This transparency allows peer reviewers to look back at the original design of the study and identify statistical shenanigans that scientists can otherwise use to make a silk purse out of the sow’s ear of negative results. It also prevents research sponsors, like pharmaceutical companies, from disappearing unfavorable studies (yes, they throw the studies out of helicopters over the jungle) to improve the apparent efficacy of their products.

This kind of preregistration has not really caught on in basic science research. Hypotheses and methods are not routinely publicized before studies begin, so the studies are susceptible to the same sorts of biases (and outright fraud) that have been identified in clinical research. Still, Science 2.0 transparency advocates (such as @Neuro_Skeptic) suggest that what’s good for the goose (profiteering drug companies) may also be good for the gander (altruistic humanitarian scientists; sarcastic emphasis added), though some scientists are still hesitant to make that strong commitment to transparency.

Image Credit: This image comes from A Principal Component Analysis of 39 Scientific Impact Measures (Bollen et al. 2009) in the open access journal PLoS ONE. Unlike images from proprietary journal articles (even my own), I’m allowed to use this image (without recourse to fair use claims) because the journal publishes under a Creative Commons license.


~ by nucamb on May 23, 2013.

6 Responses to “Craigslist Killed the Newspaper, but Science Publishing Thrives (for All the Wrong Reasons)”

  1. […] some of the best coverage and commentary available on the BRAIN Initiative. The second is about the current state of affairs in science publishing, which is a topic I find very important and […]

  2. […] A very nice summary of the tragic dead end that science has blundered into, and the open science movement that is trying to repair it, is at Nucleus Ambiguous […]

  3. OK…

    A) research activity and publication are an increasing international trend – greater volume of articles to handle and publish lends itself to the emergence of larger publishers to efficiently manage this through improved economies of scale.
    B) scientific publishing has been digital first for a long time already, which has been a foundation stone for moves to new business models in this field, and the first copy costs remain affected by the growth in the volume of articles.
    C) scientists wouldn’t put work into article publication for free if they didn’t see the value, though it would be interesting to see a world in which all work had to be accounted for solely in monetary terms (beyond reputation, career development, perceived duty to the field and other personal motivations and interests) – I shudder to think of the costs, let alone the effects on budgets in such a world…
    D) impact factor is quite a straightforward metric, though its misuse and abuse are a problem, particularly given that it measures a range of journal article types in aggregate over time but is often used to claim prestige for individual articles to keep assessors happy, and also because it is only an effective guide to comparing journals within the same field of study.
    E) in case people haven’t noticed, even open access publishers are in it for profit/surplus, with many private companies in the game – the value proposition is somewhat different though most of the essentials should remain the same, except for where it is perceived that some of the checks/balances/additional elements belong to a bygone era due to cost cutting, despite author/reader perceptions otherwise; in any case, it should be clear what people are actually paying for, and if the value isn’t recognised, then there is good cause to adapt.
    F) while peer review is often knocked as imperfect, it is pretty much the least worst option available – there are many options open to streamlining it, but the inescapable element is the reviewer and the culture of review; working out of step with that is not what publishers are there to do.
    G) new avenues for data publishing and collaboration are indeed exciting, though precisely how they fit in and how some of these services are paid for and sustained (as part of the modern world, if increasingly digital version of it) is yet to be fully decided.
    H) interesting reuse of an image, though is this necessary to the article, and does it add value through this use? Indeed, must all authors be mandated to use such licences, or should they have a choice over what they put forward for publication and how they’d prefer it to be used?
    I) an underlying theme not really picked up on is how the culture of science can be changed from the bottom up to reflect digital possibilities, sustainability and choice through an open market approach, and perhaps another is whether it needs to change? Publishers can only ever control/oversee copyright in the particular expression of an idea – the ideas, and supporting raw data, are themselves free of copyright and so open to use in advancing science (though beware other IP trapdoors!). Failure to reflect on and respect scientific culture and norms cannot help to build a case for cultural change – which scientist really needs yet another person telling them what they should be doing? Can’t they just be given the option to decide for themselves and vote with their submissions, or is the thesis here that we’re beyond such democratic processes?


    • A) research activity and publication are an increasing international trend…
      It does seem like there is an increase in publishing activity, but is that because there is an increase in research activity or is it just churn of lower-quality work because the overhead costs are lower? I’m really not moralizing here. I’m trying to figure out why, with production costs dropping, subscription prices continue to rise (along with profits). One hypothesis is that consolidation reduced competition in the market.

      B) scientific publishing has been digital first for a long time already…
      So you are saying that the first copy costs mean that increased publishing volume implies increased costs, which justify increased subscription and reprint charges? I’m not in publishing, but I assume that is a term of art referring to the overhead for editing, proofing, typesetting and printing the first copy of an article; a cost that is fixed independent of the number of subsequent copies made. As a whole, the industry may be producing more, but if you look at individual journals, are they getting fatter? You seem to be suggesting that they are editing/reviewing so many more papers that their increased first copy costs justify their prices even despite the falling per-article costs. That’s an empirical question that I’m willing to be convinced about, but it doesn’t ring true on the face of it.
      C) scientists wouldn’t put work into article publication for free if they didn’t see the value…
      Certainly, scientists believe they are gaining value (be it prestige, reputation, etc.) from the activity or they wouldn’t be doing it. I don’t really think anyone is suggesting monetizing that whole process. People instead are suggesting that there’s no particular reason why the fruits of that labor should accrue to for-profit publishers that provide services that are increasingly available through other means. To turn the question on its head, why should the organizations that employ scientists (mostly universities) subsidize these for-profit organizations by allowing their employees to provide them uncompensated labor?

      D) impact factor is quite a straightforward metric…
      I believe we are mostly in agreement. It is a straightforward metric, and it has some face validity. I was pointing out that many people are dissatisfied with the culture that it has engendered (for which publishers are certainly not to blame).

      E) in case people haven’t noticed, even open access publishers are in it for profit/surplus…
      I have nothing against profit per se, either for publishers providing open access or those that maintain paywalls. But it is reasonable for taxpayers, foundation donors and universities to ask if they are getting what they pay for in terms of access to the fruits of scientific labor.

      F) while peer review is often knocked as imperfect…
      I agree that peer review seems to be the least worst solution we have. But the structure of institutions also interacts with and creates the culture of users. The feedback goes both ways.

      G) new avenues for data publishing and collaboration are indeed exciting…

      H) interesting reuse of an image…
      I should have pointed out the specific meaning of the image more clearly to make the case that it was more than just random eye-candy for my blog. The paper is about whether principal component analysis of multiple quality metrics suggests those metrics are all getting at the same thing, or is ‘impact’ multi-dimensional (they come down for the latter). I think current technology is beginning to allow for very specific framing of reproduction rights by authors, so it certainly could be possible to have a system in which the authors wanted the broad reach of an open access journal, but reserve the right to prevent bloggers from appropriating their image for unrelated work (and obviously I would honor that request should the authors make it).
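      To make concrete the question that paper asks, here is a toy sketch (fabricated numbers, not the authors’ data or code): if several impact metrics really measure a single underlying quantity, the first principal component of their correlation matrix should explain nearly all the variance; if ‘impact’ is multi-dimensional, several components are needed.

```python
import numpy as np

# Fabricated illustration: four fake journal metrics generated from TWO
# independent latent factors ("popularity" and "prestige"), so a PCA
# should need two components to capture most of the variance.
rng = np.random.default_rng(0)
n_journals = 200
popularity = rng.normal(size=n_journals)
prestige = rng.normal(size=n_journals)

metrics = np.column_stack([
    popularity + 0.1 * rng.normal(size=n_journals),  # e.g. total citations
    popularity + 0.1 * rng.normal(size=n_journals),  # e.g. download counts
    prestige + 0.1 * rng.normal(size=n_journals),    # e.g. eigenvector centrality
    prestige + 0.1 * rng.normal(size=n_journals),    # e.g. citations from top journals
])

# PCA via eigen-decomposition of the correlation matrix.
corr = np.corrcoef(metrics, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # descending order
explained = eigvals / eigvals.sum()
print(explained)  # the first component alone does NOT explain nearly everything
```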

      I) an underlying theme not really picked up on is how the culture of science can be changed…
      Not my thesis at all. As I say, I’m not for some kind of state control of scientific communication to ensure fair attribution and free-access love for all. I’m not even really moralizing to other scientists about where they should submit their papers. I’m simply pointing out some of the problems people have been talking about as a way of stimulating debate about possible improvements.

  4. […] can we reconcile the need for quality control that comes with publicizing research aims in advance with some protection for scientific […]

  5. […] of the status quo is the failure to even acknowledge what’s wrong with current statistical practices in the sciences. As pointed out skillfully in Slate by Andrew Gelman, researchers are able to instantaneously test […]
