There’s been quite a bit of talk in the discipline in recent years about how overwhelmed journals are with submissions, and what to do about it. Liam Kofi Bright, Remco Heesen, and I argue in our paper, “Jury Theorems for Peer Review,” for transitioning to a crowdsourced approach where (much as in math and physics) authors would standardly upload preprints to an online repository (e.g. PhilArchive) and have papers openly reviewed there. Although Bright and Heesen argue elsewhere for abolishing peer-review at journals altogether, I demur. In math and physics, authors standardly upload preprints to the arXiv prior to sending papers to journals. ArXiv papers are often discussed publicly on blogs and the like, and a fairly clear consensus often emerges on a paper’s merits. Papers can also be revised in light of feedback, and new versions uploaded. But none of this takes place instead of journal review. Authors still send their papers to journals, and there’s still a formal peer-review process in addition to the more informal public peer-review of preprints.
I like this hybrid model for several reasons. First, it seems to have worked quite well in math and physics. Second, it seems like a kind of “best of both worlds” approach. In my experience, some people are (understandably) skeptical about moving to an open, crowdsourced-only approach. Keeping formal peer-review at journals may assuage that skepticism, ensuring that journals still serve a purpose: namely, providing a stage of peer-review where hand-selected editors and referees with demonstrable expertise put a formal “stamp of approval” on a paper. On the flip side, a crowdsourced approach may serve as a helpful kind of “calibration check” on how journals are doing in this regard, and vice versa. If open, online reviews of particular papers diverged substantially from journal decisions, it could be illuminating to examine why. For example, as I’ve noted before, a wide variety of influential and Nobel Prize-winning economics papers were rejected from very good journals–and as Bright, Heesen, and I note in our paper, there’s a substantial literature indicating a conservative bias in peer-review, which we contend may be partially explainable in terms of journal incentives (e.g. greater incentives to reject than accept). So, having both types of peer-review work side-by-side could be very illuminating. Finally, to return to the issues that I began this post with, I’m optimistic that a hybrid approach might lessen the load that journals face, improving review times, etc. For a couple of nice things about the crowdsourced approach are that online reviews can help (1) suss out which papers are publishable or unpublishable before authors even submit to journals, and (2) improve the quality of papers sent to journals, by providing authors with ample feedback before submitting.
I’ve heard many people (including editors and referees) say that the biggest problem these days is that far too many “half-baked” papers are being sent to journals, wasting editors’ and reviewers’ time. There’s every reason to believe that a hybrid approach to peer-review would mitigate this problem by giving authors crowdsourced feedback on whether their paper is “ready” to be sent to a journal at all.
All this being said, I have been thinking recently about what kinds of obstacles stand in the way of transitioning to this kind of model. And, at least offhand, one major obstacle stands out. Presently, authors in philosophy seem to have strong incentives not to post preprints openly to PhilArchive, or to publicize doing so in the way that mathematicians and physicists standardly do. The reason why is simple: currently, there’s plausibly a very real danger of undermining anonymized review. If, for example, an author posts a preprint on PhilArchive, it seems possible that journal editors or referees might react negatively–seeing it as an attempt to circumvent the anonymization process involved in journal review. For example, would posting a preprint “pollute” the reviewer pool, making it harder for a journal to find willing reviewers, given that many of them would then know who authored the paper? Alternatively, although I can’t quite recall where I came across it, I seem to recall at least one journal having an editorial policy that authors must not post preprints online prior to peer-review–which, at least from the standpoint of anonymized review, seems to make some sense.
I hope to examine the ethical dimensions of these matters (e.g. the ethics of authors posting papers openly prior to journal review) in an upcoming post. For example, I currently tend not to post preprints prior to journal review precisely because, although I personally favor a hybrid approach to peer-review, I recognize that this isn’t our discipline’s current model, and I tend to think we have at least a prima facie duty to abide by norms of preserving anonymity. So, for now, I’d just like to pose a few questions: how do referees and journal editors feel about authors posting preprints to PhilPapers/PhilArchive or the PhilSci-Archive before submitting to a journal? Do journals have policies that authors shouldn’t do this? Should referees decline to review a preprint that they’ve seen, precisely because they know who wrote it? Do you think preprints problematically compromise anonymized peer-review? If the answer to any or all of the above is ‘yes,’ would a proposal that Bright, Heesen, and I advance in our paper–anonymizing preprints–suitably address these concerns? I’m really curious to hear what everyone thinks!
Originally appeared on The Philosophers’ Cocoon