Making Peer Review More Open

“Marginalia” by Open Library is licensed under CC BY 2.0.

Traditional peer review relies on anonymous reviewers to thoughtfully assess and critique an author's work. The idea is that blind review makes the evaluation process fairer and more impartial, but many scholars have questioned whether this is always the case.

Open review has the potential to make scholarship more transparent and more collaborative. It also makes it easier for researchers to get credit for the work they do reviewing the scholarship of their peers. Publishers in the sciences as well as the humanities and social sciences have been experimenting with open review for almost two decades, but only recently does open review seem to have reached a tipping point. So what exactly is open review, and what does it entail?

There are two main types of open review: named review and crowd-sourced review. With named review, the names of the peer reviewers and their reports are published online alongside the scholarship in question, available for anyone to read. A number of open access journal publishers use named review, including BioMed Central, Frontiers, F1000, and eLife. The open access journal Nature Communications allows authors to choose whether they want open review, and about 60% of them do. Elsevier recently began allowing peer-review reports to be published alongside articles for a few of its journals; reviewers there are not required to attach their names to their reports, and a little less than half do. Giving reviewers the option to remain anonymous while making the reports themselves open is one way to address a central critique of open review: that it will make scholars less candid. Early-career faculty may be particularly sensitive to this issue.

With crowd-sourced review, a draft of the article or book is made available online for the public to comment on before it is officially published. This allows authors to get feedback from a greater variety of individuals, including people who might never have been approached to serve as peer reviewers. One of the first monographs to go through this type of review was Kathleen Fitzpatrick's Planned Obsolescence (NYU Press, 2010). MediaCommons Press, which hosted the draft of Fitzpatrick's book, used CommentPress, a WordPress plugin, to facilitate feedback from interested readers. More recently, Matthew J. Salganik used crowd-sourced review for his book Bit By Bit: Social Research in the Digital Age (Princeton University Press, 2017). Salganik created the Open Review Toolkit (which uses the annotation tool hypothes.is) to help other scholars do the same. Crowd-sourced review can take place before traditional peer review or concurrently with it. One challenge associated with this type of review is letting the world know that the manuscript is online and ready to be commented on, which may require the author to take a more active role in the review process than they are used to. Another challenge is directing readers to leave the kind of substantive comments that would be most useful to the author.

In addition to these more formal types of open review, which are usually facilitated by publishers, researchers are also taking it upon themselves to use the web in different ways to solicit feedback on their work. More and more scholars, particularly in the sciences, are posting preprints. Some humanities scholars are even sharing drafts of their work in public Google Docs with commenting enabled.

What do you think of open review? Would you be willing to make your peer-review reports public?