As an author and a former editor, I’ve long been interested in peer review and the value that publishers bring to the scholarly record. For this, my first Peer Review Week with the MIT Knowledge Futures Group, I thought I’d take a look at new experiments, including open and collaborative (or community) review. With an increasing focus on openness and transparency, the publishing industry is slowly starting to try new things to make its peer review process clearer to both authors and readers.
My investigation of peer review began in earnest a few years ago while I served as Director of Partnerships at Hypothesis, an open source annotation technology company. There, we saw publishers experimenting with annotation in the peer review process. These initiatives included the integration of Hypothesis by eJournalPress across the publications of the American Geophysical Union. In addition, OJS enabled annotation across the new journal Murmurations, an interdisciplinary publication in the humanities, which used the tool for invited open peer review that included the author in the process. I was also intrigued to see Springer Nature’s collaboration with Research Square to create In Review, an author opt-in program that opens the submitted manuscript to community review in parallel with the traditional review process. Finally, I followed an American Society for Plant Biologists project to make peer review reports more visible to readers. “It’s been exciting to see the variety of ways that publishers are using annotation to streamline the peer review process,” says Dan Whaley, Founder and CEO of Hypothesis. “We’ll be announcing further innovative integrations in the coming months.”
Since those early days, I’ve been fortunate to moderate some panels on peer review, including an upcoming session at the SSP New Directions Seminar in Washington DC on October 2-3, which includes speakers from Research Square’s In Review, from the Center for Open Science’s partnership with Peer Community In—an effort to provide independent peer review through community initiatives—and from ASAPbio. Seeing exciting developments in collaborative review on PubPub, the open source hosting project of the Knowledge Futures Group, has made me more keen than ever to pen my thoughts on where we are and where we’re going. (More on that later this week!)
A few caveats on this subject: as yet, there is no clear agreement on an open peer review taxonomy. (Although there is an effort underway with the Peer Review Transparency Project.) With different terms and understandings in use across the industry, attempts to make publisher efforts in this space more visible could confuse rather than clarify. Nor is there a clear understanding as to whether more open or transparent peer review (namely, published reports with or without reviewer identities revealed) might harm the peer review process by making reviewers hesitant to be critical in such a public forum. While there have been some studies on the impact of publishing peer review reports that indicate no obvious negative effects, journals remain skeptical and progress is slow. I raised this issue with Bahar Mehmani, Reviewer Experience Lead at Elsevier, who noted that “transparency means different things to different audiences. The obvious one is open peer review, but the more fundamental transparent peer review is when the journal’s peer-review data is shared with the research community to study the topic as a scientific subject. We need to move away from anecdotes and study the impact of transparency on reviewer performance. That’s why we collaborated with PEERE.”
The PRT project’s report, currently in its second version, Transparency in Standards and Practices of Peer Review, is an attempt to build consensus around a taxonomy and to suggest a common signaling process through badges. I spoke recently with MIT Press Director Amy Brand about PRT. “It’s a solvable problem with ‘nuts and bolts’ aspects like extensible metadata and the use of DOIs to connect reviews with documents. There’s clearly a lot of work that remains to be done, but there is already a movement underway.” The TRANSPOSE Project, which focuses on journal policies around preprints, also attempts to provide a catalog of peer review practices to clarify potential submission paths for authors. Jessica Polka, who works on TRANSPOSE as part of her role as Executive Director at ASAPbio, notes that there is a lot of fear around adding review comments, whether on preprints or on manuscripts. “We’re not yet seeing an annotation culture, but new tools and practices are coming together to get us on our way.” Other projects that touch on peer review are the HIRMEOS project, which looks at practices of peer review for books in the humanities, and the Journal Publishing Practices and Standards (JPPS) project overseen by INASP, an international development organization working with researchers and publications from Africa, Latin America, and Asia. JPPS attempts to chart the progress of new journals in implementing quality peer review (among other practices) by awarding badges.
Finally, I want to draw a distinction between open review and collaborative community review (sometimes called crowdsourced review). Open review tends to focus on the peer review output, making details of the review process and decision available to readers, with or without reviewer names. This type of review tends to happen prior to publication. Collaborative community review, while it may happen pre- or post-publication, is being experimented with more on drafts of manuscripts, either as a substitute for or an augmentation of traditional review. I’ll be taking a close look at some experiments with this new approach on our PubPub platform in a post later this week, comparing the author experience, process, and other lessons learned across three monographs set to publish later in 2019 or early in 2020.