
Review Maps

A framework for exposing and sharing human- and machine-readable review histories.
Published on Oct 02, 2019

This is a draft document. Feel free to comment, critique, and fork as desired.

Overview

In news, academic, and other types of publishing, the way a document was edited, reviewed, commented on, and fact-checked before being published, and the way it was modified post-publication, are important indicators of thoroughness and trustworthiness. However, these indicators are not generally available, and when they are, they are rarely machine-readable. We propose that organizations publish a discoverable, machine-readable “map” of document changes alongside the original document to provide social platforms, aggregators, and users with a new first-party indicator of credibility. Ultimately, by analyzing large sets of review maps, we believe that researchers will be able to identify, classify, and validate the effectiveness of different types of reviews, leading to better practices across publishing communities.

Proposers

MIT Knowledge Futures Group, PubPub Team
Travis Rich, Catherine Ahearn, Gabe Stein
[email protected]

Introduction

Trust should be verifiable by an outsider. This proposal is not about particular brands, but about types of maps.

Rationale

In most kinds of publishing, documents go through many rounds of changes from draft to publication, and they are sometimes updated after they’re published. Many of these changes are trivial, like spelling and grammar corrections. But some, like changes made to documents after publication, can signal that a document was wrong. Others, like peer review or fact-checking stages, can be a core signal of a document’s credibility. Taken together, all of the changes made to documents may reveal common review patterns that can be identified and used as credibility indicators by platforms and researchers.

For example: at what point does the frequency and type of a publication’s corrections to news articles go from a positive signal about its willingness to correct the record to a negative signal about the quality of its fact-checking? Is there a type of peer review that leads to fewer retractions? What is the ideal length of a public comment period for proposed regulations?

Review maps are intended to make asking these questions possible by exposing document change and discussion data as machine-readable, first-party credibility indicators that are currently inaccessible to platforms, readers, and researchers:

News Publishing

  • Changes between draft and published versions of an article

  • Identity of editors and fact-checkers (including computer programs)

  • Changes made to an article after publication, as demonstrated by Newsdiffs

  • Full or partial retraction of an article

Academic Publishing

  • Type of peer review an article received

  • Changes between submitted and published versions of an article

  • Reputation of reviewers

  • Community discussion that led to changes in an article

  • Changes between preprint and published versions of an article

  • Full or partial retraction of an article

Regulatory Publishing

  • Changes in bills from proposal to submission to passage

  • Amendments to bills

  • Identities of amenders

  • Community comments

Implementation

Creation

Review Maps could be automatically generated by the systems already responsible for tracking changes and comments on documents: content management systems (CMSs). Developing a WordPress plugin and working with large publishers to implement review maps (potentially with funder support) would make adoption simple for many publishers. For publishers that can’t or don’t want to support review maps, third parties could create limited versions by periodically scraping articles and compiling their own maps that platforms and researchers could use (e.g. Newsdiffs). If maps became a commonly used signal, publishers would be incentivized to adopt their own, more complete maps rather than rely on incomplete third-party versions.
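
To make the creation step concrete, here is a minimal sketch of how a CMS plugin might append review-map events as editors save revisions. The hook and storage functions (onRevisionSaved, loadReviewMap, saveReviewMap) are hypothetical placeholders, not part of any existing CMS API.

// Hypothetical CMS hook: record a review-map event whenever a revision
// is saved. loadReviewMap/saveReviewMap stand in for whatever storage
// the CMS already uses for revisions and comments.
async function onRevisionSaved(revision, editor) {
  const map = (await loadReviewMap(revision.documentId)) || [];

  // Find or create the stage anchored to this document version.
  let stage = map.find((s) => s.document === revision.versionId);
  if (!stage) {
    stage = { document: revision.versionId, template: null, events: [] };
    map.push(stage);
  }

  // Record the edit as a machine-readable event.
  stage.events.push({
    id: crypto.randomUUID(),
    reviewerId: editor.id, // a person or a program (e.g. a fact-checking bot)
    document: revision.versionId,
    discussion: null,
    decision: null
  });

  await saveReviewMap(revision.documentId, map);
}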

Discovery

We propose that review maps be published as separate, structured documents alongside original articles. They should be made discoverable to platforms, crawlers, researchers, and others via HTML meta tags, similar to the Open Graph protocol.

Using this approach, maps can be published separately from their source documents, allowing browsers to request maps optionally, which decreases load time and data transfer. Separating maps and source documents also allows syndicated documents to point to maps on remote servers, avoiding map duplication and allowing for the creation of centralized review authorities and aggregators.
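
As a rough sketch of the consuming side, a crawler or browser extension might locate a page’s map through the meta tag proposed under Format below and fetch it as a separate request. The function here is illustrative; a production crawler would use a real HTML parser rather than a regular expression.

// Minimal sketch: discover and fetch a page's review map via the
// proposed review-map meta tag.
async function fetchReviewMap(pageUrl) {
  const html = await (await fetch(pageUrl)).text();

  // Look for <meta property="review-map" content="..." />.
  const match = html.match(/<meta\s+property="review-map"\s+content="([^"]+)"/i);
  if (!match) return null; // the page does not expose a review map

  // Resolve relative URLs against the page and request the map itself.
  const mapUrl = new URL(match[1], pageUrl).toString();
  const response = await fetch(mapUrl);
  return response.json();
}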

User Experience

Possible surfaces include badges, extensions, and centralized review authorities. Developer and reader needs will differ.

Format

Similar to RSS, review maps need not be format-specific. They could be published in multiple formats and file types designed for specific use cases, relying on a common discovery mechanism such as the meta tag below:

<meta property="review-map" content="https://server.com/maps/article-map.json" />

Example Data Structure

const map = [
  {
    document: "version-uuid",            // uuid, e.g. a versionId
    template: {
      id: "template-uuid",               // uuid
      reviewers: [
        {
          id: "reviewer-uuid",           // uuid
          blind: true,                   // boolean
          discussionChannel: "channel-uuid", // uuid
          dueDate: 1569974400000,        // timestamp
          instructions: "instruction-uuid",  // uuid, text, or JSON?
          dependency: null               // optional dependency function?
        }
      ]
    },
    events: [
      {
        id: "event-uuid",                // uuid
        reviewerId: "reviewer-uuid",     // uuid
        document: "version-uuid",        // uuid
        discussion: "discussion-uuid",   // uuid
        decision: "decision-uuid"        // uuid, optional
      }
    ]
  }
];
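
As a usage sketch, a platform or researcher consuming maps in this shape could derive simple aggregate signals from them, along the lines of the indicators discussed above. The field names follow the example structure; the particular summary values are illustrative.

// Illustrative consumer: compute simple aggregate signals from a review
// map that follows the structure above.
function summarizeReviewMap(map) {
  const reviewers = new Set();
  let eventCount = 0;
  let decisionCount = 0;

  for (const stage of map) {
    for (const event of stage.events || []) {
      eventCount += 1;
      if (event.reviewerId) reviewers.add(event.reviewerId);
      if (event.decision) decisionCount += 1;
    }
  }

  return {
    stages: map.length,                // document versions covered by the map
    events: eventCount,                // total recorded review actions
    distinctReviewers: reviewers.size,
    decisions: decisionCount           // events that recorded a decision
  };
}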

Glossary

Map: A map is a series of objects (we need a name for these objects - stages?), each of which is anchored by a single document.

Stage.document: A URI or uuid pointing to a specific document. In PubPub’s case, this is a versionId.

Stage.template: A template defines the parameters expected from the review process. The template is used to organize events for privacy, blindness, and meaning. A full review map may contain stages whose templates are never completed, due either to the introduction of a new version or to insufficient events. (There is some awkward language here. A template as described above is the completed set of details about how a review will go, with specific reviewers, etc. We may also want reusable templates that, for example, create three blind reviewer slots and let you simply select the individuals. Perhaps we call these ‘pre-built templates’ to avoid the confusing ‘template templates’ phrasing.)

Stage.events: Events are objects that describe actions taken within the context of a review. These can be notes, suggested changes, decisions, etc.

Considerations & Questions

  • Should this be a rigid standard, a loose one, or merely an implementation, like a sitemap?

  • Who are the core “users” of review maps — platforms, researchers, publishers, or readers, and how do their uses differ?

  • Should events be signed in some way so that they can be authenticated and verified?

  • How will bad actors attempt to game and manipulate review maps for their benefit?

  • How do we handle the identity question for reviewers and editors? In some cases, reviewers will need to be anonymous, but we’ll still want to aggregate their reviews. In other cases, we’ll want to federate identity with third-party tools.

  • Every element of an event (except perhaps its id) should support a range of privacy settings, as groups won’t always want to expose details about events to the public. The decision, the notes, and the identity of the reviewer could each be public, private, or restricted to a limited audience (see the sketch after this list).

  • Can elements within the review template have privacy settings too? Perhaps we don’t want to show the due date assigned to a reviewer.

  • How do we make template[n].dependency expressive enough to support rules like ‘when there is consensus, invite HR to review’ or ‘unless there is unanimous reviewer approval, reject’?

  • How do we make event.decision flexible enough to support binary outcomes, 1-10 scales, etc.? (See the sketch after this list.)

  • Which of these features are core to ReviewMaps, and which are specific to individual platforms/organizations?
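
As a starting point for the privacy and decision questions above, the sketch below wraps individual event fields in a small value-plus-visibility object and lets decision declare its own scheme (binary, 1-10 scale, etc.). This is one possible shape for discussion, not a proposed standard.

// One possible shape for field-level privacy and flexible decisions.
// The visibility values and the decision scheme field are illustrative.
const event = {
  id: "event-uuid",
  reviewerId: { value: "reviewer-uuid", visibility: "restricted" },
  document:   { value: "version-uuid",  visibility: "public" },
  discussion: { value: "channel-uuid",  visibility: "private" },
  dueDate:    { value: 1569974400000,   visibility: "restricted" },
  decision:   {
    value: 7,
    scheme: "scale-1-10", // could also be "binary", "accept/revise/reject", ...
    visibility: "public"
  }
};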
