Reviewing Software Revisited

Great ideas from the Journal of Open Research Software

Several weeks ago I wrote a post describing issues I commonly raise when reviewing software papers. This provoked a lively discussion among authors, editors, and other reviewers of software papers (most of us having been on both the reviewing and authoring ends), with quite varied perspectives on whether such criteria are appropriate in this context. As I describe there, I view the concept of dedicated software papers as a somewhat necessary “hack” of the publishing system, and hope a more mature system will eventually take its place, as is now happening for data through efforts such as Dryad and associated journal archiving requirements.

Sustainability of the software is important for the same reason archiving the literature is important – without this, we cannot build on existing work. Building on “researcher/developer” software (e.g. the kind of software typically covered in ‘software papers’) is currently uncommon and risky, as eloquently argued by Brian O’Meara in the comments of an earlier post describing just such a failure.

Through that discussion, I’ve come to see the goal of reviewing software papers as one of software sustainability. After all, the publication becomes part of the researcher’s permanent record and part of the permanent scholarly archive, designed to outlive the journal itself. It seems reasonable that the software it describes should at least exist and do something five years later. Though this sentiment underlies most of my criteria, my phrasing is often too specific and a little too grounded in the jargon of software development.

As I learned from Neil Chue Hong (Software Sustainability Institute) in a follow-up to that discussion, the Journal of Open Research Software has recently revised its guidelines, making much of this more explicit and wording it much better than I did. They provide separate lists for the reviewer to comment on both the paper and the software – making it clear that the review goes beyond the paper. The questions for the paper are every bit as valuable as those in the software section: for instance, those referring to the “Reuse”, “Quality Control”, and “Implementation and Architecture” sections. While these topics might not get their own section headings in an MEE paper, the ideas should certainly be addressed. I think these guidelines provide an excellent checklist for promoting sustainable software without putting undue emphasis on questions of performance, scalability, extensibility, etc.

Journal of Open Research Software’s Guidelines

(content below CC-BY, quoted from Journal of Open Research Software)

Please provide a list of your recommendations, indicating which are compulsory for acceptance and which are optional but would improve the quality of the paper or the reusability of the software.

The paper

  1. Is the title of the paper descriptive and objective?
  2. Does the Abstract give an indication of the software’s functionality, and where it would be used?
  3. Do the keywords enable a reader to search for the software?
  4. Does the Introduction give enough background information to understand the context of the software’s development and use?
  5. Does the Implementation and Architecture section give enough information to get an idea of how the software is designed, and any constraints that may be placed on its use?
  6. Does the Quality Control section adequately explain how the software results can be trusted?
  7. Does the Reuse section provide concrete and useful suggestions for reuse of the software, for instance: other potential applications, ways of extending or modifying the software, integration with other software?
  8. Are figures and diagrams used to enhance the description? Are they clear and meaningful?
  9. Do you believe that another researcher could take the software and use it, or take the software and build on it?

The software

  1. Is the software in a suitable repository? (see https://openresearchsoftware.metajnl.com/about/repositories/ for more information)
  2. Does the software have a suitable open licence? (see https://openresearchsoftware.metajnl.com/faq/#q5 for more information)
  3. If the Archive section is filled out, is the link in the form of a persistent identifier, e.g. a DOI? Can you download the software from this link?
  4. If the Code repository section is filled out, does the identifier link to the appropriate place to download the source code? Can you download the source code from this link?
  5. Is the software license included in the software in the repository? Is it included in the source code?
  6. Is sample input and output data provided with the software?
  7. Is the code adequately documented? Can a reader understand how to build/deploy/install/run the software, and identify whether the software is operating as expected?
  8. Does the software run on the systems specified? (if you do not have access to a system with the prerequisite requirements, let us know)
  9. Is it obvious what the support mechanisms for the software are?

As in my own list, a repository, a license, and sample input/output data are all mentioned explicitly. The software section suggests there should be a mechanism in place to “identify whether the software is operating as expected” – such as basic unit tests; a minimal sketch of what that might look like appears below. Extensibility is also mentioned, but without any reference to particular features that may or may not assist in it – a fault of my own list, which was probably over-specific on that account. I think these are reasonable and solid guidelines, and would love to see other journals that frequently publish software papers (Looking at you, MEE, JSS, Bioinformatics, R Journal, PLoS Comp Bio) adopt similar criteria.
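To make that concrete, here is a minimal sketch of the kind of basic unit test I have in mind, written in Python purely for illustration. The function `fit_growth_rate` and its expected values are hypothetical stand-ins for whatever the package under review actually provides, not anything from a real submission.

```python
# Hypothetical example: a minimal unit test that checks whether the
# software "is operating as expected" on sample input with a known
# expected output. The function below is a toy stand-in for a real
# package function.
import unittest


def fit_growth_rate(counts):
    """Toy stand-in: average per-step growth rate of a population series."""
    ratios = [b / a for a, b in zip(counts, counts[1:])]
    return sum(ratios) / len(ratios)


class TestFitGrowthRate(unittest.TestCase):
    def test_doubling_series(self):
        # A doubling series should give a growth rate of 2.0.
        self.assertAlmostEqual(fit_growth_rate([1, 2, 4, 8, 16]), 2.0)

    def test_constant_series(self):
        # A constant series should give a growth rate of 1.0.
        self.assertAlmostEqual(fit_growth_rate([5, 5, 5, 5]), 1.0)


if __name__ == "__main__":
    unittest.main()
```

Even a handful of checks like these, run against the sample input/output data shipped with the software, give a reviewer – and a future user – a quick way to confirm the software still does what the paper says it does.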

Of course I’d love to hear what others think about this.