Evaluating a research paper

A workshop I recently attended led me to consider the question of "What is a good quality paper?" Here is my attempt at creating a checklist to judge paper quality. In creating this list, I would like to acknowledge feedback from the participants of the Dagstuhl workshop on "Publication Culture of Computing Research" and from the ISS4E research group, as well as Timothy Roscoe's excellent paper on "Writing reviews for systems conferences."

Attributes of a good paper

Clarity

  • Is grammatically correct
  • Explicitly states the research question
  • Uses clear and consistent mathematical notation
  • Uses standard terminology
  • Is easy to understand
    • Good flow
    • Well-chosen examples
    • Clear figures with descriptive captions
    • Each section of the paper lays out its relation to the rest of the paper

Context

  • Provides adequate context: why is there a need for the paper?
  • Cites relevant prior work
  • Makes reasonable and explicitly stated assumptions
  • Clearly states the 'message' of the paper

Contributions

  • Makes a novel contribution over prior work, either in the solution approach or in the problem domain
  • Makes a non-trivial contribution
    • Focus is not too narrow
  • Does not overstate contributions - be honest!
  • If this is an implementation paper, the work is implementable by others
  • Explicitly identifies limitations

Uses sound methodology

  • Sufficiently evaluates contributions
    • The larger the claim, the greater the need for careful evaluation
  • Uses an appropriate data set
  • Uses appropriate statistical techniques in reporting results
  • Has justifiable and well-chosen metrics to evaluate performance
  • Compares results with those from prior work or with a well-chosen, non-trivial benchmark
  • Is mathematically correct
  • Has sufficient detail to allow the work to be reproduced, at least in principle
  • Is reasonably complete: does not have major unaddressed issues or gaps
  • Validates tools used in the work, such as simulators
    • Does not do 'proof by simulation'