  • Publication
    A scoping review of simulation models of peer review
    (Springer Science and Business Media LLC, 2019-08-19)
    Peer review is a process used in the selection of manuscripts for journal publication and proposals for research grant funding. Though widely used, peer review is not without flaws and critics. Performing large-scale experiments to evaluate and test correctives and alternatives is difficult, if not impossible, so many researchers have turned to simulation studies to overcome these difficulties. In the last ten years this field of research has grown significantly, but with only limited attempts to integrate disparate models or build on previous work. The resulting body of literature thus consists of a large variety of models, hinging on incompatible assumptions, which have not been compared and whose predictions have rarely been empirically tested. This scoping review is an attempt to understand the current state of simulation studies of peer review. Based on 46 articles identified through literature searching, we develop a proposed taxonomy of model features, including model type (e.g. formal models vs. ABMs or other) and the type of peer review system modeled (e.g. peer review for grants vs. for journals or other). We classify the models by their features (including some core assumptions) to help distinguish between the modeling approaches. Finally, we summarize the models’ findings around six general themes: decision-making; matching submissions and reviewers; editorial strategies; reviewer behaviors; comparisons of alternative peer review systems; and the identification and addressing of biases. We conclude with some open challenges and promising avenues for future modeling work.
  • Publication
    Grade Language Heterogeneity in Simulation Models of Peer Review
    Simulation models have proven to be valuable tools for studying peer review processes. However, the effects of some of these models’ assumptions have not been tested, nor have the models been examined in comparative contexts. In this paper, we address two such assumptions, which go hand in hand: (1) the granularity of the evaluation scale, and (2) the homogeneity of the grade language (i.e. whether all reviewers interpret the evaluation grades in the same fashion). We test the consequences of these assumptions by extending a well-known agent-based model of author and reviewer behaviour with discrete evaluation scales and reviewer-specific interpretations of the grade language. In this way, we compare a peer review model with a homogeneous grade language, as assumed in most models of peer review, against a more psychologically realistic model in which reviewers interpret the grades of the evaluation scale heterogeneously. We find that grade language heterogeneity can indeed affect the predictions of a model of peer review.
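The heterogeneous grade-language mechanism described in the abstract above can be illustrated roughly as follows. This is a minimal sketch, not the paper's actual model: the 5-point scale, the threshold noise level, and the acceptance rule are invented here for demonstration. The idea is that each reviewer maps a latent quality score to a discrete grade via cut points, which are either shared by all reviewers (homogeneous grade language) or perturbed per reviewer (heterogeneous).

```python
import random

GRADES = 5  # hypothetical 5-point evaluation scale


def make_thresholds(heterogeneous, noise=0.1, rng=random):
    """Cut points mapping latent quality in [0, 1] to discrete grades.

    Homogeneous: every reviewer shares the same evenly spaced cut points.
    Heterogeneous: each reviewer's cut points are independently perturbed,
    so the same quality can receive different grades from different reviewers.
    """
    base = [i / GRADES for i in range(1, GRADES)]  # [0.2, 0.4, 0.6, 0.8]
    if not heterogeneous:
        return base
    return sorted(min(0.99, max(0.01, c + rng.uniform(-noise, noise)))
                  for c in base)


def grade(quality, thresholds):
    """Discrete grade (1..GRADES) a reviewer assigns to a latent quality."""
    return 1 + sum(quality > t for t in thresholds)


def simulate(heterogeneous, n_papers=1000, n_reviewers=3, seed=0):
    """Fraction of papers accepted under a simple mean-grade rule."""
    rng = random.Random(seed)
    reviewers = [make_thresholds(heterogeneous, rng=rng)
                 for _ in range(n_reviewers)]
    accepted = 0
    for _ in range(n_papers):
        q = rng.random()  # latent paper quality
        mean_grade = sum(grade(q, th) for th in reviewers) / n_reviewers
        if mean_grade >= 4:  # hypothetical acceptance rule
            accepted += 1
    return accepted / n_papers
```

Comparing `simulate(False)` with `simulate(True)` shows how reviewer-specific grade interpretation alone can shift aggregate outcomes such as acceptance rates, even when the underlying paper qualities and decision rule are identical.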