  • Publication
    Challenges of Designing and Implementing Simulation Models of Peer Review
    Science relies on peer review. Through this mechanism, manuscripts are selected for publication and grant proposals for funding. However, the processes of peer review do not operate in a vacuum; they reflect the priorities, norms, and practices of the institutions in which they are embedded, such as scientific communities, funding agencies, publishers, and scholarly societies, each with its own perspectives and logics (Bollen et al. 2014; Benner & Sandstrom 2000). Peer review is a multi-level system. At the macro level, a funding agency sets its priorities and goals for funding based on national priorities and legal mandates. At the meso level, funding agencies use peer review to select which proposals to fund, but also integrate their own strategic objectives (for example, gender balance, geographical diversity, and disciplinary needs) into the selection process. At the micro level, individual reviewers and panels bring their own perspectives to bear on the review process. In particular, the dynamics of meso- and micro-level complexity provide an area of exploration that could benefit from simulation studies for two reasons. First, simulation studies help us understand which features of the peer review process emerge from the different norms, relationships, attitudes, and behaviors of the actors and organizations involved. Second, these methods allow us to develop and test policy recommendations for improving peer review in these same organizations. In our own project, we started by mapping existing simulation models of peer review and identifying knowledge gaps in the literature, then began developing a simulation model to address these gaps. We found that numerous researchers had studied peer review systems by means of formal and computational modeling, such as agent-based models (ABMs) (Squazzoni & Takács 2011).
We counted 44 papers on simulation models of peer review published since 1969: some were used to compare the efficiencies of alternative peer review systems (e.g. Kovanis et al. 2017); some compared different behavioral strategies of authors, editors or reviewers (e.g. Thurner & Hanel 2011; Squazzoni & Gandelli 2013); some sought the origin of the issues of peer review, such as biases, high costs and inefficiencies (e.g. Righi & Takács 2017).
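The appeal of agent-based modeling for peer review can be illustrated with a deliberately minimal sketch. Everything here (the latent-quality assumption, the noise parameters, the top-fraction acceptance rule) is our own illustrative construction, not a reproduction of any of the models cited above:

```python
import random
import statistics

def simulate_round(n_papers=200, n_reviews=2, reviewer_sd=0.3,
                   accept_rate=0.25, seed=42):
    """One round of a toy peer-review ABM: each paper has a latent quality
    in [0, 1], receives n_reviews noisy scores, and the top accept_rate
    fraction by mean score is accepted."""
    rng = random.Random(seed)
    papers = [rng.random() for _ in range(n_papers)]  # latent quality
    scores = [
        statistics.mean(q + rng.gauss(0, reviewer_sd) for _ in range(n_reviews))
        for q in papers
    ]
    cutoff = sorted(scores, reverse=True)[int(accept_rate * n_papers) - 1]
    accepted = [q for q, s in zip(papers, scores) if s >= cutoff]
    return statistics.mean(accepted)  # mean latent quality of accepted papers

# Varying a single behavioral parameter (reviewer noise) shows how a
# system-level outcome (quality of the accepted pool) emerges from it.
careful = simulate_round(reviewer_sd=0.05)
sloppy = simulate_round(reviewer_sd=0.8)
```

Even this toy version exhibits the kind of question such models address: how micro-level reviewer behavior propagates into macro-level selection quality. The published models add richer mechanisms (strategic authors, editorial policies, reciprocity), but the simulation logic follows this pattern.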
  • Publication
    A scoping review of simulation models of peer review
    (Springer Science and Business Media LLC, 2019-08-19)
    Peer review is a process used in the selection of manuscripts for journal publication and proposals for research grant funding. Though widely used, peer review is not without flaws and critics. Performing large-scale experiments to evaluate and test correctives and alternatives is difficult, if not impossible. Thus, many researchers have turned to simulation studies to overcome these difficulties. In the last ten years this field of research has grown significantly but with only limited attempts to integrate disparate models or build on previous work. Thus, the resulting body of literature consists of a large variety of models, hinging on incompatible assumptions, which have not been compared, and whose predictions have rarely been empirically tested. This scoping review is an attempt to understand the current state of simulation studies of peer review. Based on 46 articles identified through literature searching, we develop a proposed taxonomy of model features that include model type (e.g. formal models vs. ABMs or other) and the type of modeled peer review system (e.g. peer review in grants vs. in journals or other). We classify the models by their features (including some core assumptions) to help distinguish between the modeling approaches. Finally, we summarize the models’ findings around six general themes: decision-making; matching submissions and reviewers; editorial strategies; reviewer behaviors; comparisons of alternative peer review systems; and the identification and addressing of biases. We conclude with some open challenges and promising avenues for future modeling work.
      Scopus citations: 15
  • Publication
    How to evaluate ex ante impact? An analysis of reviewers’ comments on impact statements in grant applications
    (Oxford University Press, 2020-10-01)
    Impact statements are increasingly required and assessed in grant applications. In this study, we used content analysis to examine the ‘comments on impact’ section of the postal reviews and related documents of Science Foundation Ireland’s Investigators’ Programme to understand reviewers’ ex ante impact assessment. We found three key patterns: (1) reviewers favoured short-term, tangible impacts, particularly commercial ones; (2) reviewers commented on process-oriented (formative) impact in a more concrete and elaborate manner than on outcome-oriented (summative) impact; and (3) topics related to scientific impacts were widely discussed even though the impact section was intended for evaluating economic and societal impacts. We conclude that for ex ante impact assessment to be effective, funding agencies should clearly indicate the types of impact expected from research proposals, rather than offering a general ‘wish list’, and that more focus should be placed on process-oriented than on outcome-oriented impact.
      Scopus citations: 6
  • Publication
    Grade Language Heterogeneity in Simulation Models of Peer Review
    Simulation models have proven to be valuable tools for studying peer review processes. However, the effects of some of these models’ assumptions have not been tested, nor have these models been examined in comparative contexts. In this paper, we address two of these assumptions, which operate in tandem: (1) the granularity of the evaluation scale, and (2) the homogeneity of the grade language (i.e. whether reviewers interpret evaluation grades in the same fashion). We test the consequences of these assumptions by extending a well-known agent-based model of author and reviewer behaviour with discrete evaluation scales and reviewers’ interpretation of the grade language. In this way, we compare a peer review model with a homogeneous grade language, as assumed in most models of peer review, with a more psychologically realistic model in which reviewers interpret the grades of the evaluation scale heterogeneously. We find that grade language heterogeneity can indeed affect the predictions of a model of peer review.
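The core idea of grade language heterogeneity can be sketched in a few lines. This is our own illustrative construction under stated assumptions (a 5-point scale, uniformly jittered cut points, pairwise disagreement as the outcome measure), not a reproduction of the extended model described above:

```python
import random

GRADES = 5  # discrete evaluation scale: grades 0..4

def make_thresholds(rng, jitter=0.0):
    """Cut points mapping a perceived quality in [0, 1] to a discrete grade.
    jitter=0 gives every reviewer the same (homogeneous) grade language;
    jitter>0 perturbs each reviewer's cut points (heterogeneous interpretation)."""
    base = [i / GRADES for i in range(1, GRADES)]  # 0.2, 0.4, 0.6, 0.8
    return sorted(min(1.0, max(0.0, t + rng.uniform(-jitter, jitter))) for t in base)

def grade(quality, thresholds):
    """Discrete grade: the number of cut points the quality exceeds."""
    return sum(quality >= t for t in thresholds)

def disagreement_rate(jitter, n_reviewers=50, n_papers=500, seed=1):
    """Fraction of papers on which two randomly paired reviewers disagree,
    even though both perceive the paper's quality identically."""
    rng = random.Random(seed)
    reviewers = [make_thresholds(rng, jitter) for _ in range(n_reviewers)]
    disagreements = 0
    for _ in range(n_papers):
        q = rng.random()
        r1, r2 = rng.sample(reviewers, 2)
        disagreements += grade(q, r1) != grade(q, r2)
    return disagreements / n_papers

homogeneous = disagreement_rate(jitter=0.0)
heterogeneous = disagreement_rate(jitter=0.15)
```

Because reviewers here share identical perceptions and differ only in how they translate them into grades, any disagreement under `jitter > 0` is attributable purely to the grade language, which is the mechanism whose modeling consequences the paper examines.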
      Scopus citations: 2