Technical assessment and evaluation of environmental models and software: letter to the Editor
Date Issued
2011-03
Date Available
2011-08-03T11:28:00Z
Abstract
This letter details the collective views of a number of independent researchers on the technical assessment and evaluation of environmental models and software. The purpose is to stimulate debate and initiate action that leads to an improved quality of model development and evaluation, thereby increasing the capacity of models to deliver positive outcomes from their use. As such, we emphasize the relationship between the model evaluation process and credibility with stakeholders (including funding agencies), with a view to ensuring continued support for modelling efforts.
Many journals, including EM&S, publish the results of environmental modelling studies and must judge the work and the submitted papers based solely on the material that the authors have chosen to present and on how they present it. There is considerable variation in how this is done, with a consequent risk of wide variation in the quality and usefulness of the resulting publication. Part of the problem is that the review process is reactive, responding to the submitted manuscript. In this letter, we attempt to be proactive and give guidelines for researchers, authors and reviewers as to what constitutes best practice in presenting environmental modelling results. This is a unique contribution to the organisation and practice of model-based research and the communication of its results that will benefit the entire environmental modelling community. For a start, our view is that the community of environmental modellers should have a common vision of the minimum standards that an environmental model must meet. A common vision of what a good model should be is expressed in various guidelines on Good Modelling Practice. Those guidelines prompt modellers to codify their practice and to be more rigorous in their model testing. Our statement within this letter addresses another aspect of the issue – it prompts professional journals to codify the peer-review process. Introducing a more formalized approach to peer review may discourage reviewers from accepting invitations to review, given the additional time and labour requirements; the burden of proving model credibility is thus shifted to the authors. Here we discuss how to reduce this burden by selecting realistic evaluation criteria, and conclude by advocating the use of standardized evaluation tools as a key issue that needs to be tackled.
Sponsorship
Not applicable
Type of Material
Journal Article
Publisher
Elsevier
Journal
Environmental Modelling and Software
Volume
26
Issue
3
Start Page
328
End Page
336
Copyright (Published Version)
2010 Elsevier Ltd. All rights reserved.
Subject – LCSH
Environmental sciences--Computer simulation--Evaluation
Environmental sciences--Software--Evaluation
Environmental impact analysis--Computer simulation--Evaluation
Environmental impact analysis--Software--Evaluation
Language
English
Status of Item
Peer reviewed
ISSN
1364-8152
This item is made available under a Creative Commons License
File(s)
Name
Alexandrov_Bruen et al.pdf
Size
256.8 KB
Format
Adobe PDF
Checksum (MD5)
8aa2aa23f3709e113b9eafe7f489a30a