
reliable and more comparable to each other. Maximum model exposure is
achieved if the simulation is runnable in the browser. This is much simpler
nowadays, as technologies such as HTML5 and JavaScript dispense with the
need for browser plug-ins. ABM toolkits such as AgentScript (Densmore 2016)
and AgentBase (Wiersma 2015) take this approach. In any case, making the
computational model widely available and easily runnable is crucial for others
to be able to experiment with it.
– Besides source code availability, documentation about the computational model
  should also be provided in the form of (1) detailed source code comments, and
  (2) a user guide and/or technical report. The former should clearly explain what
  each code unit (e.g. function or class) does (a minimal sketch of this practice is
  given after this list), while the latter should describe the program’s architecture,
  preferably with the aid of visual description standards such as UML diagrams.
  In either case, the computational model documentation should describe the
  technical decisions taken where the translation from the conceptual model was
  neither straightforward nor consensual.
– Detailed information about the results should be made publicly available. This
  includes the statistical methods used and/or scripts implementing them, raw
  simulation outputs, distributional information, sensitivity analyses performed,
  and qualitative measures. A number of specialised scientific data repositories
  exist for this purpose (Assante et al. 2016; Amorim et al. 2015). Furthermore,
  there is growing awareness of the importance of having published, citable and
  documented data available in the scholarly record, given its crucial role in
  reproducible science (Altman et al. 2015; Kratz and Strasser 2014).
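To make the last two practices concrete, the following minimal sketch in Python
shows a simulation wrapper whose docstring documents what the code unit does,
and which archives both the raw simulation outputs and basic distributional
summaries. The update rule and the file names (outputs.csv, summary.txt) are
hypothetical stand-ins; the point is the documentation and archiving pattern, not
any particular model.

    import csv
    import random
    import statistics

    def run_model(n_agents, n_steps, seed):
        """Run one replication of a stand-in stochastic model.

        Parameters:
            n_agents -- number of agents in the simulation
            n_steps  -- number of iterations to perform
            seed     -- RNG seed, recorded so the run can be repeated

        Returns a list with one focal output value (the mean agent
        state) per step.
        """
        rng = random.Random(seed)
        state = [rng.random() for _ in range(n_agents)]
        outputs = []
        for _ in range(n_steps):
            # Stand-in update rule: each agent's state drifts randomly.
            state = [s + rng.gauss(0, 0.01) for s in state]
            outputs.append(sum(state) / n_agents)
        return outputs

    if __name__ == "__main__":
        out = run_model(n_agents=100, n_steps=1000, seed=42)

        # Archive the raw per-step outputs, one value per row.
        with open("outputs.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["step", "mean_state"])
            writer.writerows(enumerate(out))

        # Archive basic distributional information alongside the raw data.
        with open("summary.txt", "w") as f:
            f.write(f"n = {len(out)}\n")
            f.write(f"mean = {statistics.mean(out):.6f}\n")
            f.write(f"stdev = {statistics.stdev(out):.6f}\n")

In a real study, the archived files, together with the recorded seed, would be
deposited in a data repository such as those cited above, so that others can
inspect and reuse them.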
The CoMSES Net Computational Model Library (Rollins et al. 2014), an
open digital repository for disseminating computational models associated with
publications in the social and life sciences, should be highlighted in this regard,
since it enforces several of the best practices discussed above. Models are organised
as entries searchable by title, author and other relevant metadata. A formatted
citation is shown for each entry so that researchers who use the model can easily
credit its creators. Model entries have separate sections for code, documentation,
generated outputs, solution exploration analyses and other relevant information.
The library not only accepts original models, but also explicitly welcomes
replications of previous studies. It also offers a certification service that verifies
(1) whether the model code successfully compiles and runs, and (2) whether the
model adheres to documentation best practices, with the ODD protocol being the
recommended documentation template.



9.4.3 Model Comparison Techniques


Replication is evaluated by comparing the outputs of the original computational
model against the outputs of the replicated implementation (Thiele and Grimm
2015).
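As a minimal sketch of such a comparison, suppose each implementation has
produced a focal output across several independent replications, for instance read
back from archived output files like those above. A two-sample Kolmogorov-
Smirnov test, here via SciPy's ks_2samp, is one commonly used way of checking
whether the two samples are consistent with a single output distribution; the
sample_output function below is a hypothetical stand-in for obtaining one
replication's focal output from each implementation.

    import random

    from scipy.stats import ks_2samp  # SciPy assumed to be available

    def sample_output(seed):
        # Hypothetical stand-in: one replication's focal output.
        rng = random.Random(seed)
        return sum(rng.random() for _ in range(100)) / 100

    # Focal outputs from 30 independent replications of each implementation.
    original = [sample_output(seed) for seed in range(30)]
    replica = [sample_output(seed) for seed in range(1000, 1030)]

    # Two-sample Kolmogorov-Smirnov test: a small p-value suggests the two
    # samples are unlikely to come from the same output distribution.
    result = ks_2samp(original, replica)
    print(f"KS statistic = {result.statistic:.4f}, "
          f"p-value = {result.pvalue:.4f}")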
However, how do we determine whether or not two models produce equivalent