
            hypothesis tests check if the statistical summaries obtained from the outputs of two
            (or more) model implementations are drawn from the same distribution. Confidence
            intervals are usually preferred for comparing the output of a model with the output of
            the system being modelled, as they provide an indication of the magnitude by which
            the statistic of interest differs between the two. Nonetheless, confidence intervals
            can also be used for model comparison, but in contexts different from replication,
            such as the evaluation of different models that might represent competing system
            designs or alternative operating policies (Balci and Sargent 1984; Law 2015).
            Graphical methods, such as Q–Q plots (e.g. Alberts et al. 2012) or scatter plots
            (e.g. Arai and Watanabe 2008; Fachada et al. 2017), can also be employed for
            comparing output data, though their interpretation is more subjective than that
            of the previously discussed methods.
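            By way of illustration only, the following Python sketch (not part of the chapter)
            applies two of these approaches to synthetic data: it compares a summary statistic
            collected from independent replications of two hypothetical model implementations
            using a Welch t-test, and then computes a 95% confidence interval for the difference
            of means, indicating the magnitude of any discrepancy. The data, sample sizes and
            significance level are assumptions made purely for the example.

                # Hedged sketch: comparing the outputs of two model implementations.
                # Each array holds one summary statistic (e.g. mean population size)
                # obtained from n independent replications of each implementation.
                import numpy as np
                from scipy import stats

                rng = np.random.default_rng(42)
                out_a = rng.normal(loc=100.0, scale=5.0, size=30)  # replications, model A
                out_b = rng.normal(loc=101.0, scale=5.0, size=30)  # replications, model B

                # Hypothesis test: are the two samples drawn from the same distribution?
                t_stat, p_value = stats.ttest_ind(out_a, out_b, equal_var=False)
                print(f"Welch's t-test: t = {t_stat:.3f}, p = {p_value:.3f}")

                # Confidence interval for the difference of means, indicating the
                # magnitude by which the statistic differs between the two models.
                diff = out_a.mean() - out_b.mean()
                se = np.sqrt(out_a.var(ddof=1) / len(out_a) + out_b.var(ddof=1) / len(out_b))
                # Welch-Satterthwaite approximation of the degrees of freedom
                df = se**4 / (
                    (out_a.var(ddof=1) / len(out_a)) ** 2 / (len(out_a) - 1)
                    + (out_b.var(ddof=1) / len(out_b)) ** 2 / (len(out_b) - 1)
                )
                ci_low, ci_high = stats.t.interval(0.95, df, loc=diff, scale=se)
                print(f"95% CI for mean difference: [{ci_low:.2f}, {ci_high:.2f}]")

            A Q–Q plot of the two samples, obtained by plotting the sorted values of one set of
            replications against the other, would provide the complementary graphical check
            mentioned above.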




            9.5 Modelling Strategies and Their Relationship to Validation

            In this section we review the purpose of validation and its relationship to different
            modelling strategies, with respect to the level of descriptive detail embedded in a
            simulation.
              Several taxonomies of modelling strategies have been described in the literature
            (David et al. 2004; Boero and Squazzoni 2005; Gilbert 2008, pp. 42–44). Normally,
            the adoption of a given strategy does not depend on the class of target being
            modelled, but on how the target is framed as a problem domain. For example,
            a simulation intended to model a system for the purpose of designing policies
            must represent more information and detail than a simulation intended to model
            the system's social mechanisms in a metaphorical way. However, varying levels
            of model detail imply a trade-off between the effort required to verify the
            simulation and the effort required to validate it. The more context and richness
            embedded in a model, the more difficult it becomes to verify. Conversely, as the
            descriptive richness of a simulation increases, more ways become available to
            assess its validity. This creates a tension between constraining simulations by
            formal-theoretical constructs, which are normally easier to verify, and constraining
            them by theoretical-empirical descriptions, which are more amenable to validation
            by empirical and participative methods. In the next sections, two contrasting
            modelling strategies are discussed and the typical cycle of formal and informal
            approaches to modelling and validation is described.



            9.5.1 Subjunctive Agent-Based Models


            A popular strategy in social simulation consists of using models as a means of
            expressing subjunctive moods, talking about possible worlds through what-if
            scenarios such as “what would happen if something were the case.” The goal is
            building artificial