
            9.3.3.7  Participatory Approaches for Validation

Participatory approaches involve stakeholders in both the design and the validation of a model. Such an approach, also known as Companion Modelling (Barreteau et al. 2001), assumes that model development must itself be considered part of a process of social intervention, in which dialogue among stakeholders, drawing on both informal and theoretical knowledge, is embedded in the model development process. Rather than only the final shape of the model, both the development process and the model become instruments for negotiation and decision making. Documentation and visualisation techniques can play a crucial role in bridging the opinions and intentions of all interested parties. Such approaches are particularly suited to policy or strategy development. This topic is discussed in Chap. 12, "Participatory Approaches" (Barreteau et al. 2017).



            9.4 Replicating and Comparing Models

            Computational models in social science can be very sensitive to implementation
            details, and the influence that seemingly negligible aspects such as data structures or
            sequences of events can have on simulation results is striking (Merlone et al. 2008).
Furthermore, model implementations can be quite elaborate, making them prone to programming errors (Will and Hegselmann 2008). This can seriously affect
            V&V when data from the system being modelled cannot be obtained easily, cheaply
or at all, which is often the case in social simulation. Moreover, even when data are available, a good fit between real and simulated data, while providing evidence of the model's validity as a data-generating process, says little about how the model actually operates. Model replication, i.e. the reimplementation of an existing model and the replication of its results, is a potential but frequently neglected solution to
            this problem (Will and Hegselmann 2008; Thiele and Grimm 2015). Replicating a
            model in a different context will sidestep the biases associated with the language or
            toolkit used to develop the original model, bringing to light inconsistencies between
            conceptual and computational models (Edmonds and Hales 2003; Wilensky and
            Rand 2007).
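  The sensitivity to implementation choices mentioned above can be made concrete with a small, purely illustrative sketch (not taken from the studies cited here): a majority-rule opinion model on a ring in which the only difference between two runs is the sequence of agent updates, synchronous versus random sequential. Even with identical initial conditions, the two schedules may produce different aggregate outcomes.

    import random

    def step_synchronous(opinions):
        # All agents update simultaneously, each seeing the previous state.
        n = len(opinions)
        return [1 if opinions[(i - 1) % n] + opinions[i] + opinions[(i + 1) % n] >= 2 else 0
                for i in range(n)]

    def step_asynchronous(opinions, rng):
        # Agents update one at a time, in random order, each seeing current values.
        n = len(opinions)
        opinions = list(opinions)
        for i in rng.sample(range(n), n):
            local = opinions[(i - 1) % n] + opinions[i] + opinions[(i + 1) % n]
            opinions[i] = 1 if local >= 2 else 0
        return opinions

    rng = random.Random(1)
    initial = [rng.randint(0, 1) for _ in range(51)]

    sync_state, async_state = list(initial), list(initial)
    for _ in range(25):
        sync_state = step_synchronous(sync_state)
        async_state = step_asynchronous(async_state, rng)

    print("opinion-1 share, synchronous :", sum(sync_state) / len(sync_state))
    print("opinion-1 share, asynchronous:", sum(async_state) / len(async_state))

  Any divergence between the two printed shares stems solely from the update schedule, exactly the kind of seemingly negligible detail that replication in a different language or toolkit helps to expose.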
              Replication strongly contributes to the V&V of simulation models (Wilensky
            and Rand 2007; Thiele and Grimm 2015). Verification is improved because if two
            or more distinct implementations of a conceptual model yield equivalent results,
            it is more likely that the implemented models correctly describe the conceptual
model (Wilensky and Rand 2007). Validation is stimulated in turn, since the very idea of validation is to compare a model with other descriptions of the problem being modelled; this may include cross-model validation, i.e. comparison with other simulation models that have already been validated to some level. Thus, it is reasonable to assume
            that a computational model cannot be considered fully verified and validated until
            it has been successfully replicated (Edmonds and Hales 2003). Nonetheless, the
            most important reason for replicating and comparing models is simply one of good
            scientific practice, since replication is the gold standard against which scientific
            claims are evaluated (Peng 2011).
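  In practice, a replication exercise ends with a comparison of the results produced by the original and the replicated implementations. The sketch below is one possible, assumed workflow rather than a procedure prescribed in this chapter: it collects repeated runs from two implementations and applies a two-sample Kolmogorov-Smirnov test to their output distributions. The functions run_implementation_a and run_implementation_b are hypothetical placeholders for the original model and its reimplementation.

    import numpy as np
    from scipy.stats import ks_2samp

    def run_implementation_a(seed):
        # Hypothetical placeholder: return the summary statistic of interest
        # from one run of the original implementation.
        rng = np.random.default_rng(seed)
        return rng.normal(loc=10.0, scale=2.0)

    def run_implementation_b(seed):
        # Hypothetical placeholder for the reimplemented model.
        rng = np.random.default_rng(seed + 10_000)
        return rng.normal(loc=10.0, scale=2.0)

    # Collect replications from both implementations, using different seeds.
    results_a = np.array([run_implementation_a(s) for s in range(100)])
    results_b = np.array([run_implementation_b(s) for s in range(100)])

    # The two-sample Kolmogorov-Smirnov test asks whether the two sets of
    # results could plausibly come from the same distribution.
    test = ks_2samp(results_a, results_b)
    print(f"KS statistic = {test.statistic:.3f}, p-value = {test.pvalue:.3f}")

  A non-significant difference is at best weak evidence of equivalence; which comparison is appropriate depends on the replication standard adopted, from numerical identity of individual runs to distributional equivalence of aggregate results.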