178                                                     N. David et al.

            purpose programming languages. This is not a black-and-white choice, since several
            simulation toolkits offer the option of developing models using general-purpose
            programming languages (e.g. Repast Simphony), and/or provide high-performance
            and scalable workflows, with Repast HPC (Collier and North 2013) being a case in
            point.
              When the direct use of general-purpose programming languages is involved, the
            adoption of good programming practices for designing and implementing the model
            is fundamental. Techniques such as object-oriented design, modularity and encapsulation
            not only simplify testing and debugging, but also promote incremental model
            development and the mapping of programming units (e.g. classes or functions) to
            model concepts, thus making computational models easier to understand, extend and
            modify. Additionally, defensive programming methodologies, such as assertions and
            unit tests, are well suited for the exploratory nature of simulation, making models
            easier to debug and verify.
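              As a minimal sketch of these defensive practices, consider the following Python example. The `Agent` class, its toroidal grid and the wrap-around movement rule are all hypothetical, invented here for illustration and not drawn from any particular toolkit; the point is that assertions document pre- and post-conditions, while a unit test pins down the intended behaviour of one small unit.

```python
import unittest


class Agent:
    """A hypothetical agent on a square toroidal grid (illustrative only)."""

    def __init__(self, x, y, grid_size):
        # Defensive pre-condition: fail fast if the agent is created off-grid.
        assert 0 <= x < grid_size and 0 <= y < grid_size, "agent placed off-grid"
        self.x, self.y, self.grid_size = x, y, grid_size

    def move(self, dx, dy):
        """Move the agent, wrapping around the grid edges (torus topology)."""
        self.x = (self.x + dx) % self.grid_size
        self.y = (self.y + dy) % self.grid_size
        # Defensive post-condition: the agent must still be on the grid.
        assert 0 <= self.x < self.grid_size, "x left the grid"
        assert 0 <= self.y < self.grid_size, "y left the grid"


class TestAgent(unittest.TestCase):
    """Unit test for one model unit: movement across the grid edge."""

    def test_move_wraps_around(self):
        agent = Agent(4, 4, grid_size=5)
        agent.move(1, 1)  # stepping off the corner should wrap to (0, 0)
        self.assertEqual((agent.x, agent.y), (0, 0))
```

Running the file with `python -m unittest` executes the check; assertions of this kind cost little during exploratory development and can be disabled in production runs (in Python, via the `-O` flag).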
              Two important verification methods, traces and structured walk-throughs, com-
            plement the techniques discussed thus far. The former entails following a specific
            model variable (e.g. the position of an agent or the value of a simulation output)
            throughout the execution of the computational model, with the goal of assessing
            whether the implemented logic is correct and if the necessary precision is obtained.
            Modelling toolkits and programming language tools typically offer the relevant
            functionality, making the use of traces relatively simple (Sargent 2013). In turn,
            structured walk-throughs consist of having more than one person read and
            debug a program. All members of the development team are given a copy of the
            module to be debugged, and the module developer goes through the code, not
            proceeding from one statement to the next until everyone is convinced that
            each statement is correct (Law 2015).
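              A trace can be as simple as recording the value of the chosen variable at every step of the run. The following sketch uses a hypothetical one-dimensional random walker with a fixed move sequence (so the run is reproducible); the recorded trace can then be compared, entry by entry, against the logic the modeller intended to implement.

```python
class RandomWalker:
    """Hypothetical 1-D walker whose position is the traced variable."""

    def __init__(self, position=0):
        self.position = position

    def step(self, move):
        self.position += move


walker = RandomWalker()
trace = []  # the trace: the variable's value after every step
for move in [1, -1, 1, 1, -1]:  # fixed moves, so the trace is reproducible
    walker.step(move)
    trace.append(walker.position)

print(trace)  # → [1, 0, 1, 2, 1]
```

In practice one would use the tracing or logging facilities of the toolkit or language (e.g. Python's `logging` module) rather than a bare list, but the principle of following one variable through the execution is the same.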
              Nevertheless, while the techniques described here are an important part of
            the verification process, a computational model should only be considered verified
            with reasonable confidence if it has been successfully replicated and/or aligned with
            a valid pre-existing model. We will return to this topic in greater detail in Sect. 9.4.




            9.2.2 What Does It Mean to Validate a Model?

            Model validation is defined as ensuring that both conceptual and computational
            models are adequate representations of the target. In this context, the term
            “adequate” may be understood from a number of epistemological perspectives.
            From a practical point of view, we could assess whether the outputs of the
            simulation are close enough to empirical data.
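              One simple way to operationalise “close enough” is to compare a simulated output series against empirical observations using an error measure and a tolerance. The sketch below uses the mean absolute error; the data, the measure and the tolerance are all illustrative assumptions, and in real validation studies the choice of measure and threshold is itself a substantive modelling decision.

```python
def close_enough(simulated, empirical, tolerance):
    """Return True if the mean absolute error between the two series
    is within the given tolerance (illustrative criterion only)."""
    assert len(simulated) == len(empirical), "series must be the same length"
    mae = sum(abs(s - e) for s, e in zip(simulated, empirical)) / len(simulated)
    return mae <= tolerance


empirical = [10.0, 12.0, 15.0, 14.0]  # hypothetical empirical observations
simulated = [9.5, 12.5, 14.0, 14.5]   # hypothetical simulation outputs

print(close_enough(simulated, empirical, tolerance=1.0))  # → True
print(close_enough(simulated, empirical, tolerance=0.5))  # → False
```

Here the mean absolute error is 0.625, so the simulated series passes at a tolerance of 1.0 but fails at 0.5, making explicit how the verdict depends on the chosen threshold.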
              Alternatively, we could assess other aspects of the simulation, such as whether
            the mechanisms specified in the simulation are well accepted by the stakeholders
            involved in a participatory approach. In Sect. 9.3 we will describe the general idea of
            validation as the process that assesses whether the pre-computational models—put