
186                                                     N. David et al.

            available historical data is used to design the model, a related concept is called
            out-of-sample testing, in which the remaining data are used to test the predictive
            capacity of the model.
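As an illustration, an out-of-sample test can be sketched as follows. The historical data, the linear-trend "model", and all names here are hypothetical, chosen only to make the split between design data and held-out test data concrete:

```python
import random

# Hypothetical target-system data: yearly observations of some variable.
random.seed(1)
history = [100 + 2 * t + random.gauss(0, 1) for t in range(20)]

# Split: the first 15 years design (fit) the model; the last 5 are held out.
design, holdout = history[:15], history[15:]

# Illustrative "model": a linear trend fitted to the design data
# by ordinary least squares.
n = len(design)
mean_t = (n - 1) / 2
mean_y = sum(design) / n
slope = (sum((t - mean_t) * (y - mean_y) for t, y in enumerate(design))
         / sum((t - mean_t) ** 2 for t in range(n)))
intercept = mean_y - slope * mean_t

# Out-of-sample test: predict the held-out years and measure the error.
predictions = [intercept + slope * t for t in range(n, n + len(holdout))]
mae = sum(abs(p - y) for p, y in zip(predictions, holdout)) / len(holdout)
print(f"mean absolute error on held-out data: {mae:.2f}")
```

A small error on the held-out years supports, but does not prove, the predictive capacity of the model; a large error indicates that the model merely fits the design data.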


            9.3.3.4  Event Validity

            Event validity compares the occurrence of particular events in the model with
            the occurrence of events in the source data. This can be assessed at the level of
            individual agent trajectories or at any aggregate level. Events are situations that
            should occur according to pre-specified conditions, even though their timing or
            circumstances may be unpredictable. For instance, if the target-system data show
            arbitrary periods of stable behaviour interwoven with periods of volatility and
            unpredictable turning points, the simulation should produce similarly
            unpredictable turning events.
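A minimal sketch of such a comparison, assuming turning points as the pre-specified event condition (the series below are invented for illustration), counts events in the source data and in the simulation output with the same rule:

```python
def count_turning_events(series):
    """Count turning-point events: indices where the first
    difference of the series changes sign (peak or trough)."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)

# Hypothetical target-system data and simulation output.
target = [0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 4, 3, 2, 1]
model  = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1, 0, 1, 2, 3]

print("events in target data:", count_turning_events(target))
print("events in model output:", count_turning_events(model))
```

The point of event validity is not that the counts match exactly, nor that events occur at the same times, but that the model reproduces the same kind and roughly the same frequency of events as the source data.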


            9.3.3.5  Validity of Simulation Output

            Since data is hard to collect in social systems, investigating the behaviour of
            simulation output becomes a crucial model validation technique (Sargent 2013).
            This can be performed by running the simulation with different parametrisations
            and checking whether the output is reasonable (Law 2015), either based on
            subjective expert opinion when using “typical” simulation parameters, or by
            objectively evaluating output behaviour under trivial or extreme parametrisations.
            Concerning the latter, for instance, if interaction among agents is nearly
            suppressed, the modeller should be surprised if activities such as trade or culture
            dissemination continue in the population.
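The extreme-parametrisation check can be sketched with a toy trade model; the model, its parameters, and the acceptance bounds are all hypothetical, standing in for whatever simulation is being validated:

```python
import random

def run_trade_model(n_agents=50, steps=100, interaction_prob=0.1, seed=0):
    """Toy agent model: at each step, each agent attempts a trade
    with probability `interaction_prob`. Returns total trades."""
    rng = random.Random(seed)
    trades = 0
    for _ in range(steps):
        for _ in range(n_agents):
            if rng.random() < interaction_prob:
                trades += 1
    return trades

# Extreme parametrisation: with interaction fully suppressed, no trade
# activity should emerge; a nonzero count would signal a model error.
assert run_trade_model(interaction_prob=0.0) == 0

# Typical parametrisation: substantial trade activity is expected.
print("trades at p=0.1:", run_trade_model(interaction_prob=0.1))
```

Each such degenerate setting yields a cheap, objective test: the modeller knows in advance what the output must look like, so any deviation points directly at an error in the model or its implementation.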
              The concept of internal validity (Sargent 2013), or verification between the
            executable computational model and post-computational models (lower-left
            quadrant of Fig. 9.1), can also be considered here, since it relates directly to
            simulation output behaviour. To assess the level of stochastic variability in a
            model, a number of simulation runs are performed using different random number
            streams. A sizeable level of variability between simulation runs can call the model
            into question at different levels: for example, the validity of the simulation output
            for the executable computational model may be disputed, or the stability of a given
            policy (and the parametrisation that expresses it) in the overall model may be
            challenged.
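This replication procedure can be sketched as follows, with a placeholder model standing in for a full simulation run; the output measure and the seeds are illustrative assumptions:

```python
import random
import statistics

def simulate(seed, n_agents=200):
    """Placeholder stochastic model: returns mean agent wealth after
    random shocks (stands in for one full simulation run)."""
    rng = random.Random(seed)
    return statistics.mean(10.0 + rng.gauss(0, 2) for _ in range(n_agents))

# Replicate the simulation with different random number streams (seeds)
# and quantify the spread of the output measure across runs.
runs = [simulate(seed) for seed in range(30)]
cv = statistics.stdev(runs) / statistics.mean(runs)
print(f"coefficient of variation across runs: {cv:.1%}")
```

A small coefficient of variation suggests the reported output is robust to the choice of random stream; a large one means single-run results cannot be trusted, and either more replications or a re-examination of the model is warranted.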
              For a more in-depth look at issues concerning simulation output behaviour, we
            refer the reader to the following references. Visualisation-oriented approaches to
            understanding simulation output are discussed in Chap. 10 of this volume (Evans
            et al. 2017). Visualisation, together with statistical and analytical analysis of
            model outputs, is examined and reviewed by Lee et al. (2015). For a purely
            statistical outlook, Fachada et al. (2015) discuss a generic and systematic
            approach for evaluating the time-series output of simulation models.