
            understanding of your simulation and in most cases is usable even when you change
            the simulation setup. Chapter 10 in this volume (Evans et al. 2017) discusses a range
            of visualisation techniques aimed at aiding the understanding of a simulation model.



            5.6 The Consolidation Phase


The consolidation phase should occur once one has a clear idea of what
simulation one wants to run, what one wants to show with it and
a hypothesis about what is happening. It is at this stage that one stops exploring
            and puts the model design and results on a more reliable footing. It is likely
            that even if one has followed a careful and formal approach to model building,
            some consolidation will still be needed, but it is particularly crucial if one has
            developed the simulation model using an informal, exploratory approach. The
            consolidation phase includes processes of simplification, checking, output collection
            and documentation. Although the consolidation phase has been isolated here, it is
            not unusual to include some of these processes in earlier stages of development,
            intermingling exploration and consolidation. In such circumstances, it is essential
            that a final consolidation pass is undertaken, to ensure that the model is truly robust.
Simplification is where one decides which features/aspects of the model are
needed for the particular paper/demonstration one has in mind. In the most basic
            case, this may just be a decision as to which features to ignore and keep fixed as
            the other features are varied. However, this is not very helpful to others because
            (a) it makes the code and simulation results harder to understand (the essence of
            the demonstration is cluttered with excess detail) and (b) it means your model is
            more vulnerable to being shown to be brittle (there may be a hidden reliance on
            some of the settings for the key results). A better approach is to actually remove
            the features that have been explored but turned out to be unimportant so that only
            what is important and necessary is left. This not only results in a simpler model for
            presentation but is also a stronger test of whether or not the removed features were
            irrelevant.
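  To make the contrast concrete, the following sketch (in Python, with invented class and method names chosen purely for illustration) shows an explored-but-unimportant mechanism first kept behind a switch and then removed outright; only the latter genuinely tests that the reported results do not depend on it.

    class Agent:
        # Minimal illustrative agent; these methods stand in for whatever
        # the real model does.
        def update_opinion(self):
            pass

        def recall_past_interactions(self):
            pass

    # Before simplification: the memory mechanism is merely switched off,
    # so it still clutters the code and may hide a reliance on its settings.
    class ModelWithSwitch:
        def __init__(self, agents, use_memory=False):
            self.agents = agents
            self.use_memory = use_memory

        def step(self):
            for agent in self.agents:
                if self.use_memory:
                    agent.recall_past_interactions()
                agent.update_opinion()

    # After simplification: the mechanism is deleted, leaving only what is
    # needed for the demonstration and testing that it really was irrelevant.
    class SimplifiedModel:
        def __init__(self, agents):
            self.agents = agents

        def step(self):
            for agent in self.agents:
                agent.update_opinion()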
              The checking stage is where one ensures that the code does in fact correspond
to the original intention when programming it and that it contains no hidden bugs
or artefacts. This involves checking that the model produces “reasonable” outputs
            for both “standard” inputs and “extreme” inputs (and of course identifying what
            “standard” and “extreme” inputs and “reasonable” outputs are). Commonly, this
involves a series of parameter sweeps, stepping the value of each parameter in
turn to cover as wide a range of combinations as possible (usually limited by the resources available).
            When possible, the outputs of these sweeps should be compared against a standard,
            whether that is real-world data on the target phenomenon or data from a comparable
            (well-validated) model.
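  For instance, a sweep over a small set of “standard” and “extreme” values might be organised along the following lines. This is only a minimal sketch in Python; the run_model function, the parameter names and the value ranges are invented stand-ins for whatever the actual simulation and its settings are.

    import csv
    import itertools
    import random

    def run_model(num_agents, noise, seed):
        # Hypothetical stand-in for a single simulation run: in practice this
        # would execute the actual model and return the output of interest.
        random.seed(seed)
        return random.random() * noise / max(num_agents, 1)

    # "Standard" and "extreme" values chosen in advance for each parameter.
    parameter_values = {
        "num_agents": [10, 100, 1000],   # extreme low, standard, extreme high
        "noise": [0.0, 0.1, 1.0],
    }
    seeds = range(5)  # repeated runs to average over stochastic variation

    with open("sweep_results.csv", "w", newline="") as output_file:
        writer = csv.writer(output_file)
        writer.writerow(["num_agents", "noise", "seed", "output"])
        for num_agents, noise in itertools.product(*parameter_values.values()):
            for seed in seeds:
                writer.writerow([num_agents, noise, seed,
                                 run_model(num_agents, noise, seed)])

  Recording every run rather than only the summary statistics also eases the output collection stage described next, since the raw results remain available for later inspection.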
              The output collection stage is where data from the various runs is collected
            and summarised in such a way that (a) the desired results are highlighted and
            (b) sufficient “raw” data is still available to understand how these results have