
            in its structure and the interaction of these ranges. For example, a simple model
            a = b has no constraints, but a = b/c, where c = distance between a and b, adds an
            additional constraint even though there are more parameters. As such rules build up
            in complex systems, it is possible that parameter values become highly constrained,
            even though, taken individually, any given element of the model seems reasonably
            free. This may mean that if a system is well modelled, exploration of the model’s
            parameter space by an AI might reveal the limits of parameters within the constraints
            of the real complex system. For example, Heppenstall et al. (2007) use a genetic
            algorithm to explore the parameterisation of a petrol retail model/market and find
            that while some GA-derived parameters have a wide range, others consistently fall
            around specific values that match those derived from expert knowledge of the real
            system.
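              This kind of search can be sketched very roughly in code (and this is emphatically not Heppenstall et al.'s petrol retail model or their genetic algorithm): evolve a population of candidate parameter vectors against observed data and then inspect how widely each parameter varies in the surviving population. The two-parameter price model, its 'imitation' and 'markup' parameters and the observed series below are hypothetical stand-ins.

import random

def model(params, n_steps=50):
    # Hypothetical two-parameter price model: each step the price moves
    # towards a noisy neighbourhood average at rate 'imitation', plus a
    # fixed 'markup' adjustment.
    imitation, markup = params
    price = 80.0
    series = []
    for _ in range(n_steps):
        neighbour_avg = 80.0 + 5.0 * random.random()
        price += imitation * (neighbour_avg - price) + markup
        series.append(price)
    return series

def fitness(params, observed):
    # Negative sum of squared errors against the observed series.
    return -sum((s - o) ** 2 for s, o in zip(model(params), observed))

def evolve(observed, pop_size=60, generations=40):
    pop = [(random.uniform(0.0, 1.0), random.uniform(-1.0, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda p: fitness(p, observed), reverse=True)
        parents = ranked[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(tuple((x + y) / 2.0 + random.gauss(0.0, 0.05)
                                  for x, y in zip(a, b)))  # blend crossover + mutation
        pop = parents + children
    return pop

observed = model((0.4, 0.1))   # stand-in for a real observed price series
final_pop = evolve(observed)
for i, name in enumerate(("imitation", "markup")):
    values = sorted(p[i] for p in final_pop)
    print(name, "spans", round(values[0], 3), "to", round(values[-1], 3))

            A parameter that clusters tightly in the final population is one the system constrains strongly; a wide spread suggests the data cannot pin it down.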
              The same issues that hold for causality hold for data uncertainty and error. We have
            little in the way of techniques for coping with the propagation of either through
            models (see Evans 2012 for a review). It is plain that most real systems can be
            perturbed slightly and maintain the same outcomes, and this gives us some hope
            that errors at least can be suppressed; however, we remain largely ignorant as to
            how such homeostatic forces work in real systems and how we might recognise or
            replicate them in our models. Data and model errors can breed patterns in our model
            outputs. An important component of understanding a model is understanding when
            this is the case. If we are to use a model to understand the dynamics of a real system
            and its emergent properties, then we need to be able to recognise novelty in the
            system. Patterns that result from errors may appear to be novel (if we are lucky), but
            as yet there is little in the way of toolkits to separate out such patterns from truly
            interesting and new patterns produced intrinsically.
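              One blunt but practical probe is Monte Carlo perturbation: jitter the input data within its assumed error bounds, re-run the model many times, and treat the resulting spread in the outputs as the scale below which apparent patterns cannot be trusted. In the sketch below, run_model and the nominal 5% input error are illustrative placeholders rather than recommendations.

import random
import statistics

def run_model(inputs):
    # Stand-in for a full simulation run; returns a single summary output.
    return sum(inputs) / len(inputs) + random.gauss(0.0, 0.1)

baseline_inputs = [10.0, 12.5, 9.8, 11.2]
outputs = []
for _ in range(1000):
    # Jitter each input by an assumed ~5% measurement error and re-run.
    perturbed = [x * random.gauss(1.0, 0.05) for x in baseline_inputs]
    outputs.append(run_model(perturbed))

print("mean output:", round(statistics.mean(outputs), 3))
print("spread due to input error:", round(statistics.stdev(outputs), 3))
# Output features smaller than this spread are hard to distinguish from
# artefacts of data error; larger, persistent features are better candidates
# for genuinely novel behaviour.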
              Currently our best option for understanding model artefacts is model-to-model
            comparisons. These can be achieved by varying one of the following contexts
            while holding the others the same: the model code (the model, libraries and
            platform), the computer the model runs on or the data it runs with (including
            internal random number sequences). Varying the model code (for instance, from
            Java to C++ or from an object-orientated architecture to a procedural one) is a
            useful step in that it ensures the underlying theory is not erroneously dependent
            on its representation. Varying the computer indicates the level of errors associated
            with issues like rounding and number storage mechanisms, while varying the data
            shows the degree to which model and theory are robust to changes in the input
            conditions.
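              A minimal sketch of such a comparison, using two toy stand-in implementations rather than any particular published pair, is simply to run both on identical data and identical random number sequences and difference their outputs.

import random

def implementation_a(data, seed):
    # One rendering of the conceptual model (think: the Java version).
    rng = random.Random(seed)
    return sum(x + rng.random() for x in data)

def implementation_b(data, seed):
    # A structurally different rendering of the same model (think: the C++
    # version), fed the same data and the same random number sequence.
    rng = random.Random(seed)
    total = 0.0
    for x in data:
        total += x + rng.random()
    return total

data = [1.0, 2.0, 3.0]
for seed in range(5):
    diff = abs(implementation_a(data, seed) - implementation_b(data, seed))
    print("seed", seed, "difference", diff)
# Non-zero differences here would point to behaviour that depends on the
# representation (ordering, rounding, library calls) rather than the theory.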
            In each case, a version of the model that can be transferred between users, translated onto other platforms and run on different data warehouses would
            be useful. Unfortunately, however, there is no universally recognised mechanism for
            representing models abstracted from programming languages. Mathematics, UML
            and natural languages can obviously fill this gap to a degree, but not in a manner
            that allows for complete automatic translation. Even the automatic translation of
            computer languages is far from satisfactory when there is a requirement that the
            results be understood by humans so errors in knowledge representation can be
            checked. In addition, many such translations work by producing the same binary
            executable. We also need standard ways of comparing the results of models, and