
4 Different Modelling Purposes                                  45

            • That information about the following is included: exactly which aspects it
              predicts, guidelines on when the model can and cannot be used to predict,
              some indication of the degree or kind of accuracy with which it predicts, and
              any other caveats a user of the model should be aware of
            • That the model code is distributed so others can explore when and how well it
              predicts




            4.3 Explanation

            4.3.1 Motivation


            Often, especially with complex social phenomena, one is particularly interested in
            understanding why something occurs—in other words, explaining it. Even if one
            cannot predict something before it is known, one might still be able to explain
            it afterwards. This distinction mirrors that in the physical sciences between
            phenomenological and explanatory laws (Cartwright 1983)—the former match the
            data, whilst the latter explain why the data came about. In mature science,
            predictive and explanatory laws are linked in well-understood ways, but with less
            well-understood phenomena one might have one without the other. For example, the
            gas laws that link measurements of temperature, pressure and volume were known
            before the explanation in terms of molecules of gas bouncing randomly around, and
            the formal connection between the two accounts was only made much later. Under-
            standing is important for managing complex systems, as well as for knowing when
            predictive models might work. Whilst, generally, with complex social phenomena
            explanation is easier than prediction, sometimes prediction comes first (however,
            if one can predict, then this invites research to explain why the prediction works).
              If one makes a simulation in which certain mechanisms or processes are
            built in and the outcomes of the simulation match some (known) data, then this
            simulation can support an explanation of the data using the built-in mechanisms. The
            explanation itself is usually of a more general nature, and the traces of the simulation
            runs are examples of that account. Simulations that involve complicated processes
            can thus support complex explanations—ones too intricate to follow using natural-
            language reasoning alone. The simulation makes the explanation explicit, even if
            we cannot fully comprehend its detail. The formal nature of the simulation makes
            it possible to test the conditions and cases under which the explanation works,
            and to refine its assumptions.
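            To make this concrete, consider a minimal sketch of the idea (all names and
            parameters here are invented for illustration, not taken from any particular
            model): a toy word-of-mouth adoption simulation whose built-in mechanism is
            "an agent adopts after contact with an existing adopter". Each run records a
            trace—the chain of inferences from set-up to outcome—and if the outcomes match
            observed adoption data, the mechanism becomes a candidate explanation of it.

```python
import random

def run_adoption_sim(n_agents=100, p_contact=0.05, n_steps=50, seed=None):
    """Toy word-of-mouth adoption model (illustrative only).

    The built-in mechanism: at each step, a non-adopter adopts with a
    probability proportional to the current fraction of adopters.
    The returned trace is one possible causal chain from the set-up
    (a single initial adopter) to the final outcome.
    """
    rng = random.Random(seed)          # seeding makes a run reproducible
    adopted = [False] * n_agents
    adopted[0] = True                  # set-up: one initial adopter
    trace = []                         # record of the causal chain
    for step in range(n_steps):
        frac = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i] and rng.random() < p_contact * frac:
                adopted[i] = True
                trace.append((step, i))   # which agent adopted, and when
    return sum(adopted), trace

# Each run yields a (possibly different) causal chain; comparing the
# final adoption levels across runs against observed data is what lets
# the mechanism support—or fail to support—an explanation.
final, trace = run_adoption_sim(seed=42)
```

            Note that the trace is not itself the explanation: it is one example of the
            more general account embodied in the mechanism, as discussed above.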
            Definition
              By ‘explanation’ we mean establishing a possible causal chain from a set-up to its
              consequences in terms of the mechanisms in a simulation.
              Unpacking some parts of this:

            • The possible causal chain is a set of inferences or computations made as part
              of running the simulation—in simulations with random elements, each run will