
be slightly different. In this case, it is either a possibilistic explanation (A could
cause B), in which case one just has to show one run exhibiting the complete
chain, or a probabilistic explanation (A probably causes B, or A causes a
distribution of outcomes around B), in which case one has to look at an assembly
of runs, maybe summarising them using statistics or visual representations (the
first sketch after this list illustrates this).
• For explanatory purposes, the structure of the model is important, because that
limits what the explanation consists of. If, for example, the model consisted of
mechanisms that are known not to occur, any explanation one established would
be in terms of these non-existent mechanisms, which is not very helpful. If one
has parameterised the simulation on some in-sample data (found the values of
the free parameters that made the simulation fit the in-sample data), then the
explanation of the outcomes is also in terms of the in-sample data, mediated by
these 'magic' free parameters.⁸
• The consequences of the simulations are generally measurements of the outcomes
of the simulation. These are compared with the data to see if they 'fit'. It is
usual that only some of the aspects of the target data and of the data the simulation
produces are considered significant; other aspects might not be (e.g. they might be
artefacts of the randomness in the simulation or of other factors extraneous to the
explanation). The kind of fit between data and simulation outcomes needs to be
assessed in a way that is appropriate to which aspects of the data are significant
and which are not. For example, if it is the level of the outcome that is key, then a
distance or error measure between this and the target data might be appropriate,
but if it is the shape or trend of the outcomes over time that is significant, then
other techniques will be more appropriate (e.g. Thorngate and Edmonds 2013);
the second sketch after this list illustrates the contrast.
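
A minimal sketch, in Python, of what an 'assembly of runs' can look like in practice.
The toy model, its parameters and the number of runs are purely illustrative assumptions,
not anything from the chapter: a single run can only show that A could lead to B, whereas
summary statistics over many runs are what a probabilistic reading has to rest on.

# Toy stochastic 'model' standing in for one simulation run; entirely hypothetical.
import random
import statistics

def toy_model(a: float, seed: int) -> float:
    rng = random.Random(seed)
    b = a
    for _ in range(100):                   # 100 toy time steps
        b += 0.1 * a + rng.gauss(0, 1.0)   # systematic effect of 'a' plus noise
    return b

# One run exhibiting the chain from 'a' to the outcome supports a possibilistic
# claim; the distribution over an assembly of runs is needed for a probabilistic one.
outcomes = [toy_model(a=2.0, seed=s) for s in range(500)]
print("mean outcome:       ", statistics.mean(outcomes))
print("standard deviation: ", statistics.stdev(outcomes))
print("5th/95th percentile:", statistics.quantiles(outcomes, n=20)[0],
      statistics.quantiles(outcomes, n=20)[-1])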
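
The second sketch makes the contrast in the last point concrete. It is not the authors' code,
and only loosely in the spirit of the ordinal comparisons of Thorngate and Edmonds (2013):
it contrasts an error measure on the level of an outcome series with a crude measure of
whether its direction of change over time matches the target.

def mean_absolute_error(simulated, target):
    # Distance between the *levels* of the two series.
    return sum(abs(s - t) for s, t in zip(simulated, target)) / len(target)

def trend_agreement(simulated, target):
    # Fraction of steps in which the two series move in the same direction
    # (up, down or flat); a crude shape/trend comparison.
    def directions(series):
        return [(b > a) - (b < a) for a, b in zip(series, series[1:])]
    matches = [s == t for s, t in zip(directions(simulated), directions(target))]
    return sum(matches) / len(matches)

target    = [10, 12, 15, 14, 18, 21]   # hypothetical observed series
simulated = [ 2,  4,  7,  6, 10, 13]   # wrong level, same up-and-down shape

print(mean_absolute_error(simulated, target))   # 8.0 -> poor fit on levels
print(trend_agreement(simulated, target))       # 1.0 -> perfect trend match

In these made-up figures the simulated series is at entirely the wrong level but reproduces
the up-and-down pattern exactly, so the two measures give opposite verdicts about 'fit'.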
Example Stephen Lansing spent time in Bali as an anthropologist, researching
how the Balinese coordinated their water usage (among other things). He and
his collaborator, James Kramer, built a simulation to show how the Balinese
system of temples acted to regulate water usage, through an elaborate system of
agreements between farmers, enforced through the cultural and religious practices
at those temples (Lansing and Kramer 1993). Although their observations could
cover many instances of localities using the same system of negotiation over
water, all of those observations were necessarily within the same culture. Their
simulation helped establish the nature and robustness of their explanation by
exploring a close universe of 'what if' questions, which vividly showed the
comparative advantages of the observed system that had developed over a
considerable period. The model does not predict that such systems will develop
in the same circumstances, but it substantially adds to the understanding of the
observed case.





⁸ I am being a little disparaging here; it may be that these parameters have a definite meaning
in terms of relating different scales or some such, but too often they have no clear meaning
and just help the model fit the data.