
• Rerun the same code on different computers, using different operating systems and different pseudorandom number generators. These are accessory assumptions of the executable model that are most often considered non-significant, so any detected difference is a sign of an artefact. If no significant differences are detected, then we can be confident that the code contains all the assumptions that could significantly influence the results. This is a valuable finding that can be exploited by the programmer (see the next activity). As an example, Polhill et al. (2005) explain that using different compilers can result in the application of different floating-point arithmetic systems to the simulation run. A minimal sketch of this kind of check appears below.
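
The following sketch illustrates one way such a check might be set up, assuming a hypothetical run_model function that stands in for the real simulation: the same model is run under two different pseudorandom number generators and the resulting output distributions are compared statistically. The placeholder dynamics and all names here are illustrative assumptions, not part of the original text.

```python
# A minimal sketch, assuming a hypothetical run_model() standing in for
# the simulation under verification: rerun the same executable model
# under two different pseudorandom number generators and test whether
# the output distributions differ significantly.
import numpy as np
from scipy.stats import mannwhitneyu

def run_model(rng: np.random.Generator,
              n_agents: int = 100, steps: int = 500) -> float:
    """Placeholder dynamics (random pairwise wealth exchange);
    replace with the model being verified."""
    wealth = np.ones(n_agents)
    for _ in range(steps):
        i, j = rng.integers(0, n_agents, size=2)
        if wealth[i] > 0:
            wealth[i] -= 1.0
            wealth[j] += 1.0
    return float(np.var(wealth))  # summary statistic of one run

# Same model, two different underlying bit generators.
runs_mt = [run_model(np.random.Generator(np.random.MT19937(s))) for s in range(30)]
runs_pcg = [run_model(np.random.Generator(np.random.PCG64(s))) for s in range(30)]

# If the generator really is a non-significant accessory assumption,
# both samples should come from the same distribution.
_, p_value = mannwhitneyu(runs_mt, runs_pcg)
print(f"Mann-Whitney U p-value: {p_value:.3f}")  # a very small p hints at an artefact
```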
              Programmer’s activities:
• Reimplement the code in different programming languages. Assuming that the code contains all the assumptions that can significantly influence the results, this activity is equivalent to creating alternative representations of the same executable model, and it can therefore help to detect errors in the implementation. There are several examples of this type of activity in the literature: Bigbee et al. (2007) reimplemented Sugarscape (Epstein and Axtell 1996) using MASON, Xu et al. (2003) implemented a single model in both Swarm and Repast, and the reimplementation exercise conducted by Edmonds and Hales (2003) applies here too.
• Analyse particular cases of the executable model that are mathematically tractable. Any disparity between the simulation output and the analytical solution indicates the presence of errors (see the sketch after this list).
• Apply the simulation model to extreme cases that are perfectly understood (Gilbert and Terna 2000). Examples of this type of activity include running simulations without agents or with very few agents, exploring the behaviour of the model with extreme parameter values, or modelling very simple environments. This activity is common practice in the field; the sketch after this list illustrates both this and the previous activity.
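
As a minimal sketch of both checks, consider a deliberately simple random-walk model as a hypothetical stand-in for the simulation under study: for independent unit-step walkers, theory gives a mean squared displacement equal to the number of steps, so the degenerate and extreme cases can be tested directly. The model and all names below are illustrative assumptions.

```python
# A minimal sketch: verify a toy executable model against extreme cases
# and an analytically tractable case. For independent +/-1 random
# walkers, theory gives E[x_t^2] = t.
import numpy as np

def simulate_walkers(n_agents: int, steps: int,
                     rng: np.random.Generator) -> np.ndarray:
    """Return final positions of n_agents independent +/-1 random walkers."""
    if n_agents == 0:
        return np.empty(0)
    moves = rng.choice([-1, 1], size=(steps, n_agents))
    return moves.sum(axis=0)

rng = np.random.default_rng(42)

# Extreme case 1: no agents -- the model should run and produce nothing.
assert simulate_walkers(0, 100, rng).size == 0

# Extreme case 2: zero steps -- nobody should have moved.
assert np.all(simulate_walkers(50, 0, rng) == 0)

# Tractable case: mean squared displacement should be close to `steps`.
steps = 400
positions = simulate_walkers(10_000, steps, rng)
msd = float(np.mean(positions ** 2))
print(f"simulated MSD = {msd:.1f}, analytical MSD = {steps}")
# A large disparity here would indicate an error in the implementation.
assert abs(msd - steps) / steps < 0.05
```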



            7.5 Summary


            The dynamics of agent-based models are usually so complex that their own
            developers do not fully understand how they are generated. This makes it difficult, if
            not impossible, to discern whether observed significant results are legitimate logical
            implications of the assumptions that the model developer is interested in or whether
            they are due to errors or artefacts in the design or implementation of the model.
              Errors are mismatches between what the developer believes a model is and what
            the model actually is. Artefacts are significant phenomena caused by accessory
            assumptions in the model that are (mistakenly) considered non-significant. Errors
            and artefacts prevent developers from correctly understanding their simulations.
            Furthermore, both errors and artefacts can significantly decrease the validity of a
            model, so they are best avoided.
              In this chapter we have outlined a general framework that summarises the
            process of designing, implementing, and using agent-based models. Using this