120 J.M. Galán et al.
7.1 Introduction
Agent-based modelling is one of multiple techniques that can be used to conceptu-
alise social systems. What distinguishes this methodology from others is the use of
a more direct correspondence between the entities in the system to be modelled and
the agents that represent such entities in the model (Edmonds 2001). This approach
offers the potential to enhance the transparency, soundness, descriptive accuracy,
and rigour of the modelling process, but it can also create difficulties: agent-based
models are generally complex and mathematically intractable, so their exploration
and analysis often require computer simulation.
The problem with computer simulations is that understanding them in reasonable
detail is not as straightforward an exercise as one might think (this also applies
to one’s own simulations). A computer simulation can be seen as the process of
applying a certain function to a set of inputs to obtain some results. This function
is usually so complicated and cumbersome that the computer code itself is often
one of the best available descriptions of it. Following this view, understanding a
simulation basically consists in identifying the parts of this function that are
responsible for generating particular (sub)sets of results.
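Under this view, a run can be sketched as a pure function from parameters and a random seed to results. The toy model below is purely hypothetical, invented for illustration (the function name and the opinion-copying dynamics are assumptions, not taken from this chapter):

```python
import random

def run_simulation(n_agents, n_steps, seed):
    """One run viewed as a function: (parameters, seed) -> results.

    Hypothetical toy model: agents hold opinions in [0, 1]; at each step
    a randomly chosen agent copies another randomly chosen agent's opinion.
    """
    rng = random.Random(seed)  # all stochasticity is routed through the seed
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(n_steps):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        opinions[i] = opinions[j]
    return opinions

# Because the seed is an explicit input, the "function" view holds exactly:
# identical inputs always produce identical outputs.
assert run_simulation(10, 100, seed=42) == run_simulation(10, 100, seed=42)
```

Making the seed an explicit input is what licenses treating the run as a function at all; hidden state (an unseeded generator, wall-clock time) would break the input-to-output correspondence and make replication impossible.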
Thus, it becomes apparent that a prerequisite to understanding a simulation is to
make sure that there is no significant disparity between what we think the computer
code is doing and what it is actually doing. One could be tempted to think that, given
that the code has been programmed by someone, surely there is always at least one
person—the programmer—who knows precisely what the code does. Unfortunately,
the truth tends to be quite different, as the leading figures in the field report:
You should assume that, no matter how carefully you have designed and built your
simulation, it will contain bugs (code that does something different to what you wanted
and expected). (Gilbert 2007)
An unreplicated simulation is an untrustworthy simulation—do not rely on their results,
they are almost certainly wrong. (‘Wrong’ in the sense that, at least in some detail or other,
the implementation differs from what was intended or assumed by the modeller). (Edmonds
and Hales 2003)
Achieving internal validity is harder than it might seem. The problem is knowing whether
an unexpected result is a reflection of a mistake in the programming, or a surprising
consequence of the model itself. [...] As is often the case, confirming that the model was
correctly programmed was substantially more work than programming the model in the first
place. (Axelrod 1997a)
In the particular context of agent-based simulation, the problem tends to be
exacerbated. The complex and exploratory nature of most agent-based models
implies that, before running a model, there is almost always some uncertainty about
what the model will produce. Not knowing a priori what to expect makes it difficult
to discern whether an unexpected outcome has been generated as a legitimate result
of the assumptions embedded in the model or whether, on the contrary, it is due to an error
or an artefact created in its design, in its implementation, or in the running process
(Axtell and Epstein 1994, p. 31; Gilbert and Terna 2000).
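As a minimal sketch of how an implementation artefact of this kind can arise, consider agent updating order, a detail often left unstated in a model description. The example below is hypothetical (the rule and function names are not from this chapter): agents on a ring each adopt the minimum of their own value and their left neighbour's, under two scheduling schemes.

```python
def step_synchronous(state):
    """Every agent applies the rule to a snapshot of the previous state.

    Agents sit on a ring; index -1 wraps around to the last agent.
    """
    return [min(state[i - 1], state[i]) for i in range(len(state))]

def step_asynchronous(state):
    """Agents update one by one in index order, each seeing the
    already-updated values of agents that moved before it."""
    state = list(state)
    for i in range(len(state)):
        state[i] = min(state[i - 1], state[i])
    return state

# Same rule, same initial state, different scheduling, different outcome:
state = [9, 5, 1]
print(step_synchronous(state))   # [1, 5, 1]
print(step_asynchronous(state))  # [1, 1, 1]
```

If only one of these schedulers matches the modeller's intent, the other silently produces an artefact that is easy to mistake for a genuine consequence of the model's assumptions, which is exactly the ambiguity described above.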