tend to be highly clustered, while negative binomial models may be the most effective when observations are highly dispersed around the mean. An interesting example is Fleming and Sorenson (2001), in which negative binomial estimates of technological innovation are compared with the complexity of the invention, measured by both the number of components and the interdependence between those components.
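To make this kind of fit comparison concrete, the sketch below (our illustration, not code from the chapter) fits both a Poisson and a negative binomial distribution to a set of simulated counts and compares their log-likelihoods; the variance-to-mean ratio gives a quick first check for overdispersion. The `counts` array is a hypothetical stand-in for per-run model output.

```python
# A minimal sketch, assuming `counts` stands in for one count per model run.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
counts = rng.negative_binomial(n=3, p=0.3, size=500)   # hypothetical model output

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean={mean:.2f}, variance={var:.2f}")          # variance >> mean: overdispersed

# Poisson fit: its single parameter is the sample mean.
ll_pois = stats.poisson.logpmf(counts, mu=mean).sum()

# Negative binomial fit by the method of moments (valid only when var > mean).
p = mean / var
n = mean * p / (1.0 - p)
ll_nb = stats.nbinom.logpmf(counts, n, p).sum()

print(f"log-likelihood: Poisson {ll_pois:.1f}, negative binomial {ll_nb:.1f}")
```

On overdispersed data the negative binomial log-likelihood will be markedly higher, matching the rule of thumb above.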
In behavioural space, methodologies such as association rule mining (e.g. Hipp et al. 2002) allow the probabilistic association of behavioural attributes. It is worth noting that where models involve a distribution in physical space, this can introduce problems, in particular where the model includes neighbourhood-based behaviours and therefore the potential to develop spatial auto- and cross-correlations. These alter the sampling strategies necessary to establish relationships; a full review of the issues, and of methodologies to deal with them, can be found in Wagner and Fortin (2005).
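As a rough diagnostic for the spatial issue (again our own sketch; Wagner and Fortin (2005) remains the proper reference for corrected sampling strategies), Moran's I under a simple rook-contiguity weighting indicates whether gridded model output is spatially autocorrelated: values near zero suggest spatial randomness, values towards +1 the kind of clustering that invalidates naive independent-sample tests.

```python
# A minimal sketch of Moran's I with binary rook (4-neighbour) weights;
# purely illustrative, not the chapter's method.
import numpy as np

def morans_i(grid):
    x = grid - grid.mean()
    num, wsum = 0.0, 0.0
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += x[i, j] * x[ni, nj]
                    wsum += 1.0
    n = grid.size
    return (n / wsum) * (num / (x ** 2).sum())

rng = np.random.default_rng(1)
print(morans_i(rng.random((20, 20))))   # close to 0 for spatially random output
```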
Experimentation In terms of experimentation, we can make the rather artificial distinction between sensitivity testing and “what if?” analyses; the distinction is more one of intent than anything else. In sensitivity analysis, one perturbs the model inputs slightly to determine the stability of the outputs, under the presumption that models should match the real world in being insensitive to minor changes (a presumption not always well founded). In “what if?” analyses, one alters the model inputs to see what would happen under different scenarios. In addition to the output values at a particular time slice, the stability (or otherwise) of the model, and the conditions under which this stability varies, also give information about the system (Grimm 1999).
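A one-at-a-time perturbation scheme is perhaps the simplest form of such sensitivity testing. The sketch below is a minimal illustration under our own assumptions: `run_model` is a hypothetical stand-in for any stochastic simulation returning a scalar summary, and averaging over replicate seeds separates parameter effects from run-to-run noise.

```python
# A minimal one-at-a-time sensitivity sketch; `run_model` is hypothetical.
import numpy as np

def run_model(params, seed=0):
    # Stand-in simulation: output depends nonlinearly on two parameters,
    # plus a little stochastic noise.
    rng = np.random.default_rng(seed)
    return params["growth"] ** 2 + 0.1 * params["decay"] + rng.normal(0, 0.01)

baseline = {"growth": 1.0, "decay": 0.5}
base_out = np.mean([run_model(baseline, seed=s) for s in range(30)])

for name in baseline:
    for delta in (-0.05, +0.05):                # small perturbations, +/- 5%
        perturbed = dict(baseline)
        perturbed[name] *= 1 + delta
        out = np.mean([run_model(perturbed, seed=s) for s in range(30)])
        print(f"{name} {delta:+.0%}: output changed by {out - base_out:+.4f}")
```

Outputs that swing far more for one parameter than another flag where the model is sensitive; a “what if?” run would use the same machinery but with substantively different scenarios rather than marginal perturbations.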
Tracking Causality Since individual-based models are a relatively recent development, there is far less literature dealing with the tracking of causality through
models. It helps a little that the causality we deal with in models, which is essentially
a mechanistic one, is far more concrete than the causality perceived by humans,
which is largely a matter of the repeated coincidence of events. Nevertheless,
backtracking through a model to mark a causal path is extremely hard, primarily for two reasons. The first is what we might call the “find the lady” problem: the sheer number of interactions involved in social processes tends to be so large that we don’t have the facilities to do the tracking. The second issue, which we might call the “drop in the ocean” problem, is more fundamental, as it relates to a flaw in the mathematical representation of objects: numbers represent aggregated quantities, not individuals. When the objects transacted in a system are represented by numbers greater than one, it immediately becomes impossible to reliably determine the path taken by a specific object through that system. For objects representing concepts, either numerical (e.g. money) or non-numerical (e.g. a meme), this isn’t a problem (one dollar is much like any other; there is only one Gangnam Style to know).
However, for most objects such aggregations place ambiguous nodes between what
would otherwise be discrete causal pathways. Fortunately, we tend to use numbers in agent models as a way of coping with our ignorance (e.g. in the case of calibrated parameters) or with the lack of the computing power we would need to deal with individual objects and their transactional histories (e.g. in the case of a variable like “number of children”). As it happens, every day brings improvements to both.
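Where per-object tracking is affordable, the remedy is to give each transacted object its own identity and transaction log rather than pooling objects into counts. The following sketch is purely our illustration of the difference:

```python
# A minimal sketch of the "drop in the ocean" problem: once objects are
# pooled into a count, individual paths are lost; tagging each object with
# an identifier keeps the causal pathway recoverable.
import itertools

id_gen = itertools.count()

class Tracked:
    """An object carrying its own identity and transaction history."""
    def __init__(self):
        self.uid = next(id_gen)
        self.history = []            # agent ids the object has passed through

def transfer(obj, from_agent, to_agent, log):
    obj.history.append(to_agent)
    log.append((obj.uid, from_agent, to_agent))

log = []
goods = [Tracked() for _ in range(3)]
transfer(goods[0], "A", "B", log)
transfer(goods[0], "B", "C", log)
transfer(goods[1], "A", "C", log)

# With aggregate counts we would only know that agent C now holds two goods;
# with identifiers we can backtrack each object's path exactly.
print(goods[0].history)              # ['B', 'C']
print(log)
```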