224 A. Evans et al.
these are no more forthcoming. As practitioners, we are only really at the stage where we
can start to talk about model results in the same way (see, e.g., Grimm et al. 2006).
Consistency in comparison is still a long way off, in part because statistics for model
outputs and validity are still evolving, and in part because we still have little idea of
which statistics are best applied, and when (for one example bucking this trend,
see Knudsen and Fotheringham 1986).
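One family of comparison statistics from this literature is the standardised root-mean-square error (SRMSE), which scales the RMSE by the observed mean so that fit can be compared across datasets of different magnitudes. The sketch below is purely illustrative, with hypothetical flow data; it is not drawn from the chapter itself.

```python
import math

def srmse(modelled, observed):
    """Standardised root-mean-square error: RMSE divided by the
    observed mean, so that goodness-of-fit can be compared across
    datasets of different magnitudes."""
    n = len(observed)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(modelled, observed)) / n)
    mean_obs = sum(observed) / n
    return rmse / mean_obs

# Hypothetical flows from a spatial interaction model vs. observations.
modelled = [120, 80, 45, 60]
observed = [100, 90, 50, 60]
print(round(srmse(modelled, observed), 3))  # → 0.153
```

A value of 0 would indicate a perfect fit; because the statistic is dimensionless, the same threshold can be applied to, say, migration flows and retail flows alike.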
10.5 Future Directions
Recognising patterns in our modelled data allows us to:
1. Compare it with reality for validation.
2. Discover new information about the emergent properties of the system.
3. Make predictions.
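For the first of these, validation, even a very simple cell-by-cell agreement measure can put a number on how well a modelled map matches an observed one. A minimal sketch, using hypothetical categorical land-use maps (the data and function name are illustrative, not from the chapter):

```python
def proportion_agreement(modelled, observed):
    """Fraction of cells where two categorical maps hold the same class."""
    assert len(modelled) == len(observed)
    matches = sum(1 for m, o in zip(modelled, observed) if m == o)
    return matches / len(modelled)

# Hypothetical 3x3 land-use maps, flattened row by row (0=rural, 1=urban).
modelled = [0, 1, 1, 0, 1, 0, 0, 0, 1]
observed = [0, 1, 0, 0, 1, 0, 1, 0, 1]
print(proportion_agreement(modelled, observed))  # 7 of 9 cells agree here
```

Note that such per-cell measures say nothing about whether the spatial *arrangement* of classes is right, which is exactly the kind of gap the list of requirements below highlights.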
Of these, discovering new information about the system is undoubtedly the
hardest, as it is much easier to spot patterns you are expecting. Despite the
above advances, there are key areas where current techniques do not match our
requirements. In particular, these include:
1. Mechanisms to determine when we do not have all the variables we need to
model a system and which variables to use.
2. Mechanisms to determine which minor variables may be important in making
emergent patterns through non-linearities.
3. The tracking of emergent properties through models.
4. The ability to recognise all but the most basic patterns in space over time.
5. The ability to recognise action at a distance, across both space and time.
6. The tracking of errors, error acceleration and homeostatic forces in models.
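The last item, tracking errors and error acceleration, can be illustrated by running two copies of a model whose inputs differ by a tiny perturbation and recording how far apart their trajectories drift. A minimal sketch, using the logistic map as a hypothetical stand-in for a non-linear model (not a model from the chapter):

```python
def step(x, r=3.9):
    # One iteration of a hypothetical non-linear model (the logistic map
    # in its chaotic regime, r = 3.9).
    return r * x * (1 - x)

def track_error(x0, eps=1e-6, steps=10):
    """Run two trajectories differing by eps and record their divergence
    after each step, giving a crude error-tracking trace."""
    a, b = x0, x0 + eps
    errors = []
    for _ in range(steps):
        a, b = step(a), step(b)
        errors.append(abs(a - b))
    return errors

errors = track_error(0.3)
# In a chaotic regime the error typically accelerates rather than damping;
# in a homeostatic system the same trace would shrink back towards zero.
print(errors[-1] > errors[0])
```

Plotting such a trace over many runs is one simple way to distinguish models in which errors accelerate from those in which homeostatic forces absorb them.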
While we have components of some of these areas, what we have is but a
drop in the ocean of the techniques we need. In addition, the vast majority of our
techniques are built on 2,500 years of mathematics that sought to simplify
systems that were collections of individuals, because we lacked the ability (either
the processing power or the memory) to cope with the individuals as individuals. Modern
computers have given us this power for the first time, but, as yet, the ways we
describe such systems have not caught up, even if we accept that some reduction
in dimensionality and detail is necessary for a human to understand our models.
Indeed, in the long run, it might be questioned whether the whole process of model
understanding and interpretation might be divorced from humans and delegated
instead to an artificially intelligent computational agency that can better cope with
the complexities directly.