152 G. Polhill
Table 8.1 Arguments about validation by fit-to-data and whether the model is ‘good’ or ‘bad’

Acceptable / Good model: The model has fit the data, and we estimate it will predict accurately in the future.

Acceptable / Bad model: Although the model has fit-to-data, it is oversimplified, relies on unrealistic assumptions, doesn’t really explain anything or doesn’t allow for the possibility that things could have turned out differently. Its predictions should not be trusted.

Not acceptable / Good model: The particular course that history took was highly contingent on phenomena that it would not be reasonable to include in any model. There is a ‘possible world’ in which the model would be right. Alternatively, the model reproduces ‘patterns’ (as per Grimm et al. 1996) in the data, if not the data itself. It might still be worth considering the model’s predictions.

Not acceptable / Bad model: The model did not fit the empirical data we have, so it must be rejected and its predictions ignored.
have been used rather than another, but since reviewers’ statistical fetishes are
impossible to predict, we cannot provide guidance as to how to satisfy them.
However, we do give a summary of the various measures and their properties in
Appendix 2 for reference.
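As a loose illustration of what such fit-to-data measures compute (the function names and the toy data below are hypothetical; Appendix 2 surveys the measures themselves), two of the most common summaries of agreement between a model run and empirical observations can be sketched in a few lines:

```python
import math

def rmse(observed, predicted):
    """Root mean squared error: typical size of the model's prediction error."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def r_squared(observed, predicted):
    """Coefficient of determination: share of variance in the data
    accounted for by the model's output."""
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Toy empirical series and model output, purely for illustration
observed = [2.0, 4.0, 6.0, 8.0]
predicted = [2.1, 3.9, 6.2, 7.8]
print(round(rmse(observed, predicted), 3))       # → 0.158
print(round(r_squared(observed, predicted), 3))  # → 0.995
```

A high R² here says nothing, of course, about whether the model belongs in the ‘good’ or ‘bad’ column of Table 8.1; it only quantifies the fit.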
8.1.5 Validating Ontologies
After summarizing the foregoing arguments, this section elaborates on the structure of the model, which may be referred to as its ‘ontology’. After briefly introducing ontologies, we argue that agent-based models have scope to pay more attention to this side of modelling, based on the expressivity of formal languages for writing ontologies. We then consider various ways in which ontologies could be ‘validated’ – in the sense of establishing confidence in them – finding that this is far from a settled area.
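To make the idea of a model’s ontology concrete before the discussion that follows (the class names, relation, and consistency check below are a hypothetical toy, not the chapter’s formalism or any particular ontology language), one can think of an ontology as declared classes and relations, against which instance-level assertions can be checked:

```python
# Toy ontology: classes, a subclass axiom, and one relation with
# a declared domain and range (all names invented for illustration)
ontology = {
    "classes": {"Agent", "Household", "LandParcel"},
    "subclass_of": {("Household", "Agent")},
    "relations": {"owns": ("Agent", "LandParcel")},  # (domain, range)
}

instances = {"h1": "Household", "p1": "LandParcel"}
assertions = [("h1", "owns", "p1")]

def is_a(cls, target, onto):
    """True if cls is target or a declared direct subclass of it."""
    return cls == target or (cls, target) in onto["subclass_of"]

def check(assertions, instances, onto):
    """Verify each assertion respects its relation's domain and range."""
    for subj, rel, obj in assertions:
        domain, rng = onto["relations"][rel]
        if not (is_a(instances[subj], domain, onto)
                and is_a(instances[obj], rng, onto)):
            return False
    return True

print(check(assertions, instances, ontology))  # → True
```

Establishing confidence in an ontology is then a different exercise from fit-to-data: it asks whether declarations like these are defensible representations of the target system, not whether a number matches.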
The foregoing pages had two objectives. One was to summarize all the different
ways people try to estimate how well their model has fit some empirical data, to
give them some kind of (preferably quantitative) idea of how much they should
believe in its predictions. (See also Appendix 2.) The other was to argue that there is
more to evaluating a model than just looking at its fit-to-data, largely by showing
various ways in which fit-to-data may not be as convincing an indicator of a model’s
suitability as some appear to believe. To summarize the reasons (the first two of which may seem a little ‘unfair’, but should be anticipated in complex social systems):