
8 The Importance of Ontological Structure: Why Validation by ‘Fit-to-Data’...  153


            • Simplifying assumptions that apply during calibration may not apply at
              prediction.
            • The (formal) language you have used to represent the system during calibration
              may not be adequate during prediction.
            • You may not have enough data to justify a model with a high VC
              (Vapnik–Chervonenkis) dimension, but using a model with a lower VC
              dimension would be oversimplifying.
            • In complex/non-ergodic systems, at a bifurcation point, the empirical data may
              have followed a path that had a low probability in comparison with other paths it
              could have taken.
              The various methods for estimating prediction ability say relatively
            little about the structure of the model itself, except that metrics such as the
            AIC and BIC penalize models for having too many parameters. In neural
            networks, this is the number of weights the network has, but assumptions about
            functional form are embedded in the structure of the network itself – how the nodes
            are arranged into layers and/or connected to each other. This structure, however,
            only reflects the flexibility the network will have to achieve certain combinations of
            outputs on all the inputs it might be given (its ‘wiggliness’). This is a rather weak
            ontological commitment to make to a set of data.
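The parameter penalty these metrics apply can be made concrete. Below is a minimal sketch of the two criteria, AIC = 2k − 2 ln L and BIC = k ln(n) − 2 ln L; the likelihood values and parameter counts are invented purely for illustration:

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k ln(n) - 2 ln(L)."""
    return k * math.log(n) - 2 * log_likelihood

# Two hypothetical models fitted to the same n = 100 observations:
# model A has 3 parameters; model B has 10 but fits slightly better.
print(aic(-120.0, 3), aic(-115.0, 10))            # 246.0 250.0
print(bic(-120.0, 3, 100), bic(-115.0, 10, 100))
```

Here the better raw fit of the larger model is outweighed by its parameter penalty, and this count of parameters is the only structural information the criteria use.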
              Neural networks are an extreme – one in which there is the minimum
            representative connection between the empirical world and the nodes and network of
            connecting weights that determine the behaviour of the model. They are nevertheless
            useful when there is a large amount of data available for training, the modelled
            system isn’t complex, and one is not particularly concerned about how the input-
            output mapping is achieved, only that whatever mapping is obtained has good
            prediction ability.
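The weight count referred to above follows directly from how the nodes are arranged into layers. A minimal sketch for a fully connected feed-forward network (the layer widths are invented for illustration):

```python
def parameter_count(layer_sizes):
    """Number of free parameters (weights plus biases) in a fully
    connected feed-forward network with the given layer widths."""
    return sum((n_in + 1) * n_out            # +1 for each unit's bias term
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A hypothetical network with 4 inputs, one hidden layer of 8 units,
# and a single output: (4+1)*8 + (8+1)*1 = 49 parameters.
print(parameter_count([4, 8, 1]))  # 49
```

Rearranging the same units into different layers changes this count, and hence the network's 'wiggliness', without any individual parameter representing anything in the modelled system.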
              Neural networks are very interesting to contrast with agent-based models, which
            also feature networks of behaving entities, but where the network of connections and
            the behaving entities are supposed to have a representative link with the empirical
            world. In the artificial intelligence community, this representative structure would
            be referred to as the microworld (e.g. Chenoweth 1991) of the simulation. A famous
            example is Winograd’s (1972) blocks world. However, with advances in formal
            languages for expressing such representative structure, we could also refer to these
            microworlds as ontologies.
              Ontologies in computer science are defined by Gruber (1993) as formal, explicit
            representations of shared conceptualizations. In general, ontologies cover a broad
            range of formalized representations, including diagrams, computing code and even
            the structure of a filesystem, but the development of description logics (Baader and
            Nutt 2003) means that there are formal languages for ontologies to which automated
            reasoning can be applied. One of the most widely used languages for ontologies,
            which draws on description logics, is the Web Ontology Language (OWL; Cuenca
            Grau et al. 2008; Horrocks et al. 2003). The application of OWL to agent-based
            modelling has been discussed by a number of authors (e.g. Gotts and Polhill 2009;
            Livet et al. 2010), but of particular relevance for our purposes is the application of
            OWL to representing the structure of agent-based models (Polhill and Gotts 2009).
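To give a flavour of such representative structure, the following fragment uses OWL's Turtle serialization; the class and property names are invented for illustration and are not taken from any of the cited works:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix :     <http://example.org/abm#> .

# Each class is intended to correspond to a kind of entity in the
# modelled system, unlike the nodes of a neural network.
:Agent      a owl:Class .
:LandParcel a owl:Class .

:owns a owl:ObjectProperty ;
    rdfs:domain :Agent ;
    rdfs:range  :LandParcel .
```

Because such a structure is expressed in a description logic, automated reasoners can check its consistency and infer class membership, rather than the structure serving only to fit inputs to outputs.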