
            Mappings between the programming languages used for implementing agent-based
            models and OWL ontologies are discussed by Polhill (2015) and Troitzsch (2015).
  To highlight why the ontology of an agent-based model becomes so much more
significant, consider one of the measures of a description logic: its
expressivity. The expressivity of a logic is essentially the range of kinds
and combinations of axiom it allows you to create whilst reasoning remains
decidable. We might compare different modelling approaches according to the
            ontological expressivity needed to capture descriptions of the states the model can
            have. Appendix 3 compares the expressivity of the ontologies of various modelling
            approaches and the corresponding description logics.
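  To make this concrete, here is an illustrative sketch (the axioms are
invented for exposition and are not taken from Appendix 3). In description
logic naming, each letter marks a family of constructs: a logic with role
hierarchies (H) admits axioms such as

  \[ \mathit{hasMother} \sqsubseteq \mathit{hasParent}, \]

and one with qualified number restrictions (Q) admits axioms such as

  \[ \mathit{Household} \sqsubseteq\; \geq 1\; \mathit{hasMember}.\mathit{Adult}. \]

Each additional construct raises the expressivity of the logic; OWL 2 DL, for
instance, corresponds to the highly expressive description logic SROIQ(D).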
              The fact that agent-based models have a generally richer expressivity for defining
            the ontologies over which they operate means that some of the complaints of
            qualitative social researchers about quantitative social researchers are brought into
            sharper focus. The ontology of an agent-based model is less constrained by the
amount of data available, aesthetic concerns about elegance, or the need to reduce
            the number of variables to enable tractable mathematical evaluation of equilibria.
              It is also much clearer that the ontology is by and large a subjective choice.
            Nevertheless, we wish to have an idea of how ‘good’ that subjective choice is –
            something that may be as much about normativity in the community with an interest
in the model as about (supposedly) objective numerical measures. That said, if we are to
            move beyond fit-to-data as the sole basis for our belief in the predictions of a model,
            we still need some ways of assessing the model’s ontology as an additional basis for
such belief. We are far from having established methods for this, but four
ways in which an ontology can be assessed are:
            • Logical consistency
            • Populating it with instances
            • Stakeholder and/or expert evaluation
            • Comparison with existing ontologies
              If the ontology can be translated into OWL, the first of these can be achieved
            using the consistency checking available in reasoning applications such as Pellet
(Sirin et al. 2007), FaCT++ (Tsarkov and Horrocks 2006), HermiT (Shearer et
            al. 2008) and Ontop (Bagosi et al. 2014). Though consistency checking ensures we
            have at least made no logical contradictions in our specification of the ontology, it is
            rather a low bar to set as it says little about the quality of the representation. Beyond
            mere logical consistency, there are methodologies such as OntoClean (Guarino and
Welty 2009) for validating the ontological adequacy of taxonomies. However, this
too says more about the correctness of the operational semantics of a given set of
axioms in an ontology than about the sufficiency of that ontology to
represent a given problem domain.
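  As a concrete illustration of the first of these checks, the following is a
minimal sketch in Python, assuming the owlready2 library (not mentioned
above) and a hypothetical ontology file model_ontology.owl; owlready2 ships
with the HermiT reasoner.

    # Minimal sketch: check an OWL ontology for logical consistency.
    # Assumes the owlready2 library and a hypothetical local ontology file.
    from owlready2 import get_ontology, sync_reasoner, OwlReadyInconsistentOntologyError

    onto = get_ontology("file://model_ontology.owl").load()  # hypothetical path

    try:
        with onto:
            sync_reasoner()  # runs the bundled HermiT reasoner
        print("No logical contradictions found in the ontology.")
    except OwlReadyInconsistentOntologyError:
        print("The ontology is logically inconsistent.")

A failed check of this kind localises a contradiction in the axioms; as noted
above, passing it says nothing about the quality of the representation.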
  Populating an ontology with instances is another check of its ‘validity’,
as difficulties in doing so, especially with empirical data, can reveal
where the ontology’s specifications are ‘awkward’. Working ontologies are
            produced every time a successful IT project is implemented. Any modern enterprise
            system is usually the result of a problem domain being modelled using some