object-oriented analysis and design (Rumbaugh 2003) and as such necessarily
involves visual modelling (usually Unified Modelling Language – UML). The
resultant conception is then implemented in one of the numerous object-oriented
computer languages. Although such systems cannot formally be proved equivalent to ontologies in any strict sense, they are prima facie evidence of the successful construction of working ontologies, albeit normally expressed in UML. Moreover, although UML and OWL are not equivalent, design practices can be implemented that result in a one-to-one translation between the two (Object Management Group 2014, p. 130).
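As an illustration of this kind of mapping, the sketch below (assuming Python with the rdflib library; the Household and Dwelling classes and their properties are hypothetical) renders a simple UML class with one attribute and one association as OWL axioms:

    from rdflib import Graph, Namespace, RDF, RDFS
    from rdflib.namespace import OWL, XSD

    # Hypothetical namespace for the model's ontology
    EX = Namespace("http://example.org/model#")

    g = Graph()
    g.bind("ex", EX)

    # UML class 'Household' becomes an owl:Class
    g.add((EX.Household, RDF.type, OWL.Class))

    # UML attribute 'income : double' becomes an owl:DatatypeProperty
    g.add((EX.income, RDF.type, OWL.DatatypeProperty))
    g.add((EX.income, RDFS.domain, EX.Household))
    g.add((EX.income, RDFS.range, XSD.double))

    # UML association 'occupies' (Household to Dwelling) becomes an
    # owl:ObjectProperty
    g.add((EX.Dwelling, RDF.type, OWL.Class))
    g.add((EX.occupies, RDF.type, OWL.ObjectProperty))
    g.add((EX.occupies, RDFS.domain, EX.Household))
    g.add((EX.occupies, RDFS.range, EX.Dwelling))

    print(g.serialize(format="turtle"))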
Embedded software systems operating machinery in the real world (e.g. autopilots and control systems) have their ontologies validated
every time they send a signal to a servo or relay, which over time constitutes a
robust empirical test of their conceptualizations. From an agent-based modelling
perspective, where the ontology describes the entities and state variables in the
model, pragmatic issues with the ontology could become apparent when trying to
populate the model from empirical databases. However, since the schemas of these databases are themselves ontologies, it could be argued that those ontologies, or their integration, are the locus of any problems, rather than the model itself. Hence, unless the context is embedded software, the ability to
initialize a model from empirical data is also a rather weak test of the validity of the
model’s ontology.
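To illustrate how such pragmatic issues might surface, the following sketch (in Python; the table, its columns and the attribute set are hypothetical) attempts to initialize agents from an empirical database and flags a mismatch between the database schema and the state variables the model's ontology declares:

    import sqlite3

    # State variables the model's ontology declares for each agent
    AGENT_ATTRIBUTES = {"income", "household_size", "tenure"}

    # A stand-in empirical database; its schema is itself an ontology, and
    # it happens to record 'dwelling_type' where the model expects 'tenure'
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE households (id INTEGER, income REAL, "
                 "household_size INTEGER, dwelling_type TEXT)")
    conn.execute("INSERT INTO households VALUES (1, 32000.0, 3, 'flat')")

    # Check the schema against the ontology before populating the model
    columns = {row[1] for row in conn.execute("PRAGMA table_info(households)")}
    missing = AGENT_ATTRIBUTES - columns
    if missing:
        # The failure alone does not say whether the model's ontology, the
        # database's, or their integration is at fault
        print("Cannot initialize agents: no source for", sorted(missing))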
The third idea of stakeholder and/or expert evaluation involves a degree of
integration of specific problem-domain knowledge and ontological engineering
expertise if we are to be convinced that the evaluators have really understood
the implications of the formalization of their knowledge. Sowa (1999, p. 452)
points out that knowledge engineering is a specialism requiring skills in logic,
language and philosophy that domain experts should not be expected to have. Even
if experts agree on a conceptualization of a domain, they will not necessarily be
able to construct ontologies of it; this will be done instead by the knowledge
engineer. The resulting ontology is the knowledge engineer’s conceptualization
of the experts’ conceptualization and may differ from one knowledge engineer to
another. Such problems, and in particular their relevance to the veridicality and the actual information content of natural language utterances such as those from domain experts, are discussed extensively by Devlin (1991, chaps. 1–2).
There are formal methodologies available for knowledge elicitation, such as On-To-Knowledge (Sure et al. 2004), approaches that create ontologies from existing thesauri or taxonomies, as illustrated by Hahn and Schulz (2004), and those listed by Jones et al. (1998). However, such methodologies would normally be associated
with model design rather than model validation. Since validation is only really
meaningful when using ‘out-of-sample’ data (i.e. data not used for calibration),
we should expect validation of model ontologies to be a process that behaves
equivalently, for example, through using different experts during validation than
during model design.
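To give a flavour of what such an equivalent process might look like, the following sketch (purely illustrative; the concept sets and the use of Jaccard similarity as an agreement measure are assumptions, not part of any published methodology) compares the concepts in a designed ontology with those independently elicited from a second panel of experts:

    # Concepts formalized in the ontology with the design experts
    design_concepts = {"household", "dwelling", "income", "tenure",
                       "landlord"}

    # Concepts independently elicited from experts uninvolved in the design
    validation_concepts = {"household", "dwelling", "income", "mortgage",
                           "landlord"}

    # Jaccard similarity: shared concepts relative to all concepts mentioned
    agreement = (len(design_concepts & validation_concepts)
                 / len(design_concepts | validation_concepts))

    print(f"Agreement: {agreement:.2f}")
    print("In ontology but not elicited:",
          design_concepts - validation_concepts)
    print("Elicited but absent from ontology:",
          validation_concepts - design_concepts)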
In the case of peer-reviewed journal articles, this arguably happens automatically, assuming that reviewers have had nothing to do with the
work. However, validation by peer review detracts from the sense of reporting
on a completed piece of work in a journal article and is not something that is
typically documented, except in more innovative open access journals such as Earth