in accordance with the researcher's intentions. However, the outcomes of computer
programs in social simulation are often unintended or not known a priori, and thus
the verification process requires more than checking that the executable model does
what it was planned to do. The goal of the whole exercise is to assess logical
inferences within, as well as between, the pre- and post-computational models.
This requires assessing whether the post-computational model, while expressing
emergent concepts that the pre-computational model may not have been intended
to express, is consistent with the latter. From a methodological point of view this
is a complicated question, but from a practical perspective the verification problem
can be operationally defined through one of the following procedures:
(a) For some pre-computational model definable as a set of input/output pairs in a
specified parameter range, the corresponding executable model is verified for
that range if the resulting post-computational model expresses the same set of
input/output pairs (a minimal sketch of this check is given after the list).
(b) For some pre-computational model defined according to the researcher's and/or
stakeholders' intentions in a specified parameter range, the corresponding
executable model is verified for that range if the resulting post-computational
model meets the researcher's and/or stakeholders' expectations.
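To make procedure (a) concrete, consider the following minimal sketch in Python. The function run_model, its placeholder dynamics, and the input/output pairs are all hypothetical stand-ins rather than part of any particular model:

# A minimal sketch of procedure (a), assuming a hypothetical executable
# model exposed as the function run_model(); the input/output pairs and
# the parameter range are illustrative placeholders.

def run_model(x: float) -> float:
    """Hypothetical executable model: one output per input parameter."""
    return 2.0 * x  # placeholder dynamics

# Input/output pairs taken to define the pre-computational model
# over the parameter range [0, 10].
expected_pairs = [(0.0, 0.0), (2.5, 5.0), (5.0, 10.0), (10.0, 20.0)]

def verified(pairs, tol=1e-9):
    # The executable model is verified for the range if the
    # post-computational model reproduces every expected pair.
    return all(abs(run_model(x) - y) <= tol for x, y in pairs)

print("verified" if verified(expected_pairs) else "not verified")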
Note that both procedures limit the verification problem to a clearly defined
parameter range. The first option is appropriate when quantitative data are available
from the target with which to test the executable model. This is normally not the
case, leaving the second option as the suitable path for the verification process.
The second option is viable because the aim is to assess the appropriateness of the
relations that may be established between the micro-levels of description specified
in the pre-computational model and the macro-levels of description expressed
through post-computational models; such relations are usually amenable to
evaluation by researchers and stakeholders.
In any case, the verifiability of a simulation is influenced by the process used to
develop that simulation. The tools used to implement the executable computational
model are a major factor affecting verification (Sargent 2013). The use of high-level
simulation packages has the potential to simplify verification, since the majority
of common model building blocks are provided, and these are typically already
verified. Arguably, this is even more so in the case of open-source toolkits, such
as NetLogo (Wilensky 1999) or Repast Simphony (North et al. 2013), where, in
addition to the developers themselves, the respective user communities verify
the provided simulation blocks and modules. Owing to the open and collaborative
nature of these projects, community members can not only detect bugs but also
correct them. When such modelling toolkits are used, verification mainly consists
of ensuring that the model has been correctly implemented using the available
modules.
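As a minimal sketch of this kind of verification, the following Python unit test assumes a hypothetical, toolkit-provided and already-verified GridWorld building block, and checks only that the model's own use of that block, here the initial placement of agents, is correct:

import unittest

class GridWorld:
    """Stand-in for a toolkit-provided, already-verified building block."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.agents = []

    def add_agent(self, x, y):
        # The block itself enforces its own invariants.
        if not (0 <= x < self.width and 0 <= y < self.height):
            raise ValueError("agent placed outside the grid")
        self.agents.append((x, y))

class TestModelWiring(unittest.TestCase):
    def test_initial_population(self):
        # Verification here reduces to checking that the model setup
        # uses the verified block as intended: five agents, all on-grid.
        world = GridWorld(10, 10)
        for i in range(5):
            world.add_agent(i, i)
        self.assertEqual(len(world.agents), 5)

if __name__ == "__main__":
    unittest.main()

The design choice reflected in this sketch is that the block's internal correctness is taken as given, so tests target only the model-specific wiring built on top of it.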
However, while the use of modelling toolkits reduces the programming and
verification effort, it typically increases simulation times (Fachada et al. 2017) and
limits the modeller’s flexibility in implementing non-standard behaviours (Sargent
2013). As such, it is often necessary to directly implement models using general-