214 A. Evans et al.
masses of data. Visualisation, for all its subjectivity and faults, remains a key
element of the research process. The standard process is to present one or more
attributes of the individuals in a map in physical or variable space. Such spaces
can then be evolved in movies or sliced in either time or space (Table 10.3 shows
some examples). In general, we cannot test the significance of a pattern without
first recognising that it exists; to that extent, significance testing is tainted,
testing our competency in recognising the correct pattern as much as it tests
whether the proposed pattern represents a real feature of the distribution of our
data. Visualisation is also a vital tool in communicating results within the scientific
community and to the wider public. The former is important not just for the
transmission of knowledge, but also because it allows others to validate the work. Indeed,
the encapsulation of good visualisation techniques within a model framework
allows others to gain deeper understanding of one’s model, and to experiment
at the limits of the model—what Grimm (2002) calls “visual debugging”. Good
model design starts like the design of any good application, with an outline of
what can be done to make it easy to use, trustworthy and simple to understand.
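Grimm's "visual debugging" can be illustrated with a minimal sketch: a toy
random-walk model (the model, names and grid size here are invented for
illustration, not taken from any cited work) whose state is rendered as an
ASCII grid every tick, so that a mis-specified movement rule becomes visible
on inspection rather than hiding inside arrays of coordinates:

```python
import random

def render(agents, width=10, height=5):
    """Render agent positions as an ASCII grid: a crude visual-debugging
    view in which each '*' is an agent and '.' is an empty cell."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for x, y in agents:
        grid[y % height][x % width] = "*"
    return "\n".join("".join(row) for row in grid)

def step(agents, rng):
    """One tick of a toy random walk (illustrative only)."""
    return [(x + rng.choice([-1, 0, 1]), y + rng.choice([-1, 0, 1]))
            for x, y in agents]

rng = random.Random(42)          # fixed seed so runs are reproducible
agents = [(2, 2), (7, 3)]
for tick in range(3):
    print(f"tick {tick}")
    print(render(agents))
    agents = step(agents, rng)
```

Embedding such a view in the model's main loop, rather than bolting it on
afterwards, is the design habit the passage above argues for.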
Traditionally, user interface design and visualisation have been low on the academic
agenda, to the considerable detriment of both the science and the engagement of
taxpayers. Fortunately, in the years since the turn of the millennium, there has
been an increasing realisation that good design engages the public and that there
is a good deal of social science research that can be built on that engagement.
Orford et al. (1999) identify computer graphics, multimedia, the World Wide Web
and virtual reality as four visualisation technologies that have recently seen a
considerable evolution within the social sciences. There is an ever-increasing array
of visualisation techniques at our disposal: Table 10.3 presents a classification
scheme of commonly used and more novel visualisation methods based on the
dimensionality and type of data that is being explored.
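The slicing described above (a map at a fixed time, or a time series at a
fixed place) amounts to a simple selection over spatio-temporal records. A
sketch, using invented records of the form `(tick, x, y, value)` as a
stand-in for model output:

```python
# Hypothetical model output: one record per agent-cell observation,
# as (tick, x, y, attribute value).
records = [
    (0, 1, 1, 0.2), (0, 2, 3, 0.5),
    (1, 1, 1, 0.4), (1, 2, 3, 0.6),
    (2, 1, 1, 0.9), (2, 2, 3, 0.1),
]

def slice_in_time(records, tick):
    """A spatial snapshot: every (x, y, value) at one tick,
    i.e. the data behind a single map frame."""
    return [(x, y, v) for (t, x, y, v) in records if t == tick]

def slice_in_space(records, x, y):
    """A local time series: (tick, value) at one location,
    i.e. the data behind a single-cell plot."""
    return [(t, v) for (t, xx, yy, v) in records if (xx, yy) == (x, y)]

print(slice_in_time(records, 1))      # map at tick 1
print(slice_in_space(records, 1, 1))  # series at cell (1, 1)
```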
Another, potentially very useful, classification scheme comes from Andrienko
et al. (2003), which categorises techniques based on their applicability to
different types of data:
• “Universal” techniques that can be applied whatever the data, e.g. querying and
animation
• Techniques revealing existential change, e.g. time labels, colouring by age, event
lists and space-time cubes
• Techniques about moving objects, e.g. trajectories, space-time cubes and snap-
shots in time
• Techniques centred on thematic/numeric change, e.g. change maps, time series
and aggregations of attribute values
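Two of these categories can be sketched as the plain data transformations
that precede any actual drawing: building per-agent trajectories ("moving
objects") and aggregating an attribute per tick ("thematic/numeric change").
The movement records and attribute values below are hypothetical:

```python
from collections import defaultdict

# Hypothetical movement records: (agent_id, tick, x, y).
moves = [
    ("a", 0, 0, 0), ("a", 1, 1, 0), ("a", 2, 1, 1),
    ("b", 0, 5, 5), ("b", 1, 5, 4),
]

def trajectories(moves):
    """'Moving objects' view: the time-ordered path of each agent."""
    paths = defaultdict(list)
    for agent, tick, x, y in sorted(moves, key=lambda m: m[1]):
        paths[agent].append((x, y))
    return dict(paths)

def mean_by_tick(values):
    """'Thematic/numeric change' view: mean attribute value per tick,
    the aggregation underlying a simple time-series plot."""
    sums, counts = defaultdict(float), defaultdict(int)
    for tick, v in values:
        sums[tick] += v
        counts[tick] += 1
    return {t: sums[t] / counts[t] for t in sums}

print(trajectories(moves))
print(mean_by_tick([(0, 2.0), (0, 4.0), (1, 1.0)]))
```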
For information on other visualisation schemes, see Cleveland (1983), Hinneburg
et al. (1999) and Gahegan (2001).
In each case, the techniques aim to exploit the ease with which humans recognise
patterns (Müller and Schumann 2003). Pattern recognition is,
at its heart, a human attribute, and one which we utilise to understand models, no
matter how we process the data. The fact that most model understanding is founded