16 K.G. Troitzsch
never mapped on to the set of integer or real numbers. Further reasons why this approach was abandoned for decades are given by Nowak et al. (1990, p. 371):
“the ad hoc quality of many of the assumptions of the models, perhaps because
of dissatisfaction with the plausibility of their outcomes despite their dependence
on extensive parameter estimation, or perhaps because they were introduced at
a time when computers were still cumbersome and slow and programming time-
consuming and expensive.”
Simulmatics met largely the same fate as Abelson and Bernstein’s model:
Simulmatics was set up “for the Democratic Party during the 1960 campaign. …
The immediate goal of the project was to estimate rapidly, during the campaign,
the probable impact upon the public, and upon small strategically important groups
within the public, of different issues which might arise or which might be used by
the candidates” (de Sola Pool and Abelson 1961, p. 167). The basic components
of this simulation were 480 voter types, not individual voters, each with attitudes
towards 48 “issue clusters”, i.e. “political characteristics on which the voter type
would have a distribution”. Voter types were mainly defined by region, agglomeration
structure, income, race, religion, gender and party affiliation; from different opinion
polls taken at different points in time, each voter type was assigned four numbers per
“issue cluster”: the number of voters of this type and “the percentages pro, anti and
undecided or confused on the issue”
(p. 168). For each voter type, the simulation then used empirical findings about
cross-pressure (e.g. anti-Catholic voters who had voted for the Democratic Party
in the 1958 congressional elections but were likely to stay at home rather than
vote for the Catholic candidate of the Democrats) to readjust the preferences of
the voters, type by type. It is an open question whether one would
call this a simulation in today’s social simulation communities. But since this
approach in some ways resembles classical static microsimulation, in which
researchers are interested in the immediate consequences of new tax or transfer
laws, with no feedback, one would classify Simulmatics as a simulation project,
albeit one with as little sophistication as static microsimulation.
Thus the first two decades of computer simulation in the social sciences were
mainly characterised by two beliefs: that computer simulations were nothing but the
numerical solution of more adequate mathematical models and that they were most
useful for predicting the outcome of social processes whose first few phases had
already been observed. This was also the core of the discussion opened in 1968
by Hayward Alker, who analysed, among other models, the Abelson-Bernstein
community referendum model and came to the conclusion that this “simulation
cannot be ‘solved’: one must project what will be in the media, what elites will be
doing, and know what publics already believe before even contingent predictions
are made about community decisions. In that sense an open simulation is bad
mathematics even if it is a good social system representation” (Alker 1974, p. 153).
Federico et al. (1981, p. 515) saw in what they called “micro-operational computer
simulations” the opportunity that “computer modeling [could] contribute
to the comprehension of which parameters and variables are most decisive in
determining systemic behavior” (Federico et al. 1981, p. 519) and “produc[e]
surprising emergent properties” (Federico et al. 1981, p. 518). They predicted that