94 P.-O. Siebers and F. Klügl
Agent architectures can be seen as specific patterns for agent-based systems.
Depending on whether human decision-making shall be reproduced in a way that
resembles how humans think or whether the agents need to exhibit complex and
flexible behaviour, different architectures can be used. For the former type, the so-
called cognitive architectures such as SOAR (Laird et al. 1987; Wray and Jones
2005) or ACT-R (Anderson et al. 2004; Taatgen et al. 2006) have been suggested
(for a short overview, see Jones (2005)). Those architectures embody theories from
cognitive science, supported by results from experiments with humans. Especially
SOAR has been used for reproducing human behaviour in military training systems
(Wray et al. 2005).
Although often characterised as one, the so-called BDI architecture is not a cognitive
agent architecture but a practical reasoning architecture (Wooldridge 2009). Its
underlying motivation consists of a human-inspired means-end analysis separating
the decision about which goal (“desire”) to pursue from the actual planning towards
the goal the agent is committed to achieve (“intention”). The BDI architecture
has turned out to be very useful for software agents in general. It also appears
to be a reasonable choice for organising the internal decision-making of agents
in simulation, especially when more sophisticated agent behaviour needs to be
formulated (see, e.g. Joo (2013), Caillou et al. (2015) or Norling (2003)). Even
in simulations with rather simple agent behaviour, it is advisable to use an agent
architecture to organise the behaviour description, so that the agent program is more
transparent, better readable and thus better analysable and maintainable.
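The separation that BDI makes between selecting a goal (deliberation) and executing the plan for the committed intention (means-end reasoning) can be illustrated with a minimal sketch. This is not tied to any particular BDI platform; all class, goal and action names are invented for the example:

```python
# Minimal BDI-style agent: deliberation (which desire to commit to) is kept
# separate from means-end reasoning (which plan achieves the intention).
# All names here are illustrative, not from a real BDI framework.

class BDIAgent:
    def __init__(self, plans):
        self.beliefs = {}        # what the agent currently holds to be true
        self.desires = set()     # candidate goals
        self.intention = None    # the goal the agent is committed to
        self.plans = plans       # mapping: goal -> list of actions

    def deliberate(self):
        """Commit to one achievable desire (here simply the first one found)."""
        achievable = [d for d in self.desires if d in self.plans]
        self.intention = achievable[0] if achievable else None

    def act(self):
        """Means-end reasoning: return the plan for the committed intention."""
        if self.intention is None:
            return []
        return self.plans[self.intention]

plans = {"reach_exit": ["turn_left", "walk", "open_door"]}
agent = BDIAgent(plans)
agent.desires.add("reach_exit")
agent.deliberate()
print(agent.act())  # ['turn_left', 'walk', 'open_door']
```

Even in this toy form, the structure makes the behaviour description transparent: changing how the agent chooses among desires touches only `deliberate`, while new capabilities are added as entries in `plans`.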
Although not introduced as agent architectures, the general setup of rule-based
systems, state automata or decision trees can provide important ways to structure
agent behaviour descriptions and separate agents’ decision-making from the actual
processing. A rule-based system contains a set of rules as "if … then …" constructs
and a mechanism that systematically tests the current perception and agent state
against the if parts of the constructs. Whenever an if part holds, the corresponding
"then …" part is activated. Using such a setup instead of cascades of if-then-else
programming language statements supports clarity of design and extensibility of
the decision-making model. Decision trees are similar, offering another way
to avoid ugly, inflexible implementations with hard-wired if-then-else cascades. A
decision tree is a data structure that organises conditions in nodes and different
alternative values for those conditions in the branches out of the node. Another
architecture pattern is a state automaton with an explicit representation of the state
that the agent is in. The state is associated with particular behaviour. State changes
happen based on a trigger relevant in the current state. An older, slightly more
complex agent architecture following those ideas is the EMF frame (Drogoul and
Ferber 1994). All agent architectures presented in AOSE can also be viewed as
local patterns for developing agents. They suggest a structure that supports design
and implementation of agents with non-trivial behaviour programs. Clearly, those
architectures can be useful for ABSS as well.
In addition to software design patterns and agent architectures, there is another
category of (software) patterns relevant for ABSS. These are meta-patterns capturing
best practices in working with a model, not directly related to the model design or to