Page 59 - Socially Intelligent Agents Creating Relationships with Computers and Robots
2 a set of candidate strategies for obtaining its goals (roughly corresponding to plans); each strategy would also be composed of several parts: the goal; the sequence of actions, including branches dependent upon outcomes, loops, etc.; and (possibly) its endorsements as to its past success.
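The strategy structure just listed can be made concrete with a small sketch. This is purely illustrative: the class and field names below are assumptions for exposition, not the representation used in the cited pilot systems.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a candidate strategy: a goal, a body of actions
# (items may be nested branch/loop structures), and endorsements recording
# past success. All names here are invented for illustration.

@dataclass
class Strategy:
    goal: str                 # the goal this strategy is meant to obtain
    actions: list             # action sequence; items may be nested structures
    endorsements: list = field(default_factory=list)  # records of past success

# A strategy with one branch: if blocked, wait; otherwise gather.
s = Strategy(goal="obtain-food",
             actions=["move-to-source",
                      ("if", "blocked", ["wait"], ["gather"])])
s.endorsements.append({"outcome": "success", "time": 12})
```

Keeping endorsements as an open-ended list lets a strategy accumulate a history of outcomes, which is what makes endorsement-based selection among candidate strategies possible.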
These could be developed using a combination of anticipatory learning theory [15], as reported in [21], and evolutionary computation techniques. Thus, rather than being produced by a process of inferring sub-goals, plans, etc., they would be constructively learnt (similar to that in [9] and as suggested by [19]). The language of these models needs to be expressive, so an open-ended model structure such as that in genetic programming [17] is appropriate, with primitives to cover all appropriate actions and observations. Direct self-reference is not built into the language, but the ability to construct labels that distinguish one's own conditions, perceptions and actions from those of others is important, as is the ability to give names to individuals. The language of communication needs to be combinatorial: one that can be combinatorially generated by the internal language and also deconstructed by it.
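An open-ended, GP-style model language of the kind gestured at above can be sketched as expression trees grown from a primitive set. The particular primitives below (self- and other-observations, named individuals, actions) are assumptions chosen to illustrate the point about distinguishing one's own conditions from those of others; they are not taken from the cited systems.

```python
import random

# Minimal sketch of an open-ended, GP-style strategy language.
# Terminals include observations labelled as one's own ("self.…"),
# observations of a named other ("other:A.…"), and primitive actions.
TERMINALS = ["self.energy", "other:A.energy", "observe-food",
             "act-move", "act-signal"]
FUNCTIONS = {"seq": 2, "if-greater": 3}   # function name -> arity

def random_tree(depth, rng):
    """Grow a random expression tree; depth 0 forces a terminal.

    Trees are nested lists [function, child, ...], as in genetic
    programming, so the structure is open-ended rather than fixed.
    """
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMINALS)
    name = rng.choice(list(FUNCTIONS))
    return [name] + [random_tree(depth - 1, rng) for _ in range(FUNCTIONS[name])]

rng = random.Random(0)
tree = random_tree(3, rng)
```

Because every tree is built only from the primitive set, such trees can be generated, mutated, and recombined by an evolutionary process, and a communication language generated from the same primitives can be deconstructed by the same machinery.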
The social situation of the agent needs to combine complex cooperative and competitive pressures. Cooperation is necessary if communication is to develop at all, and the competitive element is necessary to make the prediction of others' actions worthwhile [18]. The complexity of the cooperative/competitive mix encourages the prediction of one's own decisions. A suitable environment is one in which cooperation is necessary to gain substantial reward, but where there is inter-group competition as well as competition over the division of the rewards gained by a cooperative group.
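One way such a reward structure could look is sketched below. All of the specific functional forms and numbers are illustrative assumptions, not a specification from the text: group reward grows superlinearly with cooperation (so cooperating pays), groups compete for a fixed pool, and members then compete over the division of their group's share.

```python
# Hedged sketch of a mixed cooperative/competitive reward structure.

def group_scores(cooperators_per_group):
    # Cooperation is necessary for substantial reward:
    # a group's score grows superlinearly with its cooperators.
    return [n ** 2 for n in cooperators_per_group]

def allocate(pool, cooperators_per_group):
    # Inter-group competition: groups split a fixed pool
    # in proportion to their scores.
    scores = group_scores(cooperators_per_group)
    total = sum(scores)
    return [pool * s / total for s in scores]

def divide(group_reward, claims):
    # Intra-group competition: members divide their group's
    # reward in proportion to the claims they press.
    total = sum(claims)
    return [group_reward * c / total for c in claims]

shares = allocate(100.0, [3, 1])          # a group of 3 cooperators vs. a loner
member_payoffs = divide(shares[0], [2.0, 1.0, 1.0])
```

Under these assumptions the three-member group takes 90 of the 100 units while the loner takes 10, yet the members still contend over their group's 90, so an agent benefits from predicting both out-group and in-group behaviour.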
Many of the elements of this model have already been implemented in pilot
systems [9]; [11]; [21].
6. Consequences for Agent Production and Use
If we develop agents in this way, allowing them to learn their selves from within a human culture, we may end up with agents that we can relate to because they will be able to relate to us. The sort of social games that involve second-guessing, lying, posturing, etc. will be accessible to such an agent because of the fundamental empathy that is possible between agent and human. Such an agent would not be an 'alien' but (like some of the humans we relate to) all the more unsettling for that. To achieve this goal we will have to at least partially abandon the design stance, move towards an enabling stance, and accept the need for considerable acculturation of our agents within our society, much as we do with our children.