This assumes that the agent (i) does not take random or arbitrary actions,
and (ii) does not have a supersmart process that models everything, including
itself; in other words, (iii) it is rational in the sense of using some
not-too-complex reasoning or computational process to make its choices.
5. Mutual Planning and Control
Our agent architecture is flexibly both goal-directed and environmentally
situated. It is also well suited to social interaction, since other agents
are perceived at each level and can directly influence the actions of the subject
agent. It allows agents to enter into stable mutually controlled behaviors where
each is perceived to be carrying out the requirements of the social plan of the
other. Further, this mutually controlled activity is hierarchically organized, in
the sense that control actions fall into a hierarchy of abstraction, from easily
altered details to major changes in policy.
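The following sketch illustrates one way such a layered, perception-driven control scheme might be organized; the class names, the rule representation, and the dispatch order are our own illustrative assumptions, not the architecture actually implemented.

```python
# A hypothetical layered controller: level names, the rule representation,
# and the dispatch order are illustrative assumptions.
from typing import Callable, List, Optional

class ControlLevel:
    def __init__(self, name: str, rules: List[Callable]):
        self.name = name          # e.g. "policy", "social plan", "detail"
        self.rules = rules        # condition-action rules for this level

    def step(self, percept) -> Optional[str]:
        """Return an action if some rule at this level fires, else None."""
        for rule in self.rules:
            action = rule(percept)
            if action is not None:
                return action
        return None

class Agent:
    def __init__(self, levels: List[ControlLevel]):
        # levels[0] is the most abstract (policy); levels[-1] the most
        # detailed. Other agents appear in the percept at every level.
        self.levels = levels

    def act(self, percept) -> str:
        # The most detailed level that fires supplies the action, so fine
        # details are easily altered while changes in policy are rarer.
        for level in reversed(self.levels):
            action = level.step(percept)
            if action is not None:
                return action
        return "idle"
```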
We implemented two kinds of social behavior. One was affiliation, in which
agents maintained occasional face-to-face interactions that boosted affilia-
tion measures; the other was social spacing, in which agents attempted to
maintain socially appropriate spatial relationships characterized by proximity,
displacement, and mutual observability. The set of agents formed a simple
society that maintained its social relations by social action.
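As a rough illustration, the two behaviors might be realized along the following lines; the affiliation counter, the frontal-cone test for mutual observability, and the distance thresholds are illustrative assumptions rather than the measures used in the implementation.

```python
# Illustrative only: the affiliation counter, the frontal-cone observability
# test, and the distance thresholds are assumptions, not the system's measures.
import math

class SocialAgent:
    def __init__(self, name: str, x: float, y: float, heading: float):
        self.name, self.x, self.y, self.heading = name, x, y, heading
        self.affiliation = {}            # per-partner affiliation measure

    def observes(self, other, tol=0.5) -> bool:
        """Is `other` within this agent's frontal cone (rough observability)?"""
        angle = math.atan2(other.y - self.y, other.x - self.x)
        diff = (angle - self.heading + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) < tol

    def affiliate(self, other) -> None:
        """A face-to-face encounter boosts the mutual affiliation measure."""
        if self.observes(other) and other.observes(self):
            self.affiliation[other.name] = self.affiliation.get(other.name, 0) + 1

    def spacing_ok(self, other, min_d=1.0, max_d=5.0) -> bool:
        """Socially appropriate spacing: proximity within bounds plus mutual
        observability (a displacement check could be added similarly)."""
        d = math.hypot(other.x - self.x, other.y - self.y)
        return min_d <= d <= max_d and self.observes(other) and other.observes(self)
```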
During an affiliation sequence, each of two interacting agents elaborates its
selected social plan conditionally upon its perception of the other. In this way,
both agents will scan possible choices until a course of action is found which
is viable for both agents.
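A minimal sketch of this mutual scan, under the assumption that each agent's plan elaborations can be enumerated and tested for joint viability (the variant lists and viability predicate below are hypothetical):

```python
# Sketch of the mutual scan: each agent offers elaborations of its social
# plan until a pair is viable for both. Variant lists and the viability
# predicate are hypothetical.
from typing import Callable, Iterable, Optional, Tuple

def mutual_scan(variants_a: Iterable, variants_b: Iterable,
                viable: Callable) -> Optional[Tuple]:
    """Scan both agents' candidate elaborations for a jointly viable pair."""
    for a in variants_a:                 # agent A's candidate choices
        for b in variants_b:             # agent B's, conditioned on seeing A
            if viable(a, b):
                return a, b              # first course of action viable for both
    return None                          # no joint course of action found

# Usage: approach distances each agent would accept for an interaction.
choice = mutual_scan([3.0, 2.0, 1.5], [2.5, 2.0],
                     viable=lambda a, b: abs(a - b) < 0.6)
```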
This constitutes mutual control. Note that perception of the world through
distal sensors is largely shared, whereas tactile, proprioceptive, and
visceral sensing is progressively more private. Each agent perceives both
agents, so each has some common and some private perception as input, and
each executes its part of the joint action.
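The split between shared and private perception might be represented as follows; the field names and sensor groupings are assumptions for illustration.

```python
# Sketch of the shared/private perception split; field names and sensor
# groupings are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Percept:
    distal: dict      # vision/hearing: largely shared between agents
    tactile: dict     # contact sensing: mostly private
    proprio: dict     # body configuration: private
    visceral: dict    # internal state: fully private

def percept_for(private: dict, shared_scene: dict) -> Percept:
    """One agent's input: the shared distal scene plus its own private senses."""
    return Percept(distal=shared_scene, tactile=private["touch"],
                   proprio=private["body"], visceral=private["internal"])

# Both agents receive the same scene, so their inputs have a common
# component without being identical.
scene = {"positions": {"A": (0.0, 0.0), "B": (2.0, 1.0)}}
p_a = percept_for({"touch": {}, "body": {"posture": "sit"},
                   "internal": {"arousal": 0.2}}, scene)
p_b = percept_for({"touch": {"contact": "A"}, "body": {"posture": "stand"},
                   "internal": {"arousal": 0.5}}, scene)
```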
In each phase of grooming, each agent's social plan detects which phase
it is in, holds a set of expected perceptions of what the other may do, and
a corresponding set of actions instantiated from what is actually perceived
to occur. If, during a given phase, an agent changes its action to another
acceptable variant within the same phase, then the other agent will simply
perceive this and generate the corresponding action. If, on the other hand,
one agent changes its action to one whose perception is not consistent with
the other agent's social plan, then that plan will fail at that level. In
this latter case, rules will no longer fire at that level, so the level
above will not receive confirmatory data and will start to scan for a viable
plan at the higher level. This may result in recovery of the joint action
without the first agent changing; however, it is more likely that the
induced change in the