In our first prototype of XDM-Agent, the agent’s cooperation personality (and therefore its helping behaviour) may be set by the user at the beginning of the interaction, or may be selected according to some hypothesis about the user. As we said before, the agent should be endowed with a plan recognition ability that enables it to dynamically update its image of the user. Notice that, while recognising communication traits requires observing the user’s external (verbal and nonverbal) behaviour, inferring the cooperation attitude requires reasoning about the history of the interaction (a cognitive diagnosis task that we studied, in probabilistic terms, in [7]).
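To give a concrete flavour of what such a diagnosis involves, the sketch below maintains a probability over two delegation hypotheses and applies a Bayesian update after each observed interaction event. It is written in Java (the language of the agent’s “Mind”, described later) and is only an illustration: the class, the event abstraction, and the likelihood values are our own assumptions, while the actual probabilistic model is the one presented in [7].

    // Illustrative sketch of probabilistic cognitive diagnosis over the
    // interaction history; the real model is described in [7]. All names
    // and likelihood values below are assumptions made for illustration.
    public class DelegationDiagnosis {

        private double pLazy = 0.5; // prior P(Lazy); P(DelegatingIfNeeded) = 1 - pLazy

        // Assumed likelihoods of observing "the user delegated a task she
        // could have done herself" under each hypothesis.
        private static final double P_OBS_IF_LAZY = 0.8;
        private static final double P_OBS_IF_NEEDED = 0.2;

        /** Bayes update of P(Lazy) after one interaction event. */
        void observe(boolean delegatedThoughCapable) {
            double likLazy = delegatedThoughCapable ? P_OBS_IF_LAZY : 1 - P_OBS_IF_LAZY;
            double likNeeded = delegatedThoughCapable ? P_OBS_IF_NEEDED : 1 - P_OBS_IF_NEEDED;
            pLazy = pLazy * likLazy / (pLazy * likLazy + (1 - pLazy) * likNeeded);
        }

        boolean presumedLazy() { return pLazy > 0.5; }
    }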
Once some hypothesis about the user’s delegation personality exists, how should the agent’s helping personality be set? One controversial issue in research on communication personalities in HCI is whether the similarity principle or the complementarity principle holds, that is, whether an “extroverted” interface agent should be proposed to an “extroverted” user or the opposite. When cooperation personalities are considered, the question becomes the following: how much should an interface agent help a user? How much importance should be given to the user’s experience (and therefore to her ability to perform a given task), and how much to her propensity to delegate that task? In our opinion, the answer to this question
is not unique. If XDM-Agent’s goals are those mentioned before, that is, “to make sure that the user performs the main tasks without too much effort” and “to make sure that the user does not see the agent as too intrusive or annoying”, then the following combination rules may be adopted (a code sketch of the rules follows the list):
CR1 (DelegatingIfNeeded U) ⇒ (Benevolent XDM): The agent helps delegating-
if-needed users only if it presumes that they cannot do the action by
themselves.
CR2 (Lazy U) ⇒ (Supplier XDM): The agent does its best to help lazy users, unless
this conflicts with its own goals.
... and so on. However, if the agent also has the goal of making sure that users exercise their abilities (such as in Tutoring Systems), then the matching criteria will be different; for instance:
CR3 (Lazy U) ⇒ (Benevolent XDM): The agent helps a lazy user only after checking that she is not able to do the job by herself. In this case, the agent’s cooperation behaviour will be combined with a communication behaviour (for instance, Agreeableness) that warmly encourages the user to try to solve the problem by herself.
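Taken together, rules like CR1–CR3 amount to a mapping from the hypothesised delegation personality and the agent’s current goal to a helping personality, plus a decision about when that personality actually offers help. The Java fragment below is a minimal sketch of such a mapping; all identifiers are our own illustrative assumptions and are not taken from the XDM-Agent code.

    // Illustrative encoding of combination rules CR1-CR3; names are
    // hypothetical, not from the actual XDM-Agent implementation.
    public class CombinationRules {

        enum DelegationPersonality { DELEGATING_IF_NEEDED, LAZY }
        enum HelpingPersonality    { BENEVOLENT, SUPPLIER }
        enum AgentGoal             { MINIMISE_USER_EFFORT, EXERCISE_USER_ABILITIES }

        /** Chooses the agent's helping personality from the hypothesised
         *  user personality and the agent's own goal (CR1-CR3). */
        static HelpingPersonality combine(DelegationPersonality user, AgentGoal goal) {
            if (goal == AgentGoal.MINIMISE_USER_EFFORT) {
                if (user == DelegationPersonality.DELEGATING_IF_NEEDED)
                    return HelpingPersonality.BENEVOLENT;    // CR1
                if (user == DelegationPersonality.LAZY)
                    return HelpingPersonality.SUPPLIER;      // CR2
            } else if (user == DelegationPersonality.LAZY) { // tutoring-style goal
                return HelpingPersonality.BENEVOLENT;        // CR3
            }
            return HelpingPersonality.BENEVOLENT;            // conservative default
        }

        /** A benevolent agent helps only when it presumes the user cannot
         *  do the action alone; a supplier helps unless this conflicts
         *  with its own goals. */
        static boolean shouldHelp(HelpingPersonality p, boolean userCanDoAlone,
                                  boolean conflictsWithOwnGoals) {
            return p == HelpingPersonality.BENEVOLENT
                    ? !userCanDoAlone
                    : !conflictsWithOwnGoals;
        }
    }

Under the tutoring goal, for example, combine(LAZY, EXERCISE_USER_ABILITIES) yields BENEVOLENT, so a lazy user who is presumed able to do the job receives encouragement rather than direct help, which is the behaviour CR3 prescribes.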
XDM-Agent has been implemented so as to keep its external appearance (its “Body”, developed with MS-Agent) distinct from its internal behaviour (its “Mind”, developed in Java). It appears as a character that can take several bodies, can move on the display to indicate objects and