new forms of communication, building ways to secure their trust in any newly
met agent that should not remain a stranger.
I recalled that, to create new ways of communicating, even humans need to
draw inspiration from existing institutions in order to form new relational
patterns. What matters in this case is the transposition of these institutions
into a new field of relations. It thus seems reasonable to argue that, in the case
of artificial agents, the transposition of old communication systems (which need
not be mutually consistent) into a new context could likewise be the basis of the
creativity we are looking for. Current research on agent languages, which aims to
reduce ambiguity in communication, may at some point help in the design of
socially intelligent agents by giving them examples of what communication is,
before they produce alternative forms of their own. At the same time, it remains
clear that specialisation in a single task runs counter to creativity in social
relations. The desire to communicate, a diverse range of examples of quite
sophisticated interactions, and a great number of reasons to communicate among
themselves all seem necessary to sustain artificial agents in their attempt to
discover the intentions of others and to adapt constantly to their communication
habits.
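The agent-language research mentioned above is exemplified by standardised agent communication languages such as FIPA ACL, which fix a small set of performatives and make the content language and ontology explicit. The sketch below is a minimal, hypothetical Python rendering of such a message, intended only to illustrate the kind of "example of what communication is" that these languages provide; the class, field names, and Lisp-like surface form are simplified assumptions, not the chapter's own proposal or an official FIPA implementation.

```python
from dataclasses import dataclass
from typing import Optional

# A rough sketch of a FIPA-ACL-style message. The structure (performative plus
# explicitly named sender, receiver, content, language, and ontology) follows
# the spirit of FIPA ACL; the rendering below is simplified for illustration.

@dataclass
class ACLMessage:
    performative: str                 # e.g. "inform", "request", "query-if"
    sender: str
    receiver: str
    content: str                      # proposition or action expression
    language: Optional[str] = None    # content language, e.g. "fipa-sl"
    ontology: Optional[str] = None    # shared vocabulary the content relies on
    conversation_id: Optional[str] = None

    def render(self) -> str:
        """Render the message in a Lisp-like surface form (illustrative only)."""
        parts = [
            f"({self.performative}",
            f"  :sender {self.sender}",
            f"  :receiver {self.receiver}",
            f"  :content \"{self.content}\"",
        ]
        if self.language:
            parts.append(f"  :language {self.language}")
        if self.ontology:
            parts.append(f"  :ontology {self.ontology}")
        if self.conversation_id:
            parts.append(f"  :conversation-id {self.conversation_id}")
        parts.append(")")
        return "\n".join(parts)


# Example: one agent informing another, with the potential ambiguity pushed
# into an explicitly named content language and ontology.
msg = ACLMessage(
    performative="inform",
    sender="agent-a",
    receiver="agent-b",
    content="(temperature room-3 21)",
    language="fipa-sl",
    ontology="building-sensors",
    conversation_id="c-001",
)
print(msg.render())
```

Such a format reduces ambiguity precisely by standardising the relational frame of the exchange; in the terms of the argument above, it is the existing institution from which agents might later transpose and invent alternative forms.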