
74                                             Socially Intelligent Agents

Issue II.  The robot's face needs several degrees of freedom to have a variety of different expressions, which must be understood by most people. The insufficient DoF of Elektra's face was one of our motivations to build Feelix.
The question, however, is how many DoF are necessary to achieve a particular kind of interaction. Kismet's complex model, drawn from a componential approach, allows it to form a much wider range of expressions; however, not all of them are likely to convey a clear emotional meaning to the human. On the other hand, we think that Feelix's "prototypical" expressions, each associated with a discrete emotional state (or with a combination of two of them), allow for easier emotion recognition—although of a more limited set—and association of a particular interaction with the emotion it elicits. This model also facilitates an incremental, systematic study of which features are relevant (and how) to express or elicit different emotions. Indeed, our experiments showed that our features were insufficient to express fear, where body posture (e.g., the position of the neck) adds much information.
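The discrete model described above—prototypical expressions tied to single emotions or pairwise combinations—can be sketched as a small lookup table. This is only an illustration: the feature names, angles, and blending function below are invented, not Feelix's actual parameters.

```python
# Hypothetical sketch of a discrete "prototypical expression" table,
# in the spirit of the model described in the text.
# Feature names and values are invented for illustration.

EXPRESSIONS = {
    # emotion: (eyebrow_angle_deg, mouth_curvature) -- hypothetical features
    "neutral":  (0.0,  0.0),
    "happiness": (0.0, 1.0),
    "sadness":  (-20.0, -1.0),
    "anger":    (20.0, -0.5),
    "surprise": (30.0,  0.0),
}

def blend(e1, e2, w=0.5):
    """Combine two discrete emotions by interpolating their feature values,
    mirroring the idea of expressing a combination of two emotional states."""
    a, b = EXPRESSIONS[e1], EXPRESSIONS[e2]
    return tuple((1 - w) * x + w * y for x, y in zip(a, b))
```

A fixed table like this makes each feature's contribution easy to vary one at a time, which is what enables the incremental, systematic study the text mentions.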

                             Issue IV.  The robot must convey intentionality to bootstrap meaningful so-
                             cial exchanges with the human. The need for people to perceive intentionality
                             in the robot’s displays was another motivation underlying the design of Feelix’s
emotion model. It is, however, questionable whether "more complexity" conveys "more intentionality" and adds believability, as suggested by the uncanny valley hypothesis. As we observed with Feelix, very simple features can lead humans to meet the robot halfway and anthropomorphize it very easily.


Issue V.   The robot needs regulatory responses so that it can avoid interactions that are either too intense or not intense enough. Although many behavioral elements can be used for this, in our robot, emotional expression itself acted as the only regulatory mechanism influencing people's behavior—in particular, sadness as a response to lack of interaction, and anger as a response to overstimulation.
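The regulatory mechanism just described can be sketched as a simple mapping from stimulation level to expression. The thresholds and the scalar stimulation variable are invented for illustration; the source describes only the qualitative behavior (sadness for too little interaction, anger for too much).

```python
# Minimal sketch of emotional expression as an interaction regulator,
# loosely following the behavior described for Feelix.
# LOW/HIGH thresholds and the [0, 1] stimulation scale are hypothetical.

LOW, HIGH = 0.2, 0.8  # hypothetical comfort band for stimulation

def regulatory_response(stimulation):
    """Map a stimulation level in [0, 1] to a regulatory expression."""
    if stimulation < LOW:
        return "sadness"   # too little interaction -> invite engagement
    if stimulation > HIGH:
        return "anger"     # overstimulation -> push the human to back off
    return "neutral"       # within the comfort band, no regulation needed
```

The point of the sketch is that a single expressive channel suffices to close the regulation loop: the human reads the expression and adjusts the intensity of the interaction accordingly.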

5.     Discussion
  What can a LEGO robot tell us about emotion? Many things, indeed. Let us briefly examine some of them.
Simplicity.   First, it tells us that, for modeling emotions and their expressions, simple is good ... but not when it is too simple. Building a highly expressive face with many features can be immediately rewarding, as the attention it is likely to attract from people can lead to very rich interactions; however, it might be more difficult to evaluate the significance of those features in eliciting humans' reactions. By contrast, a minimalist, incremental design approach that starts with a small set of "core" features allows us not only to identify