Page 201 - Designing Sociable Robots
Misclassifications are strongly correlated with expressions having similar facial or postural components. Surprise was sometimes confused with fear; both have a quick withdraw postural shift (the fearful withdraw is more of a cowering movement, whereas the surprise posture has more of an erect quality) with wide eyes and elevated ears. Surprise was sometimes confused with interest. Both have an alert and attentive quality, but interest is an approaching movement whereas surprise is more of a startled movement. Sorrow was sometimes confused with disgust; both are negative expressions with a downward component to the posture. The sorrow posture shift is more down and “sagging,” whereas the disgust posture is a slow “shrinking” retreat.
Overall, the data gathered from these small evaluations suggest that people with little to no familiarity with the robot are able to interpret the robot’s facial expressions and affective posturing. For this data set, there was no clear distinction in recognition performance between adults and children, or between males and females. The subjects intuitively correlate Kismet’s face with human likenesses (i.e., the line drawings). They map the expressions to corresponding emotion labels with reasonable consistency, and many of the errors can be explained through similarity in facial features or similarity in affective assessment (e.g., shared aspects of arousal or valence).
The data from the video studies suggest that witnessing the movement of the robot’s face and body strengthens the recognition of the expression. More subjects must be tested, however, to strengthen this claim. Nonetheless, observations from other interaction studies discussed throughout this book support this hypothesis. For instance, the postural shifts during the affective intent studies (see chapter 7) beautifully illustrate how subjects read and affectively respond to the robot’s expressive posturing and facial expression. This is also illustrated in the social amplification studies of chapter 12. Based on the robot’s withdraw and approach posturing, the subjects adapt their behavior to accommodate the robot.

                       10.6  Limitations and Extensions

More extensive studies need to be performed before we can make any strong claims about how accurately Kismet’s expressions mirror those of humans. Even so, given the small sample size, the data suggest that Kismet’s expressions are readable by people with little to no prior familiarity with the robot.
The evaluations have provided us with some useful input for how to improve the strength and clarity of Kismet’s expressions. A lower eyelid should be added; several subjects commented on its absence being a problem for them. The FACS system asserts that the movement of the lower eyelid is a key facial feature in expressing the basic emotions. The eyebrow mechanics