displays with vocal, postural, and gaze/orientation behavior. Ultimately, this subsystem
might include learned movements that could be acquired during imitative facial games with
the caregiver.
The emotive facial expression subsystem is responsible for generating a facial expression
that mirrors the robot’s current affective state. This is an important communication signal
for the robot. It lends richness to social interactions with humans and increases their level of
engagement. For the remainder of this chapter, I describe the implementation of this system
in detail. I also discuss how affective postural shifts complement the facial expressions and
lend strength to the overall expression. The expressions are analyzed and their readability
evaluated by subjects with minimal to no prior familiarity with the robot (Breazeal, 2000a).
10.3 Generation of Facial Expressions
There have been only a few expressive autonomous robots (Velasquez, 1998; Fujita &
Kageyama, 1997) and a few expressive humanoid faces (Hara, 1998; Takanobu et al., 1999).
The majority of these robots are only capable of a limited set of fixed expressions (a single
happy expression, a single sad expression, etc.). This hinders both the believability and
readability of their behavior. The expressive behavior of many robotic faces is not life-like
(or believable) because of their discrete, mechanical, and reflexive quality—transitioning
between expressions like a switch being thrown. This discreteness and discontinuity of
transitions limit the readability of the face, which then lacks important cues both for the
intensity of the underlying affective state and for the transition dynamics between
affective states.
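To make the contrast concrete, the sketch below illustrates one way a face controller can avoid switch-like transitions: each expression is treated as a vector of facial-motor positions, the commanded pose is an interpolation between a neutral pose and a prototype expression weighted by affective intensity, and the pose is eased toward its target over several control ticks. The joint names, pose values, and smoothing rate are illustrative assumptions, not Kismet's actual implementation.

```python
# Illustrative sketch only: blending facial-motor poses by affect intensity
# and smoothing transitions over time. Pose values, joint names, and the
# smoothing constant are hypothetical, not taken from Kismet.

NEUTRAL = {"brows": 0.0, "lids": 0.5, "lips": 0.0, "ears": 0.0}

# Prototype poses for two example affective states (hypothetical values).
PROTOTYPES = {
    "happy": {"brows": 0.2, "lids": 0.7, "lips": 0.8, "ears": 0.6},
    "sad":   {"brows": -0.6, "lids": 0.2, "lips": -0.7, "ears": -0.5},
}


def blend(state: str, intensity: float) -> dict:
    """Interpolate between the neutral pose and a prototype expression.

    intensity in [0, 1] conveys how strongly the affective state is felt,
    so the face carries intensity cues instead of an all-or-nothing look.
    """
    proto = PROTOTYPES[state]
    return {j: (1.0 - intensity) * NEUTRAL[j] + intensity * proto[j]
            for j in NEUTRAL}


def step_toward(current: dict, target: dict, rate: float = 0.1) -> dict:
    """Move the face a fraction of the way toward the target each tick,
    so transitions between affective states unfold smoothly rather than
    switching like a thrown switch."""
    return {j: current[j] + rate * (target[j] - current[j]) for j in current}


# Usage: as the affective state drifts from mild happiness toward sadness,
# the commanded pose changes gradually, exposing the transition dynamics.
pose = dict(NEUTRAL)
for state, intensity in [("happy", 0.3), ("happy", 0.8), ("sad", 0.6)]:
    target = blend(state, intensity)
    for _ in range(5):  # several control ticks per affective update
        pose = step_toward(pose, target)
```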
Insights from Animation
Classical and computer animators have a tremendous appreciation for the challenge of
creating believable behavior, and for the role that expressiveness plays in
this endeavor. A number of animation guidelines and techniques have been developed for
achieving life-like, believable, and compelling animation (Thomas & Johnston, 1981; Parke
& Waters, 1996). These rules of thumb explicitly consider audience perception. The rules
are designed to create behavior that is rich and interesting, yet easily understandable to the
human observer. Because Kismet interacts with humans, the robot’s expressive behavior
must cater to the perceptual needs of the human observer. This improves the quality of social
interaction because the observer feels that she understands the robot’s behavior. This helps
her to better predict the robot’s responses to her, and in turn to shape her own responses to
the robot.
Of particular importance is timing: how to sequence actions and how to transition between
them. A cardinal rule of timing is to do one thing at a time. This allows the observer to

