Page 155 - Socially Intelligent Agents Creating Relationships with Computers and Robots
with the applied behavior mode (vs. story mode). The automated training
was arranged to teach children to "match" four different emotion expressions:
happy, sad, angry, and surprised. A standard discrete-trial training procedure
with the automated application was used. Subjects sat facing the child-screen,
which exhibited specific emotional expressions in appropriate contexts within
the child’s immediate visual field. A video clip played for between 1 and 30
seconds. The clip displayed a scene in which an emotion was expressed by a
character on the screen. The screen ‘froze’ on the emotional expression and
waited for the child to touch the doll with the matching emotional expression
(correct doll). After a pre-set time elapsed, the practitioner-cued sequence of
visual and auditory prompts would be displayed.
If the child touched the doll with the corresponding emotional expression
(correct doll), then the system affirmed the choice, e.g. the guide stated "Good,
that’s <correct emotion selected>," and an optional playful clip started to play
on the child-screen. The application then displayed another clip depicting emo-
tional content randomly pulled from the application.
If the child did not select a doll or if he selected the incorrect (non-matching)
doll, the system would prompt, e.g. the guide would say "Match <correct emo-
tion>" for no doll selection, or "That’s <incorrect emotion>, Match <correct
emotion>" for incorrect doll selection. The system waited for a set time con-
figured by the practitioner and repeated its prompts until the child selected the
correct doll. An optional replay of the clip could be configured before the session, in
which case the application replayed that same clip and proceeded through the spec-
ified order of prompts configured in the setup. If the child still failed to select
the correct doll, the practitioner assisted the child, repeating the verbal prompt
and providing a physical prompt, e.g., pointing to the correct doll. If the child
still did not touch the correct doll after the physical prompt was
provided, then physical assistance was given to ensure that the child touched the
correct doll. This procedure was used for the discrete trials.
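The prompt sequence of a single discrete trial described above can be sketched as a simple loop. This is a hypothetical reconstruction for illustration only; the names (run_trial, wait_for_touch, say) and the prompt limit are placeholders, not the actual ASQ implementation.

```python
# A minimal sketch of the discrete-trial prompt sequence described above.
# All names here (run_trial, wait_for_touch, say, max_prompts) are
# hypothetical placeholders, not the actual ASQ implementation.

EMOTIONS = ["happy", "sad", "angry", "surprised"]

def run_trial(target, wait_for_touch, say, max_prompts=3):
    """One discrete trial: after the clip freezes on the target expression,
    wait for a doll touch and prompt until the matching doll is selected.

    wait_for_touch() returns the emotion of the touched doll, or None
    if the pre-set time elapsed with no selection.
    """
    selection = wait_for_touch()
    prompts = 0
    while selection != target and prompts < max_prompts:
        if selection is None:
            say(f"Match {target}")                      # no doll touched
        else:
            say(f"That's {selection}, Match {target}")  # wrong doll touched
        prompts += 1
        selection = wait_for_touch()
    if selection == target:
        say(f"Good, that's {target}")                   # affirm the choice
        return True
    return False  # hand over to the practitioner for physical prompting
```

Returning False corresponds to the point in the procedure where the practitioner steps in with verbal and physical prompts; the optional clip replay would wrap this loop in a second pass.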
Two low-functioning autistic children, between the ages of 2 and 3, engaged
with the video clips yet displayed little interest in the doll interface without direct
assistance. One boy, age 4, demonstrated an understanding of the interaction
but struggled to match the appropriate doll. Another boy, age 5, appeared to
understand the interaction, yet his touch was so soft that he required assistance
in touching the doll so that the system could detect his selection.
A three-year-old child, whose native language was Spanish, appeared very inter-
ested in the application despite the language difference. He and his fam-
ily were visiting the US and played with ASQ for one hour. Earlier, two visiting
neurologists from Argentina had sat in on the child’s session; they were certain
that the screen interface had too many images (referring to the icon, word, and
dwarf’s face) and thought that the dolls were not a good interface. After they
saw this boy interact with the application, both the physicians and the boy’s