of planning primitives allows one to create and manipulate plan objects. Plans
can be created and destroyed, and they can be populated with new goals and
with activities communicated by other agents. Another set of planning prim-
itives determines whether the planning algorithm can modify the activities in
one of these plan objects. One can make a plan modifiable, allowing the plan-
ner to fix any flaws with that plan, or one can freeze its current state (as when
adopting a commitment to a certain course of action). One can also modify
the execution status of the plan, enabling or disabling the execution of actions
within it. Finally, another set of planning primitives alters the way the planner
handles interactions between plans and thereby implements the idea of a social
stance. For example, what happens when Steve detects that his plan conflicts
with Jack’s? He has several options. He could adopt a rude stance towards
Jack, running to grab the keys before Jack gets a chance to take the car. This
essentially corresponds to a strategy where the planner resolves any threats that
Jack introduces into Steve’s plans, but ignores any threats that Steve introduces
into Jack’s. Alternatively, Steve could take a meek stance, finding some other
ways to get to the beach or simply staying home. This corresponds to a strat-
egy where the planner treats Jack’s plans as immutable, resolves any threats
to Jack’s plans, and tries to work around any threats that Jack introduces into
Steve’s plans. Steve could be helpful, adding activities to his plan that ensure
that Jack gets to the market. Or he could be authoritative, demanding that Jack
drive him to the beach (by inserting activities into Jack’s plans). These stances
are all implemented as search control, limiting certain of a planner’s threat
resolution options. The following are two paraphrased examples of rules that
make up Steve and Jack’s social control program. The current implementation
has about thirty such rules:
Social-Rule: plan-for-goal
IF I have a top-level goal, ?goal, with predicate ?predicate THEN
    Do-Gesture(Thinking)
    Say(to-self, "I want to ?predicate")
    ?plan = create-new-plan()
    populate-plan(?plan, ?goal)
    enable-modification(?plan)
Social-Rule: you-cause-problems-for-me
IF my plan, ?plan, is threatened by your plan,
   I don’t have an obligation to revise my plan,
   you don’t have an obligation to revise your plan,
   and you don’t know my plan THEN
    Say(?you, "Wait a second, our plans conflict")
    SpeechAct(INFORM_PROB, ?plan, ?you)
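A minimal sketch of the plan-object primitives described above, written here in Python for concreteness. The names (Plan, create_new_plan, populate_plan, enable_modification, freeze, and so on) follow the vocabulary of the text; the data structures themselves are illustrative assumptions, not the actual implementation.

class Plan:
    """A plan object the planner may inspect, revise, and execute."""
    def __init__(self):
        self.goals = []           # top-level goals this plan serves
        self.activities = []      # steps, including activities communicated by other agents
        self.modifiable = False   # may the planner repair flaws in this plan?
        self.executable = False   # may actions in this plan be executed?

def create_new_plan():
    return Plan()

def populate_plan(plan, goal):
    # Add a new goal (or an activity heard from another agent) to the plan.
    plan.goals.append(goal)

def enable_modification(plan):
    plan.modifiable = True

def freeze(plan):
    # Commit to the current course of action; the planner may no longer revise it.
    plan.modifiable = False

def enable_execution(plan):
    plan.executable = True

def disable_execution(plan):
    plan.executable = False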
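The stance idea itself can likewise be sketched as search control. The fragment below, again in Python and again an assumption rather than the system's code, filters the threat-resolution options the planner is allowed to consider according to the agent's stance toward the other agent.

from dataclasses import dataclass
from enum import Enum

class Stance(Enum):
    RUDE = 1           # resolve threats to my plan, ignore threats I pose to yours
    MEEK = 2           # treat your plan as immutable and work around it
    HELPFUL = 3        # also add activities to my plan that serve your goals
    AUTHORITATIVE = 4  # also insert activities into your plan

@dataclass
class Threat:
    victim: str  # "mine" if your activity threatens my plan, "yours" if mine threatens yours

def allowed_resolutions(stance, threat):
    # Search control: return only the threat-resolution options this stance permits.
    if stance is Stance.RUDE:
        # Fix threats against my own plan; threats I introduce into yours are ignored.
        return ["revise-my-plan"] if threat.victim == "mine" else []
    if stance is Stance.MEEK:
        # Your plan is immutable: whichever plan is threatened, only mine may change.
        return ["revise-my-plan", "abandon-my-goal"]
    if stance is Stance.HELPFUL:
        return ["revise-my-plan", "add-activity-to-my-plan-for-your-goal"]
    if stance is Stance.AUTHORITATIVE:
        return ["revise-my-plan", "insert-activity-into-your-plan"]
    return []

# A rude Steve ignores the threat he poses to Jack's plan; a meek Steve
# revises his own plan or abandons the trip to the beach.
print(allowed_resolutions(Stance.RUDE, Threat(victim="yours")))   # []
print(allowed_resolutions(Stance.MEEK, Threat(victim="yours")))   # ['revise-my-plan', 'abandon-my-goal']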