Page 121 - Socially Intelligent Agents Creating Relationships with Computers and Robots

4 Tambe’s proxy automatically volunteered him for a presentation, though he was actually
  unwilling. Again, C4.5 had over-generalized from a few examples and, when a timeout
  occurred, had taken an undesirable autonomous action.
From the growing list of failures, it became clear that the approach faced some
                             fundamental problems. The first problem was the AA coordination challenge.
                             Learning from user input, when combined with timeouts, failed to address the
                             challenge, since the agent sometimes had to take autonomous actions although
                             it was ill-prepared to do so (examples 2 and 4). Second, the approach did not
                             consider the team cost of erroneous autonomous actions (examples 1 and 2).
                             Effective agent AA needs explicit reasoning and careful tradeoffs when dealing
                             with the different individual and team costs and uncertainties. Third, decision-
                             tree learning lacked the lookahead ability to plan actions that may work better
                             over the longer term. For instance, in example 3, each five-minute delay is
                             appropriate in isolation, but the rules did not consider the ramifications of one
                             action on successive actions. Planning could have resulted in a one-hour delay
                             instead of many five-minute delays. Planning and consideration of cost could
                             also lead to an agent taking the low-cost action of a short meeting delay while
                             it consults the user regarding the higher-cost cancel action (example 1).
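The arithmetic behind this argument can be made concrete. The sketch below uses hypothetical disruption costs (the chapter gives no numbers): each rescheduling announcement carries a fixed cost, and waiting carries a per-minute cost, so a sequence of myopically chosen five-minute delays pays the announcement cost many times over while a planned single delay pays it once.

```python
# Hypothetical team-disruption costs, assumed purely for illustration.
PER_ANNOUNCEMENT = 0.8   # each rescheduling message disrupts attendees (assumed)
PER_MINUTE = 0.05        # cost of keeping the team waiting, per minute (assumed)

def total_cost(delays_minutes):
    """Total cost of a sequence of delay announcements."""
    return (PER_ANNOUNCEMENT * len(delays_minutes)
            + PER_MINUTE * sum(delays_minutes))

myopic = total_cost([5] * 12)   # twelve five-minute delays, chosen one at a time
planned = total_cost([60])      # one sixty-minute delay, chosen with lookahead

# Both postpone the meeting by an hour, but the myopic sequence pays the
# announcement cost twelve times.
print(myopic, planned)  # 12.6 vs 3.8
```

Each five-minute delay looks cheap in isolation (0.8 + 0.25 = 1.05), which is exactly why a rule without lookahead keeps choosing it; only a planner comparing whole action sequences sees the aggregate difference.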

4.     MDPs for Adjustable Autonomy

Figure 12.1.  Dialog for meetings

Figure 12.2.  A small portion of a simplified version of the delay MDP

                               MDPs were a natural choice for addressing the issues identified in the previ-
                             ous section: reasoning about the costs of actions, handling uncertainty, planning
                             for future outcomes, and encoding domain knowledge. The delay MDP, typical
                             of MDPs in Friday, represents a class of MDPs covering all types of meetings
                             for which the agent may take rescheduling actions. For each meeting, an agent
                             can autonomously perform any of the 10 actions shown in the dialog of Fig-
ure 12.1. It can also wait, i.e., take no action for the moment, or reduce its
autonomy and ask its user for input.
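To make the idea concrete, the sketch below solves a toy stand-in for the delay MDP by value iteration. The states, transition probabilities, and costs are all illustrative assumptions, not the chapter's actual model: a state is the number of discrete steps left before the scheduled start, and the three actions are a compressed version of Friday's choices (wait, announce a five-minute delay, or reduce autonomy and ask the user).

```python
# Toy delay-MDP sketch; every number below is an assumption for illustration.
ARRIVE_P = 0.3    # assumed per-step chance the user arrives unprompted
GAMMA = 0.95      # discount factor
HORIZON = 3       # discrete steps remaining before the scheduled start
MISS_COST = 10.0  # team members wait at an empty meeting (assumed)
DELAY_COST = 1.0  # cost of announcing a delay to the team (assumed)
ASK_COST = 2.0    # cost of interrupting the user for input (assumed)

def q_values(t, V):
    """Expected cost-to-go of each action with t steps left before the start."""
    # wait: with prob ARRIVE_P the user shows up and the episode resolves at
    # no further cost; otherwise the clock advances, and at t == 0 the team
    # is left waiting at an empty meeting.
    miss_or_next = MISS_COST if t == 0 else GAMMA * V[t - 1]
    return {
        "wait": (1 - ARRIVE_P) * miss_or_next,
        # delay: push the start back, restoring the full horizon.
        "delay": DELAY_COST + (1 - ARRIVE_P) * GAMMA * V[HORIZON],
        # ask: transfer control; the user is assumed to resolve the meeting.
        "ask": ASK_COST,
    }

def solve(sweeps=500):
    """Value iteration over cost-to-go; returns values and a greedy policy."""
    V = [0.0] * (HORIZON + 1)
    for _ in range(sweeps):
        V = [min(q_values(t, V).values()) for t in range(HORIZON + 1)]
    policy = {}
    for t in range(HORIZON + 1):
        q = q_values(t, V)
        policy[t] = min(q, key=q.get)
    return V, policy
```

Under these assumed costs the solved policy waits while time remains but, at the deadline, autonomously announces a low-cost delay rather than pay the higher cost of interrupting the user, which is the kind of explicit cost/uncertainty tradeoff the decision-tree approach could not make.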