Page 132 - Introduction to AI Robotics
                                      4.3 Subsumption Architecture
                                         more goal-directed actions such as mapping. Each of the layers can be
                                         viewed as an abstract behavior for a particular task.
                   LAYERS CAN SUBSUME  2. Modules in a higher layer can override, or subsume, the output from be-
                        LOWER LAYERS     haviors in the next lower layer. The behavioral layers operate concur-
                                         rently and independently, so there needs to be a mechanism to handle
                                         potential conflicts. The solution in subsumption is a type of winner-take-
                                         all, where the winner is always the higher layer.

                    NO INTERNAL STATE  3. The use of internal state is avoided. Internal state in this case means
                                          any local, persistent representation of the state of the world, or a
                                          model. Because the robot is a situated agent, most of its in-
                                         formation should come directly from the world. If the robot depends on
                                         an internal representation, what it believes may begin to dangerously di-
                                         verge from reality. Some internal state is needed for releasing behaviors
                                         like being scared or hungry, but good behavioral designs minimize this.

                                      4. A task is accomplished by activating the appropriate layer, which then
                                         activates the layers below it, and so on. However, in practice, sub-
                             TASKABLE     sumption-style systems are not easily taskable, that is, they can’t be
                                         ordered to do another task without being reprogrammed.
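The layering and winner-take-all rules above can be sketched in a few lines of Python. This is only an illustration of the arbitration idea, not Brooks' original implementation: the module names (level0_avoid, level1_wander), the 0.5 m threshold, and the string commands are all hypothetical.

```python
def level0_avoid(sonar_ahead):
    """Level 0: halt if something is dead ahead (assumed 0.5 m threshold)."""
    if sonar_ahead < 0.5:
        return "halt"
    return None  # inactive: this layer puts nothing on the output line

def level1_wander(heading):
    """Level 1: a more goal-directed behavior; drive toward a heading."""
    return f"turn to {heading}"

def arbitrate(layer_outputs):
    """Winner-take-all: the highest active layer subsumes all lower ones."""
    for output in reversed(layer_outputs):  # check highest layer first
        if output is not None:
            return output
    return "stop"  # no layer active

# Level 1 is active, so its output subsumes Level 0's:
arbitrate([level0_avoid(0.3), level1_wander(90)])   # -> "turn to 90"
# With Level 1 inactive, Level 0's halt signal gets through:
arbitrate([level0_avoid(0.3), None])                # -> "halt"
```

Note that the arbitration is stateless, consistent with the "no internal state" principle: each cycle the winner is recomputed purely from the layers' current outputs.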


                               4.3.1  Example
                                      These aspects are best illustrated by an example, extensively modified from
                                      Brooks’ original paper 27  in order to be consistent with schema theory termi-
                                      nology and to facilitate comparison with a potential fields methodology. A
                                      robot capable of moving forward while not colliding with anything could be
                       LEVEL 0: AVOID  represented with a single layer, Level 0. In this example, the robot has mul-
                                      tiple sonars (or other range sensors), each pointing in a different direction,
                                      and two actuators, one for driving forward and one for turning.
                                        Following Fig. 4.6, the SONAR module reads the sonar ranges, does any
                          POLAR PLOT  filtering of noise, and produces a polar plot. A polar plot represents the range
                                      readings in polar coordinates, (r, θ), surrounding the robot. As shown in

                                      Fig. 4.7, the polar plot can be “unwound.”
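A minimal sketch of what the SONAR module produces may help. The sonar count (8, evenly spaced), the sample readings, and the function names here are assumptions for illustration only.

```python
import math

def polar_plot(ranges):
    """Pair each range reading with its sonar's bearing, giving (r, theta)
    tuples around the robot (sonar 0 assumed to face dead ahead, theta = 0)."""
    n = len(ranges)
    return [(r, 2 * math.pi * i / n) for i, r in enumerate(ranges)]

def unwind(plot):
    """'Unwind' the polar plot into a flat strip of ranges ordered by angle,
    as in Fig. 4.7."""
    return [r for r, theta in sorted(plot, key=lambda rt: rt[1])]

# Eight hypothetical sonar readings in meters, counterclockwise from ahead:
readings = [2.0, 1.5, 0.4, 1.2, 3.0, 2.5, 1.8, 0.9]
plot = polar_plot(readings)   # plot[0] is (2.0, 0.0): the sonar dead ahead
```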
                                        If the range reading for the sonar facing dead ahead is below a certain
                                      threshold, the COLLIDE module declares a collision and sends the halt signal
                                      to the FORWARD drive actuator. If the robot was moving forward, it now
                                      stops. Meanwhile, the FEELFORCE module is receiving the same polar plot.
                                      It treats each sonar reading as a repulsive force, which can be represented