Page 145 - Sensing, Intelligence, Motion : How Robots and Humans Move in an Unstructured World

120    MOTION PLANNING FOR A MOBILE ROBOT
Figure 3.20  Example of a walk (dashed line) in a maze under Algorithm VisBug-21 (compare with Figure 3.11). S, Start; T, Target.


in a crowded scene where at any given moment it can see only a small part of the scene, the efficacy of vision will obviously be limited. Nevertheless, unless the scene is artificially made impossible for vision, one can expect gains from it. This can be seen in the performance of the VisBug-21 algorithm in the maze borrowed from Section 3.3.2 (see Figure 3.20). For simplicity, assume that the robot's radius of vision goes to infinity. While this ability is mostly defeated here, the path still looks significantly better than it does under the "tactile" algorithm Bug2 (compare with Figure 3.11).


3.6.3 Algorithm VisBug-22
The structure of this algorithm is somewhat similar to that of VisBug-21. The difference is that here the robot makes no attempt to ensure that intermediate targets Ti lie on the Bug2 path. Instead, it tries "to shoot as far as possible"; that is, it chooses as intermediate targets those points that lie on the M-line and are as close to the target T as possible. The resulting behavior differs from that of VisBug-21, and so does the mechanism of convergence.
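The core geometric step of this target-selection rule can be sketched as follows. This is a minimal illustration, not the book's full algorithm: the function name and the representation of visibility as an explicit list of visible points are assumptions made for the example; the M-line is the straight segment from Start S to Target T.

```python
import math

def farthest_mline_target(visible_points, S, T, tol=1e-9):
    """Among the currently visible points, return the one that lies on the
    M-line (segment S-T) and is closest to the target T, or None if no
    visible point lies on the M-line. (Illustrative sketch only.)"""
    def on_mline(p):
        # p lies on segment S-T if the cross product with S-T is ~0
        # and its projection onto S-T falls within the segment.
        cross = (T[0] - S[0]) * (p[1] - S[1]) - (T[1] - S[1]) * (p[0] - S[0])
        if abs(cross) > tol:
            return False
        dot = (p[0] - S[0]) * (T[0] - S[0]) + (p[1] - S[1]) * (T[1] - S[1])
        return 0 <= dot <= (T[0] - S[0]) ** 2 + (T[1] - S[1]) ** 2
    candidates = [p for p in visible_points if on_mline(p)]
    if not candidates:
        return None
    # "Shoot as far as possible": pick the candidate nearest to T.
    return min(candidates, key=lambda p: math.dist(p, T))
```

For example, with S = (0, 0), T = (10, 0), and visible points (2, 0), (5, 0), and (3, 1), the sketch returns (5, 0): the off-line point (3, 1) is rejected, and of the two M-line points, (5, 0) is closer to T.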