Page 149 - Sensing, Intelligence, Motion : How Robots and Humans Move in an Unstructured World

124    MOTION PLANNING FOR A MOBILE ROBOT

Because in the SIM paradigm input information is never complete and appears
only as the robot moves, there is no information from which to compute the
C-space. One can, however, design algorithms based on C-space properties, and
this is what will happen in the following chapters.
For the practical side of our question, “What changes for the algorithms
considered in this chapter when they are applied to real mobile robots of
finite dimensions?”, the answer is: nothing changes. Recall that the VisBug
algorithms make decisions “on the fly,” in real time. They make the robot either (a) move
           in free space by following the M-line or (b) follow obstacle boundaries. For
           example, when following an obstacle boundary, if the robot arrives at a gap
           between two obstacles, it may or may not be able to pass it. If the gap is too
           narrow for the robot to pass through, it will perceive both obstacles as one. When
           following the obstacle boundaries, the robot will switch from one obstacle to the
           other without even noticing this fact.
              Additional heuristics can be added to improve the algorithm efficiency, as long
           as care is taken not to imperil the algorithm convergence. For example, if the
           robot sees its target T through a gap between two obstacles, it may attempt to
           measure the width of the gap to make sure that it will be able to pass it, before
           it actually moves to the gap. Or, if the robot’s shape is more complex than a
           circle, it may try to move through the gap by varying its orientation.
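The gap-width heuristic above can be sketched in a few lines for the simplest
case of a circular robot. The function name, the clearance parameter, and the
representation of the gap by two sensed edge points are illustrative
assumptions, not part of the algorithms discussed in this chapter:

```python
import math

def gap_is_passable(edge_a, edge_b, robot_radius, clearance=0.0):
    """Return True if the gap between two sensed obstacle edge points
    is wide enough for a circular robot to pass through."""
    width = math.dist(edge_a, edge_b)
    return width >= 2 * robot_radius + clearance

# A robot of radius 0.3 facing a gap of width 0.5: too narrow, so the
# robot treats both obstacles as one and keeps following the boundary.
print(gap_is_passable((0.0, 1.0), (0.5, 1.0), robot_radius=0.3))
```

Only when the test succeeds would the robot leave the obstacle boundary and
head for the gap; otherwise convergence is preserved by simply continuing the
boundary-following step.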
An interesting question arises when one studies the effect of the placement
of sensor(s) on the robot body on motion planning. Assume the robot R shown in
           Figure 3.22 has a range sensor. If the sensor is located at the robot’s center then,
           as the dotted line of vision OT shows, the robot will see the gap between two
obstacles and will act accordingly. But if the sensor happens to be attached
at point A on the robot’s periphery, then, as the dotted line AB shows, the
robot will not be able to tell whether the gap is real. The situation can be even more
           complex: For example, it is not uncommon for real-world mobile robots to have
           a battery of sonar sensors attached along the circumference of the robot body.
Then different sensors may see different objects; the robot’s intelligence must
reconcile those differing readings, and a more careful scheme is needed to model
C-space sensing. Little work has been done in this area; some such schemes
           have been explored by Skewis [64].
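As a rough illustration of what reconciling a ring of sonars involves, a first
step might be to map each reading into a common world frame before any merging
is attempted. The pose and reading formats below are hypothetical, and the
sketch ignores beam width and noise that real sonars exhibit:

```python
import math

def sonar_hits_world(pose, body_radius, readings):
    """Convert a ring of sonar readings into obstacle points in the world frame.

    pose:     (x, y, heading) of the robot center, heading in radians
    readings: list of (mount_angle, range) pairs; mount_angle is the
              sensor's angle on the body relative to the heading.
    Assumption: each sonar sits on the body circumference and points
    radially outward.
    """
    x, y, th = pose
    hits = []
    for mount_angle, rng in readings:
        a = th + mount_angle
        # distance from the center is body radius plus the measured range
        hits.append((x + (body_radius + rng) * math.cos(a),
                     y + (body_radius + rng) * math.sin(a)))
    return hits
```

Once all readings live in one frame, deciding whether two sensors saw the same
obstacle or different ones becomes a clustering question, which is where the
more careful modeling mentioned above would enter.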



           3.8  OTHER APPROACHES

           Recall the division of all sensor-based motion planning algorithms into two
           classes (Section 3.5). Class 1 combines algorithms in which the robot never
           leaves an obstacle unless and until it explores it completely. Class 2 combines
           algorithms that are complementary to those in Class 1: In them the robot can
leave an obstacle and move on, and even return to this obstacle again at
some future time, without exploring it in full.
              As mentioned above, today Class 1 includes only one algorithm, Bug1. The
           reason for this paucity likely lies in the inherent conservatism of algorithms in