
              Since algorithm Bug2 is known to converge, one way to incorporate vision
            is to instruct the robot at each step of its path to “mentally” reconstruct in its
            current field of vision the path segment that would have been produced by Bug2
            (let us call it the Bug2 path). The farthest point of that segment can then be
            made the current intermediate target point, and the robot would make a step
            toward that point. And then the process repeats. To be meaningful, this would
            require an assurance of continuity of the considered Bug2 path segment; that is,
            unless we know for sure that every point of the segment is on the Bug2 path,
            we cannot risk using this segment. Just knowing the fact of segment
            continuity is sufficient; there is no need to remember the segment itself. As it
            turns out, deciding whether a given point lies on the Bug2 path—in which case
            we will call it a Bug2 point—is not a trivial task. The resulting algorithm is
            called VisBug-21, and the path it generates is referred to as the VisBug-21 path.
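              To make the stepping cycle just described concrete, here is a minimal Python
            sketch of one VisBug-21 step. It assumes that sensing and the Bug2-point test are
            available as callables (visible_points, is_bug2_point); these names, and the fixed
            step_size, are illustrative placeholders rather than part of the algorithm's
            original statement.

from math import dist  # Euclidean distance between two 2-D points (Python 3.8+)

def visbug21_step(position, visible_points, is_bug2_point, step_size=0.1):
    """One cycle of the stepping scheme sketched above: among the points
    currently in the field of vision, keep only those certified to lie on
    the Bug2 path, take the farthest one as the intermediate target, and
    make one step toward it."""
    # Only points known for sure to be Bug2 points may be used; this
    # certification is what assures continuity of the reconstructed segment.
    bug2_points = [p for p in visible_points if is_bug2_point(p)]
    if not bug2_points:
        return position  # nothing certified in view; the caller must handle this case

    # The farthest certified point becomes the current intermediate target.
    target_i = max(bug2_points, key=lambda p: dist(position, p))

    # Make a single step of length step_size toward the intermediate target;
    # the process then repeats from the new position.
    d = dist(position, target_i)
    if d <= step_size:
        return target_i
    t = step_size / d
    return (position[0] + t * (target_i[0] - position[0]),
            position[1] + t * (target_i[1] - position[1]))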
              The other algorithm, called VisBug-22, is also tied to the mechanism of the
            Bug2 procedure, but more loosely. The algorithm behaves more opportunisti-
            cally than VisBug-21. Instead of the VisBug-21 process of replacing some
            “mentally” reconstructed Bug2 path segments with straight-line shortcuts afforded
            by vision, under VisBug-22 the robot can deviate from Bug2 path segments if
            this looks more promising and if this is not in conflict with the convergence
            conditions. As we will see, this makes VisBug-22 a rather radical departure from
            the Bug2 procedure—with one result being that Bug2 can no longer serve as
            a source of convergence. Hence the convergence conditions for VisBug-22 will have
            to be established independently.
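              The opportunistic selection can be sketched in the same style. Here the
            intermediate target need not be a certified Bug2 point: a visible point may be
            chosen instead if it looks more promising (taken below, purely for illustration,
            as being closer to the target), provided it passes a convergence test that, as
            noted above, must be established independently of Bug2. The predicate and
            parameter names are hypothetical.

from math import dist  # as in the previous sketch

def visbug22_choose_target(position, target, visible_points,
                           bug2_candidate, passes_convergence_test):
    """Choose the current intermediate target, allowing deviation from the
    reconstructed Bug2 path when this looks more promising."""
    # Start with the Bug2-based candidate supplied by the caller.
    best = bug2_candidate
    for p in visible_points:
        # A deviation is accepted only if it brings the robot closer to the
        # target than the current choice AND does not conflict with the
        # (independently established) convergence conditions.
        if dist(p, target) < dist(best, target) and passes_convergence_test(p):
            best = p
    return best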
              In case one wonders why we are not interested here in producing a similar
            vision-laden extension of the Bug1 algorithm, it is because savings in path
            length comparable to those of the VisBug-21 and VisBug-22 algorithms are less
            likely in that direction. Also, as mentioned above, exploring every obstacle
            completely does not make for an attractive strategy for mobile robot navigation.
              Combining Bug1 with vision can be a viable idea in other motion planning
            tasks, though. One problem in computer vision is recognizing an object or finding
            a specific item on the object’s surface. One may want, for example, to automati-
            cally detect a bar code on an item in a supermarket, by rotating the object to view
            it completely. Alternatively, depending on the object’s dimensions, it may be the
            viewer who moves around the object. How do we plan this rotating motion?
            Holding the camera at some distance from the object gives the viewer some
            advantages. For example, since from a distance the camera sees a larger part
            of the object, fewer images will be needed to obtain the complete description
            of the object [63].
              Given the same initial conditions, algorithms VisBug-21 and VisBug-22 will
            likely produce different paths in the same scene. Depending on the scene, one of
            them will produce a shorter path than the other, and this may reverse in the next
            scene. Both algorithms hence present viable options. Each algorithm includes a
            test for target reachability that can be traced to the Bug2 algorithm and is based
            on the following necessary and sufficient condition: