
This said, the material in this chapter demonstrates remarkable progress over the last 10–15 years in the state of the art of sensor-based robot motion planning. In spite of the formidable uncertainty and the immense diversity of possible obstacles and scenes, a good number of the algorithms discussed above guarantee convergence: that is, a mobile robot equipped with one of these procedures is guaranteed to reach the target position if the target can in principle be reached; if the target is not reachable, the robot will conclude this in finite time. The algorithms also guarantee that the paths they produce will not circle in one area indefinitely, or even a large number of times; a given area may be revisited at most a few times (say, no more than two or three).
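
To make the structure of such a convergence guarantee concrete, here is a minimal Python sketch (not the book's pseudocode) of Bug1-style logic: the robot heads for the target, and on hitting an obstacle it conceptually circumnavigates it to find the boundary point closest to the target; if the target lies inside the obstacle, unreachability is reported in finite time. A single circular obstacle is assumed so the geometry stays trivial; all names are illustrative.

    import math

    def closest_boundary_point(center, radius, target):
        """Boundary point of the circle closest to the target (stands in
        for the full circumnavigation step of a Bug1-style algorithm)."""
        cx, cy = center
        tx, ty = target
        d = math.hypot(tx - cx, ty - cy)
        return (cx + radius * (tx - cx) / d, cy + radius * (ty - cy) / d)

    def plan(start, target, center, radius):
        """Report 'reachable' with a leave point, or 'unreachable';
        either way the procedure terminates in finite time."""
        cx, cy = center
        tx, ty = target
        # Unreachability test: a Bug1-style robot detects this after one
        # full loop around the obstacle; the circle makes it a direct check.
        if math.hypot(tx - cx, ty - cy) < radius:
            return "target unreachable", None
        leave = closest_boundary_point(center, radius, target)
        return "target reachable", leave

    print(plan(start=(0, 0), target=(10, 0), center=(5, 0), radius=2))
    print(plan(start=(0, 0), target=(5, 0), center=(5, 0), radius=2))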

Twenty years ago, most specialists would have doubted that such results were even possible. On the theoretical level, today's results show, contrary to earlier expectations, that purely local input information is not an obstacle to obtaining global solutions, even in cases of formidable complexity.

Interesting results whet our appetite for more. Answers bring more questions, and this is certainly true for the area at hand. Below we discuss a number of issues and questions for which we do not yet have answers.

Bounds on Performance of Algorithms with Vision. Unlike for "tactile" algorithms, today there are no upper bounds on the performance of motion planning algorithms with vision, such as VisBug-21 or VisBug-22 (Section 3.6). While from the standpoint of theory it would be of interest to obtain bounds similar to the bound (3.13) for "tactile" algorithms, such bounds would likely be of limited generality, for the following reasons.

First, to make such bounds informative, we would likely want to incorporate into them characteristics of the robot's vision: at least the radius of vision r_v, and perhaps the resolution, accuracy, and so on. After all, the reason for developing these bounds would be to learn how vision affects robot performance compared with primitive tactile sensing. One would expect, in particular, that vision improves performance. As explained above, this cannot be expected in general. Vision does improve performance, but only "on the average," where the meaning of "average" is not clear. Recall some examples from the previous section: in some scenes a robot with a larger radius of vision r_v will perform worse than a robot with a smaller r_v. Making the upper bound reflect such idiosyncrasies would be desirable but also difficult.

Second, how far the robot can see depends not only on its vision but also on the scene in which it operates. As the example in Figure 3.24 demonstrates, some scenes can reduce the efficiency of vision to almost that of tactile sensing. This suggests that characteristics of the scene, or of classes of scenes, should be part of the upper bounds as well. But since geometry does not mix well with probabilities, the latter are not a likely tool: it is very hard to generalize about distributions of the locations and shapes of obstacles in a scene.

Third, given a scene and a radius of vision r_v, vastly different path performance will be produced for different pairs of start and target points in that same scene.
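
As a point of reference for the comparison these observations draw with the tactile case, the sketch below evaluates a tactile-style bound numerically for a given scene. A Bug1-style form P <= D + 1.5 * sum(p_i) is assumed here purely for illustration; the exact constant and the set of obstacles counted in bound (3.13) may differ.

    import math

    def tactile_path_bound(start, target, obstacle_perimeters):
        """Illustrative upper bound on path length for a tactile Bug1-style
        algorithm: the straight-line distance D from start to target plus
        1.5 times the summed perimeters of the obstacles met on the way."""
        d = math.hypot(target[0] - start[0], target[1] - start[1])
        return d + 1.5 * sum(obstacle_perimeters)

    # Example: two obstacles, perimeters 8 and 12, between start and target.
    print(tactile_path_bound((0, 0), (30, 0), [8.0, 12.0]))

Note that such a bound contains no vision parameters at all; incorporating r_v, resolution, and scene characteristics is precisely what makes a vision counterpart hard to obtain.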