
Accordingly, work in this area is usually aimed at complete exploration of objects. One such application is visual exploration of objects (see, e.g., Refs. 63 and 76): one attempts, for example, to come up with an economical way of automatically manipulating an object on the supermarket counter in order to locate the bar code on it.
Extending our go-from-A-to-B problem to mobile robot navigation in three-dimensional space will likely necessitate "artificial" constraints on the robot environment (which we were lucky not to need in the two-dimensional case), such as constraints on the shapes of objects, the robot's shape, some recognizable properties of objects' surfaces, and so on. One area where constraints appear naturally, as part of the system kinematic design, is motion planning for three-dimensional arm manipulators. The very fact that the arm links are tied into a kinematic structure and that the arm's base is bolted in place provides additional constraints that can be exploited in three-dimensional sensor-based motion planning algorithms. This is an exciting area, with much theoretical insight and much practical importance. We will consider such schemes in Chapter 6.


            3.9 WHICH ALGORITHM TO CHOOSE?

With the variety of existing sensor-based approaches and algorithms, one is entitled to ask: How do I choose the right sensor-based planning algorithm for my job? When addressing this question, we can safely exclude the Class 1 algorithms: for the reasons mentioned above, they are of little use in practice except in very special cases.
As to Class 2, while different algorithms from this group usually produce different paths, one would be hard-pressed to recommend one of them over the others. As we have seen above, if algorithm A performs better than algorithm B in a given scene, their luck may reverse in the next scene. For example, in the scene shown in Figures 3.15 and 3.21, algorithm VisBug-21 outperforms algorithm VisBug-22, and then the opposite happens in the scene shown in Figure 3.23. One is left with the impression that, when used with more advanced sensing such as vision or range finding, just about any algorithm will do in terms of motion planning skill, as long as it guarantees convergence.
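
As a hedged illustration of how one might verify this scene-by-scene reversal empirically (this harness is not part of the book; the Scene fields, the planner interface, and the planners themselves are assumptions), one can feed the same scenes to two convergent planners and compare the lengths of the paths they generate:

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional, Tuple

    Point = Tuple[float, float]

    @dataclass
    class Scene:
        name: str
        start: Point
        target: Point
        obstacles: list = field(default_factory=list)  # representation left abstract

    # Hypothetical planner interface: a planner maps a scene to the path it
    # generates (a list of points), or to None if it concludes the target is
    # unreachable. For convergent planners, None never means "gave up."
    Planner = Callable[[Scene], Optional[List[Point]]]

    def path_length(path: List[Point]) -> float:
        """Sum of Euclidean distances between consecutive path points."""
        return sum(
            ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(path, path[1:])
        )

    def compare(planner_a: Planner, planner_b: Planner,
                scenes: List[Scene]) -> None:
        # Run both planners on every scene; expect the winner to flip from
        # scene to scene, as observed for VisBug-21 versus VisBug-22.
        for scene in scenes:
            pa, pb = planner_a(scene), planner_b(scene)
            if pa is None or pb is None:
                print(f"{scene.name}: target not reachable")
                continue
            la, lb = path_length(pa), path_length(pb)
            print(f"{scene.name}: A = {la:.1f}, B = {lb:.1f}, "
                  f"shorter: {'A' if la < lb else 'B'}")

Averaging such comparisons over many scenes would only confirm the point made above: the winner depends on the scene, not on the algorithm.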
Some people like the concept of a benchmark example for comparing different algorithms. In our case this would be, say, a fixed benchmark scene with a fixed pair of start and target points. Today there is no such benchmark scene, and it is doubtful that a meaningful benchmark could be established. For example, the elaborate labyrinth in Figure 3.11 turns out to be very easy for the Bug2 algorithm, whereas the seemingly simpler scene in Figure 3.6 makes the same algorithm produce a tortuous path. It is conceivable that some other algorithm would have demonstrated an exemplary performance in the scene of Figure 3.6, only to look less brave in another scene. Adding vision tends to smooth algorithms' idiosyncrasies and to make different algorithms behave more similarly, especially in real-life scenes with relatively simple obstacles, but the relationship just described persists.
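
The same objection to a single benchmark scene can be stated quantitatively. As a sketch (again an assumption, reusing the hypothetical Scene type and path_length helper from the previous fragment), normalize each generated path by the straight-line start-to-target distance and look at the spread of that ratio across a suite of scenes; a wide spread for every algorithm is precisely what makes any one "benchmark" scene uninformative:

    from statistics import mean, pstdev

    def detour_ratio(planner, scene) -> float:
        """Generated path length divided by the straight-line
        start-to-target distance."""
        (x1, y1), (x2, y2) = scene.start, scene.target
        direct = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        return path_length(planner(scene)) / direct

    def spread_report(planner, scenes) -> None:
        # A large spread means that performance in any single scene
        # says little about performance elsewhere.
        ratios = [detour_ratio(planner, s) for s in scenes]
        print(f"mean detour ratio {mean(ratios):.2f}, "
              f"std across scenes {pstdev(ratios):.2f}")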