
            it. Ultrasound sensors can do this measurement easily, but their resolution is
            poor.
              One possible strategy is to adhere to a binary “yes–no” measurement. In a
            sensor with a limited sensitivity range of, say, 20 cm, the “yes” signal tells
            the robot that at the moment of detection the object was at a distance of 20 cm
            from the robot body. The technique can be improved by replacing the single sensor
            with a small cluster of sensors, each adjusted to a different turn-on sensitivity
            range. The cluster then provides a crude measurement of the distance to the object.
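
            As a rough illustration, not taken from the text, the short Python sketch below
            shows how the firing pattern of such a cluster brackets the distance to the
            nearest object; the turn-on ranges used are assumed example values.

# Sketch: crude distance estimate from a cluster of binary proximity sensors,
# each adjusted to a different turn-on range (illustrative values only).
from typing import Sequence, Tuple

def bracket_distance(readings: Sequence[bool],
                     ranges_cm: Sequence[float]) -> Tuple[float, float]:
    """Return (lower, upper) bounds on the distance to the nearest object.

    readings[i] is True if the sensor with turn-on range ranges_cm[i] fires;
    ranges_cm is assumed to be sorted in increasing order.
    """
    fired = [r for r, on in zip(ranges_cm, readings) if on]
    silent = [r for r, on in zip(ranges_cm, readings) if not on]
    upper = min(fired) if fired else float("inf")  # closer than the smallest range that fired
    lower = max(silent) if silent else 0.0         # farther than the largest range that stayed silent
    return lower, upper

# Four sensors with 5, 10, 15, and 20 cm turn-on ranges; only the 15 cm and
# 20 cm sensors fire, so the object lies roughly between 10 and 15 cm away.
print(bracket_distance([False, False, True, True], [5, 10, 15, 20]))  # (10, 15)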


            Sensors’ Physical Principle of Action. Given how powerful vision sensing is,
            it is tempting to think of vision as the best candidate for robot whole-body
            sensing. The following discussion shows that this is not so: Vision is very
            useful, but not universally so. Here are two practical rules of thumb:

              1. When the size of the workspace in which the robot operates is significantly
                 larger than the robot’s own dimensions—as, for example, in the case of
                 mobile robot vehicles—vision (or a laser ranger) is very useful for
                 motion planning.
              2. When the size of the robot workspace is comparable to the robot dimen-
                 sions—as in the case of robot arm manipulators—proximal sensing other
                 than vision will play the primary role. Vision may be useful as well—for
                 example, for task execution by the arm’s end effector.
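
            Stated as code, and with an assumed threshold for “significantly larger,” the
            two rules above reduce to a simple decision on the ratio of workspace size to
            robot size; the sketch below is illustrative only.

# Toy restatement of the two rules of thumb; the 3x threshold for
# "significantly larger" is an assumed example value, not from the text.

def primary_sensing(workspace_size_m: float, robot_size_m: float) -> str:
    """Suggest the primary sensing modality for motion planning."""
    if workspace_size_m > 3.0 * robot_size_m:
        return "vision or laser ranging"      # rule 1: e.g., mobile robot vehicles
    return "whole-body proximal sensing"      # rule 2: e.g., arm manipulators

print(primary_sensing(50.0, 1.0))  # mobile robot in a large space
print(primary_sensing(2.0, 1.5))   # arm manipulator in a work cell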

              Let us start with mobile robot vehicles. When planning its path, a mobile
            robot’s motion control unit will benefit from seeing relatively far in the direction
            of intended motion. If the robot is, say, about a meter in diameter and standing
            about a meter tall, with sensors on its top, seeing the scene at 10–20 meters
            would be both practical and useful for motion planning. Vision is perfect for
            that: Much as with human vision, a single camera or, better, a two-camera
            stereo pair will provide enough information for motion planning. Remember,
            however, that the full coverage requirement prescribes the ability to foresee
            potential collisions at every point of the robot body, at all times. If the
            mobile robot moves among many small obstacles, which may occlude one another,
            may not be visible from afar, and may appear underneath the robot or at its
            sides, then even a few additional cameras would not suffice to capture those
            details.
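
            To make the range claim concrete, here is a small sketch, again my own
            illustration rather than the text’s, of how a rectified two-camera stereo pair
            yields the distance to a scene point from its pixel disparity; the focal length
            and baseline are assumed example values.

# Sketch: depth from a rectified stereo pair, Z = f * B / d (illustrative values).

def stereo_depth_m(disparity_px: float,
                   focal_length_px: float = 700.0,   # assumed focal length, in pixels
                   baseline_m: float = 0.12) -> float:  # assumed camera baseline, meters
    """Distance to a point whose image shifts by disparity_px between the two views."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point at infinity or a mismatch")
    return focal_length_px * baseline_m / disparity_px

# An obstacle whose image shifts by 6 pixels between the cameras lies about
# 14 m away, within the 10-20 m look-ahead mentioned above.
print(round(stereo_depth_m(6.0), 1))  # 14.0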
              The need for sensing in the vicinity of the robot becomes even stronger for
            arm manipulators. The reason is simple: Since the arm’s base is fixed, it can reach
            only a limited volume defined by its own dimensions. If vision were the
            candidate, where would we attach the cameras to guarantee full coverage? Should
            they be mounted on the robot, placed on the walls of the robot work cell, or
            both?
              A simple drawing would show that under any of these options even a large
            number of cameras—which is impractical anyway—would not guarantee full sensing
            coverage. Occlusion of one robot link by another link, or by cables that carry