
           other blindfolded. Envision each of them walking in the same direction around
           the perimeter of a complex-shaped building. The path of the person with sight
           will be (at least, often enough) a shorter approximation of the path of the blind-
           folded person.
The second algorithm, called VisBug-22, is more opportunistic in nature: it tries to use every chance to get closer to the target. (The number in its name signifies that it is vision algorithm 2, based on the Bug2 procedure.)
              Section 3.6.1 is devoted to the algorithms’ underlying model and basic ideas.
           The algorithms themselves, related analysis, and examples demonstrating the
           algorithms’ performance appear in Sections 3.6.2 and 3.6.3.

           3.6.1 The Model
           Our assumptions about the scene in which the robot travels and about the robot
           itself are very much the same as for the basic algorithms (Section 3.1). The avail-
           able input information includes knowing at all times the robot’s current location,
C, and the locations of the starting and target points, S and T. We also assume that the robot's memory is very limited: it can remember no more than a few "interesting" points.
The difference between the two models relates to the robot's sensing ability. In the case at hand the robot has a capability, referred to as vision, to detect an obstacle, and the distance to any visible point of it, along any direction from point C, within the sensor's field of vision. The field of vision is a disc of radius r_v, called the radius of vision, centered at C. A point Q in the scene is visible if it is located within the field of vision and if the straight-line segment CQ does not cross any obstacles.
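To make the visibility definition concrete, here is a minimal sketch of a visibility test, assuming obstacles are given as a list of boundary edges (the text does not prescribe a particular obstacle representation; the function names and the edge-list format are illustrative):

    import math

    def segments_intersect(p1, p2, q1, q2):
        # True if segments p1-p2 and q1-q2 properly cross
        # (touching exactly at an endpoint is ignored for brevity).
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
        d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
        return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

    def is_visible(C, Q, r_v, obstacle_edges):
        # Q is visible from C if it lies within the radius of vision r_v
        # and the straight-line segment CQ crosses no obstacle edge.
        if math.dist(C, Q) > r_v:
            return False
        return not any(segments_intersect(C, Q, a, b) for a, b in obstacle_edges)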
The robot is capable of using its vision to scan its surroundings; during a scan it identifies the obstacles, or the lack thereof, that intersect its field of vision. We
           will see that the robot will use this capability rather sparingly; the particular use
           of scanning will depend on the algorithm. Ideally the robot will scan a part of
           the scene only in those specific directions that make sense from the standpoint of
           motion planning. The robot may, for example, identify some intermediate target
           point within its field of vision and walk straight toward that point. Or, in an
           “unfortunate” (for its vision) case when the robot walks along the boundary of a
           convex obstacle, its effective radius of vision in the direction of intended motion
           (that is, around the obstacle) will shrink to zero.
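The shrinking of the effective radius of vision can be expressed as a simple ray cast: along a given direction the robot sees either the full distance r_v or only as far as the first obstacle edge the ray meets. A sketch under the same assumed edge-list representation (ray_segment_hit and effective_range are illustrative names, not the book's notation):

    import math

    def ray_segment_hit(p, q, a, b):
        # Fraction t in [0, 1] at which segment p-q first meets segment
        # a-b, or None if the segments do not intersect.
        r = (q[0] - p[0], q[1] - p[1])
        s = (b[0] - a[0], b[1] - a[1])
        denom = r[0] * s[1] - r[1] * s[0]
        if denom == 0:
            return None                  # parallel; collinear case ignored
        t = ((a[0] - p[0]) * s[1] - (a[1] - p[1]) * s[0]) / denom
        u = ((a[0] - p[0]) * r[1] - (a[1] - p[1]) * r[0]) / denom
        return t if 0 <= t <= 1 and 0 <= u <= 1 else None

    def effective_range(C, theta, r_v, obstacle_edges):
        # Visible distance from C along direction theta: r_v in free
        # space, shrinking toward zero when an obstacle blocks the ray.
        far = (C[0] + r_v * math.cos(theta), C[1] + r_v * math.sin(theta))
        hits = [t * r_v for a, b in obstacle_edges
                for t in [ray_segment_hit(C, far, a, b)] if t is not None]
        return min(hits, default=r_v)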
As before, the straight-line segment (S, T) between the start S and target T points, called the Main line or M-line, is the desirable path. Given its current position C_i at moment i, the robot will execute an elementary operation that includes scanning some minimum sector of its current field of vision in the direction it is following, enough to define its next intermediate target, point T_i. Then the robot makes a little step in the direction of T_i, and the process repeats. T_i is thus a moving target; its choice will somehow relate to the robot's global goal. In the algorithms, every T_i will lie on the M-line or on an obstacle boundary.
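The step-by-step process just described has a simple generic skeleton, sketched below. The scan that selects T_i is exactly what distinguishes VisBug-21 from VisBug-22, so it is left here as a callable parameter; the names step, choose_intermediate_target, and at_target are illustrative, and the algorithm-specific selection rules appear in Sections 3.6.2 and 3.6.3:

    import math

    def visbug_motion_loop(S, T, step, choose_intermediate_target, at_target):
        # Repeat the elementary operation: scan to pick the next
        # intermediate target T_i, then take a small step toward it.
        C = S
        path = [C]
        while not at_target(C, T):
            Ti = choose_intermediate_target(C, T)   # result of the sensing scan
            dx, dy = Ti[0] - C[0], Ti[1] - C[1]
            d = math.hypot(dx, dy)
            C = Ti if d <= step else (C[0] + step * dx / d, C[1] + step * dy / d)
            path.append(C)
        return path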
For a path segment whose points T_i move along the M-line, the first such T_i that lies at the intersection of the M-line and an obstacle is a special point called the hit point, H. Recall that in algorithms Bug1 or Bug2 a hit point would be