uncertainty involved; then the input information is, by definition, incomplete and
is likely obtained in real time from the robot's sensors.
Note the algorithmic consequences of this distinction. If complete information
about the workspace is available, a reasonable method to proceed is to build a
model of the robot and its workspace and use this model for motion planning. The
significant effort that is likely needed to build the model will be fully justified
by the path computed from this model. If, however, little or nothing is known
beforehand, it makes little sense to spend effort on building a model of doubtful
relevance to reality.
In the above situation (b), the robot hence needs to “think” differently. From
its limited sensing data, it may be able to infer some topological properties of
space. It may be able to infer, for example, whether the two objects it sees from
its current position are actually parts of the same object. If the conclusion
is “yes,” the robot will not try to pass between these two “objects.” If the
conclusion is “no,” the robot will know that it is dealing with separate objects
and may choose to pass between them. The objects’ actual shapes will be of little
concern to the robot.
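To make this concrete, here is a toy sketch in Python of one such inference from a single range scan. It is invented for illustration and is not taken from the book: the function name, the thresholds, and the scan data are all assumptions. The idea is simply that a large spatial jump between consecutive obstacle readings suggests two distinct objects with free space between them, while closely spaced readings suggest one object.

    import math

    def gap_between_patches(scan, robot_width, max_range):
        """scan: (angle_rad, distance) pairs sorted by angle.
        Return True if two apparent obstacle patches are separated by a
        gap wide enough for the robot to pass between them."""
        # Readings at the sensor's maximum range are treated as free space.
        hits = [(a, r) for a, r in scan if r < max_range]
        for (a1, r1), (a2, r2) in zip(hits, hits[1:]):
            # Straight-line distance between consecutive obstacle points;
            # a jump much larger than the scan's angular spacing suggests
            # two distinct objects with free space between them.
            gap = math.hypot(r1 * math.cos(a1) - r2 * math.cos(a2),
                             r1 * math.sin(a1) - r2 * math.sin(a2))
            if gap > robot_width:
                return True      # "no": separate objects, passing may be possible
        return False             # "yes": one object, do not try to pass between

    # Two patches roughly 0.7 m apart at these ranges; a 0.5 m wide robot fits.
    scan = ([(math.radians(a), 1.0) for a in range(0, 30, 5)] +
            [(math.radians(a), 5.0) for a in range(30, 60, 5)] +   # free space
            [(math.radians(a), 1.2) for a in range(60, 90, 5)])
    print(gap_between_patches(scan, robot_width=0.5, max_range=5.0))  # True

Note that the chord between the two patch edges only approximates the passable width; a real decision rule would account for the robot's shape and the scan geometry.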
What type of sensing is suitable for competent motion planning? It turns
out that just about any sensing is fine: tactile, sonar, vision, laser ranger, infrared
proximity, and so on. We will learn a remarkable result that says that even the
simplest tactile sensing, when used with proper motion planning algorithms, can
guarantee that the robot will reach its target (provided that the target is reachable).
In fact, we will consistently prefer tactile sensing when developing algorithms,
before attempting richer sensing media; this will allow us to clarify the issues
involved. This is not to say that one should prefer tactile sensors in real tasks:
Just as a blind person will likely produce a more circuitous route than a sighted
person, so will a robot limited to tactile sensing.
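To give a foretaste of this result, below is a minimal sketch of one algorithm of this kind, in the style of the Bug2 procedure, simplified here to a four-connected grid. The grid discretization, the wall-following rule, and the step budget are assumptions made for this example; the book's formulation works in the continuous plane. The robot knows only its own and the target's coordinates, senses nothing but contact with adjacent cells, and still reaches the target: it walks the straight start-target line and, upon hitting an obstacle, follows the obstacle boundary until it meets the line again at a point closer to the target.

    # Sketch of a Bug2-style strategy on a 4-connected grid (illustrative
    # toy, not the book's algorithm). "Tactile" sensing here means the
    # robot can only test whether a cell adjacent to it is blocked.

    DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # N, E, S, W

    def m_line(p, q):
        """4-connected discretization of the straight start-target line."""
        (x0, y0), (x1, y1) = p, q
        dx, dy = abs(x1 - x0), abs(y1 - y0)
        sx = 1 if x1 >= x0 else -1
        sy = 1 if y1 >= y0 else -1
        err, cells = dx - dy, [(x0, y0)]
        while (x0, y0) != (x1, y1):
            if x0 != x1 and (y0 == y1 or 2 * err > -dy):
                err -= dy; x0 += sx
            else:
                err += dx; y0 += sy
            cells.append((x0, y0))
        return cells

    def bug2(grid, start, goal, max_steps=10_000):
        rows, cols = len(grid), len(grid[0])
        def free(c):
            return 0 <= c[0] < rows and 0 <= c[1] < cols and grid[c[0]][c[1]] == 0
        line = m_line(start, goal)
        index = {cell: k for k, cell in enumerate(line)}   # position along the M-line
        pos, i, d, path = start, 0, None, [start]          # d is None while on the M-line
        for _ in range(max_steps):
            if pos == goal:
                return path
            if d is None:
                ahead = line[i + 1]
                if free(ahead):                            # M-line cell free: step toward target
                    pos, i = ahead, i + 1
                    path.append(pos)
                    continue
                d = DIRS.index((ahead[0] - pos[0], ahead[1] - pos[1]))  # hit point
            else:
                d = (d - 1) % 4                            # prefer a left turn (hug the wall)
            while not free((pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])):
                d = (d + 1) % 4                            # rotate right until a free cell
            pos = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
            path.append(pos)
            j = index.get(pos)
            if j is not None and j > i:                    # met the M-line closer to target
                i, d = j, None
        return None                                        # step budget exhausted

    grid = [[0, 0, 0, 0, 0],                               # 1 = obstacle, 0 = free
            [0, 1, 1, 1, 0],
            [0, 0, 0, 1, 0],
            [0, 0, 0, 0, 0]]
    print(bug2(grid, (0, 0), (3, 4)))

On this small map the robot detours around the obstacle block and arrives at the target cell. When the target is sealed off, this sketch merely exhausts its step budget and reports failure; the full algorithms instead detect unreachability exactly, by recognizing a completed loop around the obstacle.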
Being serious about collision avoidance means that robot motion planning
algorithms must protect the whole robot body, every one of its points. Accord-
ingly, robot sensors must provide sufficient input information. Intuitively, this
requirement is not hard to understand for mobile robots. Existing mobile robots
typically have a camera or a range finder that rotates as needed, or sonar sensors
that cover the robot's whole circumference.
Intuition is less helpful when talking about arm manipulators. Again, sensors
can be of any type: tactile, proximal, vision, and so on. What is harder to grasp
but is absolutely necessary is a guarantee that the arm has sensing data regarding
all points of its body. No blind spots are allowed.
We tend not to notice how strictly this requirement is followed in humans and
animals. We often tie our ability to move around solely to our vision. True,
when I walk, my vision is typically the sole source of input information. I may
not be aware of, and not interested in, objects on my sides or behind me. If
something worthwhile appears on the sides, I can turn my head and look there.
However, if I attempt to sit down and the seat happens to have a nail
sticking out of it, I will be made aware of this fact immediately and will plan my
ensuing motions quickly and efficiently. If a small rock finds its way into my