Hard Navigation vs. Fuzzy Navigation
The concept of fuzzy navigation
There is an old saying, “Don’t believe anything you hear, and only half of what you see!”
This could be the slogan for fuzzy navigation. When police interrogate suspects,
they ask the same questions repeatedly, in different ways. This iterative process is
designed to filter out the lies and uncover the truth.
We could simply program our robot to collect a large number of fixes, and then sort
through them for the ones that agree with each other. Unfortunately, as it was
doing this, our robot would be drifting dangerously off course. We need a solution
that responds minimally to bad information, and quickly accepts true information.
The trick is therefore to believe fixes more or less aggressively according to their quality.
If a fix is at the edge of the believable, then we will only partially believe it. If this is
done correctly, the system will converge on the truth, and will barely respond at all
to bad data. But how do we quantify the quality of a fix? There are two elements to
quality:
1. Feature image quality
2. Correction quality
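
Before looking at each element, it may help to see the partial-belief idea itself in
miniature. The following is a minimal sketch, not the book’s implementation: it
assumes a one-dimensional position estimate and a quality weight between 0 and 1,
and the names (blend_fix, max_gain) are hypothetical.

# Hypothetical sketch: blend a navigation fix into the position
# estimate in proportion to its quality (0 = ignore, 1 = trust fully).
def blend_fix(estimate, fix, quality, max_gain=0.5):
    # max_gain caps how far any single fix can pull the estimate, so a
    # plausible-but-wrong fix causes only a small correction, while a
    # stream of consistent good fixes converges on the truth.
    gain = max_gain * quality
    return estimate + gain * (fix - estimate)

# Example: repeated good fixes converge; one outlier barely moves us.
x = 0.0  # current estimate of lateral position (meters)
for fix, q in [(1.0, 0.9), (1.05, 0.9), (4.0, 0.1), (0.95, 0.9)]:
    x = blend_fix(x, fix, q)
    print(f"fix={fix:+.2f}  quality={q:.1f}  ->  estimate={x:+.3f}")

Run in sequence, the consistent high-quality fixes pull the estimate toward the
truth, while the low-quality outlier moves it only slightly.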
Feature image quality
The image quality factor will depend largely on the nature of the sensor system and
the feature it is imaging. For example, if the feature were a straight section of wall,
then the feature image quality would obviously be derived from how well the sensor
readings match a straight line. If the feature is a doorway, then the image data qual-
ity will be based on whether the gap matches the expected dimensions, and so forth.
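
For the wall case just described, one way to compute such a score is to fit a straight
line to the readings and map the RMS residual to a value between 0 and 1. This is
only a sketch under assumed conventions; the function name, the ordinary least-squares
fit, and the tolerance figure are illustrative, not the author’s method.

import math

# Hypothetical sketch: score how well (x, y) readings fit a straight
# line. tolerance is the RMS deviation (meters) at which quality
# reaches zero.
def wall_image_quality(points, tolerance=0.05):
    n = len(points)
    if n < 3:
        return 0.0                      # too few readings to judge
    mx = sum(p[0] for p in points) / n  # mean of x
    my = sum(p[1] for p in points) / n  # mean of y
    sxx = sum((p[0] - mx) ** 2 for p in points)
    if sxx == 0.0:
        return 0.0                      # vertical wall; treat separately
    slope = sum((p[0] - mx) * (p[1] - my) for p in points) / sxx
    # RMS of the residuals about the fitted line
    rms = math.sqrt(sum((p[1] - my - slope * (p[0] - mx)) ** 2
                        for p in points) / n)
    return max(0.0, 1.0 - rms / tolerance)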
The first level of sensor processing is simply to collect data that could possibly be
associated with each feature. This means that only readings from the expected
position of the feature should be collected for further image processing. This is the
first place that our uncertainty estimate comes into use.
Figure 11.4 shows a robot imaging a column. Since the robot’s own position is uncertain,
it is possible the feature will be observed within an area that is the mirror image of the
robot’s own uncertainty. For example, if the robot is actually a meter closer to the
feature than its position estimate indicates, then to the robot the feature will appear
to be a meter closer than expected. The center of the feature may thus be in an area
the size of the robot’s uncertainty around the known (programmed) position of the
feature.
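
A sketch of this collection step might look like the following. It assumes readings
in map coordinates and a simple rectangular uncertainty window; the names and the
rectangular gate are hypothetical simplifications of whatever gating a real system
would use.

# Hypothetical sketch: keep only readings that fall inside the
# feature's search window. Because a robot position error appears as
# an equal, opposite displacement of the feature, the window is the
# feature's programmed position grown by the robot's own uncertainty.
def candidate_readings(readings, feature_pos, uncertainty):
    fx, fy = feature_pos        # programmed (x, y) of the feature
    ux, uy = uncertainty        # robot position uncertainty in x and y
    return [(x, y) for (x, y) in readings
            if abs(x - fx) <= ux and abs(y - fy) <= uy]

Only the readings that survive this gate are passed on to the image-quality test
described above.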