Page 173 - Designing Autonomous Mobile Robots: Inside the Mind of an Intelligent Machine

Chapter 11

            For reasons to be explained shortly, the quality factor should be a number between
            zero and one. If we call the average magnitude of the errors of the implied centers E,
            then we could use the simple formula:

                       Q_i = (r – E) / r                                     (Equation 11.2)
            Here “r” is the known radius of the column. For E equal to zero, the quality is
            always 1. If the column radius were 4 meters and the average error of the implied
            centers were 1 meter, we would get an image quality factor of 75%. For E greater
            than r, the Q factor would be considered zero rather than negative. The better our
            quality factor represents the true validity of a feature, the more quickly the robot
            will home in on its true position, without being fooled by false sensor readings.
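As a concrete sketch of the formula and its clamping (in Python, which the book does not use; the function name is ours):

```python
def image_quality(r, e):
    """Quality factor Q = (r - E) / r, clamped to the range [0, 1].

    r -- known radius of the column (meters)
    e -- average magnitude of the implied-center errors (meters)
    """
    if e >= r:
        return 0.0  # errors as large as the radius: no confidence at all
    return (r - e) / r

# The worked example from the text: r = 4 m, E = 1 m
print(image_quality(4.0, 1.0))  # -> 0.75
```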

            For relatively small poles, the above formula may produce artificially low quality
            numbers because the resolution of the ranging system and its random noise errors
            become significant compared to the radius of the column. In such cases, we might
            want to replace the radius with a constant that is several times larger than the
            typical ranging errors. For reflective fiducials, the discrimination of the lidar
            sensor is good enough that an image quality of unity may be assumed.

            The goal is simply to achieve a relative representation of the quality of the image data. To
            this end, we must also consider whether there were enough points to be reasonably
            confident of the validity of the implied centers. We might arbitrarily decide that five
            good points or more are okay. For fewer points, we could reduce the image quality
            calculated above by, say, 25% for each point fewer than five. For two points, we
            have no cross check at all, so we will assign the column a 25% quality.
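The point-count reduction described above can be sketched as a scale factor applied to the computed quality (treating fewer than two points as yielding no usable circle is our reading of the text):

```python
def point_count_factor(n_points):
    """Scale factor for the image quality based on how many good points
    contributed to the implied center: 1.0 at five or more points,
    reduced by 25% for each point fewer than five (so two points -> 0.25),
    and 0.0 below two points, where no circle can be implied at all."""
    if n_points < 2:
        return 0.0
    if n_points >= 5:
        return 1.0
    return 1.0 - 0.25 * (5 - n_points)

print(point_count_factor(5), point_count_factor(3), point_count_factor(2))
# -> 1.0 0.5 0.25
```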

            It is important to realize that the robot’s program would simply specify that it was
            to use a column feature of a certain radius, at a certain expected location. This
            command in the robot’s program would then invoke the search for and processing of
            the column data. Unfortunately, our robot cannot determine its position from a single
            circular feature like a column. For this it would need to see at least two such features.

            The first thing we can do with the image quality is to compare it to a threshold and
            decide whether the image is good enough to use at all. If it is above the minimum
            threshold, we will save the calculated center and the quality for later use in our
            navigation process.
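A minimal sketch of that gating step; the threshold value and the tuple representation of a saved feature are our assumptions, not the book's:

```python
MIN_QUALITY = 0.5  # illustrative acceptance threshold, not from the text

def gate_feature(center_xy, quality, saved_features):
    """Keep the implied center only if its quality clears the minimum
    threshold; accepted features are stored for later use in navigation."""
    if quality >= MIN_QUALITY:
        saved_features.append((center_xy, quality))
        return True
    return False

features = []
gate_feature((12.0, 3.5), 0.75, features)   # accepted
gate_feature((11.8, 3.6), 0.30, features)   # rejected
print(len(features))  # -> 1
```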










