Page 199 - Introduction to Autonomous Mobile Robots

Chapter 5
eight sonars on a single platform. In acoustically reflective environments, multipath interference is possible between the sonar emissions of one transducer and the echo detection circuitry of another transducer. The result can be dramatically large errors (i.e., underestimation) in ranging values due to a set of coincidental angles. Such errors occur rarely, less than 1% of the time, and are virtually random from the robot's perspective.

In conclusion, sensor noise reduces the useful information content of sensor readings. Clearly, the solution is to take multiple readings into account, employing temporal fusion or multisensor fusion to increase the overall information content of the robot's inputs.
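As a concrete illustration of temporal fusion, consider the sketch below (an invented example, not from the text). Because the multipath errors described above are rare and appear as large underestimates, a simple median over repeated readings of the same target rejects them:

```python
import statistics

def fused_range(readings):
    """Fuse repeated sonar readings of a single stationary target.

    The median discards the rare, large underestimates caused by
    multipath interference; averaging the surviving samples could
    further reduce zero-mean noise.
    """
    return statistics.median(readings)

# Nine readings of a wall at roughly 2.0 m, including one
# multipath underestimate at 0.4 m.
samples = [1.98, 2.02, 2.01, 0.40, 1.99, 2.00, 2.03, 1.97, 2.01]
print(fused_range(samples))  # -> 2.0
```

A mean over the same samples would be pulled down by the 0.4 m outlier, which is why a rank-order statistic is the natural first step before averaging.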

5.2.2 Sensor aliasing

A second shortcoming of mobile robot sensors causes them to yield little information content, further exacerbating the problem of perception and, thus, localization. The problem, known as sensor aliasing, is a phenomenon that humans rarely encounter. The human sensory system, particularly the visual system, tends to receive unique inputs in each unique local state. In other words, every different place looks different. The power of this unique mapping is only apparent when one considers situations where it fails to hold. Consider moving through an unfamiliar building that is completely dark. When the visual system sees only black, one's localization system quickly degrades. Another useful example is that of a human-sized maze made from tall hedges. Such mazes have been created for centuries, and humans find them extremely difficult to solve without landmarks or clues because, without visual uniqueness, human localization competence degrades rapidly.
In robots, the nonuniqueness of sensor readings, or sensor aliasing, is the norm and not the exception. Consider a narrow-beam rangefinder such as an ultrasonic or infrared rangefinder. This sensor provides range information in a single direction without any additional data regarding material composition such as color, texture, and hardness. Even for a robot with several such sensors in an array, there are a variety of environmental states that would trigger the same sensor values across the array. Formally, there is a many-to-one mapping from environmental states to the robot's perceptual inputs. Thus, the robot's percepts cannot distinguish among these many states. A classic problem with sonar-based robots involves distinguishing between humans and inanimate objects in an indoor setting. When facing an apparent obstacle, should the robot say "Excuse me" because the obstacle may be a moving human, or should it plan a path around the object because it may be a cardboard box? With sonar alone, these states are aliased and differentiation is impossible.
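The many-to-one mapping can be made concrete with a toy sketch (hypothetical state representation, not from the text): two physically different environmental states yield the identical percept, so no function of the percept alone can separate them.

```python
def sonar_percept(state):
    # A sonar reports only range; material, color, and whether the
    # obstacle is animate are all invisible to it.
    return round(state["range_m"], 2)

# Two distinct environmental states...
human = {"range_m": 1.00, "kind": "person"}
box = {"range_m": 1.00, "kind": "cardboard box"}

# ...map to one and the same percept: the states are aliased.
print(sonar_percept(human) == sonar_percept(box))  # -> True
```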
The problem that sensor aliasing poses for navigation is that, even with noise-free sensors, the amount of information available is generally insufficient to identify the robot's position from a single percept. The robot programmer must therefore employ techniques that base localization on a series of readings, accumulating sufficient information to recover the robot's position over time.
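One standard way to base localization on a series of readings is a discrete Bayes (histogram) filter over a map. The minimal sketch below (an illustrative example with an invented corridor map and sensor probabilities, not an algorithm from the text) shows how a single "door" reading is aliased between two cells, while a move-and-sense sequence concentrates the belief on the true cell:

```python
# A 1-D corridor map: the feature visible from each cell.
world = ["door", "wall", "door", "wall", "wall"]

def normalize(b):
    s = sum(b)
    return [p / s for p in b]

def sense(belief, measurement, p_hit=0.9, p_miss=0.1):
    # Weight each cell by how well it explains the reading (Bayes update).
    return normalize([p * (p_hit if world[i] == measurement else p_miss)
                      for i, p in enumerate(belief)])

def move_right(belief):
    # Deterministic one-cell shift (cyclic) for simplicity.
    return belief[-1:] + belief[:-1]

belief = [1 / len(world)] * len(world)      # uniform prior: position unknown
belief = sense(belief, "door")              # aliased: cells 0 and 2 tie
belief = sense(move_right(belief), "wall")  # move, sense: still two candidates
belief = sense(move_right(belief), "door")  # the sequence resolves the tie
print(max(range(len(belief)), key=belief.__getitem__))  # -> 2
```

After the first reading the belief is split equally between the two door cells; only the accumulated sequence door, wall, door is consistent with a single starting cell, so the posterior peaks at cell 2, the robot's true position after two moves.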