Page 227 - Introduction to Autonomous Mobile Robots
Chapter 5
possible using off-the-shelf robot sensors, including heat, range, acoustic and light-based
reflectivity, color, texture, friction, and so on. Sensor fusion is a research topic closely
related to map representation. Just as a map must embody an environment in sufficient
detail for a robot to perform localization and reasoning, sensor fusion demands a represen-
tation of the world that is sufficiently general and expressive that a variety of sensor types
can have their data correlated appropriately, strengthening the resulting percepts well
beyond that of any individual sensor’s readings.
Perhaps the only general implementation of sensor fusion to date is the neural network classifier. Using this technique, any number and any type of sensor values may be jointly combined in a network that will use whatever means necessary to optimize its classification accuracy. For the mobile robot that must use a human-readable internal map representation, no equally general sensor fusion scheme has yet emerged. It is reasonable to
expect that, when the sensor fusion problem is solved, integration of a large number of dis-
parate sensor types may easily result in sufficient discriminatory power for robots to
achieve real-world navigation, even in wide-open and dynamic circumstances such as a
public square filled with people.
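To make the neural-network-classifier idea concrete, the following sketch trains a single-layer logistic classifier (the simplest neural classifier) that jointly weights several sensor channels to discriminate two object classes. The sensor channels, the class labels ("wall" versus "person"), and all numeric values are illustrative assumptions, not taken from the text; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fused sensor vectors: [range (m), heat, reflectivity].
# Hypothetical classes: "wall" (cold, highly reflective) vs. "person" (warm, less reflective).
n = 200
walls = rng.normal([2.0, 0.1, 0.9], 0.1, size=(n, 3))
people = rng.normal([2.0, 0.8, 0.4], 0.1, size=(n, 3))
X = np.vstack([walls, people])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Single-layer logistic classifier: one weight per sensor channel,
# trained by gradient descent on the cross-entropy loss.
w = np.zeros(3)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted class-1 probability
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient w.r.t. weights
    b -= lr * np.mean(p - y)                # gradient w.r.t. bias

acc = np.mean((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

Note that the classifier discovers on its own which channels are discriminative (here, heat and reflectivity); no hand-crafted per-sensor model is required, which is exactly the generality the text attributes to this approach.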
5.6 Probabilistic Map-Based Localization
5.6.1 Introduction
As stated earlier, multiple-hypothesis position representation is advantageous because the
robot can explicitly track its own beliefs regarding its possible positions in the environment.
Ideally, the robot’s belief state will change over time in a manner consistent with its motor outputs and perceptual inputs. One geometric approach to multiple-hypothesis representation, mentioned earlier, involves identifying the possible positions of the robot by specifying a polygon in the environmental representation [98]. This method, however, provides no indication of the relative likelihood of the various possible robot positions.
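A minimal sketch of this polygonal belief representation, under the simplifying assumptions that the belief region is a convex polygon and that a sensor reading can be expressed as a linear (half-plane) constraint, is the standard half-plane clipping step below. The specific constraint and region are illustrative, not from the text.

```python
def clip_halfplane(poly, a, b, c):
    """Clip a convex polygon (list of (x, y) vertices) against a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        p_in = a * p[0] + b * p[1] <= c
        q_in = a * q[0] + b * q[1] <= c
        if p_in:
            out.append(p)
        if p_in != q_in:
            # Edge crosses the constraint boundary: keep the intersection point.
            t = (c - a * p[0] - b * p[1]) / (a * (q[0] - p[0]) + b * (q[1] - p[1]))
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

# Belief: the robot is somewhere in the unit square.
belief = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
# A hypothetical range reading implies x <= 0.4; the belief polygon shrinks.
belief = clip_halfplane(belief, 1.0, 0.0, 0.4)
print(belief)
```

Every point of the resulting polygon remains a possible robot position, but, as the text observes, all points are treated as equally plausible; the representation cannot say that one region of the polygon is more likely than another.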
Probabilistic techniques differ from this because they explicitly assign probabilities to the possible robot positions, and for this reason these methods have been the focus of
recent research. In the following sections we present two classes of probabilistic localiza-
tion. The first class, Markov localization, uses an explicitly specified probability distribu-
tion across all possible robot positions. The second method, Kalman filter localization, uses
a Gaussian probability density representation of robot position and scan matching for local-
ization. Unlike Markov localization, Kalman filter localization does not independently con-
sider each possible pose in the robot’s configuration space. Interestingly, the Kalman filter
localization process results from the Markov localization axioms if the robot’s position
uncertainty is assumed to have a Gaussian form [3, pp. 43-44].
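The distinction between the two classes can be previewed with a toy Markov localization run: an explicit probability distribution over every cell of a discretized environment, updated by a prediction step (motion) and a correction step (measurement). The one-dimensional corridor, the door positions, and the sensor/motion noise values below are illustrative assumptions, not from the text.

```python
import numpy as np

# A 1-D corridor of 10 cells; hypothetical door landmarks at cells 1 and 4.
world = np.zeros(10)
world[[1, 4]] = 1.0

belief = np.full(10, 0.1)  # uniform prior: the robot could be anywhere

def predict(bel, noise=0.1):
    """Motion update: commanded one cell to the right, with slip noise."""
    return (1 - 2 * noise) * np.roll(bel, 1) + noise * bel + noise * np.roll(bel, 2)

def correct(bel, z, hit=0.8, miss=0.2):
    """Measurement update: z = 1 means the door sensor fired."""
    bel = bel * np.where(world == z, hit, miss)
    return bel / bel.sum()

# The robot moves right four times, seeing door, no door, no door, door --
# a pattern that matches only the stretch from cell 1 to cell 4.
for z in (1, 0, 0, 1):
    belief = predict(belief)
    belief = correct(belief, z)

print("most likely cell:", int(np.argmax(belief)))
```

Note how the belief is maintained for all ten cells at every step, exactly as in Markov localization; a Kalman filter would instead track a single Gaussian (mean and variance) rather than the full distribution.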
Before discussing each method in detail, we present the general robot localization prob-
lem and solution strategy. Consider a mobile robot moving in a known environment. As it