Figure 5.28
Schematic for Kalman filter mobile robot localization (see [23]). [Block diagram: encoder data drives position prediction; the map (database) drives observation prediction of expected features; perception by the on-board sensors yields actual observations as raw sensor data or extracted features; predicted and actual observations are matched; the matches are fused into the position estimate.]
each relate to objects in the environment. Given a set of possible features, the Kalman filter
is used to fuse the distance estimate from each feature to a matching object in the map.
Instead of carrying out this matching process for many possible robot locations individually, as in the Markov approach, the Kalman filter accomplishes the same probabilistic update by treating the whole unimodal Gaussian belief state at once. Figure 5.28 depicts the schematic for Kalman filter localization.
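To make the contrast with the grid-based Markov representation concrete, the entire belief state here reduces to a mean pose and a covariance matrix. The following sketch is purely illustrative; the planar pose parameterization, variable names, and numerical values are assumptions rather than anything specified in the text.

```python
import numpy as np

# Unimodal Gaussian belief over the robot pose (x, y, theta):
# one mean vector and one covariance matrix stand in for the large
# grid of cell probabilities used in Markov localization.
# All numbers below are illustrative only.
mean = np.array([1.0, 2.0, 0.1])        # estimated pose [m, m, rad]
cov = np.diag([0.02, 0.02, 0.005])      # pose uncertainty (covariance)

def belief_density(pose):
    """Probability density of the belief at a given pose; the whole
    belief is captured by `mean` and `cov`, so it is updated at once."""
    d = pose - mean
    norm = np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(cov))
    return float(np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / norm)

print(belief_density(np.array([1.0, 2.0, 0.1])))  # density at the mean
```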
The first step is action update or position prediction, the straightforward application of a Gaussian error motion model to the robot’s measured encoder travel. The robot then collects actual sensor data and extracts appropriate features (e.g., lines, doors, or even the value of a specific sensor) in the observation step. At the same time, based on its predicted position in the map, the robot generates a measurement prediction that identifies the features the robot expects to find and the positions of those features. In matching, the robot identifies the best pairings between the features actually extracted during observation and the expected features due to measurement prediction. Finally, the Kalman filter fuses the information provided by all of these matches to update the robot’s belief state in estimation.
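A rough sketch of how these five steps could be strung together in code is given below. It is one possible EKF-style formulation under assumed models: `g`/`G_jac` are a generic motion model and its Jacobian, `h`/`H_jac` a generic feature measurement model and its Jacobian, `Q` and `R` the associated noise covariances, and the Mahalanobis-distance gate is a commonly used but here assumed matching criterion; none of these stand for the specific equations developed in the text.

```python
import numpy as np

def ekf_localization_step(mean, cov, odom, map_features, observations,
                          g, G_jac, h, H_jac, Q, R, gate=9.21):
    """One Kalman-filter localization cycle following the five steps in
    the text.  g/G_jac and h/H_jac are caller-supplied motion and
    measurement models with their Jacobians (placeholders, not the
    book's specific equations); Q and R are the corresponding noise
    covariances, and `gate` is an assumed validation threshold on the
    squared Mahalanobis distance."""
    # 1. Position prediction: propagate the Gaussian belief through the
    #    motion model using the measured encoder travel (odom).
    G = G_jac(mean, odom)
    mean = g(mean, odom)
    cov = G @ cov @ G.T + Q

    # 2. Observation: `observations` holds the features already
    #    extracted from the on-board sensor data (e.g., line segments).

    # 3. Measurement prediction: from the map and the predicted pose,
    #    compute where each known feature should appear to the sensors.
    predictions = [(f, h(mean, f)) for f in map_features]

    # 4. Matching: pair each observed feature with the predicted feature
    #    that minimizes the Mahalanobis distance, subject to the gate.
    # 5. Estimation: fuse every validated match into the belief state.
    for z in observations:
        best = None
        for f, z_hat in predictions:
            H = H_jac(mean, f)
            S = H @ cov @ H.T + R                    # innovation covariance
            v = z - z_hat                            # innovation
            d2 = float(v @ np.linalg.solve(S, v))    # squared Mahalanobis
            if d2 < gate and (best is None or d2 < best[0]):
                best = (d2, v, H, S)
        if best is not None:
            _, v, H, S = best
            K = cov @ H.T @ np.linalg.inv(S)         # Kalman gain
            mean = mean + K @ v
            cov = (np.eye(len(mean)) - K @ H) @ cov

    return mean, cov
```

In this sketch the validated matches are folded into the belief one at a time; stacking all matched innovations into a single batch update would be an equivalent alternative formulation.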
In the following sections these five steps are described in greater detail. The presentation
is based on the work of Leonard and Durrant-Whyte [23, pp. 61–65].