Page 192 - Introduction to Autonomous Mobile Robots
Perception
The catadioptric image is a 360-degree image warped onto a 2D image surface. Because
of this, it offers another critical advantage in terms of sensitivity to small-scale robot
motion. If the camera is mounted vertically on the robot so that the image represents the
environment surrounding the robot (i.e., its horizon), then rotation of the camera and robot
simply results in image rotation. In short, the catadioptric camera can be rotationally invariant with respect to its field of view.
Of course, mobile robot rotation will still change the image; that is, pixel positions will
change, although the new image will simply be a rotation of the original image. But we
intend to extract image features via histogramming. Because histogramming is a function
of the set of pixel values and not the position of each pixel, the process is pixel position
invariant. When combined with the catadioptric camera’s field of view invariance, we can
create a system that is invariant to robot rotation and insensitive to small-scale robot translation.
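The pixel-position invariance of histogramming is easy to verify directly: rotating an image permutes pixel positions but leaves the set of pixel values unchanged, so the histogram is identical. A minimal NumPy sketch, using a random array as a stand-in for a panoramic image:

```python
import numpy as np

# Random 8-bit image as a stand-in for a catadioptric panorama.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))

# A 90-degree rotation permutes pixel positions but not pixel values.
rotated = np.rot90(img)

hist_orig, _ = np.histogram(img, bins=256, range=(0, 256))
hist_rot, _ = np.histogram(rotated, bins=256, range=(0, 256))

# The histograms are therefore identical.
assert np.array_equal(hist_orig, hist_rot)
```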
A color camera’s output image generally contains useful information along multiple bands: r, g, and b values as well as hue, saturation, and luminance values. The simplest
histogram-based extraction strategy is to build separate 1D histograms characterizing each band. Given a color camera image G, the first step is to create mappings from G to each of the n available bands. We use G_i to refer to an array storing the values in band i for all pixels in G. Each band-specific histogram H_i is calculated as before:
• As preprocessing, smooth G_i using a Gaussian smoothing operator.
• Initialize H_i with n levels: H[j] = 0 for j = 1, …, n.
• For every pixel (x, y) in G_i, increment the histogram: H_i[G_i[x, y]] += 1.
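The steps above can be sketched in NumPy. The function names (`smooth`, `band_histogram`) and the choice of a separable Gaussian kernel with edge-replicated padding are illustrative assumptions, not the book's implementation:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # Normalized 1D Gaussian kernel.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth(band, sigma=1.0):
    """Separable Gaussian smoothing of one 2D band (assumed edge padding)."""
    k = gaussian_kernel(sigma, radius=max(1, int(3 * sigma)))
    pad = len(k) // 2
    padded = np.pad(band.astype(float), pad, mode="edge")
    # Convolve along rows, then along columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def band_histogram(band, n_levels=256):
    """Smooth the band G_i, then count pixel values into n levels."""
    smoothed = np.clip(smooth(band), 0, n_levels - 1).astype(int)
    hist = np.zeros(n_levels, dtype=int)   # H_i[j] = 0 for j = 1, ..., n
    np.add.at(hist, smoothed.ravel(), 1)   # H_i[G_i[x, y]] += 1
    return hist
```

Running `band_histogram` once per band (r, g, b, hue, saturation, luminance) yields the six per-band histograms used as whole-image features.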
Given the image shown in figure 4.49, the image histogram technique extracts six histograms (one each for r, g, b, hue, saturation, and luminance), as shown in figure 4.50. In order to make use of such histograms as whole-image features, we need ways to compare two histograms to quantify the likelihood that they map to nearby robot positions.
The problem of defining useful histogram distance metrics is itself an important subfield
within the image retrieval field. For an overview refer to [127]. One of the most successful
distance metrics encountered in mobile robot localization is the Jeffrey divergence. Given two histograms H and K, with h_i and k_i denoting the histogram entries, the Jeffrey divergence d(H, K) is defined as

d(H, K) = Σ_i [ h_i log (2h_i / (h_i + k_i)) + k_i log (2k_i / (h_i + k_i)) ]          (4.88)
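Equation (4.88) translates directly into code. A minimal sketch, where the small `eps` added to each bin is an assumption of ours to guard against log(0) on empty bins:

```python
import numpy as np

def jeffrey_divergence(h, k, eps=1e-12):
    """d(H, K) = sum_i h_i log(2h_i/(h_i+k_i)) + k_i log(2k_i/(h_i+k_i))."""
    h = np.asarray(h, dtype=float) + eps  # eps avoids log(0) on empty bins
    k = np.asarray(k, dtype=float) + eps
    s = h + k
    return np.sum(h * np.log(2 * h / s) + k * np.log(2 * k / s))
```

Note that, unlike the Kullback-Leibler divergence it resembles, this measure is symmetric in H and K, and it is zero exactly when the two histograms coincide, both convenient properties for a distance-like comparison of whole-image features.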
Using measures such as the Jeffrey divergence, mobile robots have used whole-image
histogram features to identify their position in real time against a database of previously