Figure 4.42 (a) Photo of a ceiling lamp. (b) Edges computed from (a).
4.3.2.1 Spatially localized features
In the computer vision community, many algorithms assume that the object of interest occupies only a sub-region of the image, and therefore the features being sought are localized spatially within images of the scene. Local image-processing techniques find features that are local to a subset of pixels, and such local features map to specific locations in the physical world. This makes them particularly applicable to geometric models of the robot’s environment.
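As an illustration of what "spatially localized" means in practice, the following sketch detects corner features and reports the pixel location of each one. It uses OpenCV's Shi-Tomasi corner detector purely as an example; the image file name and parameter values are illustrative assumptions, not taken from the text.

```python
import cv2

# Sketch: detect spatially localized features (Shi-Tomasi corners) and print
# the image location of each one. "scene.png" and the parameters below are
# illustrative assumptions.
gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Up to 100 corners, minimum quality 1% of the strongest corner, at least 10 px apart.
corners = cv2.goodFeaturesToTrack(gray, 100, 0.01, 10)

# Each feature occupies a specific (x, y) position in the image plane, and can
# therefore be associated with a specific point or landmark in the world.
if corners is not None:
    for x, y in corners.reshape(-1, 2):
        print(f"feature at pixel ({x:.1f}, {y:.1f})")
```

Because every detected feature is tied to a single image coordinate, such features can be related to specific points in the scene, which is what makes them suitable for geometric models of the environment.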
The single most popular local feature extractor used by the mobile robotics community
is the edge detector, and so we begin with a discussion of this classic topic in computer
vision. However, mobile robots face the specific mobility challenges of obstacle avoidance
and localization. In view of obstacle avoidance, we present vision-based extraction of the
floor plane, enabling a robot to detect all areas that can be safely traversed. Finally, in view
of the need for localization, we discuss the role of vision-based feature extraction in the
detection of robot navigation landmarks.
Edge detection. Figure 4.42 shows an image of a scene containing a part of a ceiling lamp
as well as the edges extracted from this image. Edges define regions in the image plane
where a significant change in the image brightness takes place. As shown in this example,
edge detection significantly reduces the amount of information in an image, and the extracted edges are therefore potentially useful features during image interpretation. The hypothesis is that edge contours
in an image correspond to important scene contours. As figure 4.42b shows, this is not
entirely true. There is a difference between the output of an edge detector and an ideal line
drawing. Typically, there are missing contours, as well as noise contours that do not correspond to anything of significance in the scene.
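A minimal sketch of edge extraction in the spirit of figure 4.42b is given below, using the Canny detector from OpenCV; the file names, smoothing kernel, and hysteresis thresholds are illustrative assumptions rather than the specific detector or parameters used here.

```python
import cv2

# Sketch: extract edge contours in the spirit of figure 4.42b using the Canny
# detector. File names, the smoothing kernel, and the hysteresis thresholds
# (50, 150) are illustrative assumptions.
gray = cv2.imread("ceiling_lamp.png", cv2.IMREAD_GRAYSCALE)

# Smooth first so that pixel noise does not trigger spurious edge responses.
blurred = cv2.GaussianBlur(gray, (5, 5), 1.0)

# Mark pixels where brightness changes sharply and link them into thin contours;
# hysteresis between the two thresholds decides which weak edges are kept.
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("ceiling_lamp_edges.png", edges)
```

Raising the thresholds suppresses more noise contours but also discards weak scene contours, which illustrates the gap, noted above, between the output of an edge detector and an ideal line drawing.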