plot. Robots such as Nomads and Pioneers originally came with Sick lasers
mounted parallel to the floor. This was useful for obstacle avoidance (as
long as the obstacle was tall enough to break the laser plane), but not
particularly helpful for extracting 3D information. Also, as with sonars,
robots ran the risk of being decapitated by obstacles such as tables, which
did not appear in the field of view of the range sensor but could hit a
sensor pod or antenna.
To combat this problem, researchers have recently begun mounting planar
laser range finders at a slight upward angle. As the robot moves forward,
it gets a different view of upcoming obstacles. In some cases, researchers
have mounted two laser rangers, one tilted slightly up and the other slightly
down, to provide coverage of overhanging obstacles and negative obstacles.
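The payoff of tilting the scanner comes from the geometry: each scan cuts the
world along a slanted plane, so successive scans taken as the robot drives
forward sweep that plane over upcoming obstacles. The Python sketch below
illustrates that projection under assumed conventions (an upward tilt about
the sensor's side-to-side axis, a known sensor height, and motion along a
single axis); the function and parameter names are illustrative, not drawn
from any particular robot's software.

```python
import numpy as np

def tilted_scan_to_points(ranges, beam_angles, tilt, sensor_height, robot_x):
    """Project one planar scan, tilted upward by `tilt` radians, into 3D
    points in a world frame with x forward, y left, z up.  All names and
    frame conventions here are illustrative assumptions."""
    points = []
    for r, a in zip(ranges, beam_angles):
        # Beam endpoint in the sensor's own scan plane (x forward, y left).
        sx, sy = r * np.cos(a), r * np.sin(a)
        # Tilting rotates the scan plane up about the sensor's y-axis, so
        # the forward distance shortens and the endpoint gains height.
        x = robot_x + sx * np.cos(tilt)
        z = sensor_height + sx * np.sin(tilt)
        points.append((x, sy, z))
    return np.array(points)

# Accumulating points from successive scans as the robot drives forward
# builds a coarse 3D profile of overhanging (or missing) surfaces ahead.
```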
6.7.4 Texture
The variety of sensors and algorithms available to roboticists can actually
distract a designer from the task of designing an elegant sensor suite. In
most cases, reactive robots use range for navigation; robots need a sensor
to keep them from hitting things. Ian Horswill designed the software and camera
system of Polly, shown in Fig. 6.29, specifically to explore vision and the
relationship to the environment using subsumption.70
LIGHTWEIGHT VISION Horswill's approach is called lightweight vision, to distinguish its ecological
flavor from traditional model-based methods.
Polly served as an autonomous tour-guide at the MIT AI Laboratory and
Brown University during the early 1990’s. At that time vision processing was
slow and expensive, which was totally at odds with the high update rates
needed for navigation by a reactive mobile robot. The percept for the obstacle
avoidance behavior was based on a clever affordance: texture. The halls of
the AI Lab were covered throughout with the same carpet. The “color” of the
carpet in the image tended to change due to lighting, but the overall texture
or “grain” did not. In this case, texture was measured as edges per unit area,
as seen with the fine positioning discussed in Ch. 3.
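As a rough illustration of measuring texture as edges per unit area, the
sketch below scores a grayscale image patch by the fraction of pixels with a
strong gradient response; the Sobel operator and the threshold value are
stand-ins and do not reproduce Horswill's actual edge operator or
calibration.

```python
import numpy as np
from scipy import ndimage

def edge_density(patch, threshold=30.0):
    """Texture score for a grayscale patch: edges per unit area,
    approximated as the fraction of pixels whose gradient magnitude
    exceeds `threshold` (an illustrative value, not Polly's tuning)."""
    g = patch.astype(float)
    mag = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))
    return float(np.mean(mag > threshold))

# A patch of uniform carpet yields a low, stable density regardless of
# lighting, while a person's shoes (or a sharp shadow edge) scores higher.
```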
RADIAL DEPTH MAP The robot divided the field of view into angles or sectors, creating a radial
depth map, or the equivalent of a polar plot. Every sector with the texture
of the carpet was marked empty. If a person was standing on the carpet,
that patch would have a different texture and the robot would mark the area
as occupied. Although this methodology had some problems (for example,
strong shadows on the floor created "occupied" areas), it was fast and
elegant.
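The overall loop can be sketched as follows: divide the image into vertical
sectors, scan each sector from the bottom of the frame (nearest the robot)
upward, and record how far the carpet texture extends before something else
appears, with image height standing in for range. The texture measure,
sector count, and threshold values below are assumptions for illustration
rather than Polly's actual parameters.

```python
import numpy as np

def texture(block, threshold=20.0):
    # Edges per unit area: fraction of pixels with a strong intensity jump
    # to a neighbor (a crude stand-in for Polly's edge operator).
    g = block.astype(float)
    gx = np.abs(np.diff(g, axis=1)) > threshold
    gy = np.abs(np.diff(g, axis=0)) > threshold
    return (gx.mean() + gy.mean()) / 2.0

def radial_depth_map(gray, n_sectors=16, patch=16, carpet=0.05, tol=0.03):
    """Polly-style radial depth map sketch: one free-space estimate per
    angular sector, measured as how far up the image the carpet texture
    extends.  `carpet` and `tol` are hypothetical calibration values."""
    h, w = gray.shape
    sector_w = w // n_sectors
    depths = []
    for s in range(n_sectors):
        c0 = s * sector_w
        clear = 0
        # Scan from the bottom row (nearest the robot) upward.
        for r0 in range(h - patch, -1, -patch):
            block = gray[r0:r0 + patch, c0:c0 + sector_w]
            if abs(texture(block) - carpet) > tol:
                break          # texture no longer matches carpet: occupied
            clear += 1         # still carpet: free space extends farther
        depths.append(clear)   # larger value = obstacle farther away
    return depths              # a polar plot of free space, one entry per sector
```

Because a strong shadow boundary also raises the edge count, this sketch
would mark shadowed carpet as occupied, the same failure mode noted above.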