Page 125 - Designing Autonomous Mobile Robots: Inside the Mind of an Intelligent Machine
Chapter 7
To those in the academic camp, the robot’s sensor systems should recognize the
entire environment, not just a feature here and there. They envision the robot as
perceiving the environment in very much the same way humans do. Indeed, this is a
wonderful goal, and some impressive progress has been made in video scene analysis.
Figure 7.1, however, shows an example of an environment that severely challenges
this goal.
Figure 7.1. SR-3 Security robot navigating from lidar (circa 1999)
(Courtesy of Cybermotion, Inc.)
In this example, the robot is using lidar to navigate from pallets of products. The
robot patrols evenings, and is idle during the day. Thus, on successive evenings the
presence and placement of pallets may change radically. In this case, the robot has
been programmed to look only for pallets that lie near the boundary of the aisle.
The laser scans parallel to the ground at waist height. In the event that few
pallets are present, the robot's programming also tells it about reflectors on a wall
to its right and on the vertical roof supports to its left (just visible at the top of the
image).
The challenge for a hypothetical self-teaching system is that almost the entire
environment changes from night to night. If such an ideal robot had a video processor
on board, and were smart enough, then it might notice that the painted stripe does
not change from night to night and learn to use it as a navigation feature. For now,
however, the safe and easy way to solve the problem is for someone to tell the robot
which features to use ahead of time.
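The idea of telling the robot which features to use ahead of time can be sketched as a simple pre-filter on the lidar scan: the programmer declares where the aisle boundary should be, and the robot discards any return that does not fall near it. The following is a minimal illustration, not Cybermotion's actual software; the aisle dimensions, coordinate convention, and function name are all assumptions made for the example.

```python
import math

# Illustrative values only -- a real site survey would supply these.
AISLE_HALF_WIDTH = 1.5   # meters from robot centerline to the aisle boundary
BAND = 0.5               # tolerance band around the boundary

def boundary_points(scan):
    """Keep only lidar returns that fall near either aisle boundary.

    `scan` is a list of (angle_rad, range_m) pairs in the robot frame,
    with the x-axis pointing forward and y pointing left.
    """
    kept = []
    for angle, rng in scan:
        x = rng * math.cos(angle)
        y = rng * math.sin(angle)
        # A pallet edge matters only if it lies within the tolerance
        # band around the expected boundary, on either side of the aisle.
        if abs(abs(y) - AISLE_HALF_WIDTH) <= BAND:
            kept.append((x, y))
    return kept
```

Everything outside the declared band, such as pallets deep inside the storage rows, is ignored, which is why the nightly rearrangement of stock does not confuse the robot.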

