Page 190 - Introduction to Autonomous Mobile Robots

Perception

Figure 4.48
Examples of adaptive floor plane extraction. The trapezoidal polygon identifies the floor sampling region.



by looking at the appropriate histogram counts for the qualities of the target pixel. For example, if the target pixel has a hue that never occurred in the “floor sample,” then the corresponding hue histogram will have a count of zero. When a pixel references a histogram value below a predefined threshold, that pixel is classified as an obstacle.
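The histogram lookup described above can be sketched as follows. This is a minimal illustration, not the implementation from [151]: the bin count and threshold are arbitrary choices, and hue values are assumed to be normalized to [0, 1].

```python
import numpy as np

def classify_obstacles(hue_img, floor_mask, n_bins=32, threshold=5):
    """Mark a pixel as an obstacle (True) when its hue falls in a
    histogram bin whose count, measured over the floor sample region,
    is below the threshold. Bin count and threshold are illustrative."""
    # Build the hue histogram from pixels inside the floor sample region.
    floor_hues = hue_img[floor_mask]
    hist, _ = np.histogram(floor_hues, bins=n_bins, range=(0.0, 1.0))
    # Look up each pixel's histogram count via its bin index.
    bins = np.clip((hue_img * n_bins).astype(int), 0, n_bins - 1)
    counts = hist[bins]
    return counts < threshold
```

A hue that never appears in the floor sample lands in a zero-count bin and is therefore classified as an obstacle, exactly as in the shadow example above.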
Figure 4.48 shows an appearance-based floor plane extraction algorithm operating on both indoor and outdoor images [151]. Note that, unlike the static floor extraction algorithm, the adaptive algorithm is able to correctly classify a human shadow, owing to the adaptive histogram representation. An interesting extension of the work drops the static floor sample assumption altogether: the robot records a visual history and uses, as the floor sample, only those portions of prior visual images that have successfully rolled under the robot during motion.
Appearance-based extraction of the floor plane has been demonstrated on both indoor and outdoor robots for real-time obstacle avoidance with a bandwidth of up to 10 Hz. Applications include robotic lawn mowing, social indoor robots, and automated electric wheelchairs.

                           4.3.2.2   Whole-image features
A single visual image provides so much information regarding a robot’s immediate surroundings that an alternative to searching the image for spatially localized features is to make use of the information captured by the entire image to extract a whole-image feature.
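One simple example of a whole-image feature, sketched below under assumed parameters (the bin count is arbitrary and the images are assumed grayscale with intensities in [0, 1]), is a normalized intensity histogram of the entire image, which can be compared between images without locating any individual feature.

```python
import numpy as np

def whole_image_feature(gray_img, n_bins=16):
    """Summarize an entire image as one compact descriptor:
    a normalized intensity histogram (illustrative choice)."""
    hist, _ = np.histogram(gray_img, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

def feature_distance(f1, f2):
    # L1 distance between two whole-image features; identical
    # images yield 0, dissimilar images yield larger values.
    return float(np.abs(f1 - f2).sum())
```

Because the descriptor is a fixed-length vector regardless of image content, two images can be compared with a single vector distance rather than a search over localized features.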