
                           1.1 State of the Art

                           There are many types of omnidirectional vision systems and the most common
                           ones are based on rotating cameras, fish-eye lenses or mirrors [3, 45, 18]. Baker
                           and Nayar listed all the mirror and camera setups having a Single View Point
                            (SVP) [1, 3]. These systems are omnidirectional and have a 360° horizontal
                           field of view, but do not have constant resolution for the most common scene
                           surfaces. Mirror shapes for linearly imaging 3D planes, cylinders or spheres
                           were presented in [32] within a unified approach that encompasses all the
                           previous constant resolution designs [46, 29, 68] and allowed for new ones.
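                               To make the SVP property concrete, consider the para-catadioptric case
                            (a parabolic mirror viewed by an orthographic camera), one of the set-ups
                            enumerated by Baker and Nayar. The short Python sketch below back-projects
                            an image point to the light ray it collects; the mirror parameter h and the
                            assumption that (u, v) are metric image coordinates measured from the image
                            centre are illustrative choices, not values taken from this chapter.

import numpy as np

def paracatadioptric_backprojection(u, v, h):
    """Back-project a centred, metric image point of an ideal
    para-catadioptric sensor.  The mirror is the paraboloid
    z = (h^2 - x^2 - y^2) / (2h), whose focus (the single viewpoint)
    lies at the origin; the camera projects orthographically along -z,
    so pixel (u, v) hits the mirror at (u, v, z) and the associated
    light ray is the unit vector from the origin to that point."""
    z = (h ** 2 - u ** 2 - v ** 2) / (2.0 * h)
    ray = np.array([u, v, z], dtype=float)
    return ray / np.linalg.norm(ray)

# A point one mirror radius from the image centre back-projects to a
# horizontal ray, illustrating the 360 degree horizontal field of view.
print(paracatadioptric_backprojection(1.0, 0.0, h=1.0))  # ~ [1. 0. 0.]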
                              Calibration methods are available for (i) most (static) SVP omnidirec-
                            tional setups, even where lenses have radial distortion [59], and (ii) non-SVP
                            camera set-ups, such as those obtained by mounting multiple cameras on a
                            mobile robot, for example [71]. Given that knowledge of the geometry of
                           cameras is frequently used in a back-projection form, [80] proposed a gen-
                           eral calibration method for general cameras (including non-SVP) which gives
                           the back-projection line (representing a light-ray) associated with each pixel
                           of the camera. In another vein, precise calibration methods have begun to
                           be developed for pan-tilt-zoom cameras [75]. These active camera set-ups,
                           combining pan-tilt-zoom cameras and a convex mirror, when precisely cali-
                           brated, allow for the building of very high resolution omnidirectional scene
                           representations and for zooming to improve resolution, which are both useful
                            characteristics for surveillance tasks. Networked camera systems have also
                           provided a solution in the surveillance domain. However, they pose new and
                           complex calibration challenges resulting from the mixture of various camera
                           types, potentially overlapping fields-of-view, the different requirements of cali-
                           bration quality and the type of calibration data used (for example, static or
                           dynamic background) [76].
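                               Because the general calibration of [80] is used in back-projection form,
                            its output can be viewed as a per-pixel table of light rays: an origin and a
                            direction stored for every pixel, which also covers non-SVP set-ups such as
                            multi-camera rigs. The minimal Python sketch below illustrates such a table;
                            the class name and its fields are assumptions made here for clarity, not an
                            interface defined in [80].

import numpy as np

class GeneralCameraModel:
    """Hypothetical per-pixel ray table in the spirit of general (possibly
    non-SVP) camera calibration: every pixel stores the origin and the unit
    direction of its back-projection line."""

    def __init__(self, height, width):
        self.origins = np.zeros((height, width, 3))     # ray origins; may differ per pixel (non-SVP)
        self.directions = np.zeros((height, width, 3))  # unit ray directions

    def set_ray(self, row, col, origin, direction):
        d = np.asarray(direction, dtype=float)
        self.origins[row, col] = origin
        self.directions[row, col] = d / np.linalg.norm(d)

    def backproject(self, row, col, depth):
        """Point in space at the given depth along the pixel's light ray."""
        return self.origins[row, col] + depth * self.directions[row, col]

# For an SVP sensor all ray origins coincide (e.g. the mirror focus); for
# several cameras mounted on a mobile robot they do not.
cam = GeneralCameraModel(height=4, width=4)
cam.set_ray(0, 0, origin=[0.0, 0.0, 0.0], direction=[0.0, 0.0, 1.0])
print(cam.backproject(0, 0, depth=2.0))  # -> [0. 0. 2.]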
                               On a final note, when designing catadioptric systems, care must be taken
                            to minimize defocus blur and optical aberrations such as spherical aberration
                            or astigmatism [3, 81]. These phenomena become more severe as the system
                            size is reduced, and it is therefore important to develop optical designs and
                            digital image-processing techniques that counterbalance the resulting image
                            degradation.
                              The applications of omnidirectional vision to robotics are vast. Start-
                           ing with the seminal idea of enhancing the field of view for teleoperation,
                            current challenges in omnidirectional vision include autonomous and cooper-
                            ative robot navigation and reconstruction for human and robot interaction
                           [27, 35, 47, 61].
                               Vision-based autonomous navigation relies on various types of information,
                           e.g. scene appearance or geometrical features such as points or lines. When
                           using point features, current research, which combines simultaneous locali-
                           zation and map building, obtains robustness by using sequential Monte-Carlo