Page 233 - Innovations in Intelligent Machines
226 J. Gaspar et al.
methods such as particle filters [51, 20]. Using more stable features, such as
lines, improves self-localization optimization methods [19]. Sensitivity
analysis has been used to choose optimal landmark configurations for
self-localization [10, 54]. Omnidirectional vision has the advantage of tracking features
over a larger azimuth range and therefore can bring additional robustness to
navigation.
State-of-the-art automatic scene reconstruction, based on omnidirectional
vision, relies on graph-cut methodologies for merging point clouds
acquired at different robot locations [27]. Scene reconstruction is mainly
useful for human-robot interaction, but can also be used for inter-robot
interaction. Current research shows that building robot teams can be framed
as a scene-independent problem, provided that the robots observe each other
and have reliable motion measurements [47, 61]. The robot teams can then
share scene models, allowing better human to robot-team interaction.
This chapter is structured as follows. In Section 2, we present the
modelling and design of omnidirectional cameras, including details of the camera
designs we used. In Section 3, we present Topological Navigation and Visual
Path Following. We provide details of the different image dewarpings (views)
available from our omnidirectional camera: standard, panoramic and bird's-
eye views. In addition, we detail geometric scene modelling, model tracking,
and appearance-based approaches to navigation. In Section 4, we present
our Visual Interface. In all cases, we demonstrate mobile robots navigat-
ing autonomously and guided interactively in structured environments. These
experiments show that the synergetic design, combining perception modules,
navigation modalities and human-robot interaction, is effective in real-world
situations. Finally, in Section 5, we present our conclusions and future research
directions.
2 Omnidirectional Vision Sensors:
Modelling and Design
In 1843 [58], a patent was issued to Joseph Puchberger of Retz, Austria, for the
first system that used a rotating camera to obtain omnidirectional images. The
original idea for the (static-camera) omnidirectional vision sensor was
proposed by Rees in a US patent dating from 1970 [72]. Rees proposed the
use of a hyperbolic mirror to capture an omnidirectional image, which could
then be transformed to a (normal) perspective image.
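The core idea behind such a transformation can be sketched briefly: with a single-viewpoint mirror, a panoramic view is obtained by resampling the omnidirectional image along circles of constant radius around the mirror axis. The following is a minimal sketch, assuming a circularly symmetric image centred at `center`; the function name, radii and nearest-neighbour sampling are illustrative choices, not the implementation described in this chapter:

```python
import numpy as np

def unwarp_to_panorama(omni, center, r_min, r_max, width=360, height=90):
    """Map an omnidirectional (donut-shaped) image to a panoramic strip.

    Each panorama column corresponds to an azimuth angle; each row to a
    radius between r_min (inner mirror edge) and r_max (outer edge).
    Nearest-neighbour sampling keeps the sketch short.
    """
    cx, cy = center
    pano = np.zeros((height, width), dtype=omni.dtype)
    for col in range(width):
        theta = 2.0 * np.pi * col / width          # azimuth angle
        for row in range(height):
            # top row of the panorama samples the outer radius
            r = r_max - (r_max - r_min) * row / (height - 1)
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < omni.shape[1] and 0 <= y < omni.shape[0]:
                pano[row, col] = omni[y, x]
    return pano

# Synthetic 200x200 omnidirectional image with a radial gradient,
# so each panorama row should be (nearly) constant after unwarping.
yy, xx = np.mgrid[0:200, 0:200]
omni = np.hypot(xx - 100, yy - 100).astype(np.float32)
pano = unwarp_to_panorama(omni, center=(100, 100), r_min=20, r_max=90)
```

Bilinear interpolation and a precomputed lookup table would normally replace the per-pixel loop for real-time use; the geometry, however, is exactly this polar resampling.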
Since those early days, the spectrum of applications has broadened to
include such diverse areas as tele-operation [84, 91], video conferencing [70],
virtual reality [56], surveillance [77], 3D reconstruction [33, 79], structure from
motion [13] and autonomous robot navigation [35, 89, 90, 95, 97]. For a survey
of previous work, the reader is directed to [94]. A relevant collection of papers,
related to omnidirectional vision, can be found in [17] and [41].