Page 269 - Innovations in Intelligent Machines

262    J. Gaspar et al.
Fig. 18. Interactive modelling based on co-planarity and co-linearity properties using a single omnidirectional image. (Top) Original image with superposed points and lines localised by the user. Planes orthogonal to the x, y and z axes are shown in light gray, white, and dark gray respectively. (Table) The numbers are the indices shown on the image. (Bottom) Reconstruction result and view of the texture-mapped 3D model

   Figure 18 shows the resulting texture-mapped reconstruction. This result shows the effectiveness of omnidirectional imaging for visualizing the immediate vicinity of the sensor. It is interesting to note that just a few omnidirectional images are sufficient for building the 3D model (the example shown used a single image), as opposed to the larger number of "normal" images that would be required to reconstruct the same scene [50, 79].
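The geometric constraints behind this interactive modelling can be illustrated with a minimal sketch. All names and numbers below are our own illustrative choices (`intersect_ray_plane`, the camera height `h`, the specific ray directions); the calibrated back-projection of image pixels to viewing rays is assumed as given:

```python
import numpy as np

def intersect_ray_plane(ray_dir, n, d):
    """Intersect the viewing ray X(t) = t * ray_dir (camera at origin)
    with the plane n . X = d; returns the 3D intersection point."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    t = d / np.dot(n, ray_dir)
    return t * ray_dir

# Floor plane z = -h, with h the (assumed known) camera height.
h = 1.5
floor_n, floor_d = np.array([0.0, 0.0, 1.0]), -h

# A user-marked floor point, given here directly by its viewing ray
# (in practice the calibrated omnidirectional model back-projects the pixel).
p_floor = intersect_ray_plane([0.3, 0.2, -0.5], floor_n, floor_d)

# A wall point vertically above it: co-linearity with the vertical line
# through p_floor fixes its x and y; its own viewing ray then fixes z.
wall_ray = np.array([0.3, 0.2, 0.4])
t = p_floor[0] / wall_ray[0]   # scale that matches x (and, here, y)
p_wall = t * wall_ray
```

Each user-supplied constraint (a known plane, or co-linearity with an already reconstructed point) removes the depth ambiguity of one ray, which is why a single image can suffice.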

4.2 Human-Robot Interface Based on 3D World Models

Now that we have the 3D scene model, we can build the human-robot interface. In addition to the local headings or poses, the 3D model allows us to specify complete missions. The human operator selects the start and end locations in the model, and can indicate points of interest for the robot to undertake specific tasks. See Fig. 19.
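The translation from model coordinates to robot targets can be sketched as a nearest-node lookup in the topological map. The node identifiers, coordinates, and helper below are hypothetical, chosen only to illustrate the idea:

```python
import math

# Hypothetical topological map: node id -> (x, y) position of the place
# where the corresponding database image was taken, in model coordinates.
topological_map = {
    "img_017": (0.0, 0.0),
    "img_042": (4.0, 1.0),
    "img_085": (9.0, 3.5),
}

def nearest_node(target_xy):
    """Map a location clicked on the 3D model to the closest database image."""
    return min(topological_map,
               key=lambda n: math.dist(topological_map[n], target_xy))

# Operator picks a start, a point of interest, and an end on the model;
# the mission becomes a sequence of image targets the robot can servo to.
mission = [nearest_node(p) for p in [(0.5, 0.2), (4.2, 0.8), (8.7, 3.6)]]
```

The design choice here is that the interface never hands the robot raw coordinates: every target is re-expressed as an element of the robot's own database, so navigation reduces to reaching images it already knows.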
   Given that the targets are specified on interactive models, i.e. models built and used on the user side, they need to be translated into tasks that the robot understands. The translation depends on the local world models and navigation sequences the robot has in its database. Most of the world that the robot knows is in the form of a topological map. In this case the targets are images that the robot has in its image database. The images used to build