Page 258 - Innovations in Intelligent Machines

Toward Robot Perception through Omnidirectional Vision  251
Fig. 11. (a) An omnidirectional image obtained at 11:00; (b) one obtained at 17:00; (c) an edge-detected image and (d) its retrieved image


eigenspace using both dilated and un-dilated model views and pre-process the run-time edge images to dilate the edges. In our pre-processing we use low-pass filtering instead of edge dilation. The purpose is to maintain the local maxima of the gradient magnitude at edge points while enlarging the matching area. We found this to be a good tradeoff between matching robustness and accuracy.
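The difference between the two pre-processing choices can be illustrated with a minimal one-dimensional sketch (a hypothetical example, not the chapter's implementation): Gaussian low-pass filtering of a binary edge map keeps a unique local maximum at the edge position, whereas morphological dilation flattens the neighbourhood into a plateau, losing the exact edge location.

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Plain 1-D convolution with zero padding at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

def dilate(signal, radius):
    """1-D morphological dilation: local maximum over a window."""
    return [max(signal[max(0, i - radius): i + radius + 1])
            for i in range(len(signal))]

# A binary "edge map" with a single edge at position 5.
edges = [0.0] * 11
edges[5] = 1.0

blurred = convolve(edges, gaussian_kernel(2, 1.0))
dilated = dilate(edges, 2)

# Low-pass filtering preserves a unique local maximum at the edge position,
assert blurred.index(max(blurred)) == 5
# while dilation produces a flat plateau (positions 3..7 all equal 1),
# so the exact edge location is no longer distinguished.
assert dilated[3] == dilated[5] == dilated[7] == 1.0
```

Both operations enlarge the region over which a run-time edge can match a model edge; only the low-pass version keeps the gradient-magnitude maximum localized.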
To test this view-based approximation we collected a sequence of images acquired at different times, 11:00 and 17:00, near a large window. Figure 11 shows the significant changes in illumination, especially near the large window at the bottom left-hand side of each omnidirectional image. Even so, the view-based approximation can correctly determine that the unknown image shown in Fig. 11(a) is closest to the database image shown in Fig. 11(b), whereas PCA based on brightness distributions would fail. For completeness, Fig. 11(c) and (d) show a run-time edge image and its corresponding retrieved image using the eigenspace approximation to the Hausdorff fraction.
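The quantity being approximated in eigenspace is the Hausdorff fraction: the fraction of model edge points that lie within some distance d of an image edge point. A brute-force sketch of that definition (the function name and toy point sets are illustrative, not the chapter's code) is:

```python
import math

def hausdorff_fraction(model_pts, image_pts, d):
    """Fraction of model edge points lying within distance d of some
    image edge point -- the 'Hausdorff fraction' used for matching.
    Brute force O(|model| * |image|); real systems use a distance
    transform or, as in this chapter, an eigenspace approximation."""
    if not model_pts:
        return 0.0
    close = 0
    for (mx, my) in model_pts:
        if any(math.hypot(mx - ix, my - iy) <= d for (ix, iy) in image_pts):
            close += 1
    return close / len(model_pts)

# Toy edge sets: 3 of the 4 model points fall within d = 1.5 of an
# image point, so the fraction is 0.75.
model = [(0, 0), (1, 0), (2, 0), (3, 0)]
image = [(0, 0), (1, 1), (10, 10), (11, 10)]
assert hausdorff_fraction(model, image, 1.5) == 0.75
```

A fraction near 1 means almost every model edge has a nearby image edge, which is what makes the measure robust to partial occlusion and illumination change.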


Integrating Topological Navigation and Visual Path Following
During continuous operation, the mobile robot usually performs topological navigation. At some points of the mission the navigation modality must change to visual path following. The robot then needs to retrieve the scene features (straight lines, in our case) that were chosen at learning time for this particular visual path following task.
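The mode switch described above can be sketched as a small state holder that, on reaching a designated mission point, swaps from topological navigation to visual path following and loads the line features stored for that point. All names and the feature encoding here are hypothetical, chosen only to illustrate the control flow:

```python
from dataclasses import dataclass, field

@dataclass
class Navigator:
    # Hypothetical map: waypoint name -> straight-line features
    # (each line as a pair of endpoints) learned for the visual
    # path following task that starts at that waypoint.
    learned_lines: dict
    mode: str = "topological"
    active_lines: list = field(default_factory=list)

    def reach_waypoint(self, name):
        """Switch modality depending on whether features were
        learned for this waypoint."""
        if name in self.learned_lines:
            self.mode = "visual_path_following"
            self.active_lines = self.learned_lines[name]
        else:
            self.mode = "topological"
            self.active_lines = []

nav = Navigator(learned_lines={
    "docking_station": [((0, 0), (0, 5)), ((1, 0), (1, 5))],
})
nav.reach_waypoint("corridor_junction")
assert nav.mode == "topological"
nav.reach_waypoint("docking_station")
assert nav.mode == "visual_path_following" and len(nav.active_lines) == 2
```

The point of the sketch is only that the features are retrieved, not re-detected from scratch: they were committed to storage at learning time, keyed to the mission point where the visual path following task begins.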