VISION-BASED NAVIGATION OF AN OUTDOOR MOBILE ROBOT USING A ROUGH MAP

Jooseop Yun, Jun Miura and Yoshiaki Shirai

Department of Mechanical Engineering, Osaka University, Suita, Osaka, 565-0871, Japan



                  ABSTRACT

We describe a method of mobile robot navigation based on a rough map and stereo vision, which uses multiple visual features to detect and segment the buildings in the robot's field of view. The rough map is a map with large uncertainties in the shapes and locations of objects, so that it can be built easily. The robot fuses odometry and vision information using an extended Kalman filter to update the robot pose and the associated uncertainty based on the detection of buildings in the map. An experimental result shows the feasibility of our localization method in an outdoor environment.


                  KEYWORDS
                  Outdoor mobile robot, Vision-based navigation, Rough map.


                  INTRODUCTION
In this paper, we deal with the case in which the robot has an environment map represented as a set of 2D segments. The map approximates the outlines of buildings and does not include detailed feature information to be used as landmarks (Georgiev et al., 2002). We propose a method to robustly estimate the robot pose using multiple visual features: walls of buildings, vanishing points, and corners of buildings. The walls of buildings are extracted from the stereo vision observation, the vanishing points are calculated from the non-vertical skylines of buildings, and the corners of buildings correspond to the vertical skylines. These visual features are matched to the given map, and the results are integrated with the odometry information to estimate the robot pose using an extended Kalman filter.
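As a rough illustration of this fusion step, the sketch below shows an extended Kalman filter that propagates a 2D robot pose (x, y, theta) with odometry and corrects it with an observation of a mapped wall. The state and measurement models, the noise values, and all names (predict, update_wall, the wall line parameters) are our own assumptions for illustration, not the authors' implementation.

```python
# Minimal EKF sketch: fuse odometry with a visual observation of a mapped wall.
# State: 2D robot pose (x, y, theta). Measurement: perpendicular distance and
# relative angle to a wall whose map line is x*cos(a) + y*sin(a) = d.
# All models and noise parameters here are illustrative assumptions.
import numpy as np

def predict(x, P, u, Q):
    """Propagate pose and covariance with an odometry increment
    u = (dx, dy, dtheta) expressed in the robot frame."""
    c, s = np.cos(x[2]), np.sin(x[2])
    x_new = x + np.array([c * u[0] - s * u[1],
                          s * u[0] + c * u[1],
                          u[2]])
    # Jacobian of the motion model with respect to the pose.
    F = np.array([[1, 0, -s * u[0] - c * u[1]],
                  [0, 1,  c * u[0] - s * u[1]],
                  [0, 0,  1]])
    P_new = F @ P @ F.T + Q
    return x_new, P_new

def update_wall(x, P, z, wall, R):
    """Correct the pose with an observed wall: z = (distance, angle) to the
    wall in the robot frame; wall = (a, d) are the map line parameters."""
    a, d = wall
    # Predicted measurement from the current pose estimate.
    h = np.array([d - (x[0] * np.cos(a) + x[1] * np.sin(a)),  # distance to wall
                  a - x[2]])                                   # wall angle relative to heading
    H = np.array([[-np.cos(a), -np.sin(a), 0.0],
                  [0.0, 0.0, -1.0]])
    y = z - h                                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```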


                  FEATURE  DETECTION
For the matching process, we use multiple visual features: walls of buildings from the disparity image, vanishing points from non-vertical skylines, and corners of buildings from vertical skylines.
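As a hedged illustration of the vanishing-point feature, the following sketch intersects non-vertical skyline segments in homogeneous coordinates; the segment representation, the least-squares formulation, and the example coordinates are assumptions for illustration rather than the authors' detection procedure.

```python
# Illustrative sketch: estimate a vanishing point from non-vertical skyline
# segments detected in the image. Each segment gives a homogeneous line
# l = p1 x p2; the vanishing point v minimizes sum_i (l_i . v)^2 and is taken
# as the null-space direction of the stacked line equations (via SVD).
import numpy as np

def homogeneous_line(p1, p2):
    """Line through two image points (pixel coordinates) as a homogeneous 3-vector."""
    return np.cross(np.append(p1, 1.0), np.append(p2, 1.0))

def vanishing_point(segments):
    """Least-squares intersection of segments given as ((x1, y1), (x2, y2)) pairs."""
    L = np.array([homogeneous_line(np.array(p1), np.array(p2)) for p1, p2 in segments])
    _, _, Vt = np.linalg.svd(L)
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return v[:2] / v[2]             # back to inhomogeneous pixel coordinates

# Example: two roughly parallel skyline segments converging to the right.
print(vanishing_point([((10, 100), (300, 120)), ((10, 200), (300, 180))]))
```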

                  Walls of Buildings