
                              a feasible way to initialize the position of a mobile robot automatically, which
                              can be found in Reference 3.

                              4.3.3 KF-Based Navigation Algorithm
                               The triangulation algorithm is difficult to implement while the robot is
                               moving, since each of the three landmarks is detected at a different robot
                               position and the measurements must be compensated for these time differences.
                               Skewis and Lumelsky [12] proposed a triangulation algorithm to address this
                               problem. However, no satisfactory results were obtained when the algorithm
                               was tested, mainly for the following reasons:

                                   • Each landmark consists of a single strip and is not encoded, that is,
                                     the landmarks are indistinguishable from one another.
                                   • The laser scanner produces noisy readings, since some angle measure-
                                     ments are caused by random objects.
                                   • In general, the robot environment is nonconvex, so not all landmarks
                                     can be seen by the laser scanner. Moreover, some landmarks may be
                                     obscured by dynamic objects such as humans and other robots.

                                  The KF algorithm is a natural choice for robot localization since it provides
                               a convenient way to fuse the data from multiple sensors, for example, the laser
                               scanner and odometry. However, it normally requires a linear dynamic model
                               and a linear output model, whereas in this research both models are nonlinear,
                               as follows:

                                                 x(k + 1) = f(x(k), u(k)) + w(k)         (4.3)

                                                 z(k + 1) = h(B_i, x(k)) + v(k)          (4.4)

                               where f(x(k), u(k)) is the nonlinear state transition function of the robot,
                               w(k) ∼ N(0, Q(k)) is Gaussian noise with zero mean and covariance Q(k),
                               h(B_i, x(k)) is the nonlinear observation model for landmark B_i, and v(k) is
                               Gaussian noise with zero mean and covariance R(k).
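                                  Because both f and h are nonlinear, a standard Kalman filter cannot be
                               applied directly; the usual remedy is to linearize both models about the current
                               estimate at each cycle, as in the extended Kalman filter. The Python sketch
                               below illustrates one such predict/update cycle built around Equations (4.3)
                               and (4.4). The interface, including the Jacobian callables F_jac and H_jac, is
                               an assumption introduced for illustration and is not taken from the original
                               implementation.

    import numpy as np

    def ekf_cycle(x, P, u, z, beacon, f, h, F_jac, H_jac, Q, R):
        # One extended-Kalman-style cycle for the nonlinear models
        # x(k+1) = f(x(k), u(k)) + w(k)  and  z(k+1) = h(B_i, x(k)) + v(k).
        # f and h are the nonlinear models; F_jac and H_jac return their
        # Jacobians evaluated at the current estimate (placeholder callables).

        # Prediction: propagate state and covariance through the motion model
        x_pred = f(x, u)
        F = F_jac(x, u)
        P_pred = F @ P @ F.T + Q

        # Update: fuse the observation z of landmark B_i (passed as `beacon`)
        H = H_jac(beacon, x_pred)
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ (z - h(beacon, x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new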
                                  The control vector is calculated from the two optical encoder readings at
                               each cycle time k:

                                             u(k) = [Δd, Δθ]^T                           (4.5)

                               and the state transition function of the robot is

                                             f(x(k), u(k)) = [x(k) + Δd cos θ,
                                                              y(k) + Δd sin θ,
                                                              θ(k) + Δθ]^T               (4.6)
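
                                  A minimal sketch of Equations (4.5) and (4.6) in Python is given below;
                               the encoder-to-distance conversion and the wheel-base constant are assumed
                               values for a differential-drive robot, chosen only for illustration, since the
                               drive geometry is not specified here.

    import numpy as np

    # Assumed drive geometry, for illustration only (not given in the text)
    TICKS_PER_METER = 20000.0   # encoder ticks per meter of wheel travel
    WHEEL_BASE = 0.5            # distance between the two drive wheels [m]

    def control_from_encoders(left_ticks, right_ticks):
        # Build u(k) = [Δd, Δθ]^T from the two optical encoder counts
        # accumulated over one cycle (Equation 4.5).
        d_left = left_ticks / TICKS_PER_METER
        d_right = right_ticks / TICKS_PER_METER
        delta_d = (d_left + d_right) / 2.0              # distance travelled
        delta_theta = (d_right - d_left) / WHEEL_BASE   # heading change
        return np.array([delta_d, delta_theta])

    def state_transition(x, u):
        # Nonlinear motion model f(x(k), u(k)) of Equation (4.6),
        # with state x = [x, y, θ]^T and control u = [Δd, Δθ]^T.
        delta_d, delta_theta = u
        theta = x[2]
        return np.array([x[0] + delta_d * np.cos(theta),
                         x[1] + delta_d * np.sin(theta),
                         theta + delta_theta])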



