Page 231 - Introduction to Autonomous Mobile Robots

                                            p(i | l) p(l)
                                 p(l | i) = -------------                                     (5.21)
                                                p(i)
                              The value of p(i | l) is key to equation (5.21), and this probability of a sensor input at
                            each robot position must be computed using some model. An obvious strategy would be to
                            consult the robot's map, identifying the probability of particular sensor readings with each
                            possible map position, given knowledge about the robot's sensor geometry and the mapped
                            environment. The value of p(l) is easy to recover in this case. It is simply the probability
                            p(r = l) associated with the belief state before the perceptual update process. Finally, note
                            that the denominator p(i) does not depend upon l; that is, as we apply equation (5.21) to
                            all positions l in L, the denominator never varies. Because it is effectively constant, in
                            practice this denominator is usually dropped and, at the end of the perception update step,
                            all probabilities in the belief state are renormalized to sum to 1.0.
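The perception update just described can be sketched as a discrete Bayes update over a finite set of map positions. This is an illustrative sketch, not code from the text; the function name and arguments are assumed for the example.

```python
def perception_update(belief, likelihood):
    """Discrete Bayes perception update, as in equation (5.21).

    belief     -- prior probability p(r = l) for each map position l
    likelihood -- p(i | l): probability of the observed sensor input i
                  at each position l, obtained from a sensor model
                  applied to the map (names are illustrative)
    """
    # Numerator of Bayes' rule: p(i | l) * p(l). The denominator p(i)
    # is constant over l, so it is dropped and we renormalize instead.
    posterior = [p_i * p_l for p_i, p_l in zip(likelihood, belief)]
    total = sum(posterior)
    return [p / total for p in posterior]
```

With a uniform prior, for instance, the posterior simply mirrors the (normalized) sensor likelihoods: `perception_update([0.25] * 4, [0.1, 0.4, 0.4, 0.1])` yields `[0.1, 0.4, 0.4, 0.1]`.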
                              Now consider the Act function of equation (5.16). Act maps a former belief state and
                            encoder measurement (i.e., robot action) to a new belief state. In order to compute the prob-
                            ability of position l in the new belief state, one must integrate over all the possible ways in
                            which the robot may have reached l according to the potential positions expressed in the
                            former belief state. This is subtle but fundamentally important. The same location l can be
                            reached from multiple source locations with the same encoder measurement o because the
                            encoder measurement is uncertain. Temporal indices are required in this update equation:
                                 p(l_t | o_t) = ∫ p(l_t | l'_{t-1}, o_t) p(l'_{t-1}) dl'_{t-1}        (5.22)
                              Thus, the total probability for a specific position l_t is built up from the individual con-
                            tributions from every location l'_{t-1} in the former belief state given encoder measurement o_t.
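Over a finite set of positions, the integral in equation (5.22) becomes a sum, and the Act update can be sketched as follows. As above, this is an assumed illustration: the function name and the motion-model representation are not from the text.

```python
def act_update(belief, motion_model):
    """Discrete form of the prediction update, equation (5.22):
    the integral over former positions l' becomes a sum.

    belief       -- p(l'_{t-1}) for each of the N former positions
    motion_model -- motion_model[j][k] = p(l_j | l'_k, o_t), the
                    probability that encoder measurement o_t carries
                    the robot from position k to position j
                    (representation assumed for this sketch)
    """
    n = len(belief)
    # Each new position l_j accumulates probability mass from every
    # former position l'_k, because the encoder measurement is
    # uncertain: multiple sources can reach the same destination.
    return [sum(motion_model[j][k] * belief[k] for k in range(n))
            for j in range(n)]
```

Note that each column of the motion model must itself sum to 1 (the robot ends up somewhere), which guarantees the new belief state remains a valid probability distribution.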
                             Equations (5.21) and (5.22) form the basis of Markov localization, and they incorporate
                           the Markov assumption. Formally, this means that their output is a function only of the
                           robot’s previous state and its most recent actions (odometry) and perception. In a general,
                            non-Markovian situation, the state of a system depends upon all of its history. After all, the
                            values of a robot's sensors at time t do not really depend only on its position at time t. They
                            depend to some degree on the trajectory of the robot over time; indeed, on the entire history
                            of the robot. For example, the robot could have experienced a serious collision recently that
                            has biased the sensor's behavior. By the same token, the position of the robot at time t does
                            not really depend only on its position at time t - 1 and its odometric measurements. Due
                            to its history of motion, one wheel may have worn more than the other, causing a left-turn-
                            ing bias over time that affects its current position.
                              So the Markov assumption is, of course, not a valid assumption. However, the Markov
                            assumption greatly simplifies tracking, reasoning, and planning, and so it is an approxima-
                            tion that continues to be extremely popular in mobile robotics.