Page 230 - Introduction to Autonomous Mobile Robots

Mobile Robot Localization

                           5.6.2.1   Introduction: applying probability theory to robot localization

                           Given a discrete representation of robot positions, in order to express a belief state we wish
                           to assign to each possible robot position a probability that the robot is indeed at that posi-
                           tion. From probability theory we use the term p(A) to denote the probability that A is true.
                           This is also called the prior probability of A because it measures the probability that A is
                           true independent of any additional knowledge we may have. For example, we can use
                           p(r_t = l) to denote the prior probability that the robot r is at position l at time t.
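A discrete belief state of this kind can be represented directly as a probability distribution over positions. The following sketch (not from the text; the position names are hypothetical) shows an uninformed prior, where p(r_t = l) is uniform over the map:

```python
# Sketch: a discrete belief state as a prior distribution over a
# hypothetical set of positions L. With no additional knowledge,
# the prior p(r_t = l) is uniform.
positions = ["l0", "l1", "l2", "l3"]  # hypothetical discrete map L

belief = {l: 1.0 / len(positions) for l in positions}

print(belief["l0"])  # 0.25 -- each position is equally likely a priori
```

Because the belief is a probability distribution, its values must sum to 1 over all positions in L.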
                             In practice, we wish to compute the probability of each individual robot position given
                           the encoder and sensor evidence the robot has collected. In probability theory, we use the
                           term p(A|B) to denote the conditional probability of A given that we know B. For exam-
                           ple, we use p(r_t = l | i_t) to denote the probability that the robot is at position l given
                           the robot's sensor inputs i_t.
                             The question is, how can a term such as p(r_t = l | i_t) be simplified to its constituent parts
                           so that it can be computed? The answer lies in the product rule, which states
                                p(A ∧ B) = p(A|B) p(B)                                      (5.18)

                             Equation (5.18) is intuitively straightforward, as the probability of both A and B being
                           true is being related to B being true and the other being conditionally true. But you should
                           be able to convince yourself that the alternate equation is equally correct:
                                p(A ∧ B) = p(B|A) p(A)                                      (5.19)
                             Using equations (5.18) and (5.19) together, we can derive the Bayes formula for com-
                           puting p(A|B):

                                          p(B|A) p(A)
                                p(A|B) =  -----------                                       (5.20)
                                             p(B)
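Equation (5.20) can be checked numerically. In the sketch below, the values are made up for illustration: p(A) plays the role of a prior that the robot is at a given position, p(B|A) the likelihood of a sensor reading at that position, and p(B) the total probability of the reading, computed here with the law of total probability:

```python
# Sketch verifying equation (5.20) with hypothetical values.
p_A = 0.2          # hypothetical prior p(A)
p_B_given_A = 0.9  # hypothetical likelihood p(B|A)
p_B_given_notA = 0.3

# Law of total probability: p(B) = p(B|A)p(A) + p(B|~A)p(~A)
p_B = p_B_given_A * p_A + p_B_given_notA * (1.0 - p_A)

# Bayes formula, equation (5.20)
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 3))  # 0.429

# The product rule holds both ways, as in (5.18) and (5.19):
# p(A ∧ B) = p(A|B)p(B) = p(B|A)p(A)
assert abs(p_A_given_B * p_B - p_B_given_A * p_A) < 1e-12
```

Note how the sensor evidence raises the probability of A from the prior 0.2 to roughly 0.43.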
                             We use the Bayes rule to compute the robot’s new belief state as a function of its sensory
                           inputs and its former belief state. But to do this properly, we must recall the basic goal of
                           the Markov localization approach: a discrete set L of possible robot positions is repre-
                           sented. The belief state of the robot must assign a probability p(r_t = l) for each location l
                           in L.
                             The See function described in equation (5.17) expresses a mapping from a belief state
                           and sensor input to a refined belief state. To do this, we must update the probability asso-
                           ciated with each position l in L, and we can do this by directly applying the Bayes formula
                           to every such l. In denoting this, we will stop representing the temporal index t for sim-
                           plicity and will further use p(l) to mean p(r_t = l):
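The update just described, applying the Bayes formula at every position l in L and normalizing by p(i), can be sketched as follows. The map, sensor model, and reading here are all hypothetical; the function name `see_update` is ours, not the book's:

```python
# Sketch of a measurement ("See") update over a discrete belief state:
# Bayes rule is applied at every position l in L, with p(i) as the
# common normalizer. All values below are hypothetical.

def see_update(belief, likelihood):
    """Refine a belief state p(l) using a sensor likelihood p(i|l).

    belief:     dict mapping each position l to its prior p(l)
    likelihood: dict mapping each position l to p(i|l)
    Returns the posterior p(l|i) = p(i|l) p(l) / p(i).
    """
    unnormalized = {l: likelihood[l] * p for l, p in belief.items()}
    p_i = sum(unnormalized.values())  # the normalizer p(i)
    return {l: u / p_i for l, u in unnormalized.items()}

# Uniform prior over four hypothetical positions.
belief = {l: 0.25 for l in ("l0", "l1", "l2", "l3")}
# Hypothetical sensor model: the reading is most likely at l2.
likelihood = {"l0": 0.1, "l1": 0.1, "l2": 0.7, "l3": 0.1}

posterior = see_update(belief, likelihood)
print(round(posterior["l2"], 2))  # 0.7
```

Because every position shares the same normalizer p(i), the posterior remains a valid probability distribution over L, which is exactly what the refined belief state requires.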