                procedure markov-pi
                  N_hits ← 0; {x, y} ← {1, 1}
                  for i = 1, . . . , N do
                    Δx ← ran(−δ, δ)
                    Δy ← ran(−δ, δ)
                    if (|x + Δx| < 1 and |y + Δy| < 1) then
                      x ← x + Δx
                      y ← y + Δy
                    if (x² + y² < 1) N_hits ← N_hits + 1
                output N_hits
                ——
       Algorithm 1.2 markov-pi. Markov-chain Monte Carlo algorithm for
       computing π in the adults’ game.
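
     A minimal Python sketch of Alg. 1.2, assuming only the standard
     library (random.uniform stands in for ran(−δ, δ)); the function name
     markov_pi and the trial values of N and δ are illustrative, not from
     the text:

     import random

     def markov_pi(N, delta):
         # Sketch of Alg. 1.2 (markov-pi): start at the clubhouse (1, 1),
         # then perform N Markov-chain moves of maximum step size delta.
         x, y = 1.0, 1.0
         n_hits = 0
         for _ in range(N):
             dx = random.uniform(-delta, delta)
             dy = random.uniform(-delta, delta)
             # Accept the move only if it stays inside the square [-1, 1]^2.
             if abs(x + dx) < 1.0 and abs(y + dy) < 1.0:
                 x, y = x + dx, y + dy
             # Tally the current configuration; after a rejected move the
             # old point is counted again ("we stay where we are").
             if x**2 + y**2 < 1.0:
                 n_hits += 1
         return n_hits

     N = 1000000
     print(4.0 * markov_pi(N, delta=0.3) / N)   # rough estimate of pi

     Counting the configuration even after a rejected move is essential:
     silently dropping rejections would bias the estimate.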
     rate is small and on average the path traveled per iteration is again
     small, because we almost always stay where we are. The time-honored
     rule of thumb consists in choosing δ neither too small nor too large,
     such that the acceptance rate turns out to be of the order of 1/2
     (half of the attempted moves are rejected). We can experimentally
     check this “one-half rule” by monitoring the precision and the
     acceptance rate of Alg. 1.2 (markov-pi) at a fixed, large value of N.
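
     This check can be scripted directly. In the sketch below,
     markov_pi_stats is a hypothetical variant of the previous sketch that
     also records the acceptance rate; the scanned δ values are
     illustrative:

     import math
     import random

     def markov_pi_stats(N, delta):
         # One run of Alg. 1.2, returning (estimate of pi, acceptance rate).
         x, y = 1.0, 1.0
         n_hits = n_accepted = 0
         for _ in range(N):
             dx = random.uniform(-delta, delta)
             dy = random.uniform(-delta, delta)
             if abs(x + dx) < 1.0 and abs(y + dy) < 1.0:
                 x, y = x + dx, y + dy
                 n_accepted += 1
             if x**2 + y**2 < 1.0:
                 n_hits += 1
         return 4.0 * n_hits / N, n_accepted / N

     N = 400000                     # fixed, large number of iterations
     for delta in [0.062, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0]:
         estimate, acceptance = markov_pi_stats(N, delta)
         print(f"delta = {delta:5.3f}   acceptance = {acceptance:.2f}   "
               f"|error| = {abs(estimate - math.pi):.4f}")

     For very small δ the acceptance rate approaches one but the chain
     barely moves; for very large δ almost every move is rejected; the
     smallest errors typically appear where the acceptance rate is of the
     order of 1/2.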
       Algorithm 1.2 (markov-pi) needs an initial condition. One might be
     tempted to use random initial conditions

                            x ← ran(−1, 1),
                            y ← ran(−1, 1)


     (obtain the initial configuration through direct sampling), but this
     is unrealistic because Markov-chain sampling comes into play precisely
     when direct sampling fails. For simplicity, we stick to two standard
     scenarios: we either start from a more or less arbitrary initial
     condition whose only merit is that it is legal (for the heliport game,
     this is the clubhouse, at (x, y) = (1, 1)), or start from where a
     previous simulation left off. In consequence, the Markov-chain
     programs in this book generally omit the outer loop, and concentrate
     on that piece which leads from the configuration at iteration i to the
     configuration at i + 1. The core heliport program then resembles
     Alg. 1.3 (markov-pi(patch)). We note that this is what defines a
     Markov chain: the probability of generating configuration i + 1
     depends only on the preceding configuration, i, and not on earlier
     configurations.
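
     Alg. 1.3 (markov-pi(patch)) is not reproduced on this page; the
     sketch below shows a plausible shape for such a patch in Python: a
     single-step function that maps the configuration at iteration i to
     the one at i + 1, leaving the outer loop and the initial condition to
     the caller. The name markov_pi_step is hypothetical:

     import random

     def markov_pi_step(x, y, delta):
         # One Markov-chain move: the next configuration depends only on
         # the current (x, y), never on earlier history -- this is the
         # Markov property.
         dx = random.uniform(-delta, delta)
         dy = random.uniform(-delta, delta)
         if abs(x + dx) < 1.0 and abs(y + dy) < 1.0:
             return x + dx, y + dy
         return x, y        # move rejected: we stay where we are

     # Start from a legal configuration (the clubhouse) or from wherever a
     # previous simulation left off, then iterate.
     x, y = 1.0, 1.0
     for _ in range(10):
         x, y = markov_pi_step(x, y, delta=0.3)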
       The Monte Carlo games epitomize the two basic approaches to
     sampling a probability distribution π(x) on a discrete or continuous
     space: direct sampling and Markov-chain sampling. Both approaches
     evaluate an observable (a function) O(x), which in our example is 1
     inside the