
Let us return to the problem of constructing a Markov chain with equilibrium probabilities $\{\overline{\pi}_j = \pi_j / S,\ j = 1, \dots, N\}$ when $\pi_1, \dots, \pi_N$ are given positive numbers with a finite sum $S$. To do so, choose any Markov matrix $M = (m_{ij})$, $i, j = 1, \dots, N$ with positive elements $m_{ij}$. Next construct a Markov chain $\{X_n\}$ with state space $I = \{1, \dots, N\}$ and one-step transition probabilities

$$p_{ij} = \begin{cases} m_{ij}\,\alpha_{ij}, & j \neq i, \\[4pt] \displaystyle m_{ii}\,\alpha_{ii} + \sum_{k=1}^{N} m_{ik}\,(1 - \alpha_{ik}), & j = i, \end{cases}$$
where the $\alpha_{ij}$ are appropriately chosen numbers between 0 and 1 with $\alpha_{ii} = 1$ for $i = 1, \dots, N$. The state transitions of the Markov chain $\{X_n\}$ are governed by the following rule: if the current state of the Markov chain $\{X_n\}$ is $i$, then a candidate state $k$ is generated according to the probability distribution $\{m_{ij},\ j = 1, \dots, N\}$. The next state of the Markov chain $\{X_n\}$ is chosen equal to the candidate state $k$ with probability $\alpha_{ik}$ and is chosen equal to the current state $i$ with probability $1 - \alpha_{ik}$. By an appropriate choice of the $\alpha_{ij}$, we have

$$\pi_j\, p_{jk} = \pi_k\, p_{kj}, \qquad j, k = 1, \dots, N, \tag{3.4.14}$$

implying that the Markov chain $\{X_n\}$ has the equilibrium distribution

$$\overline{\pi}_j = \pi_j \Big/ \sum_{k=1}^{N} \pi_k, \qquad j = 1, \dots, N. \tag{3.4.15}$$
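To make the implication explicit: summing the detailed balance equations (3.4.14) over $j$ and using $\sum_{j=1}^{N} p_{kj} = 1$ shows that the $\pi_k$, and hence the normalized $\overline{\pi}_k$, satisfy the equilibrium equations:

$$\sum_{j=1}^{N} \pi_j\, p_{jk} = \pi_k \sum_{j=1}^{N} p_{kj} = \pi_k, \qquad k = 1, \dots, N.$$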

It is left to the reader to verify that (3.4.14) holds for the choice

$$\alpha_{ij} = \min\!\left( \frac{\pi_j\, m_{ji}}{\pi_i\, m_{ij}},\ 1 \right), \qquad i, j = 1, \dots, N \tag{3.4.16}$$

(use that $\alpha_{ji} = 1$ if $\alpha_{ij} = \pi_j m_{ji} / \pi_i m_{ij}$). Note that the sum $S = \sum_{k=1}^{N} \pi_k$ is not needed to define the Markov chain $\{X_n\}$.
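For instance, take $j \neq i$ with $\pi_j m_{ji} \le \pi_i m_{ij}$ (the opposite case follows by symmetry, and the case $j = i$ is trivial). Then (3.4.16) gives $\alpha_{ij} = \pi_j m_{ji} / \pi_i m_{ij}$ and $\alpha_{ji} = 1$, so that

$$\pi_i\, p_{ij} = \pi_i\, m_{ij}\, \alpha_{ij} = \pi_j\, m_{ji} = \pi_j\, m_{ji}\, \alpha_{ji} = \pi_j\, p_{ji},$$

which is precisely (3.4.14).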
Summarizing, the following algorithm generates a sequence of successive states of a Markov chain $\{X_n\}$ whose equilibrium distribution is given by (3.4.15).

Metropolis-Hastings algorithm
Step 0. Choose a Markov matrix $M = (m_{ij})$, $i, j = 1, \dots, N$ with positive elements. Let $X_0 := i$ for some $1 \le i \le N$ and let $n := 0$.
Step 1. Generate a candidate state $Y$ from the probability distribution $P\{Y = j\} = m_{X_n, j}$ for $j = 1, \dots, N$. If $Y = k$, then set $X_{n+1}$ equal to $k$ with probability $\alpha_{X_n, k}$ and equal to $X_n$ with probability $1 - \alpha_{X_n, k}$, where the $\alpha_{ij}$ are given by (3.4.16).
Step 2. $n := n + 1$ and repeat step 1.
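As a concrete illustration, here is a minimal Python sketch of the algorithm above, assuming the unnormalized weights $\pi_1, \dots, \pi_N$ and a Markov matrix $M$ with positive entries are supplied as NumPy arrays; the function name `metropolis_hastings`, the state labels, and the example data are illustrative, not from the text.

```python
import numpy as np

def metropolis_hastings(pi, M, n_steps, x0=0, rng=None):
    """Simulate the Markov chain {X_n} of Steps 0-2.

    pi : unnormalized positive weights (pi_1, ..., pi_N); their sum S is never used.
    M  : N x N Markov matrix with positive elements m_ij (each row sums to 1).
    Returns the array of n_steps visited states (0-based labels).
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(pi)
    states = np.empty(n_steps, dtype=np.int64)
    x = x0
    for n in range(n_steps):
        # Step 1: draw a candidate state Y from row x of M.
        y = rng.choice(N, p=M[x])
        # Acceptance probability (3.4.16); note that alpha = 1 when y == x.
        alpha = min(pi[y] * M[y, x] / (pi[x] * M[x, y]), 1.0)
        # Move to the candidate with probability alpha, otherwise stay put.
        if rng.random() < alpha:
            x = y
        states[n] = x
    return states

# The empirical visit frequencies approximate the distribution (3.4.15).
pi = np.array([1.0, 2.0, 3.0, 4.0])
M = np.full((4, 4), 0.25)  # any Markov matrix with positive entries will do
visits = metropolis_hastings(pi, M, n_steps=200_000)
print(np.bincount(visits, minlength=4) / len(visits))  # close to [0.1, 0.2, 0.3, 0.4]
```

Note that, as remarked above, only the ratio $\pi_y / \pi_x$ enters the acceptance probability, so the sum $S$ is never computed.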