Page 124 - A First Course In Stochastic Models

116                   DISCRETE-TIME MARKOV CHAINS

                equation
                                        x^c − e^{−λ(1−x)} = 0
                on the interval (1, ∞) and let η = 1/x_0 . Then it can be verified from Theorem C.1
                in Appendix C that

                                        π j ∼ γ η^j  as j → ∞
                for some constant γ > 0. Thus the geometric approach enables us to compute the
                π j by solving a finite and relatively small system of linear equations.
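The decay factor η can be computed numerically. A minimal sketch, assuming the stability condition λ < c (so that the equation has a unique root x_0 > 1) and using illustrative parameter values λ = 0.8, c = 1 that are not taken from the text:

```python
import math

def decay_factor(lam, c, tol=1e-12):
    """Find the root x0 > 1 of x**c - exp(-lam*(1 - x)) = 0 by bisection
    and return eta = 1/x0, the geometric decay factor of the tail."""
    f = lambda x: x**c - math.exp(-lam * (1 - x))
    # x = 1 is always a trivial root; bracket the root beyond it.
    # f is positive just above 1 (since c > lam) and eventually negative
    # because the exponential dominates any fixed power of x.
    lo = 1.0 + 1e-9
    hi = 2.0
    while f(hi) > 0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    x0 = 0.5 * (lo + hi)
    return 1.0 / x0

# Illustrative parameters (an assumption, not from the text).
eta = decay_factor(0.8, 1)
```

Once η is known, the tail probabilities π_j for large j can be replaced by γ η^j, which is what keeps the linear system finite and small.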


                3.4.3 Metropolis-Hastings Algorithm
                In the context of stochastic networks, we will encounter in Chapter 5 Markov
                chains with a multidimensional state space whose equilibrium probabilities are
                known up to a multiplicative constant. However, the number of possible states is
                so enormous that a direct calculation of the normalization constant is not
                practically feasible. This raises the following question. Suppose that
                                                                    
                π 1 , . . . , π N are given positive numbers with a finite sum S = Σ_{i=1}^N π i . How do
                we construct a Markov chain whose equilibrium probabilities are given by π j /S
                for j = 1, . . . , N? For ease of presentation, we restrict ourselves to N < ∞. To
                answer the question, we need the concept of a reversible Markov chain. Let {X n }
                be a Markov chain with a finite state space I and one-step transition probabilities
                p ij . It is assumed that {X n } has no two disjoint closed sets. Then the Markov chain
                has a unique equilibrium distribution {π j }. Assume now that a non-null vector (g j ),
                j ∈ I exists such that

                                       g j p jk = g k p kj ,  j, k ∈ I.     (3.4.11)
                Then, for some constant c ≠ 0,

                                             g j = cπ j .                   (3.4.12)
                The proof is simple. Fix j ∈ I and sum both sides of (3.4.11) over k. This gives


                                        g j = Σ_{k∈I} g k p kj ,  j ∈ I.
                These equations are exactly the equilibrium equations of the Markov chain {X n }.
                Hence, by Theorem 3.3.2, we have that (3.4.12) holds. By (3.4.11) and (3.4.12),

                                      π j p jk = π k p kj ,  j, k ∈ I.      (3.4.13)
                A Markov chain {X n } having this property is called a reversible Markov chain. The
                property (3.4.13) states that the long-run average number of transitions from state
                j to state k per time unit is equal to the long-run average number of transitions
                from state k to state j per time unit for all j, k ∈ I.
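The question posed above — how to simulate from the distribution π_j /S without ever computing S — is answered by building a reversible chain whose transition probabilities satisfy (3.4.13). A minimal sketch in Python of the Metropolis special case (symmetric proposal), with illustrative unnormalized weights that are assumptions, not from the text:

```python
import random

def metropolis_step(j, weights, rng):
    """One Metropolis step for a chain on {0, ..., N-1} with unnormalized
    equilibrium weights pi_j; the normalization constant S is never used."""
    n = len(weights)
    k = rng.randrange(n)  # symmetric proposal: uniform candidate state
    # Accept with probability min(1, pi_k / pi_j).  This acceptance rule
    # makes the chain satisfy the detailed-balance (reversibility)
    # condition pi_j * p_jk = pi_k * p_kj of (3.4.13).
    if rng.random() < min(1.0, weights[k] / weights[j]):
        return k
    return j

# Illustrative unnormalized weights (hypothetical example data).
weights = [1.0, 2.0, 3.0, 4.0]
rng = random.Random(42)
counts = [0] * len(weights)
state = 0
steps = 200_000
for _ in range(steps):
    state = metropolis_step(state, weights, rng)
    counts[state] += 1
freqs = [c / steps for c in counts]  # should approach pi_j / S
```

By (3.4.12), a chain satisfying detailed balance with respect to (π_j) has equilibrium distribution π_j /S, so the long-run fraction of time spent in state j converges to π_j /S even though S itself is never evaluated.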