Page 106 - A First Course In Stochastic Models

98                   DISCRETE-TIME MARKOV CHAINS

                depends on the initial state i. The reason is that in this Markov chain example there
                are two disjoint closed sets of states.

                Definition 3.3.1 A non-empty set C of states is said to be closed if

                                     p_{ij} = 0   for i ∈ C and j ∉ C,

                that is, the process cannot leave the set C once the process is in the set C.
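                As an illustration (not from the text), the defining condition can be checked
                mechanically from a transition matrix. The matrix below is a made-up example
                with two disjoint closed sets {0, 1} and {2, 3}:

```python
def is_closed(P, C):
    """A set C of states is closed iff p_{ij} = 0 for every i in C
    and every j outside C (Definition 3.3.1)."""
    states = range(len(P))
    return all(P[i][j] == 0 for i in C for j in states if j not in C)

# Hypothetical chain with two disjoint closed sets: {0, 1} and {2, 3}.
P = [[0.5, 0.5, 0.0, 0.0],
     [0.2, 0.8, 0.0, 0.0],
     [0.0, 0.0, 0.6, 0.4],
     [0.0, 0.0, 0.1, 0.9]]

print(is_closed(P, {0, 1}))  # True: the process cannot leave {0, 1}
print(is_closed(P, {0, 2}))  # False: state 0 leads to state 1 outside the set
```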

                  For a finite-state Markov chain having no two disjoint closed sets it is proved
                in Theorem 3.5.7 that f_{ij} = 1 for all i ∈ I when j is a recurrent state. For such
                a Markov chain it then follows from (3.3.2) that lim_{n→∞} (1/n) Σ_{k=1}^{n} p_{ij}^{(k)} does
                not depend on the initial state i when j is recurrent. This statement is also true
                for a transient state j, since then the limit is always equal to 0 for all i ∈ I by
                Lemma 3.2.3. For the case of an infinite-state Markov chain, however, the situation
                is more complex. That is why we make the following assumption.

                Assumption 3.3.1 The Markov chain {X_n} has some state r such that f_{ir} = 1 for
                all i ∈ I and µ_{rr} < ∞.

                  In other words, the Markov chain has a regeneration state r that is ultimately
                reached from each initial state with probability 1 and for which the number of
                steps needed to return from state r to itself has a finite expectation. The
                assumption is satisfied in most practical applications. For a finite-state
                Markov chain, Assumption 3.3.1 is automatically satisfied when the Markov chain
                has no two disjoint closed sets; see Theorem 3.5.7. The state r from
                Assumption 3.3.1 is a positive recurrent state. Assumption 3.3.1 implies that
                the set of recurrent states is not empty and that there is a single closed set
                of recurrent states. Moreover, by Lemma 3.5.8 we have for any recurrent state j
                that f_{ij} = 1 for all i ∈ I and µ_{jj} < ∞. Summarizing, under
                Assumption 3.3.1 we have, both for a finite-state and an infinite-state Markov
                chain, that lim_{n→∞} (1/n) Σ_{k=1}^{n} p_{ij}^{(k)} does not depend on the
                initial state i for any j ∈ I. In the next subsection it will be seen that
                these Cesàro limits give the equilibrium distribution of the Markov chain.
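                To make the Cesàro limit concrete, here is a small numerical sketch (the
                two-state transition matrix is a made-up example, not one from the text). It
                averages the k-step transition probabilities and shows that the rows of the
                averaged matrix agree, so the limit does not depend on the initial state i:

```python
import numpy as np

# Hypothetical two-state chain; it has a single closed set of
# recurrent states, so Assumption 3.3.1 holds.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def cesaro_average(P, n):
    """Compute (1/n) * sum_{k=1}^{n} P^k; entry (i, j) is the
    averaged k-step transition probability from state i to state j."""
    Pk = np.eye(len(P))
    total = np.zeros_like(P)
    for _ in range(n):
        Pk = Pk @ P          # Pk now holds the k-step matrix P^k
        total += Pk
    return total / n

A = cesaro_average(P, 5000)
# Both rows are (numerically) the same, illustrating that the Cesaro
# limit is independent of the initial state.
print(A)
```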

                3.3.2 The Equilibrium Equations
                We first give an important definition for a Markov chain {X_n} with state space I
                and one-step transition probabilities p_{ij}, i, j ∈ I.

                Definition 3.3.2 A probability distribution {π_j, j ∈ I} is said to be an equilibrium
                distribution for the Markov chain {X_n} if


                                       π_j = Σ_{k∈I} π_k p_{kj},   j ∈ I.            (3.3.5)
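                For a finite state space, the equilibrium equations together with the
                normalization Σ_{j} π_j = 1 form a linear system. A minimal sketch (using the
                same made-up two-state matrix as above, assuming numpy is available) that
                solves π = πP directly:

```python
import numpy as np

# Hypothetical transition matrix; any finite chain with no two
# disjoint closed sets works the same way.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

n = len(P)
# Rewrite pi = pi P as pi (P - I) = 0, i.e. (P - I)^T pi^T = 0,
# and replace one redundant equation by the normalization sum_j pi_j = 1.
A = (P - np.eye(n)).T
A[-1, :] = 1.0                  # normalization row
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                       # equilibrium distribution, approximately [0.8, 0.2]
```

                The replaced equation is redundant because the columns of (P − I)^T sum to
                zero, so dropping one equation loses no information.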