

• State $i_r$ is not absorbing. Then there exists a state $j$ with $b_{i_r j} = 1$. Go to step 3
  and continue the chain $i_0, \ldots, i_r$ for the new Boolean matrix.
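For concreteness, the following is a minimal sketch of just this step (the helper names is_absorbing and extend_chain are illustrative, not from the text; steps 1 to 3 and the construction of the new Boolean matrix are described on the preceding pages and are not reproduced here). It assumes the convention that state $i$ is absorbing in the Boolean matrix $b$ when $b_{ii} = 1$ is the only 1 in row $i$.

```python
def is_absorbing(b, i):
    # State i is absorbing in the Boolean matrix b when b[i][i] = 1 is
    # the only 1 in row i: the only possible transition is i -> i.
    return b[i][i] == 1 and all(b[i][j] == 0
                                for j in range(len(b)) if j != i)

def extend_chain(b, chain):
    # The last state i_r of the chain is not absorbing, so some state
    # j != i_r has b[i_r][j] = 1; append such a j so the chain advances.
    i_r = chain[-1]
    j = next(j for j in range(len(b)) if j != i_r and b[i_r][j] == 1)
    chain.append(j)
    return chain
```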


                3.5.2 Ergodic Theorems
                The theoretical analysis of Markov chains is much more subtle for the case of
                infinitely many states than for the case of finitely many states. A finite-state Markov
                chain is always a regenerative process with a finite mean cycle length. This is not
true for infinite-state Markov chains. Recall the example with $I = \{1, 2, \ldots\}$
and $p_{i,i+1} = 1$ for all $i \in I$ and recall the example of the symmetric random
walk with $I = \{0, \pm 1, \pm 2, \ldots\}$ and $p_{i,i+1} = p_{i,i-1} = \frac{1}{2}$ for all $i$. In the first
example the Markov chain is not regenerative, while in the other example the
Markov chain is regenerative but has an infinite mean cycle length; the simulation sketched below illustrates the latter numerically.
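Here is a minimal sketch (plain Python; the function name cycle_length and the truncation budget max_steps are illustrative choices, not from the text). It samples return times of the symmetric random walk to state 0: every completed cycle is finite, since the walk is recurrent, but the sample mean is dominated by rare, very long cycles, which is the numerical signature of an infinite mean cycle length.

```python
import random

def cycle_length(max_steps=10**6):
    # One cycle of the symmetric random walk: the number of steps until
    # the walk, started in state 0, first returns to state 0. Truncated
    # at max_steps because the mean cycle length is infinite.
    pos, steps = 0, 0
    while steps < max_steps:
        pos += random.choice((-1, 1))   # p(i, i+1) = p(i, i-1) = 1/2
        steps += 1
        if pos == 0:
            return steps
    return None  # cycle not completed within the step budget

random.seed(1)
samples = [cycle_length() for _ in range(200)]
completed = [s for s in samples if s is not None]
# Each completed cycle is finite, yet the sample mean grows with the
# number of cycles sampled and is driven by a few very long cycles.
print(len(completed), sum(completed) / len(completed), max(completed))
```

In practical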
                applications these pathological situations occur very rarely. Typically there is a
                positive recurrent state that will ultimately be reached from any other state with
                probability one. We therefore restrict our theoretical analysis to Markov chains
                which satisfy Assumption 3.3.1. Let R denote the set of recurrent states of the
Markov chain $\{X_n\}$. We first prove the following lemma.

Lemma 3.5.8 Suppose that the Markov chain $\{X_n\}$ satisfies Assumption 3.3.1.
Then the set $R$ is not empty and is an irreducible set consisting of positive recurrent
states. For any $j \in R$, it holds that $f_{ij} = 1$ for all $i \in I$ and $\mu_{jj} < \infty$.

Proof  The regeneration state $r$ from Assumption 3.3.1 is recurrent, and so $R$ is
not empty. Since $f_{ir} = 1$ for all $i \in I$, the Markov chain $\{X_n\}$ has no two disjoint
closed sets. Hence, by Theorem 3.5.4, the set $R$ is an irreducible set of recurrent
states. Since $\mu_{rr} < \infty$, it follows from part (b) of Theorem 3.5.3 that $\mu_{jj} < \infty$ for
all $j \in R$; in other words, each state $j \in R$ is positive recurrent. Also, by part (b)
of Theorem 3.5.3, $f_{rj} = 1$ for all $j \in R$. Together with the assumption that $f_{ir} = 1$ for
all $i$, this implies $f_{ij} = 1$ for all $i$ when $j \in R$: starting from any state $i$, the chain
reaches $r$ with probability one and from $r$ it reaches $j$ with probability one. This
ends the proof.

Define now the probabilities $\pi_j$ by

$$\pi_j = \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} p_{jj}^{(k)}, \qquad j \in I. \tag{3.5.3}$$
                In Theorem 3.3.1 it was shown that these limits exist. Under Assumption 3.3.1,
                we have
$$\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} p_{ij}^{(k)} = \pi_j, \qquad i, j \in I \tag{3.5.4}$$
                and
$$\pi_j = \frac{1}{\mu_{jj}} > 0, \qquad j \in R \tag{3.5.5}$$
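For a concrete finite-state illustration of (3.5.3)-(3.5.5), here is a minimal numerical sketch (Python with NumPy; the 3-state transition matrix P is a hypothetical example, not taken from the text). It approximates the Cesàro averages $\frac{1}{n}\sum_{k=1}^{n} p_{ij}^{(k)}$, checks that the rows agree so the limit is independent of the initial state $i$, and compares $\pi_j$ with $1/\mu_{jj}$, computing the stationary probabilities from the standard equilibrium equations $\pi = \pi P$, $\sum_j \pi_j = 1$ for a finite irreducible chain.

```python
import numpy as np

# A hypothetical irreducible, aperiodic 3-state transition matrix
# (illustrative only; not an example from the text).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# Cesaro averages (1/n) * sum_{k=1}^{n} P^k, cf. (3.5.3) and (3.5.4).
n = 2000
Pk = np.eye(3)
avg = np.zeros((3, 3))
for _ in range(n):
    Pk = Pk @ P
    avg += Pk
avg /= n
print(avg)          # all rows agree: the limit is independent of i

# Stationary probabilities pi from pi = pi P, sum_j pi_j = 1, solved
# as a least-squares system; then mu_jj = 1 / pi_j by (3.5.5).
A = np.vstack([P.T - np.eye(3), np.ones(3)])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
print(pi)           # matches each row of the Cesaro average
print(1.0 / pi)     # the mean recurrence times mu_jj
```

For this matrix all three rows of the averaged matrix coincide with the printed vector $\pi$, in agreement with (3.5.4) and (3.5.5).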