
In other words, $f_{ij}^{(n)}$ is the probability that the first transition of the process into state $j$ occurs at time $t = n$ when the process starts in state $i$. Next define the probabilities $f_{ij}$ by
\[
f_{ij} = \sum_{n=1}^{\infty} f_{ij}^{(n)}. \tag{3.2.9}
\]
Then $f_{ij} = P\{X_n = j \text{ for some } n \geq 1 \mid X_0 = i\}$ denotes the probability that the process ever makes a transition into state $j$ when the process starts in state $i$.
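As an aside (not part of the text), the probabilities $f_{ij}^{(n)}$ satisfy the standard first-passage recursion $f_{ij}^{(1)} = p_{ij}$ and $f_{ij}^{(n)} = \sum_{k \neq j} p_{ik} f_{kj}^{(n-1)}$ for $n \geq 2$, obtained by conditioning on the first transition while excluding an immediate jump to $j$. The following Python sketch uses this recursion to approximate $f_{ij}$ by truncating the sum in (3.2.9); the transition matrix and the truncation level are illustrative choices only.

```python
import numpy as np

def first_passage_probs(P, j, n_max):
    """f[n, k] = f_kj^(n): probability that, starting from state k,
    the first visit to state j occurs at epoch n."""
    m = P.shape[0]
    f = np.zeros((n_max + 1, m))
    f[1] = P[:, j]                       # f_kj^(1) = p_kj
    mask = np.arange(m) != j             # exclude paths passing through j
    for n in range(2, n_max + 1):
        f[n] = P[:, mask] @ f[n - 1, mask]
    return f

# Illustrative three-state chain (not taken from the text):
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.0, 0.7],
              [0.0, 0.4, 0.6]])
f = first_passage_probs(P, j=2, n_max=200)
print("f_02 ~", f[1:, 0].sum())   # truncated version of (3.2.9); close to 1,
                                  # since this finite chain is irreducible
```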

Definition 3.2.2 A state $i$ is said to be transient if $f_{ii} < 1$ and is said to be recurrent if $f_{ii} = 1$.

Denoting for each state $i \in I$ the probability $Q_{ii}$ by
\[
Q_{ii} = P\{X_n = i \text{ for infinitely many values of } n \mid X_0 = i\},
\]
it is not difficult to verify that $Q_{ii} = 0$ if $i$ is transient and $Q_{ii} = 1$ if $i$ is recurrent.
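This dichotomy can be illustrated by simulation: over a long horizon, the visit count of a recurrent state keeps growing, while a transient state is visited only finitely often. A minimal Python sketch, using a made-up chain in which states 0 and 1 are transient and state 2 is absorbing (hence recurrent):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative chain: states 0 and 1 leak into the absorbing state 2.
P = np.array([[0.4, 0.4, 0.2],
              [0.5, 0.3, 0.2],
              [0.0, 0.0, 1.0]])

def count_visits(P, start, horizon):
    """Count visits to each state over the epochs t = 1, ..., horizon."""
    counts = np.zeros(P.shape[0], dtype=int)
    state = start
    for _ in range(horizon):
        state = rng.choice(P.shape[0], p=P[state])
        counts[state] += 1
    return counts

# Visits to the transient states 0 and 1 stop after a few epochs;
# nearly all of the horizon is spent in the recurrent state 2.
print(count_visits(P, start=0, horizon=10_000))
```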
A useful characterization of a transient state is given by the result that a state $i$ is transient if and only if
\[
\sum_{n=1}^{\infty} p_{ii}^{(n)} < \infty. \tag{3.2.10}
\]
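For a finite-state chain this criterion can be checked directly, since $p_{ii}^{(n)}$ is the $i$th diagonal entry of the matrix power $P^n$. A sketch (with the same illustrative matrix as above) that accumulates the partial sums of (3.2.10):

```python
import numpy as np

# States 0 and 1 are transient; the absorbing state 2 is recurrent.
P = np.array([[0.4, 0.4, 0.2],
              [0.5, 0.3, 0.2],
              [0.0, 0.0, 1.0]])

Pn = np.eye(P.shape[0])
partial = np.zeros(P.shape[0])
for n in range(1, 5001):
    Pn = Pn @ P               # Pn now equals P^n
    partial += np.diag(Pn)    # add p_ii^(n) for every state i

# The partial sums for states 0 and 1 settle to finite limits, while the
# entry for state 2 grows without bound (p_22^(n) = 1 for all n here).
print(partial)
```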
To see this, fix $i \in I$ and define the indicator variable $I_n$ by $I_n = 1$ if $X_n = i$ and $I_n = 0$ otherwise. Then $\sum_{n=1}^{\infty} I_n$ represents the number of visits of the Markov chain to state $i$ over the epochs $t = 1, 2, \ldots\,$. Since $E(I_n \mid X_0 = i) = P\{X_n = i \mid X_0 = i\} = p_{ii}^{(n)}$, it follows that
\[
E\left(\sum_{n=1}^{\infty} I_n \;\Big|\; X_0 = i\right) = \sum_{n=1}^{\infty} E(I_n \mid X_0 = i) = \sum_{n=1}^{\infty} p_{ii}^{(n)}, \tag{3.2.11}
\]
where the interchange of expectation and summation is justified by the non-negativity of the $I_n$. On the other hand, letting $N = \sum_{n=1}^{\infty} I_n$, the distribution of the number of visits to state $i$ satisfies $P\{N \geq k \mid X_0 = i\} = (f_{ii})^k$ for $k \geq 0$ (by the Markov property, each return to state $i$ occurs with probability $f_{ii}$, independently of the earlier returns) and so, by the well-known relation $E(N) = \sum_{j=0}^{\infty} P\{N > j\}$, we find

\[
E\left(\sum_{n=1}^{\infty} I_n \;\Big|\; X_0 = i\right) = \sum_{k=1}^{\infty} (f_{ii})^k.
\]
                       
Hence $E(\sum_{n=1}^{\infty} I_n \mid X_0 = i) = \infty$ when $f_{ii} = 1$ and equals $f_{ii}/(1 - f_{ii}) < \infty$ otherwise. This result and (3.2.11) prove that state $i$ is transient if and only if (3.2.10) holds.
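The two expressions for the expected number of returns can be checked against each other numerically: the sum $\sum_n p_{ii}^{(n)}$ accumulated from matrix powers should match $f_{ii}/(1 - f_{ii})$, with $f_{ii}$ computed from the first-passage recursion. A sketch, again under the illustrative matrix used above:

```python
import numpy as np

P = np.array([[0.4, 0.4, 0.2],
              [0.5, 0.3, 0.2],
              [0.0, 0.0, 1.0]])
i, n_max = 0, 5000

# Sum of p_ii^(n) over n = 1, ..., n_max via matrix powers (cf. (3.2.11)).
Pn, expected_visits = np.eye(P.shape[0]), 0.0
for _ in range(n_max):
    Pn = Pn @ P
    expected_visits += Pn[i, i]

# f_ii via the first-passage recursion, truncated as in (3.2.9).
mask = np.arange(P.shape[0]) != i
f_n = P[:, i].copy()          # vector of f_ki^(1) over starting states k
f_ii = f_n[i]
for _ in range(n_max - 1):
    f_n = P[:, mask] @ f_n[mask]
    f_ii += f_n[i]

# Both numbers approximate the expected number of returns to state i.
print(expected_visits, f_ii / (1.0 - f_ii))
```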