(a)  The total capital of the two players at all times is

         k + m = N

     Let Z_n (n ≥ 1) be independent r.v.'s with P(Z_n = 1) = p and P(Z_n = -1) = q = 1 - p for all n. Then

         X_n = X_{n-1} + Z_n        n = 1, 2, ...

     and X_0 = k. The game ends when X_n = 0 or X_n = N. Thus, by Probs. 5.2 and 5.24, X(n) = {X_n, n ≥ 0}
     is a Markov chain with state space E = {0, 1, 2, ..., N}, where states 0 and N are absorbing states. The
     Markov chain X(n) is also known as a simple random walk with absorbing barriers.
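
     As an informal illustration (not part of the original solution), the following Python sketch simulates the
     recursion X_n = X_{n-1} + Z_n until the walk hits one of the absorbing barriers 0 or N. The function name
     gamblers_ruin and the max_steps cut-off are hypothetical choices made for this example.

         import random

         def gamblers_ruin(k, N, p, max_steps=10_000):
             """Simulate X_n = X_{n-1} + Z_n from X_0 = k until an absorbing
             barrier (0 or N) is reached; return the final state and step count."""
             x = k
             for n in range(1, max_steps + 1):
                 z = 1 if random.random() < p else -1   # Z_n = +1 w.p. p, -1 w.p. q
                 x += z
                 if x == 0 or x == N:
                     return x, n                        # absorbed at step n
             return x, max_steps                        # safety cut-off only

         # Example: fair game (p = q = 1/2), total capital N = 4, initial capital k = 2
         print(gamblers_ruin(k=2, N=4, p=0.5))
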
(b)  Since

         p_{i, i+1} = P(X_{n+1} = i + 1 | X_n = i) = p        i = 1, 2, ..., N - 1
         p_{i, i-1} = P(X_{n+1} = i - 1 | X_n = i) = q        i = 1, 2, ..., N - 1
         p_{00} = P(X_{n+1} = 0 | X_n = 0) = 1
         p_{NN} = P(X_{n+1} = N | X_n = N) = 1

     the transition probability matrix P is

             [ 1  0  0  0  ...  0  0  0 ]
             [ q  0  p  0  ...  0  0  0 ]
         P = [ 0  q  0  p  ...  0  0  0 ]
             [ :  :  :  :       :  :  : ]
             [ 0  0  0  0  ...  q  0  p ]
             [ 0  0  0  0  ...  0  0  1 ]
     For example, when p = q = 1/2 and N = 4,

             [  1    0    0    0    0  ]
             [ 1/2   0   1/2   0    0  ]
         P = [  0   1/2   0   1/2   0  ]
             [  0    0   1/2   0   1/2 ]
             [  0    0    0    0    1  ]

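     The matrix above can also be generated programmatically. The sketch below is a hypothetical helper
     (assuming NumPy is available) that builds the (N + 1) x (N + 1) transition matrix for arbitrary p and N;
     printing it with N = 4 and p = 0.5 reproduces the numerical matrix above.

         import numpy as np

         def transition_matrix(N, p):
             """Transition matrix of the simple random walk on {0, 1, ..., N}
             with absorbing barriers at 0 and N."""
             q = 1 - p
             P = np.zeros((N + 1, N + 1))
             P[0, 0] = 1.0                  # state 0 is absorbing
             P[N, N] = 1.0                  # state N is absorbing
             for i in range(1, N):
                 P[i, i + 1] = p            # gain one unit with probability p
                 P[i, i - 1] = q            # lose one unit with probability q
             return P

         print(transition_matrix(N=4, p=0.5))
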
5.39.  Consider a homogeneous Markov chain X(n) = {X_n, n ≥ 0} with a finite state space E = {0, 1,
       ..., N}, of which A = {0, 1, ..., m}, m ≥ 1, is a set of absorbing states and B = {m + 1, ..., N} is
       a set of nonabsorbing states. It is assumed that at least one of the absorbing states in A is
       accessible from any nonabsorbing state in B. Show that absorption of X(n) in one or another of
       the absorbing states is certain.
           If X_0 ∈ A, then there is nothing to prove, since X(n) is already absorbed. Let X_0 ∈ B. By assumption,
       there is at least one state in A which is accessible from any state in B. Now assume that state k ∈ A is
       accessible from state j ∈ B. Let n_{jk} (< ∞) be the smallest number n such that p_{jk}(n) > 0. For a given
       state j, let n_j be the largest of the n_{jk} as k varies, and let n' be the largest of the n_j as j varies. After n'
       steps, no matter what the initial state of X(n), there is a probability p > 0 that X(n) is in an absorbing
       state. Therefore


           P(X_{n'} ∈ B) ≤ 1 - p

       and 0 < 1 - p < 1. It follows by homogeneity and the Markov property that

           P(X_{kn'} ∈ B) ≤ (1 - p)^k        k = 1, 2, ...

       Since (1 - p)^k → 0 as k → ∞, absorption of X(n) in one of the absorbing states is certain.
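
       As a hypothetical numerical check (not from the text), the snippet below raises the N = 4, p = q = 1/2
       transition matrix of the preceding problem to increasing powers and prints the probability that the chain,
       started at X_0 = 2, is still in the nonabsorbing set B = {1, 2, 3}. The values decay geometrically toward 0,
       in line with the bound (1 - p)^k above.

           import numpy as np

           # Transition matrix of the N = 4, p = q = 1/2 walk with absorbing barriers 0 and 4.
           P = np.array([
               [1.0, 0.0, 0.0, 0.0, 0.0],
               [0.5, 0.0, 0.5, 0.0, 0.0],
               [0.0, 0.5, 0.0, 0.5, 0.0],
               [0.0, 0.0, 0.5, 0.0, 0.5],
               [0.0, 0.0, 0.0, 0.0, 1.0],
           ])

           dist0 = np.zeros(5)
           dist0[2] = 1.0                        # start in the nonabsorbing state X_0 = 2

           for n in (1, 5, 10, 20, 50):
               dist_n = dist0 @ np.linalg.matrix_power(P, n)
               prob_B = dist_n[1:4].sum()        # probability of still being in B = {1, 2, 3}
               print(f"n = {n:2d}:  P(X_n in B) = {prob_B:.6f}")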