Page 176 - Schaum's Outlines - Probability, Random Variables And Random Processes

CHAP. 5]                         RANDOM  PROCESSES



               Let U = [u_kj], where

                                        u_kj = P{X_n = j (∈ A) | X_0 = k (∈ B)}

           It is seen that U is an (N − m) × m matrix and its elements are the absorption probabilities for the
           various absorbing states. Then it can be shown that (Prob. 5.40)

                                             U = (I − Q)⁻¹R = ΦR                         (5.50)
           The matrix Φ = (I − Q)⁻¹ is known as the fundamental matrix of the Markov chain X(n). Let T_k
           denote the total time units (or steps) to absorption from state k. Let

                                             ε_k = E(T_k)

           Then it can be shown that (Prob. 5.74)

                                             ε_k = Σ_{i=m+1}^{N} φ_ki

           where φ_ki is the (k, i)th element of the fundamental matrix Φ.
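The fundamental-matrix computation above can be sketched numerically. The following is a minimal NumPy illustration on a hypothetical four-state chain (states 0 and 1 absorbing, states 2 and 3 transient); the particular R and Q entries are invented for the example, not taken from the text.

```python
import numpy as np

# Hypothetical chain in canonical form P = [[I, 0], [R, Q]]:
# rows of R and Q describe the two transient states.
R = np.array([[0.3, 0.1],     # transient state 2 -> absorbing states 0, 1
              [0.0, 0.4]])    # transient state 3 -> absorbing states 0, 1
Q = np.array([[0.2, 0.4],     # transient state 2 -> transient states 2, 3
              [0.5, 0.1]])    # transient state 3 -> transient states 2, 3

Phi = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix (I - Q)^(-1)
U = Phi @ R                          # absorption probabilities, Eq. (5.50)
eps = Phi.sum(axis=1)                # expected steps to absorption (row sums)

print(U)      # each row sums to 1: absorption is certain
print(eps)
```

Each row of U sums to unity, confirming that absorption eventually occurs from every transient state, and the row sums of Φ give the expected absorption times ε_k.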

         F.  Stationary Distributions:
               Let P be the transition probability matrix of a homogeneous Markov chain {X_n, n ≥ 0}. If there
           exists a probability vector p̂ such that

                                                   p̂P = p̂                                 (5.52)

           then p̂ is called a stationary distribution for the Markov chain. Equation (5.52) indicates that a sta-
           tionary distribution p̂ is a (left) eigenvector of P with eigenvalue 1. Note that any nonzero multiple of p̂
           is also an eigenvector of P. But the stationary distribution p̂ is fixed by being a probability vector;
           that is, its components sum to unity.
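Since p̂ is a left eigenvector of P with eigenvalue 1, it can be found from the eigendecomposition of Pᵀ. A minimal sketch, using a hypothetical three-state transition matrix chosen for illustration:

```python
import numpy as np

# Hypothetical transition matrix (each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# p satisfies pP = p, i.e. p is a left eigenvector of P with
# eigenvalue 1 -- equivalently a right eigenvector of P transposed.
w, v = np.linalg.eig(P.T)
idx = np.argmin(np.abs(w - 1.0))     # locate the eigenvalue 1
p = np.real(v[:, idx])
p = p / p.sum()                      # normalize so components sum to unity

print(p)
```

Dividing by the component sum fixes the otherwise arbitrary scale of the eigenvector, exactly as the text notes.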

         G.  Limiting Distributions:

                A Markov chain is called regular if there is a finite positive integer m such that after m time-steps,
            every state has a nonzero chance of being occupied, no matter what the initial state. Let A > 0
            denote that every element a_ij of A satisfies the condition a_ij > 0. Then, for a regular Markov chain
            with transition probability matrix P, there exists an m > 0 such that P^m > 0. For a regular homoge-
            neous Markov chain we have the following theorem:

          THEOREM 5.5.1
              Let {X_n, n ≥ 0} be a regular homogeneous finite-state Markov chain with transition matrix P.
              Then

                                                  lim_{n→∞} P^n = P̄

              where P̄ is a matrix whose rows are identical and equal to the stationary distribution p̂ for the
              Markov chain defined by Eq. (5.52).
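Theorem 5.5.1 is easy to verify numerically: raising a regular transition matrix to a large power yields a matrix whose rows all agree. A sketch on a hypothetical regular chain (all entries of P already positive, so m = 1):

```python
import numpy as np

# Hypothetical regular transition matrix (P > 0, rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

Pn = np.linalg.matrix_power(P, 50)   # P^n for large n

print(Pn)
# By Theorem 5.5.1 the rows of P^n converge to a common row, and that
# common row is the stationary distribution (it satisfies pP = p).
```

In practice n = 50 is far more than enough here; the rows agree to machine precision, and the common row is the stationary distribution p̂ of the chain.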


         5.6  POISSON  PROCESSES
         A.  Definitions:
                Let t represent a time variable. Suppose an experiment begins at t = 0. Events of a particular
            kind occur randomly, the first at T_1, the second at T_2, and so on. The r.v. T_i denotes the time at which
            the ith event occurs, and the values t_i of T_i (i = 1, 2, ...) are called points of occurrence (Fig. 5-1).
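A set of points of occurrence can be simulated. Anticipating the standard construction of a Poisson process (the interarrival times are i.i.d. exponential, a fact developed later in this section), a minimal sketch with a hypothetical rate λ = 2 events per unit time:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                        # hypothetical rate (events per unit time)
n_events = 10

# The ith point of occurrence t_i is the cumulative sum of the first i
# interarrival times, which are i.i.d. exponential with mean 1/lam
# (a standard fact about the Poisson process).
interarrivals = rng.exponential(1.0 / lam, size=n_events)
t = np.cumsum(interarrivals)     # t[i-1] is the ith point of occurrence

print(t)
```

The cumulative sum guarantees 0 < t_1 < t_2 < ..., matching the picture of points of occurrence in Fig. 5-1.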