Elements of Distribution Theory

6.4 Markov Processes

where q = (q_1, ..., q_J) denotes the vector of state probabilities for X_r. Note that (6.5) is of the same form as (6.4), except with the vector p replaced by q; from part (ii) of the theorem, q = p P^r, proving the result.

Example 6.11 (Two-state chain). Consider the two-state Markov chain of Example 6.9. The vector of state probabilities for X_1 is given by

$$
(\theta,\ 1-\theta)
\begin{pmatrix} \alpha & 1-\alpha \\ 1-\beta & \beta \end{pmatrix}
= \bigl(\theta\alpha + (1-\theta)(1-\beta),\ \theta(1-\alpha) + (1-\theta)\beta\bigr). \tag{6.6}
$$
Hence,

$$
\Pr(X_1 = 1) = 1 - \Pr(X_1 = 2) = \theta\alpha + (1-\theta)(1-\beta).
$$
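As a quick numerical check of (6.6), the sketch below forms q = p P for the two-state chain; the values of θ, α, and β are illustrative assumptions, not values from the text.

```python
# Illustrative parameter values (assumed, not from the text).
theta, alpha, beta = 0.3, 0.7, 0.4

# Transition matrix of the two-state chain; each row sums to 1.
P = [[alpha, 1 - alpha],
     [1 - beta, beta]]

# Initial state probabilities (theta, 1 - theta).
p = [theta, 1 - theta]

# State probabilities for X_1: q = p P, as in (6.6).
q = [sum(p[k] * P[k][j] for k in range(2)) for j in range(2)]

print(q[0])                                       # Pr(X_1 = 1)
print(theta * alpha + (1 - theta) * (1 - beta))   # same quantity via the formula
```

The two printed values agree, confirming that the first component of q = p P reduces to θα + (1 − θ)(1 − β).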
The position of the chain at time 2 follows the same model, except that the vector of initial probabilities (θ, 1 − θ) is replaced by (6.6). Hence,

$$
\begin{aligned}
\Pr(X_{n+1} = 1) &= \alpha \Pr(X_n = 1) + (1-\beta)\Pr(X_n = 2) \\
&= (\alpha + \beta - 1)\Pr(X_n = 1) + (1-\beta).
\end{aligned}
$$

Thus, writing r_n = Pr(X_n = 1), c = α + β − 1, and d = 1 − β, we have the recursive relationship

$$
r_{n+1} = c r_n + d, \quad n = 0, 1, 2, \dots
$$

with r_0 = θ. It follows that
$$
r_{n+1} = c(c r_{n-1} + d) + d = c^2 r_{n-1} + cd + d = c^2[c r_{n-2} + d] + cd + d
= c^3 r_{n-2} + (c^2 + c + 1)d,
$$
and so on. Hence,

$$
r_{n+1} = c^{n+1} r_0 + d \sum_{j=0}^{n} c^j
= (\alpha + \beta - 1)^{n+1}\theta + (1-\beta)\,\frac{1 - (\alpha + \beta - 1)^{n+1}}{2 - (\alpha + \beta)}.
$$
For the special case in which α + β = 1, we have c = 0, so that

$$
\Pr(X_{n+1} = 1) = 1 - \beta, \quad n = 0, 1, 2, \dots.
$$
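The closed-form expression for r_{n+1} can be checked against the recursion directly. The sketch below iterates r_{n+1} = c r_n + d and compares the result with the closed form; the parameter values are illustrative assumptions.

```python
# Illustrative parameter values (assumed); note c = alpha + beta - 1 != 0 here.
theta, alpha, beta = 0.3, 0.7, 0.4
c, d = alpha + beta - 1, 1 - beta

# Iterate the recursion r_{n+1} = c r_n + d starting from r_0 = theta,
# ten times, yielding r_10.
r = theta
for _ in range(10):
    r = c * r + d

# Closed form: r_{n+1} = c^{n+1} theta + d (1 - c^{n+1}) / (2 - (alpha + beta)),
# with n + 1 = 10; note 2 - (alpha + beta) = 1 - c.
closed = c ** 10 * theta + d * (1 - c ** 10) / (2 - (alpha + beta))

print(abs(r - closed) < 1e-9)
```

Both routes give the same value up to floating-point error, since the closed form is just the recursion unrolled with the geometric sum evaluated.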

The matrix P gives the conditional distribution of X_{n+1} given X_n = i, for any i = 1, ..., J; hence, the probabilities in P are called the one-step transition probabilities. We may also be interested in the m-step transition probabilities

$$
P^{(m)}_{ij} = \Pr(X_{n+m} = j \mid X_n = i).
$$
                          The following theorem shows how the m-step transition probabilities can be obtained
                        from P.

Theorem 6.8. Let {X_t : t ∈ Z} denote a discrete time process with distribution M(p, P). Then, for any r ≤ m, the m-step transition probabilities are given by

$$
P^{(m)}_{ij} = \sum_{k=1}^{J} P^{(r)}_{ik} P^{(m-r)}_{kj}.
$$
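Theorem 6.8 says the m-step transition matrix is the m-th matrix power of P and factors as the product of the r-step and (m − r)-step matrices. A minimal sketch for the two-state chain, with assumed values of α and β:

```python
# Illustrative transition matrix for the two-state chain (assumed values).
alpha, beta = 0.7, 0.4
P = [[alpha, 1 - alpha],
     [1 - beta, beta]]

def matmul(A, B):
    """Multiply two J x J matrices represented as lists of rows."""
    J = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(J)) for j in range(J)]
            for i in range(J)]

def matpow(A, m):
    """m-th matrix power, m >= 1, by repeated multiplication."""
    out = A
    for _ in range(m - 1):
        out = matmul(out, A)
    return out

m, r = 5, 2
Pm = matpow(P, m)                                # m-step matrix P^(m)
Pfact = matmul(matpow(P, r), matpow(P, m - r))   # P^(r) P^(m-r)

# The Chapman-Kolmogorov factorization: the two matrices agree entrywise.
print(all(abs(Pm[i][j] - Pfact[i][j]) < 1e-12
          for i in range(2) for j in range(2)))
```

Summing over the intermediate state k at time n + r is exactly what matrix multiplication does, which is why the theorem reduces to P^(m) = P^(r) P^(m−r).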