Page 197 - Elements of Distribution Theory
6.4 Markov Processes

                        Markov chains
A Markov chain is simply a Markov process in which the state space of the process is a countable set; here we assume that the state space is finite and, without loss of generality, we take it to be {1, ..., J} for some positive integer J. A Markov chain may be either a discrete time or a continuous time process; here we consider only the discrete time case.
  Since a Markov chain is a Markov process, the conditional distribution of X_{t+1} given X_1, ..., X_t depends only on X_t. This conditional distribution is often represented by a matrix of transition probabilities

\[
P_{ij}^{t,t+1} \equiv \Pr(X_{t+1} = j \mid X_t = i), \qquad i, j = 1, \ldots, J.
\]
                        If this matrix is the same for all t we say that the Markov chain has stationary transition
                        probabilities; in the brief treatment here we consider only that case.
  Hence, the properties of the process are completely determined by the transition probabilities P_{ij} along with the initial distribution, the distribution of X_0. Let P denote the J × J matrix with (i, j)th element P_{ij} and let p denote a 1 × J vector with jth element

\[
p_j = \Pr(X_0 = j), \qquad j = 1, \ldots, J.
\]

We will say that a process {X(t): t ∈ Z} has distribution M(p, P) if it is a discrete time Markov chain with transition matrix P and initial distribution p.
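A chain with distribution M(p, P) can be simulated directly from this definition: draw X_0 from p, then repeatedly draw the next state from the row of P indexed by the current state. The following sketch (the function name and interface are my own, not from the text) uses 1-based state labels to match the convention above.

```python
import random

def simulate_chain(p, P, n_steps, rng=random):
    """Simulate X_0, ..., X_{n_steps} from a Markov chain M(p, P).

    States are labeled 1, ..., J; p[j-1] = Pr(X_0 = j) and
    P[i-1][j-1] = Pr(X_{t+1} = j | X_t = i).
    """
    J = len(p)
    states = range(1, J + 1)
    # Draw X_0 from the initial distribution p.
    x = rng.choices(states, weights=p)[0]
    path = [x]
    for _ in range(n_steps):
        # Draw X_{t+1} from row x of the transition matrix
        # (stationary transition probabilities: the same P at every step).
        x = rng.choices(states, weights=P[x - 1])[0]
        path.append(x)
    return path
```

With a degenerate p and a permutation matrix P the path is deterministic, which gives a quick sanity check of the indexing.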


                        Example 6.9 (Two-state chain). Consider a Markov chain model with two states. Hence,
                        the transition probability matrix is of the form


\[
P = \begin{pmatrix} \alpha & 1 - \alpha \\ 1 - \beta & \beta \end{pmatrix},
\]

where α and β take values in the interval [0, 1]; for simplicity, we assume that 0 < α < 1 and 0 < β < 1. For instance,

\[
\Pr(X_2 = 1 \mid X_1 = 1) = \alpha \quad \text{and} \quad \Pr(X_2 = 1 \mid X_1 = 2) = 1 - \beta.
\]

  The initial distribution is given by a vector of the form (θ, 1 − θ) so that

\[
\Pr(X_0 = 1) = 1 - \Pr(X_0 = 2) = \theta,
\]

where 0 < θ < 1.
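For this two-state chain, the marginal distribution of X_1 follows from the initial distribution and the transition matrix: Pr(X_1 = j) = Σ_i p_i P_{ij}, i.e., the row vector p is updated to pP. A small numerical sketch (the specific values of α, β, and θ are illustrative only, not from the text):

```python
def step_distribution(p, P):
    """One step of the chain: map the distribution of X_t to that of
    X_{t+1} via Pr(X_{t+1} = j) = sum_i p_i P_{ij}, i.e., p -> pP."""
    J = len(p)
    return [sum(p[i] * P[i][j] for i in range(J)) for j in range(J)]

alpha, beta, theta = 0.9, 0.8, 0.5   # illustrative values in (0, 1)
P = [[alpha, 1 - alpha],
     [1 - beta, beta]]
p0 = [theta, 1 - theta]              # distribution of X_0

p1 = step_distribution(p0, P)
# Pr(X_1 = 1) = theta * alpha + (1 - theta) * (1 - beta)
```

The result is itself a probability vector, since each row of P sums to one.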


Example 6.10 (Simple random walk with absorbing barrier). Suppose that, at time 0, a particle begins at position 0. At time 1, the particle remains at position 0 with probability 1/2; otherwise the particle moves to position 1. Similarly, suppose that at time t the particle is at position m. At time t + 1 the particle remains at position m with probability 1/2; otherwise the particle moves to position m + 1. When the particle reaches position J, where J is some fixed number, no further movement is possible. Hence, the transition probabilities have the form

\[
P_{ij} =
\begin{cases}
1/2 & \text{if } i < J \text{ and either } j = i \text{ or } j = i + 1, \\
1 & \text{if } i = J \text{ and } j = J, \\
0 & \text{otherwise.}
\end{cases}
\]
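The piecewise definition above translates directly into a transition matrix over the positions 0, 1, ..., J. A minimal sketch (the helper name is my own):

```python
def random_walk_matrix(J):
    """Transition matrix for the simple random walk with an absorbing
    barrier at position J; states 0, 1, ..., J sit at indices 0..J."""
    P = [[0.0] * (J + 1) for _ in range(J + 1)]
    for i in range(J):
        P[i][i] = 0.5      # remain at position i with probability 1/2
        P[i][i + 1] = 0.5  # otherwise move to position i + 1
    P[J][J] = 1.0          # absorbing barrier: no further movement
    return P
```

Every row sums to one, and row J places all its mass on state J, reflecting the absorbing barrier.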