Page 96 - A First Course In Stochastic Models

88                   DISCRETE-TIME MARKOV CHAINS

                Proof  A formal proof is as follows. By conditioning on the state of the Markov
                chain at time t = n, we find


$$
\begin{aligned}
P\{X_{n+m} = j \mid X_0 = i\}
  &= \sum_{k \in I} P\{X_{n+m} = j \mid X_0 = i, X_n = k\}\, P\{X_n = k \mid X_0 = i\} \\
  &= \sum_{k \in I} P\{X_{n+m} = j \mid X_n = k\}\, P\{X_n = k \mid X_0 = i\} \\
  &= \sum_{k \in I} P\{X_m = j \mid X_0 = k\}\, P\{X_n = k \mid X_0 = i\},
\end{aligned}
$$
                which verifies (3.2.1). Note that the second equality uses the Markovian property
                and the last equality uses the assumption of time homogeneity.

                  The theorem states that the probability of going from i to j in n + m steps is
                obtained by summing the probabilities of the mutually exclusive events of going
                first from state i to some state k in n steps and then going from state k to state j in
m steps. This interpretation is a helpful way to memorize equation (3.2.1). In particular,
we have for any n = 1, 2, . . . ,

$$
p_{ij}^{(n+1)} = \sum_{k \in I} p_{ik}^{(n)}\, p_{kj}, \qquad i, j \in I. \tag{3.2.2}
$$
Hence the n-step transition probabilities $p_{ij}^{(n)}$ can be computed recursively from
the one-step transition probabilities $p_{ij}$. In fact the $p_{ij}^{(n)}$ are the elements of the
n-fold matrix product $P^n$, where $P$ denotes the matrix whose $(i, j)$th element is
the one-step transition probability $p_{ij}$. If the state space $I$ is finite, the probabilities
$p_{ij}^{(n)}$ can also be found by computing the eigenvalues and the eigenvectors of the
matrix $P$.
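The three routes just described can be sketched numerically. The following is a minimal sketch, not from the text, using an arbitrary two-state chain: the recursion (3.2.2), the n-fold matrix product $P^n$, and the eigendecomposition $P = V D V^{-1}$ (so that $P^n = V D^n V^{-1}$) all yield the same n-step probabilities.

```python
import numpy as np

# Arbitrary 2-state transition matrix (illustrative, not from the text).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n = 5

# (a) Recursion (3.2.2): p^(n+1) = p^(n) P, starting from p^(1) = P.
Pn_rec = P.copy()
for _ in range(n - 1):
    Pn_rec = Pn_rec @ P

# (b) Direct n-fold matrix product P^n.
Pn_pow = np.linalg.matrix_power(P, n)

# (c) Eigenvalue route (valid here because P is diagonalizable).
eigvals, V = np.linalg.eig(P)
Pn_eig = (V @ np.diag(eigvals ** n) @ np.linalg.inv(V)).real

# All three agree, and each row of P^n is still a probability distribution.
assert np.allclose(Pn_rec, Pn_pow) and np.allclose(Pn_pow, Pn_eig)
assert np.allclose(Pn_pow.sum(axis=1), 1.0)
```

For large state spaces the eigenvalue route can be numerically delicate; repeated squaring of $P$ (which is what `matrix_power` does) is usually the more robust choice.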
Example 3.2.1 The weather as a Markov chain

                On the Island of Hope the weather each day is classified as sunny, cloudy or rainy.
                The next day’s weather depends only on the weather of the present day and not
                on the weather of the previous days. If the present day is sunny, the next day will
                be sunny, cloudy or rainy with respective probabilities 0.70, 0.10 and 0.20. The
                transition probabilities are 0.50, 0.25 and 0.25 when the present day is cloudy and
                they are 0.40, 0.30 and 0.30 when the present day is rainy. An interesting question
                is how often the weather is sunny, cloudy and rainy over a long period of time.
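The three-day question posed next can be checked numerically with the n-fold matrix product $P^n$; the sketch below (not from the text) indexes the states 0, 1, 2 for sunny, cloudy and rainy, matching the probabilities quoted above.

```python
import numpy as np

# One-step transition matrix of the weather chain; rows and columns
# are ordered sunny, cloudy, rainy, with the probabilities from the text.
P = np.array([[0.70, 0.10, 0.20],
              [0.50, 0.25, 0.25],
              [0.40, 0.30, 0.30]])

# Probability of sunny weather three days after a rainy day.
P3 = np.linalg.matrix_power(P, 3)
prob_sunny_in_3 = P3[2, 0]   # ≈ 0.5855
```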
                  Let us first answer a simpler question, namely what the probability is of sunny
                weather three days later when the present day is rainy. To answer this question, we
                define a Markov chain {X n } with three states 1, 2 and 3. The process is in state 1
                when the weather is sunny, in state 2 when the weather is cloudy and in state 3
when the weather is rainy. The matrix $P$ of one-step transition probabilities $p_{ij}$ is