A First Course in Stochastic Models

                     DISCRETE-TIME MARKOV CHAINS

                3.2.2 Mean First-Passage Times
                Example 3.1.1 asked how to find the expected number of steps the drunkard needs
                to return to his starting point. More generally, consider a Markov chain {X_n} for
                which

                (a) the state space I is finite,
                (b) there is some state r such that for each state i ∈ I there is an integer
                    n (= n_i) such that p_{ir}^{(n)} > 0.
                What is the mean return time from state r to itself? Let

                                       τ = min{n ≥ 1 | X_n = r}.

                To calculate µ_rr = E(τ | X_0 = r), we need the mean visit times
                                         µ_ir = E(τ | X_0 = i)
                for each state i ≠ r. By conditioning on the next state after state r,


                                       µ_rr = 1 + Σ_{j∈I, j≠r} p_rj µ_jr.            (3.2.3)
                The µ_ir with i ≠ r are found by solving a system of linear equations. For notational
                convenience, number the states as 1, . . . , N and let state r be numbered as N.
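
                The computation can be made concrete with a short numerical sketch. The
                3 × 3 transition matrix below is a hypothetical example, not taken from the
                text: we solve the linear system for the mean visit times µ_iN and then apply
                (3.2.3) to get the mean return time µ_NN.

```python
import numpy as np

# Hypothetical 3-state Markov chain; the return state r is state N = 3.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2]])

N = P.shape[0]
Q = P[:N - 1, :N - 1]          # transitions among states 1, ..., N-1 only
e = np.ones(N - 1)

# Mean visit times: solve (I - Q) mu = e for mu = (mu_1N, ..., mu_{N-1,N}).
mu = np.linalg.solve(np.eye(N - 1) - Q, e)

# Mean return time by (3.2.3): mu_NN = 1 + sum_{j != N} p_Nj mu_jN.
mu_NN = 1 + P[N - 1, :N - 1] @ mu
print(mu_NN)   # ≈ 4.1579
```

                As a consistency check, µ_NN agrees with 1/π_N, where π is the equilibrium
                distribution of this chain.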

                Theorem 3.2.2 The mean visit times µ_iN for i ≠ N are the unique solution to the
                linear equations

                               µ_iN = 1 + Σ_{j=1}^{N−1} p_ij µ_jN,   i = 1, . . . , N − 1.  (3.2.4)
                Proof  The equation for µ_iN follows by conditioning on the next state visited after
                state i. To prove that the linear equations have a unique solution we use the trick
                of making state N absorbing for a modified Markov chain. Let P̃ = (p̃_ij), i, j ∈ I,
                be the Markov matrix obtained by replacing the Nth row in the matrix P = (p_ij),
                i, j ∈ I, by (0, 0, . . . , 1). The mean first-passage times µ_jN for j = 1, . . . , N − 1
                are not changed by making state N absorbing. Denote by Q = (q_ij) the
                (N−1) × (N−1) submatrix that results by omitting the Nth row and the Nth column
                in the matrix P̃. Let the vectors µ = (µ_1N, . . . , µ_{N−1,N}) and e = (1, . . . , 1).
                Then we can write (3.2.4) in matrix notation as
                (3.2.4) in matrix notation as
                                            µ = e + Qµ.                      (3.2.5)

                Since state N is absorbing for the Markov matrix P̃, we have for each n ≥ 1 that

                                  q_ij^(n) = p̃_ij^(n),   i, j = 1, . . . , N − 1,     (3.2.6)
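
                The absorbing-state trick behind the uniqueness proof can also be illustrated
                numerically. Since every state reaches the absorbing state N, the substochastic
                matrix Q satisfies Q^n → 0, so the fixed-point iteration µ ← e + Qµ converges
                to the unique solution of (3.2.5). The 2 × 2 matrix Q below is a hypothetical
                example, not taken from the text.

```python
import numpy as np

# Hypothetical substochastic matrix Q (rows sum to less than 1):
# the missing mass is the probability of absorption into state N.
Q = np.array([[0.5, 0.3],
              [0.2, 0.5]])
e = np.ones(2)

# Fixed-point iteration mu <- e + Q mu; converges because Q^n -> 0.
mu = np.zeros(2)
for _ in range(200):
    mu = e + Q @ mu

# Compare with solving (I - Q) mu = e directly.
direct = np.linalg.solve(np.eye(2) - Q, e)
print(np.allclose(mu, direct))                          # True
print(np.linalg.matrix_power(Q, 200).max() < 1e-12)     # True
```

                The iterate after n steps equals (e + Qe + · · · + Q^{n−1}e), so the error it
                leaves is of order Q^n, which vanishes geometrically.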