
basis is strongly dependent on the structure of the system of linear equations to
be solved and is typically a matter of experimentation. However, such experimentation
is worthwhile when an extremely large but structured system of
linear equations has to be solved many times. Enormous reductions in computing
times can be achieved by Krylov iteration methods; see Stewart (1994).


                Recursive method
The linear equations (3.4.1) and (3.4.2) become a Hessenberg system when the $p_{ij}$
have the property that for each state $i = 1, \dots, N$,
$$p_{ij} = 0 \quad \text{for all } j \le i - 2. \tag{3.4.3}$$
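For instance, with $N = 4$ condition (3.4.3) means that all entries below the first
subdiagonal of the transition matrix vanish, so that $P$ is upper Hessenberg:
$$P = \begin{pmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ 0 & p_{32} & p_{33} & p_{34} \\ 0 & 0 & p_{43} & p_{44} \end{pmatrix};$$
in words, the state index can decrease by at most one in a single transition.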
In this special case the equilibrium probabilities $\pi_j$ can also be computed by a
simple recursion scheme. To obtain this recursion scheme, we extend the 'rate
out = rate in' principle discussed in Section 3.3. For each set $A$ of states with
$A \neq I$, we have that the long-run average number of transitions per time unit
from a state inside $A$ to a state outside $A$ equals the long-run average number of
transitions per time unit from a state outside $A$ to a state inside $A$.
  Under the property (3.4.3) the set $A = \{i, i+1, \dots, N\}$ with $i \neq 1$ can be
left only through state $i$. Applying the 'rate out = rate in' principle to this set $A$,
we find
$$p_{i,i-1}\,\pi_i = \sum_{k=1}^{i-1} \pi_k \sum_{j=i}^{N} p_{kj}, \qquad i = 2, \dots, N. \tag{3.4.4}$$
This recursion starts with the value of $\pi_1$. Since the equilibrium equations determine
the probabilities $\pi_j$ only up to a multiplicative constant, it is no problem that the value
of $\pi_1$ is not known beforehand. We initialize the recursion with an arbitrary non-zero
value for $\pi_1$ and normalize at the end of the recursion. In applying (3.4.4) it
is no restriction to assume that $p_{i,i-1} > 0$ for all $i \ge 2$.
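Since $p_{i,i-1} > 0$, the recursion (3.4.4) can be rewritten in the form in which it is
actually evaluated,
$$\pi_i = \frac{1}{p_{i,i-1}} \sum_{k=1}^{i-1} \pi_k \sum_{j=i}^{N} p_{kj}, \qquad i = 2, \dots, N,$$
so that each $\pi_i$ follows directly from the previously computed values $\pi_1, \dots, \pi_{i-1}$.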


                Algorithm
Step 0. Initialize $\pi_1 := 1$.
Step 1. Compute successively $\pi_2, \dots, \pi_N$ from (3.4.4).
Step 2. Normalize the $\pi_i$ according to
$$\pi_i := \pi_i \Big/ \sum_{k=1}^{N} \pi_k, \qquad i = 1, 2, \dots, N.$$
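As a concrete sketch of the above algorithm (the code and its names are illustrative
and not part of the text), the recursion can be implemented as follows in Python with
the states numbered $0, \dots, N-1$; an optional intermediate renormalization is included,
anticipating the remark below on very large values of $\pi_i$.

import numpy as np

def hessenberg_equilibrium(P, renormalize_every=None):
    """Equilibrium probabilities via the recursion (3.4.4) for a transition
    matrix P with P[i, j] = 0 whenever j <= i - 2 (upper Hessenberg).
    Illustrative sketch; assumes P[i, i-1] > 0 for all i >= 1."""
    P = np.asarray(P, dtype=float)
    N = P.shape[0]
    pi = np.zeros(N)
    pi[0] = 1.0                             # Step 0: arbitrary non-zero value for pi_1
    for i in range(1, N):                   # Step 1: pi_2, ..., pi_N from (3.4.4)
        # p_{i,i-1} * pi_i = sum_{k < i} pi_k * sum_{j >= i} p_{kj}
        pi[i] = np.dot(pi[:i], P[:i, i:].sum(axis=1)) / P[i, i - 1]
        if renormalize_every and i % renormalize_every == 0:
            pi[:i + 1] /= pi[:i + 1].sum()  # intermediate renormalization
    return pi / pi.sum()                    # Step 2: normalize

# Small made-up example with an upper Hessenberg transition matrix
P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.4, 0.2],
              [0.0, 0.6, 0.4]])
pi = hessenberg_equilibrium(P)
print(pi)   # satisfies pi = pi P up to rounding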

The recursion scheme (3.4.4) involves no subtractions and is thus numerically
stable. However, very large numbers $\pi_i$ may build up when $N$ is large. In those
situations it is recommended to do a renormalization at intermediate steps of the
recursion. The recursion method can also be used for a Markov chain with an