Page 116 - A First Course In Stochastic Models

108                   DISCRETE-TIME MARKOV CHAINS

gets large. In some applications the transition probabilities p_ij have the property
that for each state i the probability p_ij = 0 for j ≤ i − 2 (or p_ij = 0 for j ≥ i + 2).
Then the linear equations are of the Hessenberg type. Linear equations of the
Hessenberg type can be efficiently solved by a special code using the very stable
QR method. In solving the Markov chain equations (3.4.1) and (3.4.2) by a direct
method, one of the equilibrium equations is omitted to obtain a square system of
linear equations.
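For a small chain this direct approach is easy to carry out. The sketch below is a minimal illustration, assuming a hypothetical 3-state transition matrix (the numbers are not from the text): the last equilibrium equation is overwritten by the normalizing equation and the resulting square system is solved.

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
N = P.shape[0]

# Equilibrium equations pi = pi P, written columnwise as (P^T - I) pi = 0.
A = P.T - np.eye(N)
b = np.zeros(N)

# Omit one (redundant) equilibrium equation: overwrite the last row with
# the normalizing equation sum_j pi_j = 1 to obtain a square system.
A[-1, :] = 1.0
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)  # the equilibrium distribution
```

Any one of the equilibrium equations may be dropped this way; the remaining system together with the normalization has a unique solution.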

                Iterative method of successive overrelaxation

Iterative methods have to be used when the size of the system of linear equations
gets large. In specific applications an iterative method can usually avoid computer
memory problems by exploiting the (sparse) structure of the application. An iter-
ative method does not update the matrix of coefficients each time. In applications
these coefficients are usually composed of a few constants, so only these constants
have to be stored in memory when using an iterative method. In addition to the
advantage that the coefficient matrix need not be stored, an iterative method is
easy to program for specific applications.
  The iterative method of successive overrelaxation is a suitable method for solving
the linear equations of large Markov chains. The well-known Gauss–Seidel method
is a special case of the method of successive overrelaxation. The iterative methods
generate a sequence of vectors x^(0) → x^(1) → x^(2) → · · · converging towards
a solution of the equilibrium equations (3.4.1). The normalization is done at the
end of the calculations. To apply successive overrelaxation, we first rewrite the
equilibrium equations (3.4.1) in the form

        x_i = Σ_{j=1, j≠i}^N  a_ij x_j,        i = 1, ..., N,

where

        a_ij = p_ji / (1 − p_ii),        i, j = 1, ..., N,  j ≠ i.

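As a quick sanity check on this rewriting (using a hypothetical 3-state chain; the numbers are an assumption for illustration, not from the text), the equilibrium distribution of a transition matrix P is indeed a fixed point of the rewritten system x = Ax:

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Coefficients a_ij = p_ji / (1 - p_ii) for j != i, with a_ii = 0.
A = P.T / (1.0 - np.diag(P))[:, None]
np.fill_diagonal(A, 0.0)

# Equilibrium distribution: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# pi satisfies the rewritten equations x_i = sum_{j != i} a_ij x_j.
residual = np.max(np.abs(pi - A @ pi))
```

The check works because pi_i = Σ_j p_ji pi_j implies pi_i (1 − p_ii) = Σ_{j≠i} p_ji pi_j, which is exactly the i-th rewritten equation.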
The standard successive overrelaxation method uses a fixed relaxation factor ω
for speeding up the convergence. The method starts with an initial approximation
vector x^(0) ≠ 0. In the kth iteration of the algorithm an approximation vector x^(k) is
found by a recursive computation of the components x_i^(k) such that the calculation
of the new estimate x_i^(k) uses both the new estimates x_j^(k) for j < i and the old
estimates x_j^(k−1) for j > i. The steps of the algorithm are as follows:
Step 0. Choose a non-zero vector x^(0). Let k := 1.
Step 1. Calculate successively for i = 1, ..., N the component x_i^(k) from
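The iteration can be sketched as follows. This is a minimal illustration, assuming the standard SOR update x_i^(k) := (1 − ω) x_i^(k−1) + ω Σ_{j≠i} a_ij x_j with the newest available estimates x_j, and a hypothetical 3-state chain; a practical code would also need a stopping criterion on successive iterates rather than a fixed number of sweeps.

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
N = P.shape[0]

# Coefficients a_ij = p_ji / (1 - p_ii) for j != i, with a_ii = 0.
A = P.T / (1.0 - np.diag(P))[:, None]
np.fill_diagonal(A, 0.0)

omega = 1.2        # fixed relaxation factor (omega = 1 gives Gauss-Seidel)
x = np.ones(N)     # Step 0: a non-zero starting vector x^(0)

for k in range(200):               # fixed number of sweeps for simplicity
    for i in range(N):
        # Entries x[j] with j < i already hold the new estimates x_j^(k);
        # entries with j > i still hold the old estimates x_j^(k-1).
        s = A[i] @ x               # a_ii = 0, so the i-th term drops out
        x[i] = (1.0 - omega) * x[i] + omega * s

# Normalize at the end of the calculations.
pi = x / x.sum()
```

Note that no matrix is updated during the computation: only the constants defining the a_ij (here the entries of P) need to be kept in memory, which is the advantage of the iterative method discussed above.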