


In the lower-SOR method, we extract the lower-triangular part of A and divide the diagonal
values by an over-relaxation parameter ω > 1,

    B = \frac{1}{\omega} D_A + L_A, \qquad 1 < \omega < 2                               (6.140)
                   to obtain an update system
    \left[ \omega^{-1} D_A + L_A \right] x^{[k+1]} = b + \left[ (\omega^{-1} - 1) D_A - U_A \right] x^{[k]}        (6.141)
                   that is solved by forward substitution. In the upper-SOR method, we select

    B = \frac{1}{\omega} D_A + U_A, \qquad 1 < \omega < 2                               (6.142)
                   to obtain the update system
    \left[ \omega^{-1} D_A + U_A \right] x^{[k+1]} = b + \left[ (\omega^{-1} - 1) D_A - L_A \right] x^{[k]}        (6.143)
                   which is solved by backward substitution. For a positive-definite matrix, SOR converges
for all 1 < ω < 2. Some tuning of ω is required to optimize the rate of convergence
                   (1.9 being a good initial guess); however, the convergence rate can be made somewhat
                   less sensitive to the choice of ω if we alternate lower and upper-SOR steps, yielding the
                   symmetric SOR (SSOR) method. We do not consider these methods in further detail, because
the methods described below are preferred and are implemented directly in MATLAB. For
further discussion, consult Stoer & Bulirsch (1993) and Quarteroni et al. (2000).
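
As an illustration of the update (6.140)–(6.141) only, a minimal lower-SOR sketch in MATLAB
might look as follows; the test system, initial guess, tolerance, and iteration cap are
assumptions chosen for demonstration, not values from the text.

A = [2 -1 0 0; -1 2 -1 0; 0 -1 2 -1; 0 0 -1 2];  % same test system as the pcg example below
b = [1; 1; 1; 1];
omega = 1.9;                      % over-relaxation parameter, 1 < omega < 2
D = diag(diag(A));                % diagonal part D_A
L = tril(A,-1);                   % strictly lower-triangular part L_A
U = triu(A,1);                    % strictly upper-triangular part U_A
B = D/omega + L;                  % lower-triangular update matrix (6.140)
x = zeros(size(b));               % initial guess
for k = 1:1000
    x_new = B \ (b + ((1/omega - 1)*D - U)*x);   % forward substitution, (6.141)
    if norm(x_new - x) < 1e-10, x = x_new; break; end
    x = x_new;
end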


                   The conjugate gradient method for positive-definite matrices

In Chapter 5, we considered the conjugate gradient method, which solves Ax = b by
minimizing the quadratic cost function

    F(x) = \tfrac{1}{2} x^T A x - b^T x                                                 (6.144)
                   in no more than N steps. At each iteration, we need only compute a single matrix-vector
                   product; therefore, no elimination occurs and we take full advantage of sparsity. The method
                   is implemented in MATLAB as pcg. It can be used for A stored in either full- or sparse-matrix
                   format,

A = [2 -1 0 0; -1 2 -1 0; 0 -1 2 -1; 0 0 -1 2];
                   b = [1;1;1;1];
                   x = pcg(A,b),
                   pcg converged at iteration 2 to a solution with relative residual 0
                   x =
                     2
                     3
                     3
                     2
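
To show where the single matrix-vector product per iteration appears, here is a minimal
sketch of the unpreconditioned conjugate gradient loop itself; it reuses A and b from the
example above, and the variable names and tolerance are illustrative assumptions rather
than part of pcg.

x = zeros(size(b));              % initial guess
r = b - A*x;                     % initial residual
p = r;                           % initial search direction
for k = 1:length(b)              % converges in no more than N steps
    Ap = A*p;                    % the one matrix-vector product per iteration
    alpha = (r'*r)/(p'*Ap);      % step length minimizing F(x) along p
    x = x + alpha*p;
    rnew = r - alpha*Ap;
    if norm(rnew) < 1e-10, break; end
    beta = (rnew'*rnew)/(r'*r);  % makes the new direction A-conjugate to p
    p = rnew + beta*p;
    r = rnew;
end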