Page 169 - Matrices theory and applications

9. Iterative Methods for Linear Problems
Gauss–Seidel method: One chooses M = D − E, and thus N = F. The
     iteration matrix is G := (D − E)^{-1}F. As we shall see below, one never
     computes G explicitly. One computes the approximate solutions by
     a double induction, on m on the one hand, and on i ∈ {1, ..., n} on
     the other hand:

$$x_i^{m+1} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{m+1} - \sum_{j=i+1}^{n} a_{ij}\, x_j^{m}\right).$$
                                   The difference between the two methods is that in Gauss–Seidel one
                                   always uses the most recently computed values of each coordinate.
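The Gauss–Seidel sweep described above can be sketched in a few lines of numpy; the function name `gauss_seidel_step` and the small test system are illustrative choices, not taken from the text.

```python
import numpy as np

def gauss_seidel_step(A, b, x):
    # One sweep of the double induction: x_i is updated using the newest
    # values of x_1, ..., x_{i-1} (already overwritten in this sweep) and
    # the previous values of x_{i+1}, ..., x_n.
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Hypothetical test problem: a small diagonally dominant system,
# for which Gauss-Seidel is known to converge.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(50):
    x = gauss_seidel_step(A, b, x)
```

Note that, as the text says, the iteration matrix G is never formed: each coordinate is obtained directly from one row of A.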
Relaxation method: It often happens that the Gauss–Seidel method
     converges exceedingly slowly. We thus wish to improve the Gauss–
     Seidel method by looking for a “best” approximated value of the x_j
     (with j < i) when computing x_i^{m+1}. Instead of being simply x_j^m, as
     in the Jacobi method, or x_j^{m+1}, as in that of Gauss–Seidel, this best
     value will be an interpolation of both (we shall see that it is merely
     an extrapolation). This justifies the choice of

$$M = \frac{1}{\omega} D - E, \qquad N = \left(\frac{1}{\omega} - 1\right) D + F,$$
     where ω ∈ ℂ is a parameter. This parameter remains, in general,
     constant throughout the calculations. The method is called successive
     relaxation. When ω > 1, it bears the name successive overrelaxation
     (SOR). The iteration matrix is

$$L_\omega := (D - \omega E)^{-1}\big((1 - \omega) D + \omega F\big).$$
     The Gauss–Seidel method is a particular case of the relaxation
     method, with ω = 1: L_1 = G. Special attention is given to the choice
     of ω, in order to reach the minimum of ρ(L_ω). The computation of
     the approximate solutions is done through a double induction:
$$x_i^{m+1} = \frac{\omega}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{m+1} - \sum_{j=i+1}^{n} a_{ij}\, x_j^{m} + \left(\frac{1}{\omega} - 1\right) a_{ii}\, x_i^{m}\right).$$
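The relaxation update can be sketched as follows. Algebraically, the formula amounts to blending the Gauss–Seidel value with the previous iterate, x_i ← ω·(Gauss–Seidel value) + (1 − ω)·x_i; the helper name `sor_step` and the test system are illustrative assumptions.

```python
import numpy as np

def sor_step(A, b, x, omega):
    # One relaxation sweep: gs is the Gauss-Seidel candidate for x_i;
    # the new x_i interpolates (for omega > 1, extrapolates) between
    # gs and the old x_i, which is equivalent to the displayed formula.
    x = x.copy()
    for i in range(len(b)):
        gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        x[i] = omega * gs + (1.0 - omega) * x[i]
    return x

# Hypothetical symmetric positive definite test system; SOR converges
# for such matrices whenever 0 < omega < 2.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(60):
    x = sor_step(A, b, x, omega=1.1)
```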
Without additional assumptions relative to the matrix A, the only result
concerning the convergence is the following:

Proposition 9.2.1 We have ρ(L_ω) ≥ |ω − 1|. In particular, if the relax-
ation method converges for a matrix A ∈ M_n(ℂ) and a parameter ω ∈ ℂ,
then

$$|\omega - 1| < 1.$$

In other words, it is necessary that ω belong to the disk for which (0, 2) is
a diameter.
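The bound of Proposition 9.2.1 can be checked numerically by forming L_ω explicitly and computing its eigenvalues; the matrix A below and the sampled values of ω are illustrative assumptions, not from the text.

```python
import numpy as np

def iteration_matrix(A, omega):
    # L_omega = (D - omega E)^{-1}((1 - omega) D + omega F), using the
    # splitting A = D - E - F: D diagonal, -E strictly lower, -F strictly upper.
    D = np.diag(np.diag(A))
    E = -np.tril(A, -1)
    F = -np.triu(A, 1)
    return np.linalg.solve(D - omega * E, (1 - omega) * D + omega * F)

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # illustrative matrix
# Spectral radius of L_omega for several parameter values.
rhos = {w: max(abs(np.linalg.eigvals(iteration_matrix(A, w))))
        for w in (0.5, 1.0, 1.5, 2.5)}
```

For ω = 2.5 the bound forces ρ(L_ω) ≥ 1.5 > 1, so the iteration diverges, in line with the necessity of |ω − 1| < 1.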