consequently, the method converges with the optimal $\bar\omega$ even in cases where it would not converge for $\omega = 1$. But keep in mind that one needs the eigenvalues of $M$ to determine $\bar\omega$.
Moreover, we have
$$\bar\omega < 1 \quad\Longleftrightarrow\quad \lambda_{\max}(M) + \lambda_{\min}(M) < 0\,.$$
If $\lambda_{\min}(M) \neq -\lambda_{\max}(M)$, that is, $\bar\omega \neq 1$, we achieve an improvement by relaxation:
$$\varrho(M_{\bar\omega}) < \varrho(M)\,.$$
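Both statements can be read off directly (a short verification sketch, assuming, as in the preceding derivation, that $M$ has real eigenvalues with $\lambda_{\max}(M) < 1$, that $M_\omega = (1-\omega)I + \omega M$, and that $\bar\omega = 2/\bigl(2 - \lambda_{\min}(M) - \lambda_{\max}(M)\bigr)$):
$$\bar\omega < 1 \;\Longleftrightarrow\; 2 < 2 - \lambda_{\min}(M) - \lambda_{\max}(M) \;\Longleftrightarrow\; \lambda_{\max}(M) + \lambda_{\min}(M) < 0\,,$$
and since the eigenvalues of $M_{\bar\omega}$ have the form $1 - \bar\omega(1-\lambda)$,
$$\varrho(M_{\bar\omega}) = \frac{\lambda_{\max}(M) - \lambda_{\min}(M)}{2 - \lambda_{\min}(M) - \lambda_{\max}(M)}\,,$$
which is strictly smaller than $\max\{|\lambda_{\max}(M)|, |\lambda_{\min}(M)|\} = \varrho(M)$ exactly when $\lambda_{\max}(M) + \lambda_{\min}(M) \neq 0$.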
The case of $\omega < 1$ is called underrelaxation, whereas in the case of $\omega > 1$ we speak of overrelaxation.
In particular, for the Richardson method with the iteration matrix $M = I - A$, due to $\lambda_{\min}(M) = 1 - \lambda_{\max}(A)$ and $\lambda_{\max}(M) = 1 - \lambda_{\min}(A)$, the optimal $\bar\omega$ is given by
$$\bar\omega = \frac{2}{\lambda_{\min}(A) + \lambda_{\max}(A)}\,. \tag{5.32}$$
Hence
$$\varrho(M_{\bar\omega}) = 1 - \bar\omega\,\lambda_{\min}(A) = \frac{\lambda_{\max}(A) - \lambda_{\min}(A)}{\lambda_{\min}(A) + \lambda_{\max}(A)} = \frac{\kappa(A) - 1}{\kappa(A) + 1} < 1\,, \tag{5.33}$$
with the spectral condition number of $A$,
$$\kappa(A) := \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}$$
(see Appendix A.3).
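The relations (5.32) and (5.33) can be checked with a few lines of NumPy (a minimal sketch; the tridiagonal test matrix below is an illustrative choice and not the example from the text):

```python
# Optimally damped Richardson iteration: verify that the spectral radius of
# M = I - omega_bar*A equals (kappa - 1)/(kappa + 1) for an SPD test matrix.
import numpy as np

n = 20
# SPD test matrix (1D three-point stencil), chosen only for illustration
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

lam = np.linalg.eigvalsh(A)                      # eigenvalues, ascending
lam_min, lam_max = lam[0], lam[-1]
omega_bar = 2.0 / (lam_min + lam_max)            # optimal parameter (5.32)
kappa = lam_max / lam_min
rho_predicted = (kappa - 1.0) / (kappa + 1.0)    # contraction number (5.33)

M = np.eye(n) - omega_bar * A                    # damped Richardson iteration matrix
rho_observed = np.max(np.abs(np.linalg.eigvals(M)))

print(f"omega_bar     = {omega_bar:.6f}")
print(f"rho predicted = {rho_predicted:.6f}")
print(f"rho observed  = {rho_observed:.6f}")     # matches the prediction
```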
For large $\kappa(A)$ we have
$$\varrho(M_{\bar\omega}) = \frac{\kappa(A) - 1}{\kappa(A) + 1} \approx 1 - \frac{2}{\kappa(A)}\,,$$
that is, the distance of the contraction number from 1 shrinks proportionally to $1/\kappa(A)$. For the example of the five-point stencil discretization, due to (5.24),

                                             n − 1       π
                λ min (A)+ λ max (A)= 4 2 − cos   π − cos   =8 ,
                                               n         n
and thus, due to (5.32),
$$\bar\omega = \frac{1}{4}\,.$$
Hence the iteration matrix $M_{\bar\omega} = I - \tfrac{1}{4}A$ is identical to that of the Jacobi iteration: we have rediscovered Jacobi's method.
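This identification can also be verified numerically (a small sketch, assuming the usual Kronecker-product construction of the five-point stencil matrix with diagonal entries 4, consistent with (5.24); the grid parameter n = 8 is chosen only for illustration):

```python
import numpy as np

n = 8                                   # number of subintervals (illustrative)
m = n - 1                               # interior grid points per direction
T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A = np.kron(np.eye(m), T) + np.kron(T, np.eye(m))   # five-point stencil, diag = 4

lam = np.linalg.eigvalsh(A)
print(lam[0] + lam[-1])                 # lambda_min + lambda_max = 8
print(2.0 / (lam[0] + lam[-1]))         # omega_bar = 0.25

# Damped Richardson with omega_bar coincides with the Jacobi iteration matrix
M_richardson = np.eye(m * m) - 0.25 * A
D_inv = np.diag(1.0 / np.diag(A))       # D^{-1}, here D = 4*I
M_jacobi = np.eye(m * m) - D_inv @ A
print(np.allclose(M_richardson, M_jacobi))          # True
```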
By means of (5.33) we can estimate the contraction number, since we know from (5.24) that
$$\kappa(A) = \frac{4\left(1 - \cos\frac{n-1}{n}\pi\right)}{4\left(1 - \cos\frac{\pi}{n}\right)} = \frac{1 + \cos\frac{\pi}{n}}{1 - \cos\frac{\pi}{n}} \approx \frac{4n^2}{\pi^2}\,. \tag{5.34}$$
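For instance (an illustrative evaluation of (5.33) and (5.34); the value $n = 32$ is chosen here only as an example),
$$\kappa(A) \approx \frac{4\cdot 32^2}{\pi^2} \approx 415\,, \qquad \varrho(M_{\bar\omega}) \approx 1 - \frac{2}{415} \approx 0.995\,,$$
so the contraction number approaches 1 as the grid is refined.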