  n        m_GS     m_SOR
  8          43         8
 16         178        17
 32         715        35
 64        2865        70
128       11466       140
256       45867       281
Table 5.1. Gauss–Seidel and optimal SOR method for the model problem.
with optimal relaxation parameter ($m_{SOR}$). Here we use the very moderate
termination criterion $\varepsilon = 10^{-3}$ measured in the Euclidean norm.
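To make the comparison concrete, the following sketch runs Gauss--Seidel (SOR with $\omega = 1$) and SOR on the five-point discretization of the Poisson model problem on the unit square. It is only an illustration, not the experiment behind Table 5.1: the interpretation of $n$ as the number of subdivisions per direction, the formula $\omega_{opt} = 2/(1+\sin(\pi h))$ for the model problem, and the termination test on the Euclidean norm of the increment are assumptions made here for concreteness.

```python
import numpy as np

def model_problem(n):
    """Five-point discretization of -Laplace(u) = 1 on the unit square with
    homogeneous Dirichlet data; h = 1/n, n subdivisions per direction
    (assumption about the table's convention)."""
    h = 1.0 / n
    m = n - 1                               # interior points per direction
    N = m * m
    A = np.zeros((N, N))
    for i in range(m):
        for j in range(m):
            k = i * m + j
            A[k, k] = 4.0
            if i > 0:
                A[k, k - m] = -1.0
            if i < m - 1:
                A[k, k + m] = -1.0
            if j > 0:
                A[k, k - 1] = -1.0
            if j < m - 1:
                A[k, k + 1] = -1.0
    b = h * h * np.ones(N)
    return A, b, h

def sor(A, b, omega, eps=1e-3, maxit=100000):
    """SOR sweeps; omega = 1 gives the Gauss-Seidel method.  Termination on
    the Euclidean norm of the increment is an assumption made here."""
    x = np.zeros_like(b)
    diag = np.diag(A)
    for m in range(1, maxit + 1):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i, :] @ x - diag[i] * x[i]          # uses already updated entries
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / diag[i]
        if np.linalg.norm(x - x_old) < eps:
            return m
    return maxit

for n in (8, 16, 32):
    A, b, h = model_problem(n)
    omega_opt = 2.0 / (1.0 + np.sin(np.pi * h))       # optimal omega for the model problem
    print(n, sor(A, b, 1.0), sor(A, b, omega_opt))
```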
The optimal SOR method is superior, even if we take into account the
almost doubled effort per iteration step. In general, however, $\omega_{opt}$ is not known
explicitly. Figure 5.3 shows that it is probably better to overestimate $\omega_{opt}$
than to underestimate it. More generally, one can try to improve the
relaxation parameter during the iteration:
If $\varrho(M_J)$ is a simple eigenvalue, then this also holds true for the spectral
radius $\varrho(M_{SOR(\omega)})$. The spectral radius can thus be approximated by the
power method on the basis of the iterates. By Theorem 5.9 (3) one can
approximate $\varrho(M_J)$, and by Theorem 5.9 (1) then also $\omega_{opt}$.
This basic principle can be extended to an algorithm (see, for example,
[18, Section 9.5]), but the resulting overall procedure is no longer a linear
stationary method.
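A minimal sketch of this principle (not the algorithm of [18, Section 9.5]) could look as follows. The relations $\lambda + \omega - 1 = \omega\,\mu\,\sqrt{\lambda}$ for $\omega \le \omega_{opt}$ and $\omega_{opt} = 2/(1+\sqrt{1-\mu^2})$, with $\mu = \varrho(M_J)$ and $\lambda = \varrho(M_{SOR(\omega)})$, are the standard relations for consistently ordered matrices and are assumed here to be what Theorem 5.9 provides.

```python
import numpy as np

def adaptive_sor(A, b, omega=1.0, eps=1e-3, maxit=100000, update_every=20):
    """SOR with occasional re-estimation of the relaxation parameter.

    Assumed relations (standard SOR theory for consistently ordered matrices,
    taken here as the content of Theorem 5.9):
        lam + omega - 1 = omega * mu * sqrt(lam)   for omega <= omega_opt,
        omega_opt       = 2 / (1 + sqrt(1 - mu**2)),
    with mu = rho(M_J) and lam = rho(M_SOR(omega))."""
    x = np.zeros_like(b)
    diag = np.diag(A)
    d_old = None
    for m in range(1, maxit + 1):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i, :] @ x - diag[i] * x[i]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / diag[i]
        d = np.linalg.norm(x - x_old)
        if d < eps:
            return x, m, omega
        if d_old is not None and m % update_every == 0:
            lam = min(d / d_old, 0.999999)        # power-method estimate of rho(M_SOR(omega))
            mu = (lam + omega - 1.0) / (omega * np.sqrt(lam))   # estimate of rho(M_J)
            if 0.0 < mu < 1.0:
                omega = 2.0 / (1.0 + np.sqrt(1.0 - mu * mu))    # new estimate of omega_opt
        d_old = d
    return x, maxit, omega
```

A single increment ratio is of course a noisy estimate of the spectral radius; in practice one would average over several steps before updating $\omega$.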
5.1.5 Extrapolation Methods
Another possibility for an extension of the linear stationary methods, related
to the adaptation of the relaxation parameter, is the following: Starting
with a linear stationary basic iteration $\tilde{x}^{(k+1)} := \Phi\bigl(\tilde{x}^{(k)}\bigr)$, we define a new
iteration by
$$
x^{(k+1)} := \omega_k \Phi\bigl(x^{(k)}\bigr) + (1 - \omega_k)\, x^{(k)}, \qquad (5.41)
$$
with extrapolation factors $\omega_k$ to be chosen.
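Written as code, one pass of (5.41) is a single blending step. In the following sketch the Jacobi step serves as the basic iteration $\Phi$, purely as an illustrative choice, and the factors $\omega_k$ are supplied by the caller.

```python
import numpy as np

def extrapolated_iteration(A, b, x0, omegas):
    """Iteration (5.41): x^(k+1) = omega_k * Phi(x^(k)) + (1 - omega_k) * x^(k).

    Phi is taken to be the Jacobi step -- an illustrative choice; any linear
    stationary basic iteration could be used instead."""
    diag = np.diag(A)
    phi = lambda x: x + (b - A @ x) / diag    # Jacobi step Phi(x)
    x = np.asarray(x0, dtype=float)
    for omega_k in omegas:                    # extrapolation factors omega_k
        x = omega_k * phi(x) + (1.0 - omega_k) * x
    return x
```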
A generalization of this definition is to start with the iterates of the basic
iteration $\tilde{x}^{(0)}, \tilde{x}^{(1)}, \ldots$ The iterates of the new method are to be determined by
$$
x^{(k)} := \sum_{j=0}^{k} \alpha_{kj}\, \tilde{x}^{(j)},
$$
with the $\alpha_{kj}$ defined by a polynomial $p_k \in P_k$ with the properties
$p_k(t) = \sum_{j=0}^{k} \alpha_{kj}\, t^j$ and $p_k(1) = 1$.
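A direct, if memory-hungry, realization of this definition might look as follows; the basic iteration phi and the coefficient tuples alphas[k] $= (\alpha_{k0},\ldots,\alpha_{kk})$ with $\sum_j \alpha_{kj} = 1$ are assumed to be supplied by the caller.

```python
def semi_iterative(phi, x0, alphas):
    """Semi-iterative method: x^(k) = sum_{j=0}^{k} alpha_{kj} * x_tilde^(j).

    phi    -- basic iteration, x_tilde^(j+1) = phi(x_tilde^(j))
    alphas -- alphas[k] = (alpha_k0, ..., alpha_kk), coefficients of a
              polynomial p_k with p_k(1) = sum_j alpha_kj = 1"""
    basic = [x0]                              # x_tilde^(0), x_tilde^(1), ...
    combined = []
    for k, coeffs in enumerate(alphas):
        while len(basic) <= k:                # advance the basic iteration up to x_tilde^(k)
            basic.append(phi(basic[-1]))
        combined.append(sum(a * xt for a, xt in zip(coeffs, basic)))
    return combined
```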
For an appropriate definition of such extrapolation or semi-iterative methods
we need to know the spectrum of the basic iteration matrix $M$, since the
error $e^{(k)} = x^{(k)} - x$ satisfies
$$
e^{(k)} = p_k(M)\, e^{(0)},
$$