Page 182 - Matrices theory and applications
Each iteration thus needs O(n) operations. Finally, the condition number of A is of order c/h^2. Hence, a number of iterations N ∼ 1/h is appropriate.
This is worthwhile as soon as d ≥ 2. The method becomes more useful as
d grows larger and the threshold 1/h is independent of the dimension.
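The scaling of the condition number can be checked numerically on the one-dimensional model problem. The sketch below, in which the helper `laplacian_1d` is an illustrative construction and not part of the text, builds the tridiagonal discrete Laplacian with mesh size h = 1/(n+1) and verifies that the condition number grows like c/h^2, so that the iteration count sqrt(cond) · scales like 1/h:

```python
import numpy as np

def laplacian_1d(n):
    """Tridiagonal matrix of the 1-D discrete Laplacian on n interior
    points of (0, 1), with mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return A, h

# cond(A) is of order c/h^2: cond * h^2 stays roughly constant as n grows,
# and sqrt(cond) * h (proportional to the needed iteration count times h)
# stays roughly constant as well.
for n in (10, 20, 40):
    A, h = laplacian_1d(n)
    cond = np.linalg.cond(A)
    print(n, cond * h**2, np.sqrt(cond) * h)
```

For this model problem the constant is c = 4/π² ≈ 0.405, since the extreme eigenvalues are approximately π² and 4/h².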
Preconditioning: In practice, the performance of the method is improved
by preconditioning the matrix A. The idea is to replace the system Ax = b by B^T ABy = B^T b, where the inversion of B is easy; for example, B is block-triangular or block-diagonal with small blocks. If BB^T is close enough to A^{-1}, the condition number of the new matrix is smaller, and the number of iterations is reduced. Actually, when the condition number reaches its infimum K = 1, we have A = I_n, and the solution x̄ = b is obvious. The simplest preconditioning consists in choosing B = D^{-1/2}. Its efficiency is clear in the (trivial) case where A is diagonal, because the matrix of the new system is I_n, and the condition number is lowered to 1. Observe that
preconditioning is also used with SOR, because it allows us to diminish the value of ρ(J), hence to improve the convergence rate. We shall see in Exercise 5
that, if A ∈ SPD n is tridiagonal and if D = dI n (which corresponds to the
preconditioning described above), the conjugate gradient method is twice
as slow as the relaxation method with optimal parameter; that is,
θ = (1/2) τ_RL.
This equality is obtained by computing θ and the optimal convergence rate τ_RL of the relaxation method in terms of ρ(J). In the real world, in which A might not be tridiagonal, or be only blockwise tridiagonal, the map ρ(J) ↦ θ remains the same, while τ_RL deteriorates. The conjugate gradient method then becomes more efficient than the relaxation method. It also has the advantage of not needing the preliminary computation of ρ(J), in contrast to the relaxation method with optimal parameter.
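The preconditioned conjugate gradient iteration discussed above can be sketched as follows. This is a minimal illustration, assuming A is symmetric positive definite; the function `pcg` and its interface are not from the text. Applying the diagonal matrix D^{-1} to the residual at each step corresponds to the choice B = D^{-1/2}, since B^T AB = D^{-1/2} A D^{-1/2} is similar to D^{-1} A:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxiter=1000):
    """Conjugate gradient preconditioned by the diagonal of A (Jacobi
    preconditioning). M_inv_diag holds the entries of D^{-1}; this is
    equivalent to running plain CG on B^T A B y = B^T b with B = D^{-1/2}.
    Returns the approximate solution and the number of iterations used."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv_diag * r            # preconditioned residual z = D^{-1} r
    p = z.copy()                  # first search direction
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p # conjugate the new direction
        rz = rz_new
    return x, maxiter
```

In the trivial case where A is already diagonal, the first preconditioned residual z = D^{-1}b is the exact solution and the iteration stops after a single step, matching the remark that the condition number of the new system is lowered to 1.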
The reader will find a deeper analysis of the conjugate gradient method in the article of J.-F. Maître in [1].
9.6 Exercises
1. Let A be a tridiagonal matrix with an invertible diagonal and let J
be its Jacobi matrix. Show that J is conjugate to −J. Compare with
Proposition 9.4.1.
2. We fix n ≥ 2. Use Theorem 3.4.2 to construct a matrix A ∈ SPD n
for which the Jacobi method does not converge. Show in particular
that
sup{ρ(J) | A ∈ SPD_n, D = I_n} = n − 1.
3. Let A ∈ M_n(ℝ) satisfy a_ii > 0 for every index i, and a_ij ≤ 0
whenever j ≠ i. Using (several times) the weak form of the Perron–