9 Iterative Methods for Linear Problems
In this chapter the field of scalars is K = ℝ or ℂ.
We have seen in the previous chapter a few direct methods for solving
a linear system Ax = b, when A ∈ Mₙ(K) is invertible. For example, if A
admits an LU factorization, the successive resolution of Ly = b, Ux = y
is called the Gauss method. When a leading principal minor of A van-
ishes, a permutation of the columns allows us to return to the generic case.
More generally, the Gauss method with pivoting consists in permuting the
columns at each step of the factorization in such a way as to limit the
magnitude of the round-off errors and of the condition number of the
matrices L, U.
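As an illustration of the two triangular solves Ly = b, Ux = y, here is a minimal Python sketch (ours, not from the text); the factors L and U are assumed to be already available, and the small matrix used at the end is hypothetical data chosen so that A = LU.

```python
import numpy as np

def forward_substitution(L, b):
    """Solve Ly = b for a lower-triangular L."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward_substitution(U, y):
    """Solve Ux = y for an upper-triangular U."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Hypothetical example with A = LU (leading principal minors are nonzero).
A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])
L = np.array([[1.0, 0.0], [1.5, 1.0]])
U = np.array([[4.0, 3.0], [0.0, -1.5]])
x = backward_substitution(U, forward_substitution(L, b))
print(x, np.allclose(A @ x, b))   # [1. 2.] True
```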
The direct computation of the solution of a Cramer's linear system Ax =
b, by the Gauss method or by any other direct method, is rather costly,
on the order of n³ operations. It also presents several inconveniences. On
the one hand, it does not fully exploit the sparse structure of many
matrices A; in numerical analysis it happens frequently that an n × n
matrix has only O(n) nonzero entries, instead of O(n²). On the other hand,
the computation of an LU factorization is rather unstable, because the
round-off errors produced by the computer are amplified at each step of
the computation.
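To make the remark on sparsity concrete, the following sketch (again ours, under the assumption that A is tridiagonal) stores only the three nonzero diagonals of an n × n matrix and applies A to a vector in O(n) operations, where a dense product would take O(n²); products of this kind are the basic building block of the iterative methods discussed next.

```python
import numpy as np

def tridiagonal_matvec(lower, diag, upper, x):
    """Compute Ax for a tridiagonal A stored by its three nonzero diagonals.

    Only the O(n) nonzero entries are stored and touched, so the cost is
    O(n) instead of the O(n^2) of a dense matrix-vector product.
    """
    y = diag * x
    y[1:] += lower * x[:-1]    # subdiagonal contribution
    y[:-1] += upper * x[1:]    # superdiagonal contribution
    return y

# Hypothetical example: the 1-D discrete Laplacian (2 on the diagonal, -1 off it).
n = 5
diag = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
x = np.arange(1.0, n + 1.0)
print(tridiagonal_matvec(off, diag, off, x))   # [0. 0. 0. 0. 6.]
```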
For these reasons, one often uses an iterative method to compute an ap-
proximate solution xᵐ, instead of an exact solution. The iterative methods
fully exploit the sparse structure of A. The number of operations is O(am),
where a is the number of nonzero entries in A. The choice of m depends on
the accuracy that one requires a priori. It is, however, modest, because
the error xᵐ − x̄ from the exact solution x̄ is of order constant × kᵐ,