To be more precise, if $\|\cdot\|$ is a norm on $K^n$, then
$$\|x^m - A^{-1}b\| \le \bigl\|(M^{-1}N)^m\bigr\|\,\|x^0 - A^{-1}b\|.$$
From Householder's theorem (Theorem 4.2.1), there exists for every $\varepsilon > 0$ a constant $C(\varepsilon) < \infty$ such that
$$\|x^m - A^{-1}b\| \le C(\varepsilon)\,\|x^0 - A^{-1}b\|\,\bigl(\rho(M^{-1}N) + \varepsilon\bigr)^m.$$
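One way to see where the constant $C(\varepsilon)$ comes from (a sketch, using only the facts already invoked): Householder's theorem provides an induced norm $\|\cdot\|_\varepsilon$ on $K^n$ with $\|M^{-1}N\|_\varepsilon \le \rho(M^{-1}N) + \varepsilon$, and since all norms on $K^n$ are equivalent, there are constants $c_1, c_2$ with $\|\cdot\| \le c_1\|\cdot\|_\varepsilon$ and $\|\cdot\|_\varepsilon \le c_2\|\cdot\|$. Applying the previous bound in the norm $\|\cdot\|_\varepsilon$ gives
$$\|x^m - A^{-1}b\| \le c_1\,\bigl\|(M^{-1}N)^m\bigr\|_\varepsilon\,\|x^0 - A^{-1}b\|_\varepsilon \le c_1 c_2\,\bigl(\rho(M^{-1}N)+\varepsilon\bigr)^m\,\|x^0 - A^{-1}b\|,$$
so one may take $C(\varepsilon) = c_1 c_2$.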
In most cases (in fact, when there exists an induced norm satisfying $\|M^{-1}N\| = \rho(M^{-1}N)$), one can choose $\varepsilon = 0$ in this inequality, so that
$$\|x^m - A^{-1}b\| = O\bigl(\rho(M^{-1}N)^m\bigr).$$
The choice of a vector $x^0$ such that $x^0 - A^{-1}b$ is an eigenvector associated to an eigenvalue of maximal modulus shows that this inequality cannot be improved in general. For this reason, we call the positive number
$$\tau := -\log \rho(M^{-1}N)$$
the convergence ratio of the method. Given two convergent methods, we say that the first one converges faster than the second one if $\tau_1 > \tau_2$. For example, we say that it converges twice as fast if $\tau_1 = 2\tau_2$. In fact, with an error of order $\rho(M^{-1}N)^m = \exp(-m\tau)$, we see that the faster method needs only half as many iterations to obtain the same accuracy.
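As a numerical illustration (not part of the text), the following sketch uses NumPy and a small hypothetical matrix to compute $\rho(M^{-1}N)$ and the convergence ratio $\tau$ for a splitting A = M − N, and to estimate how many iterations reduce the error by a factor of $10^6$; here M is taken to be the diagonal part of A, anticipating the Jacobi method below.

    import numpy as np

    # Hypothetical test matrix (illustration only).
    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])

    # A splitting A = M - N; here M is the diagonal part of A.
    M = np.diag(np.diag(A))
    N = M - A

    G = np.linalg.solve(M, N)               # iteration matrix M^{-1} N
    rho = max(abs(np.linalg.eigvals(G)))    # spectral radius rho(M^{-1} N)
    tau = -np.log(rho)                      # convergence ratio

    # Iterations m needed so that rho^m <= 1e-6, i.e. m >= 6 log(10) / tau.
    m_needed = int(np.ceil(6 * np.log(10) / tau))
    print(rho, tau, m_needed)

Doubling $\tau$ halves this estimate of the number of iterations, in agreement with the statement above.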
9.2 Basic Methods
There are three basic iterative methods, of which the first has only a historical or theoretical interest. Each uses the decomposition of A into three parts, a diagonal one D, a lower triangular one −E, and an upper triangular one −F:
$$A = D - E - F = \begin{pmatrix} d_1 & & -F \\ & \ddots & \\ -E & & d_n \end{pmatrix}.$$
In all cases, one assumes that D is invertible: The diagonal entries of A are
nonzero.
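In code, this splitting is straightforward; the following sketch (assuming NumPy and a hypothetical matrix) extracts D, E, and F and checks that A = D − E − F:

    import numpy as np

    A = np.array([[ 4.0, -1.0,  2.0],
                  [ 1.0,  5.0, -2.0],
                  [ 2.0, -1.0,  6.0]])   # hypothetical example

    D = np.diag(np.diag(A))   # diagonal part of A
    E = -np.tril(A, k=-1)     # so that -E is the strictly lower triangular part
    F = -np.triu(A, k=1)      # so that -F is the strictly upper triangular part

    assert np.allclose(A, D - E - F)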
Jacobi method: One chooses M = D; thus N = E + F. The iteration matrix is $J := D^{-1}(E + F)$. Knowing the vector $x^m$, one computes the components of the vector $x^{m+1}$ by the formula
$$x_i^{m+1} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j \ne i} a_{ij}\,x_j^m\Bigr).$$
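A minimal sketch of this update rule (assuming NumPy; the matrix, right-hand side, and iteration count are hypothetical choices for illustration):

    import numpy as np

    def jacobi_step(A, b, x):
        # x_i^{m+1} = (b_i - sum_{j != i} a_ij * x_j^m) / a_ii
        n = len(b)
        x_new = np.empty_like(x)
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        return x_new

    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    b = np.array([1.0, 2.0, 3.0])

    x = np.zeros(3)
    for m in range(50):
        x = jacobi_step(A, b, x)

    print(x)                      # approximate solution
    print(np.linalg.solve(A, b))  # reference solution

Equivalently, the whole vector is updated at once by $x^{m+1} = Jx^m + D^{-1}b$, with $J = D^{-1}(E + F)$ as above.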