Numerical Methods for Chemical Engineering
6 Boundary value problems
This is done by storing at iteration k an N × k matrix V^{[k]} whose column vectors are orthonormal basis vectors of K_k(r^{[0]}; A). We then can write any p ∈ K_k(r^{[0]}; A) as the linear combination

p = V^{[k]} c,   c ∈ \mathbb{R}^k    (6.153)
We find p^{[k]} = V^{[k]} c^{[k]} by minimizing the quadratic cost function

F(c) = \left\| b − A\left( x^{[0]} + V^{[k]} c \right) \right\|_2^2 = \left\| r^{[0]} − A V^{[k]} c \right\|_2^2    (6.154)
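The least-squares minimization (6.154) over an orthonormal Krylov basis can be sketched in a few lines. The following is a minimal Python/NumPy sketch, not the book's implementation: the basis V is built by Arnoldi iteration (modified Gram–Schmidt), and the 4 × 4 test matrix and right-hand side are assumed example values, not the book's. The sketch also assumes no Arnoldi breakdown (no division by a zero norm).

```python
import numpy as np

def gmres_step(A, b, x0, k):
    """One GMRES cycle sketch: build an orthonormal basis V of the
    Krylov subspace K_k(r0; A) by Arnoldi iteration, then minimize
    F(c) = ||r0 - A V c||_2^2 (eq. 6.154) and return x0 + V c."""
    r0 = b - A @ x0
    V = np.zeros((len(b), k))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        for i in range(j):                 # modified Gram-Schmidt against earlier vectors
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)    # assumes no breakdown
    # minimize ||r0 - A V c||_2 over c in R^k by least squares
    c, *_ = np.linalg.lstsq(A @ V, r0, rcond=None)
    return x0 + V @ c

# assumed example system (not the text's matrix)
A = np.array([[4.0, 1, 0, 1], [1, 4, 1, 0], [0, 1, 4, 1], [1, 0, 1, 4]])
b = np.array([5.0, 6, 7, 8])
x = gmres_step(A, b, np.zeros(4), 4)      # k = N, so the residual drops to roundoff level
print(np.linalg.norm(b - A @ x))
```

With k = N the Krylov subspace (generically) spans all of R^N, illustrating why GMRES converges in at most N iterations.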
It may be shown that the GMRES method converges in at most N iterations; however, the
memory required to store V [k] for large k is considerable, and the GMRES method becomes
unwieldy after a few iterations. In practice, we restart the method every m < N steps, so
that V [k] never becomes larger than an N × m matrix.
To demonstrate gmres, we start with the matrix A used to demonstrate pcg above, and
add 1 to the (1, 4) element, destroying its symmetry. While this is a problem for pcg, it is
not for gmres.
A2 = A; A2(1,4) = A(1,4) + 1;
x = pcg(A2,b), % this should not work
pcg stopped at iteration 4 without converging to the desired tolerance
1e-006 because the maximum number of iterations was reached.
The iterate returned (number 4) has relative residual 0.16
x =
0.6356
2.1460
2.6616
1.8970
x = gmres(A2,b), % but this should work
gmres converged at iteration 4 to a solution with relative residual 1.2e-015
x =
0.6667
2.0000
2.3333
1.6667
To restart gmres every restart iterations, use x = gmres(A,b,restart).
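The same comparison can be run in Python with SciPy's iterative solvers, which expose a comparable `restart` parameter. This is a hedged sketch: the matrix and right-hand side are assumed example values mirroring the MATLAB demo above (the book's actual A and b may differ), and `scipy.sparse.linalg.gmres` returns a convergence flag rather than printing a message.

```python
import numpy as np
from scipy.sparse.linalg import gmres

# assumed example system (stands in for the book's pcg demo matrix)
A = np.array([[4.0, 1, 0, 1], [1, 4, 1, 0], [0, 1, 4, 1], [1, 0, 1, 4]])
A2 = A.copy()
A2[0, 3] = A[0, 3] + 1          # add 1 to the (1,4) element, destroying symmetry

b = np.array([5.0, 6, 7, 8])

# restarted GMRES: the Krylov basis is discarded and rebuilt every
# `restart` iterations, so no more than an N x restart basis is stored
x, info = gmres(A2, b, restart=2, maxiter=200)
print(info, np.linalg.norm(b - A2 @ x))   # info == 0 signals convergence
```

As in the MATLAB session, the loss of symmetry is no obstacle for GMRES, while a conjugate-gradient solver applied to A2 is not guaranteed to converge.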
The use of preconditioners
The conjugate gradient and GMRES methods converge in at most N iterations, but is it
possible for them to converge in significantly fewer than N iterations? Consider for the linear
system Ax = b the first conjugate gradient or GMRES update
x^{[1]} = x^{[0]} + a^{[0]} r^{[0]},    r^{[0]} = b − A x^{[0]}    (6.155)
Let W be a matrix whose columns are eigenvectors of A and Λ be the diagonal matrix
containing the eigenvalues of A, AW = WΛ. Let us assume that A is diagonalizable, such