Page 260 - Compact Numerical Methods For Computers
Conjugate gradients method in linear algebra 247
Algorithm 25. Rayleigh quotient minimisation by conjugate gradients (cont.)
avec, bvec, yvec, zvec, g, t : rvector;
beta, d, eps, g2, gg, oldg2, pa, pn, s2, step : real;
t2, ta, tabt, tat, tb, tbt, tol, u, v, w, xat, xax, xbt, xbx : real;
conv, fail : boolean;
begin
writeln('alg25.pas -- Rayleigh quotient minimisation');
itlimit := ipr; {to save the iteration limit} {STEP 0}
fail := false; {Algorithm has yet to fail.}
conv := false; {Algorithm has yet to converge.}
ipr := 0; {to initialize the iteration count}
eps := calceps;
tol := n*n*eps*eps; {a convergence tolerance}
{The convergence tolerance, tol, should ideally be chosen relative
to the norm of the B matrix when used for deciding if matrix B is
singular. It needs a different scaling when deciding if the gradient
is “small”. Here we have chosen a compromise value. Performance of
this method could be improved in production codes by judicious choice
of the tolerances.}
pa := big; {a very large initial value for the ‘minimum’ eigenvalue}
while (ipr<=itlimit) and (not conv) do
begin {Main body of algorithm}
matmul(n, A, X, avec); {STEP 1}
matmul(n, B, X, bvec); {initial matrix multiplication}
ipr := ipr+1; {to count the number of products used}
{STEP 2: Now form the starting Rayleigh quotient}
xax := 0.0; xbx := 0.0; {accumulators for numerator and denominator
of the Rayleigh quotient}
for i := 1 to n do
begin
xax := xax+X[i]*avec[i]; xbx := xbx+X[i]*bvec[i];
end; {loop on i for Rayleigh quotient}
if xbx<=tol then halt; {STEP 3: safety check to avoid zero divide.
This may imply a singular matrix B, or an inappropriate
starting vector X, i.e. one which is in the null space of B
(implying B is singular!) or which is itself null.}
rq := xax/xbx; {the Rayleigh quotient -- STEP 4}
write(ipr,' products -- ev approx. =',rq:18);
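For readers working outside Pascal, STEPS 1 to 4 can be sketched in Python. This is an illustrative translation, not the book's code: the list comprehensions stand in for the matmul calls, and the function name rayleigh_quotient is our own.

```python
def rayleigh_quotient(A, B, x):
    """Rayleigh quotient rq = (x'Ax)/(x'Bx) for the problem A x = e B x.

    Illustrative sketch of STEPS 1-4 of Algorithm 25; A and B are lists
    of lists, x is a list. Raises on a non-positive denominator, mirroring
    the safety check at STEP 3.
    """
    n = len(x)
    # STEP 1: matrix-vector products avec = A x, bvec = B x (cf. matmul)
    avec = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    bvec = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
    # STEP 2: accumulate numerator x'Ax and denominator x'Bx
    xax = sum(x[i] * avec[i] for i in range(n))
    xbx = sum(x[i] * bvec[i] for i in range(n))
    # STEP 3: safety check to avoid zero divide (B singular or x null)
    if xbx <= 0.0:
        raise ValueError("x'Bx not positive: B singular or x in its null space")
    # STEP 4: the Rayleigh quotient itself
    return xax / xbx
```

With A = diag(2, 3) and B the identity, x = (1, 0) gives rq = 2, the smaller eigenvalue, while x = (1, 1) gives the intermediate value 2.5.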
if rq<pa then {Principal convergence check, since if the Rayleigh
quotient has not been decreased in a major cycle, we must
presume that the minimum has been found. Note that this test
requires that we initialize pa to a large number.}
begin {body of algorithm -- STEP 5}
pa := rq; {to save the lowest value so far}
gg := 0.0; {to initialize gradient norm. Now calculate gradient.}
for i := 1 to n do {STEP 6}
begin
g[i] := 2.0*(avec[i]-rq*bvec[i])/xbx; gg := gg+g[i]*g[i];
end; {gradient calculation}
writeln(' squared gradient norm =',gg:8);
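The gradient formed at STEP 6 is that of the Rayleigh quotient, g = 2(Ax - rq*Bx)/(x'Bx), which vanishes exactly at an eigenvector. A hedged Python sketch (our own function name rq_gradient, again with explicit loops in place of matmul):

```python
def rq_gradient(A, B, x):
    """Gradient g of the Rayleigh quotient and its squared norm gg.

    Illustrative sketch of STEP 6 of Algorithm 25:
        g[i] = 2*(avec[i] - rq*bvec[i])/xbx,   gg = sum of g[i]^2.
    """
    n = len(x)
    avec = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # A x
    bvec = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]  # B x
    xax = sum(x[i] * avec[i] for i in range(n))  # x'Ax
    xbx = sum(x[i] * bvec[i] for i in range(n))  # x'Bx
    rq = xax / xbx
    # Gradient of rq with respect to x, and its squared norm
    g = [2.0 * (avec[i] - rq * bvec[i]) / xbx for i in range(n)]
    gg = sum(gi * gi for gi in g)
    return g, gg
```

At an eigenvector the gradient is null, so gg = 0 and the test at STEP 7 signals convergence; for any other x, gg is positive and iteration continues.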
if gg>tol then {STEP 7}
{Test to see if algorithm has converged. This test is unscaled
and in some problems it is necessary to scale the tolerance tol