238 Compact numerical methods for computers
Algorithm 24. Solution of a consistent set of linear equations by conjugate gradients
(cont.)
    ssmin := -big; {to indicate this failure}
  end
  else
  begin
    {The step length has been computed and the gradient norm saved.}
    g2 := 0.0; count := 0; {STEP 8}
    for i := 1 to n do
    begin
      g[i] := g[i]+step*v[i]; {to update the gradient/residual}
      t2 := Bvec[i]; Bvec[i] := t2+step*t[i]; {to update the solution}
      if Bvec[i]=t2 then count := count+1;
      g2 := g2+g[i]*g[i]; {accumulate gradient norm}
    end; {loop on i -- updating}
    if count<n then {STEP 9}
    begin
      if g2>tol then
      begin {STEP 10 -- recurrence relation to give next search direction.}
        t2 := g2/oldg2;
        for i := 1 to n do t[i] := t2*t[i]-g[i];
      end; {if g2>tol}
    end; {if count<n}
    {writeln('Iteration ',itcount,' count=',count,' residual ss=',g2);}
    ssmin := g2; {to return the value of the estimated sum of squares}
  end; {else on stepsize failure}
end; {while g2>tol}
if itcount>=itlimit then itcount := -itcount; {to notify calling program
of the convergence failure.}
{After n cycles, the conjugate gradients algorithm will, in exact
arithmetic, have converged. However, in floating-point arithmetic
the solution may still need refinement. For best results, the
residual vector g should be recomputed at this point, in the same
manner as it was initially computed. There are a number of ways
this can be programmed. Here we are satisfied to merely point out
the issue.}
end; {alg24.pas == lecg}
The above algorithm requires five working vectors to store the problem and intermediate results
as well as the solution. This is exclusive of any space needed to store or generate the coefficient
matrix.
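The gradient and solution updates of steps 8-10 can be sketched in a modern language. The following Python translation is ours, not the book's Pascal; the names `cg_solve` and `matvec` are illustrative. Like algorithm 24, it accepts the coefficient matrix only through a matrix-vector product, and it recomputes the residual on exit as the comment above recommends.

```python
import numpy as np

def cg_solve(matvec, b, tol=1e-12, itlimit=None):
    """Conjugate gradients for A x = b, A symmetric non-negative definite.
    matvec(v) must return A @ v, so A need never be stored explicitly.
    Returns the solution and the recomputed residual sum of squares."""
    n = len(b)
    x = np.zeros(n)
    g = matvec(x) - b            # gradient/residual of (1/2) x'Ax - x'b
    t = -g.copy()                # first search direction: steepest descent
    g2 = g @ g
    itlimit = 2 * n if itlimit is None else itlimit
    for itcount in range(itlimit):
        if g2 <= tol:
            break
        v = matvec(t)
        tAt = t @ v
        if tAt <= 0.0:           # step-length failure: A not definite here
            break
        step = g2 / tAt          # STEP 7: step length along t
        x = x + step * t         # STEP 8: update the solution
        g = g + step * v         #         update the gradient recursively
        oldg2, g2 = g2, g @ g
        t = (g2 / oldg2) * t - g # STEP 10: recurrence for next direction
    # In floating point the recursive residual may drift, so recompute it
    # from the solution, as in the initial computation.
    g = matvec(x) - b
    return x, g @ g
```

For instance, with A = [[4, 1], [1, 3]] and b = [1, 2], `cg_solve(lambda v: A @ v, b)` converges in two iterations to x = (1/11, 7/11).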
Example 19.1. Solution of Fröberg’s differential equation (example 2.2)
Recall that the finite difference approximation of a second-order boundary value
problem gives rise to a set of linear equations whose coefficient matrix is easily
generated. That is to say, there is no need to build the set of equations explicitly if
the coefficient matrix satisfies the other conditions of the conjugate gradients
method. This coefficient matrix turns out to be negative definite, so that we can solve the equations
(-A)x = -b                                        (19.12)
by means of the conjugate gradients algorithm 24. Alternatively, if the coefficient