                       Algorithm 24. Solution of a consistent set of linear equations by conjugate gradients
                      (cont.)
computes the matrix-vector product H t and places it in
                             v, we really only need have
                                      matmulH(n, t, v),
                             with the matrix being implicit. In many situations, the
                             elements of H are simple to provide or calculate so that
                             its explicit storage is not required. Such modifications
                             allow array H to be removed from the calling sequence
                             of this procedure.
                                      Copyright 1988 J.C.Nash
                        }
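                         {A minimal sketch of such an implicit product routine is given
                           here, assuming for illustration that H is the tridiagonal
                           matrix with 2 on the diagonal and -1 on the off-diagonals;
                           matmulH and this choice of H are illustrative only and are
                           not part of the original algorithm:

                             procedure matmulH(n : integer; t : rvector; var v : rvector);
                             var i : integer;
                             begin
                               for i := 1 to n do
                               begin
                                 v[i] := 2.0*t[i];
                                 if i>1 then v[i] := v[i]-t[i-1];
                                 if i<n then v[i] := v[i]-t[i+1];
                               end;
                             end;
                         }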
                        var
                          count, i, itn, itlimit : integer;
                          eps, g2, oldg2, s2, step, steplim, t2, tol : real;
                          g, t, v : rvector;
                        begin
                         {writeln('alg24.pas -- linear equations solution by conjugate gradients');}
                            {Because alg24.pas is called repeatedly in dr24ii.pas, we do not always
                             want to print the algorithm banner.}
                          itlimit := itcount; {to save the iteration limit -- STEP 0}
                          itcount := 0; {to initialize the iteration count}
                          eps := calceps; {machine precision}
                          steplim := 1.0/sqrt(eps); {a limit to the size of the step which may
                                         be taken in changing the solution Bvec}
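                          {calceps is the book's routine to compute the machine
                            precision; a common sketch (an assumption here, details may
                            differ from the book's version) halves a trial value until
                            adding it to 1.0 no longer changes the sum:
                              eps := 1.0;
                              while 1.0+eps/2.0 > 1.0 do eps := eps/2.0;
                            steplim then guards STEP 7 below against the enormous steps
                            that arise when t-transpose * H * t is nearly zero.}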
                          g2 := 0.0; {Compute norm of right hand side of equations.}
                          for i := 1 to n do g2 := g2+abs(C[i]); tol := g2*eps*eps*n;
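                          {Since tol is proportional to the norm of C, the test g2>tol
                            at STEP 3 makes convergence relative to the size of the
                            right-hand side rather than an absolute criterion.}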
                            {STEP 1: compute the initial residual vector}
                           {Note: we do not count this initial calculation of the residual vector.}
                          matmul(n, H, Bvec, g);
                          for i := 1 to n do g[i] := g[i]-C[i];{this loop need not be separate if
                             explicit matrix multiplication is used} {This loop computes the
                             residual vector. Note that it simplifies if the initial guess to the
                             solution Bvec is a null vector. However, this would restrict the
                             algorithm to one major cycle (steps 4 to 11)}
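                          {For the quadratic form Q(b) = 0.5*b'*H*b - C'*b, the vector
                            H*Bvec - C is both the residual of the equations and the
                            gradient of Q, which is why it is stored in g.}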
                          g2 := 0.0; {STEP 2}
                          for i := 1 to n do
                          begin
                             g2 := g2+g[i]*g[i]; t[i] := -g[i];
                          end; {loop on i}
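                          {Setting t := -g makes the first search direction the
                            steepest-descent direction; later passes through the main
                            loop update t so that successive search directions are
                            conjugate with respect to H.}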
                              {writeln('initial gradient norm squared = ',g2);}
                          ssmin := big; {to ensure no accidental halt -- STEP 3}
                           while (g2>tol) and (itcount<itlimit) and (ssmin>0.0) do
                            {Convergence test -- beginning of main work loop in method.}
                          begin {STEP 4 -- removed}
                              {STEP 5 -- here in the form of explicit statements}
                              itcount := itcount+1; {to count matrix multiplications}
                             matmul( n, H, t, v);
                             t2 := 0.0; {STEP 6 -- t2 will contain the product t-transpose * H * t}
                             for i := 1 to n do t2 := t2+t[i]*v[i];
                             step := g2/t2; oldg2 := g2; {STEP 7}
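                             {step = g'g/(t'Ht) places Bvec at the minimum of the
                               quadratic form along the direction t; a very large value
                               signals that t'Ht is nearly zero, a symptom of an
                               indefinite or singular H, hence the test below.}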
                             if abs(step)>steplim then
                             begin
                                 writeln('Step too large -- coefficient matrix indefinite?');