Page 194 - Compact Numerical Methods For Computers

Direct search methods

                      nevertheless may need to minimise functions. I have used the Hooke and Jeeves
                      method in a number of forecasting courses, and published it as a step-and-
                      description version with a BASIC code in Nash (1982). A more advanced version
                      appears in Nash and Walker-Smith (1987). I caution that it may prove very slow to
                      find a minimum, and that it is possible to devise quite simple functions (Nash and
                      Walker-Smith 1989) which will defeat its heuristic search. Algorithm 27 below
                      presents a Pascal implementation.
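Before the Pascal listing, the idea behind the method may help: repeatedly probe a step up and down along each parameter axis, keep any improvement, and shrink the step when no probe succeeds. The sketch below is a simplified axial-search variant in Python, not the book's Algorithm 27; it omits the pattern-move acceleration of the full Hooke and Jeeves method, and the test function, starting point, and step-reduction factor are illustrative assumptions.

```python
def explore(f, x, fval, stepsize):
    """Axial search: try +/- stepsize along each coordinate,
    keeping any change that lowers the function value."""
    x = x[:]  # work on a copy of the trial point
    for i in range(len(x)):
        for delta in (stepsize, -stepsize):
            x[i] += delta
            ftry = f(x)
            if ftry < fval:
                fval = ftry
                break  # keep the improved coordinate value
            x[i] -= delta  # restore, then try the other direction
    return x, fval

def hjmin(f, b, stepsize=0.1, stepredn=0.2, tol=1e-8):
    """Minimise f from starting point b by repeated axial search.
    (Simplified: the full method adds a 'pattern move' that
    extrapolates along the direction of recent progress.)"""
    x = b[:]
    fval = f(x)
    while stepsize > tol:
        xnew, fnew = explore(f, x, fval, stepsize)
        if fnew < fval:
            x, fval = xnew, fnew  # progress: accept the new base point
        else:
            stepsize *= stepredn  # no progress: reduce the step
    return x, fval

# Example: a simple quadratic whose minimum is at (1, -2).
xmin, fmin = hjmin(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2, [0.0, 0.0])
```

Because the search uses only function values, never derivatives, it tolerates noisy or non-smooth functions, but, as the text cautions, it can be very slow and simple functions can defeat the heuristic.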

                      Algorithm 27. Hooke and Jeeves minimiser
 procedure hjmin(n: integer; {the number of parameters in the
                  function to be minimised}
             var B,X: rvector; {the parameter values on
                  input (B) and output (X) from minmeth}
             var Fmin: real; {'minimum' function value}
                  Workdata: probdata; {user defined data area}
             var fail: boolean; {true if method has failed}
                  intol: real); {user-initialized convergence
                  tolerance}
  {alg27.pas == Hooke and Jeeves pattern search function minimisation
       From Interface Age, March 1982 page 34ff.
                Copyright 1988 J.C.Nash
  }
                       var
   i, j: integer; {loop counters}
   stepsize: real; {current step size}
   fold: real; {function value at 'old' base point}
   fval: real; {current function value}
   notcomp: boolean; {set true if function not computable}
   temp: real; {temporary storage value}
   samepoint: boolean; {true if two points are identical}
   ifn: integer; {to count the number of function evaluations}
                       begin
   if intol<0.0 then intol := calceps; {set convergence tolerance if necessary}
                          ifn := 1; {to initialize the count of function evaluations}
                          fail := false; {Algorithm has not yet failed.}
                          {STEP HJ1: n already entered, but we need an initial stepsize. Note the
                          use of the stepredn constant, though possibly 0.1 is faster for
                          convergence. Following mechanism used to set stepsize initial value.}
                          stepsize := 0.0;
   for i := 1 to n do
                             if stepsize < stepredn*abs(B[i]) then stepsize := stepredn*abs(B[i]);
                          if stepsize=0.0 then stepsize := stepredn; {for safety}
                          {STEP HJ2: Copy parameters into vector X}
                          for i := 1 to n do X[i] := B[i];
                          {STEP HJ3 not needed. In original code, parameters are entered into X
                          and copied to B}
                          fval := fminfn(n, B,Workdata,notcomp); {STEP HJ4}
                          if notcomp then
                          begin
      writeln('***FAILURE*** Function not computable at initial point');
                             fail := true;
   189   190   191   192   193   194   195   196   197   198   199