Page 226 - Compact Numerical Methods For Computers

Minimising a nonlinear sum of squares           215
                     Algorithm 23. Modified Marquardt method for minimising a nonlinear sum-of-squares
                     function (cont.)

                               end; {if not singmat}
                               if count<n then
                                  if (not singmat) and (not notcomp) and (p<Fmin) then
                                  begin {successful in reducing sum of squares}
                                     lambda:=lambda*dec; {to decrease the damping parameter}
                                     Fmin:=p; {to save best sum of squares so far}
                                     calcmat:=true; {to perform new iteration}
                                  end
                               else {somehow have not succeeded in reducing sum of
                                        squares: matrix is singular, new point is not
                                        a feasible set of parameters, or sum of squares
                                        at new point not lower}
                               begin
                                  lambda:=lambda*inc; {to increase the damping parameter}
                               if lambda<eps*eps then lambda:=eps; {safety check}
                                  calcmat:=false; {since we simply renew search from the same
                                     point in the parameter space with new search step}
                                end; {adjustment of damping factor}
                                {This also ends ‘if count<n’}
                             end; {while count<n}
                          end; {with Workdata}
                       end; {alg23.pas == modmrt}
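The damping-parameter logic in the fragment above — shrink lambda after a successful step, grow it and retry from the same point after a failure, with a floor to keep it from underflowing to zero — can be sketched in Python. This is an illustration of the control flow only, not a transcription of algorithm 23: the name `lm_fit`, the defaults `inc = 10`, `dec = 0.4`, and the use of Marquardt's diagonal scaling are illustrative choices.

```python
import numpy as np

def lm_fit(residual, jacobian, b, inc=10.0, dec=0.4, max_iter=200):
    """Marquardt loop sketch: accept a step only if it lowers the sum
    of squares; otherwise increase damping and retry from the same point."""
    eps = np.finfo(float).eps
    lam = 1e-4                               # initial damping parameter
    r = residual(b)
    fmin = float(r @ r)                      # best sum of squares so far
    delta = np.zeros_like(b)
    for _ in range(max_iter):
        J = jacobian(b)
        A = J.T @ J
        g = J.T @ r
        D = np.diag(np.diag(A))              # Marquardt's diagonal scaling
        while True:
            try:
                delta = np.linalg.solve(A + lam * D, -g)
            except np.linalg.LinAlgError:    # singular matrix: treat as failure
                delta = None
            if delta is not None:
                r_new = residual(b + delta)
                p = float(r_new @ r_new)
                if np.isfinite(p) and p < fmin:
                    lam *= dec               # success: decrease the damping
                    b, r, fmin = b + delta, r_new, p
                    break                    # perform a new iteration
            lam *= inc                       # failure: increase the damping
            if lam < eps * eps:              # safety check against underflow
                lam = eps
            if lam > 1e15:                   # give up if damping explodes
                return b, fmin
        if np.linalg.norm(delta) < 1e-12 * (1.0 + np.linalg.norm(b)):
            break
    return b, fmin
```

As in the Pascal, a failed step does not recompute the Jacobian: the search is simply renewed from the same point with a more heavily damped step.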



                                           17.6. RELATED METHODS
                      Fletcher (1971) has proposed a Marquardt method in which the bias e (lambda) is
                      computed on the basis of some heuristic rules rather than adjusted arbitrarily. His
                      program, incorporated in the Harwell and NAG libraries, has had widespread
                      usage and seems highly reliable. On straightforward problems I have found it very
                       efficient, and in terms of evaluations of J and S(b) generally better than a
                       FORTRAN version of algorithm 23 even on difficult problems. However, in a small
                      sample of runs, the cost of running the Fletcher program was higher than that of
                      running the FORTRAN algorithm 23 (Nash 1977).
                         Jones (1970) uses an explicit search along spirals between the Gauss-Newton
                       and steepest-descent directions. He reports very favourable performance for this
                      procedure and some of my colleagues who have employed it speak well of it.
                      Unfortunately, it does not seem possible to make it sufficiently compact for a
                      ‘small’ computer.
                        Since the first edition was published, algorithm 23 has proved to be highly reliable.
                      Nevertheless, I have at times been interested in using other approaches to minimising
                      a nonlinear sum of squares. In LEQB05 (Nash 1984b), the singular-value decompo-
                      sition is used to solve the Gauss-Newton linear least-squares problem of equation
                      (17.16) without the formation of the Jacobian inner-products matrix. In Nash and
                      Walker-Smith (1987), we added the facility to impose bounds constraints on the
                      parameters. We also considered changing the method of solving equation (17.16)
                      from the current Choleski method to a scaled Choleski method, but found that this
                      change slowed the iterations.
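The idea behind the singular-value-decomposition approach mentioned above can be sketched as follows, assuming equation (17.16) is the Gauss-Newton subproblem of minimising ||J d + r||² in the step d. This is not the code of LEQB05; the function name `gauss_newton_step_svd` and the rank tolerance are illustrative. The point is that the step is obtained from the SVD of J directly, never forming the inner-products matrix J-transpose J, whose condition number is the square of that of J.

```python
import numpy as np

def gauss_newton_step_svd(J, r, rtol=1e-12):
    """Solve min ||J d + r||^2 via the thin SVD J = U diag(s) V^T,
    avoiding formation of the inner-products matrix J^T J."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    c = U.T @ (-r)
    coeff = np.zeros_like(s)
    keep = s > rtol * s[0]        # drop negligible singular values (rank control)
    coeff[keep] = c[keep] / s[keep]
    return Vt.T @ coeff
```

For well-conditioned J the result agrees with a normal-equations (Choleski) solve, but the SVD route also handles rank-deficient Jacobians gracefully by zeroing the contributions of negligible singular values.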