
                            large enough can, moreover, be made computationally positive definite so that the
                            simplest forms of the Choleski decomposition and back-solution can be employed.
                            That is to say, the Choleski decomposition is not completed for non-positive
                            definite matrices. Marquardt (1963) suggests starting the iteration with e=0·1,
                            reducing it by a factor of 10 before each step if the preceding solution q has
                            given
                                                         S(x + q) < S(x)

                            and x has been replaced by (x+q). If
                                                         S(x + q) > S(x)
                            then e is increased by a factor of 10 and the solution of equations (17.24)
                            repeated. (In the algorithm below, e is called lambda.)
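As an illustration (not an excerpt from the book's code), Marquardt's strategy can be sketched in Pascal for a toy one-parameter residual f(b) = b - 3, so that equations (17.24) reduce to a scalar division; all identifiers here are illustrative.

   program mrtdemo;
   {A minimal sketch of Marquardt's (1963) damping strategy on a toy
    one-parameter problem; not part of Algorithm 23.}
   var
     b, q, e, Sold : real;

   function S(b : real) : real; {toy sum of squares}
   begin
     S := sqr(b - 3.0)
   end;

   begin
     b := 0.0; e := 0.1; {Marquardt's suggested starting value for e}
     repeat
       Sold := S(b);
       repeat
         q := -(b - 3.0)/(1.0 + e); {solve (JᵀJ + e)q = -Jᵀf; here J = 1}
         if S(b + q) > Sold then
           e := 10.0*e {failed step: increase e and solve again}
       until S(b + q) < Sold;
       b := b + q; {successful step: replace x by (x + q)}
       e := e/10.0 {and reduce e before the next step}
     until S(b) < 1.0e-12;
     writeln('b = ', b:12:8)
   end.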

                                             17.5. CRITIQUE AND EVALUATION
                            By and large, this procedure is reliable and efficient. Because the bias e is reduced
                            after each successful step, however, there is a tendency to see the following
                            scenario enacted by a computer at each iteration, that is, at each evaluation of J:
                            (i) reduce e, find S(x + q) > S(x);
                            (ii) increase e, find S(x + q) < S(x), so replace x by (x + q) and proceed to (i).

                              This procedure would be more efficient if e were not altered. In other examples
                            one hopes to take advantage of the rapid convergence of the Gauss-Newton part
                            of the Marquardt equations by reducing e, so a compromise is called for. I retain
                            10 as the factor for increasing e, but use 0·4 to effect the reduction. A further
                            safeguard which should be included is a check to ensure that e does not approach
                            zero computationally. Before this modification was introduced into my program,
                            it proceeded to solve a difficult problem by a long series of approximate
                            Gauss-Newton iterations and then encountered a region on the sum-of-squares
                            surface where steepest descents steps were needed. During the early iterations e
                            underflowed to zero, and since JᵀJ was singular, the program made many futile
                            attempts to increase e before it was stopped manually.
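The modified update can be sketched as follows (an illustration, not an excerpt from Algorithm 23); the bias is called lambda as in the algorithm below, and the floor eps*eps, with eps an assumed machine precision, is one possible way of keeping lambda computationally positive.

   program lamdemo;
   {Minimal sketch of the modified adjustment of lambda; the names
    adjust, accepted and eps are illustrative assumptions.}
   const
     eps = 1.0e-8; {assumed machine precision, for illustration only}
   var
     lambda : real;

   procedure adjust(var lambda : real; accepted : boolean);
   begin
     if accepted then
     begin
       lambda := 0.4*lambda; {gentler reduction than Marquardt's factor of 10}
       if lambda < eps*eps then
         lambda := eps*eps {safeguard: lambda must not underflow to zero}
     end
     else
       lambda := 10.0*lambda {the factor of 10 is retained for increases}
   end;

   begin
     lambda := 0.1;
     adjust(lambda, true);
     writeln('lambda after one successful step: ', lambda:12:8)
   end.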
                              The practitioner interested in implementing the Marquardt algorithm will find
                            these modifications included in the description which follows.


                             Algorithm 23. Modified Marquardt method for minimising a nonlinear sum-of-squares function

                              procedure modmrt( n : integer; {number of residuals and number of parameters}
                                            var Bvec : rvector; {parameter vector}
                                            var X : rvector; {derivatives and best parameters}
                                            var Fmin : real; {minimum value of function}
                                                Workdata : probdata);
                              {alg23.pas == modified Nash Marquardt nonlinear least squares minimisation
                                   method.
                                   Copyright 1988 J.C.Nash
                              }
                              var