(as evaluated). The steepest descents program above required 232 computations
of the derivative and 2248 evaluations of S(x) to find
S(1·00144, 1·0029) = 2·1×10⁻⁶.
The program was restarted with this point and stopped manually after 468
derivative and 4027 sum-of-squares computations, where
S(1·00084, 1·00168) = 7·1×10⁻⁷.
By comparison, the Marquardt method to be described below requires 24
derivative and 32 sum-of-squares evaluations to reach
S(1, 1) = 1·4×10⁻¹⁴.
(There are some rounding errors in the display of x₁, x₂ or in the computation of S(x), since S(1, 1) = 0 is the solution to the Rosenbrock problem.)
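As a point of reference, the following short Python sketch (not from the book) evaluates the sum of squares for the standard Rosenbrock residuals f_1 = 10(x_2 − x_1²), f_2 = 1 − x_1 at the points quoted above; all names in it are illustrative.

```python
# Illustrative sketch (not the book's code): the Rosenbrock problem as a
# sum of squares S(x) = f1^2 + f2^2 with the standard residuals.

def residuals(x1, x2):
    """Residuals f1 = 10*(x2 - x1^2), f2 = 1 - x1."""
    return (10.0 * (x2 - x1 * x1), 1.0 - x1)

def S(x1, x2):
    """Sum of squared residuals."""
    f1, f2 = residuals(x1, x2)
    return f1 * f1 + f2 * f2

print(S(1.00144, 1.0029))    # about 2.1e-6, as quoted above
print(S(1.00084, 1.00168))   # about 7.1e-7
print(S(1.0, 1.0))           # exactly 0 at the true minimum
```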
The Gauss-Newton method
At the minimum the gradient v(x) must be null. The functions v_j(x), j = 1, 2, . . . , n, provide a set of n nonlinear functions in n unknowns x such that

$v(x) = 0$    (17.8)

the solution of which is a stationary point of the function S(x), that is, a local maximum or minimum or a saddle point. The particular form (17.1) or (17.2) of
S(x) gives gradient components
$v_j(x) = \partial S/\partial x_j = 2 \sum_i f_i(x)\,\partial f_i(x)/\partial x_j$    (17.9)
which reduces to
$v_j(x) = 2 \sum_i J_{ij}\, f_i(x)$    (17.10)
or
$v = 2\,J^{\mathrm{T}} f$    (17.11)
by defining the Jacobian matrix J by
$J_{ij} = \partial f_i(x)/\partial x_j$    (17.12)
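As an illustrative check of (17.9)-(17.12), the sketch below (Python, again not the book's code) forms the Jacobian of the Rosenbrock residuals analytically, computes v = 2 Jᵀf, and compares it with a forward-difference approximation to the gradient of S; the function names and step size are choices of this sketch.

```python
# Illustrative only: gradient of S(x) = sum_i f_i(x)^2 computed as
# v = 2 * J^T * f (equations 17.9-17.11) for the Rosenbrock residuals.

def residuals(x1, x2):
    return [10.0 * (x2 - x1 * x1), 1.0 - x1]

def S(x1, x2):
    return sum(fi * fi for fi in residuals(x1, x2))

def jacobian(x1, x2):
    # J[i][j] = partial f_i / partial x_j  (equation 17.12)
    return [[-20.0 * x1, 10.0],   # derivatives of f1 = 10*(x2 - x1^2)
            [-1.0, 0.0]]          # derivatives of f2 = 1 - x1

def gradient(x1, x2):
    f = residuals(x1, x2)
    J = jacobian(x1, x2)
    # v_j = 2 * sum_i J[i][j] * f[i]
    return [2.0 * sum(J[i][j] * f[i] for i in range(2)) for j in range(2)]

def gradient_fd(x1, x2, h=1e-7):
    # forward-difference check of the same gradient
    s0 = S(x1, x2)
    return [(S(x1 + h, x2) - s0) / h, (S(x1, x2 + h) - s0) / h]

print(gradient(1.00144, 1.0029))     # analytic v = 2 J^T f
print(gradient_fd(1.00144, 1.0029))  # approximately agrees with the analytic value
```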
Some approximation must now be made to simplify the equations (17.8).
Consider the Taylor expansion of v_j(x) about x

$v_j(x + q) = v_j(x) + \sum_{k=1}^{n} [\partial v_j(x)/\partial x_k]\, q_k + (\text{terms in } q^2)$    (17.13)
If the terms in q² (that is, those involving q_k q_j for k, j = 1, 2, . . . , n) are assumed to be negligible and v_j(x + q) is taken as zero because it is assumed to be the solution, then
$\sum_{k=1}^{n} [\partial v_j(x)/\partial x_k]\, q_k = -v_j(x)$    for j = 1, 2, . . . , n    (17.14)
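As a purely numerical illustration of (17.13) and (17.14), the sketch below builds the matrix of derivatives ∂v_j/∂x_k by forward differences (only for demonstration) and solves the resulting linear equations for the correction q at a trial point of the Rosenbrock problem; the trial point, step sizes, and names are choices of this sketch, not the book's.

```python
# Purely illustrative: one Newton-type correction from equation (17.14),
#   sum_k (dv_j/dx_k) q_k = -v_j(x),
# for the Rosenbrock sum of squares.  The derivative matrix is formed by
# forward differences just for demonstration.

def residuals(x1, x2):
    return [10.0 * (x2 - x1 * x1), 1.0 - x1]

def S(x1, x2):
    return sum(fi * fi for fi in residuals(x1, x2))

def v(x1, x2, h=1e-6):
    # gradient of S by forward differences
    s0 = S(x1, x2)
    return [(S(x1 + h, x2) - s0) / h, (S(x1, x2 + h) - s0) / h]

def newton_step(x1, x2, h=1e-4):
    # A[j][k] approximates dv_j / dx_k
    v0 = v(x1, x2)
    va = v(x1 + h, x2)
    vb = v(x1, x2 + h)
    A = [[(va[0] - v0[0]) / h, (vb[0] - v0[0]) / h],
         [(va[1] - v0[1]) / h, (vb[1] - v0[1]) / h]]
    # solve the 2 x 2 system A q = -v0 by Cramer's rule
    b = [-v0[0], -v0[1]]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    q1 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    q2 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return q1, q2

x1, x2 = 0.9, 0.8
q1, q2 = newton_step(x1, x2)
print(S(x1, x2), S(x1 + q1, x2 + q2))  # the correction reduces S at this trial point
```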