Estimating the Jacobian and quasi-Newton methods
With Newton’s method, to update our estimate of the solution we need to evaluate at $x^{[k]}$ the
function vector $f(x^{[k]})$ and the Jacobian matrix $J^{[k]}$. In many instances, we do not wish to
take the time to derive an analytical form for the elements of the Jacobian matrix and code a
subroutine to evaluate them. It is often easier merely to provide a subroutine that evaluates
the function vector, so we consider now how the human effort of computing the Jacobian can
be eliminated. One approach gaining use is to employ an automatic differentiation program,
a routine that reads the code that we write to evaluate the function vector and automatically
generates additional code that evaluates the Jacobian matrix. Here, we describe two more
traditional approaches: finite difference approximations and the use of approximate Jacobian
matrices generated by the past history of function evaluations.
In these methods, we use not the exact value of the Jacobian matrix, but rather some
approximation
$$B^{[k]} \approx J\left(x^{[k]}\right) \qquad (2.63)$$
We then compute the update by solving the linear system
$$B^{[k]} \Delta x^{[k]} = -f\left(x^{[k]}\right) \qquad (2.64)$$
The use of such an approximation does not change the value of the solution that we obtain,
as at a solution $x_s$, $f(x_s) = 0$ and $\Delta x = -(B^{[k]})^{-1} f(x_s) = 0$ no matter the value of $B^{[k]}$
(provided that it is nonsingular). This allows some freedom in the choice of B [k] to balance
accuracy in the approximation of the true Jacobian against computational efficiency.
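As a concrete illustration, the following is a minimal Python sketch of one such update step; the routine `f` and the approximate Jacobian `B` are assumed to be supplied by the user and are not part of the text:

```python
import numpy as np

def quasi_newton_step(f, B, x):
    """One update x <- x + dx, where B is any nonsingular approximation
    to the Jacobian at x, as in equation (2.64)."""
    dx = np.linalg.solve(B, -f(x))  # solve B dx = -f(x)
    return x + dx
```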
The most straightforward approach to approximating the Jacobian is to use a finite difference
approximation
$$\left.\frac{\partial f_m}{\partial x_n}\right|_{x^{[k]}} = J_{mn}^{[k]} \approx B_{mn}^{[k]} = \frac{f_m\left(x^{[k]} + \varepsilon e^{[n]}\right) - f_m\left(x^{[k]}\right)}{\varepsilon} \qquad (2.65)$$
$e^{[n]}$ is the unit vector comprising all zeros except for a value of 1 for the $n$th component, $e_j^{[n]} = \delta_{jn}$. Here $\varepsilon$ is some small number, typically chosen for reasons of numerical accuracy to be the square root of the machine precision, $\varepsilon \approx \sqrt{\text{eps}}$. This finite difference
approximation can yield quite accurate approximations of the Jacobian, but at the cost of
N additional function evaluations per Newton iteration. When the Jacobian has a known
sparsity structure, the number of function evaluations may be reduced, but, in general, the
overhead associated with these additional function evaluations can be quite significant for
large systems.
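A minimal sketch of the finite difference approximation (2.65) in Python, assuming the user-supplied routine `f` returns the function vector as a NumPy array:

```python
import numpy as np

def fd_jacobian(f, x, eps=None):
    """Approximate the Jacobian of f at x by forward differences,
    perturbing one component at a time as in equation (2.65)."""
    if eps is None:
        eps = np.sqrt(np.finfo(float).eps)  # square root of machine precision
    fx = f(x)
    N = x.size
    B = np.empty((fx.size, N))
    for n in range(N):
        x_pert = x.copy()
        x_pert[n] += eps                    # x + eps * e^[n]
        B[:, n] = (f(x_pert) - fx) / eps    # nth column of the approximation
    return B
```

Note that the loop makes the N additional function evaluations per iteration mentioned above, which is exactly the cost that quasi-Newton methods seek to avoid.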
To avoid the numerous and costly function evaluations required by the finite differ-
ence method, various quasi-Newton methods have been developed in which the approx-
imation $B^{[k]}$ of the Jacobian is constructed from the recent values of the function vector
$f(x^{[k]}), f(x^{[k-1]}), f(x^{[k-2]}), \ldots$ We describe here the popular Broyden’s method (Broyden,
1965) for updating the estimate of the Jacobian, which can be considered an extension of
the secant method to systems of multiple equations.
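As a hedged preview of the update developed in the discussion that follows, Broyden’s rank-one correction can be sketched in Python as below; the step `dx = x_new - x_old` and the function change `df = f(x_new) - f(x_old)` are assumed to be available from the most recent iteration:

```python
import numpy as np

def broyden_update(B, dx, df):
    """Rank-one Broyden update of the approximate Jacobian, built only
    from the most recent step dx and change in the function vector df."""
    return B + np.outer(df - B @ dx, dx) / (dx @ dx)
```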