Page 172 - Compact Numerical Methods For Computers
One-dimensional problems 161
then the root lies in [u, b]; otherwise in [b, v]. This bisection can be repeated as
many times as desired. After t bisections the interval length will be
2^(-t) (v – u) (13.22)
so that the root can always be located by this method in a fixed number of steps.
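The bisection process described above can be sketched as follows; this is an illustrative implementation, not the book's own code, and the names are chosen for exposition.

```python
# A minimal bisection sketch: repeatedly halve the bracket [u, v],
# keeping whichever half shows a sign change in f.
def bisect(f, u, v, t):
    """Perform t bisections of a bracket [u, v] with f(u)*f(v) < 0."""
    fu = f(u)
    for _ in range(t):
        b = (u + v) / 2.0          # midpoint, formula (13.20)
        fb = f(b)
        if fu * fb > 0:            # same sign at u and b: root in [b, v]
            u, fu = b, fb
        else:                      # sign change: root in [u, b]
            v = b
    return u, v

# After t steps the bracket length is 2**(-t) times the original
# length, in agreement with (13.22).
```

For example, bracketing the root of f(x) = x² – 2 on [1, 2] with t = 20 yields an interval of length 2⁻²⁰ that still contains √2.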
Since the function values are only examined for their signs, however, an
unnecessarily large number of function evaluations may be required. By reversing
the process of interpolation, that is, by estimating function values between
tabulated values by means of an interpolating function fitted to the tabular
points, one may expect to find the root with fewer evaluations of the function.
Many interpolants exist, but the simplest, a straight line, is perhaps the easiest
and most reliable to use. This inverse linear interpolation seeks the zero of the line
y – f(u) = [f(v) – f(u)] (b – u)/(v – u) (13.23)
which by substitution of y = 0 gives
b = [uf(v) – vf(u)]/[f(v) – f(u)]. (13.24)
Once f(b) has been evaluated, the interval can be reduced by testing for condition
(13.21) as in the bisection process.
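A single step of this scheme, formula (13.24) followed by the interval reduction test, might be sketched as below; this is an illustrative fragment, not the book's code.

```python
# One inverse linear interpolation (False Position) step followed by
# the interval reduction test, as in the bisection process.
def false_position_step(f, u, v):
    """Return the reduced bracket after one False Position step,
    assuming f(u) and f(v) have opposite signs."""
    fu, fv = f(u), f(v)
    b = (u * fv - v * fu) / (fv - fu)   # zero of the chord, (13.24)
    fb = f(b)
    if fu * fb <= 0:                    # sign change between u and b
        return u, b
    return b, v                         # otherwise root lies in [b, v]
```

For f(x) = x² – 2 on [1, 2] this single step gives b = 4/3 and the reduced bracket [4/3, 2], which still contains √2.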
All the above discussion is very much a matter of common sense. However, in
computer arithmetic the formulae may not give the expected results. For instance,
consider the function depicted in figure 13.1(a). Suppose that the magnitude of f(v)
is very much greater than that of f(u). Then formula (13.24) will return a value b
very close to u. In fact, to the precision used, it may happen that b and u are
identical and an iteration based on (13.24) may never converge. For this reason it
is suggested that this procedure, known as the method of False Position, be
combined with bisection to ensure convergence. In other words, after every few
iterations using the False Position formula (13.24), a step is made using the
bisection formula (13.20). In all of this discussion the practical problems of
evaluating the function correctly have been discreetly ignored. The algorithms
will, of course, only work correctly if the function is accurately computed. Acton
(1970, chap 1) discusses some of the difficulties involved in computing functions.
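One possible arrangement of the suggested safeguard, alternating False Position with an occasional bisection and guarding against an iterate escaping the bracket, is sketched below. The schedule (a bisection every fourth step) and the tolerance handling are assumptions for illustration, not the book's prescription.

```python
# A sketch of False Position safeguarded by periodic bisection, so the
# bracket is guaranteed to shrink even when, say, |f(v)| >> |f(u)|.
def hybrid_root(f, u, v, tol=1e-10, bisect_every=4, max_iter=200):
    """Find a root in [u, v], assuming f(u) and f(v) have opposite signs."""
    fu, fv = f(u), f(v)
    for k in range(1, max_iter + 1):
        if k % bisect_every == 0:
            b = (u + v) / 2.0                    # bisection, (13.20)
        else:
            b = (u * fv - v * fu) / (fv - fu)    # False Position, (13.24)
        if not (u < b < v):                      # test for an iterate that
            b = (u + v) / 2.0                    # has left the interval
        fb = f(b)
        if fb == 0 or (v - u) < tol:
            return b
        if fu * fb < 0:
            v, fv = b, fb
        else:
            u, fu = b, fb
    return (u + v) / 2.0
```

Without the periodic bisection, False Position can stall with one endpoint fixed when the function is strongly curved; the bisection steps force the bracket length down regardless.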
A final possibility which can be troublesome in practice is that either of the
formulae (13.20) or (13.24) may result in a value for b outside the interval [u, v]
when applied in finite-precision arithmetic, a clearly impossible situation when the
calculations are exact. This has already been discussed in §1.2 (p 6). Appropriate
tests must be included to detect this, which can only occur when u and v are
close to each other, that is, when the iteration is nearing convergence. The author
has had several unfortunate experiences with algorithms which, lacking the
appropriate tests, continued to iterate for hundreds of steps.
Some readers may wonder where the famous Newton’s algorithm has disappeared
in this discussion. In a manner similar to False Position, Newton’s method
seeks a root at the zero of the line
y – f(u) = f'(u)(b – u) (13.25)
where the point (u, f(u)) has been used with f'(u), the derivative at that point, to