



Thus, the new estimate is
$$ x^{[k+1]} = x^{[k]} - \frac{f(x^{[k]})\,(x^{[k]} - x^{[k-1]})}{f(x^{[k]}) - f(x^{[k-1]})} \qquad (2.38) $$
In this method, only one new function evaluation per iteration is necessary. In comparison to Newton's method, there is no need to evaluate the first derivative analytically; however, convergence is slower than with Newton's method,
$$ |\varepsilon_{k+1}| \approx C\,|\varepsilon_k|^{1.618} \qquad (2.39) $$

When an analytical expression for the first derivative is available, Newton's method is preferred due to its faster, quadratic rate of convergence; otherwise, the secant method is suggested. In practice, the loss of quadratic convergence is not as costly as one might expect, since quadratic convergence is generally observed only very near the solution anyway.
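
As a minimal sketch (not from the text), the secant update (2.38) might be implemented in MATLAB as follows; the example function, starting guesses, and tolerance are arbitrary choices made here purely for illustration.

% Secant iteration for f(x) = 0: a minimal sketch
% (the function, starting guesses, and tolerance are assumed for illustration)
f = @(x) x.^3 - 2;
x_old = 1;  f_old = f(x_old);      % x^[k-1] and f(x^[k-1])
x_new = 2;  f_new = f(x_new);      % x^[k]   and f(x^[k])
tol = 1e-10;
for k = 1:100
    x_next = x_new - f_new*(x_new - x_old)/(f_new - f_old);   % equation (2.38)
    x_old = x_new;  f_old = f_new;
    x_new = x_next; f_new = f(x_new);    % the single new function evaluation
    if abs(f_new) < tol, break, end
end
x_new     % approximate solution, here 2^(1/3)

Note that storing the previous function value is what keeps the cost to one new function evaluation per iteration.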




                   Bracketing and bisection methods

It is far easier to find a suitable initial guess for a single equation than it is for multiple equations. For a single equation, we can locate the region of a solution by scanning only a single variable x; however, even with only two equations, for each trial value of $x_1$ there are infinitely many possible guesses of $x_2$. Only for a single equation does it become practical to try various initial guesses in some planned, rigorous manner to search for a solution.
Let us say that we have two values of x, $x^{[k]}$ and $x^{[k+1]}$, and their function values, $f(x^{[k]})$ and $f(x^{[k+1]})$. If the signs of the two function values differ and $f(x)$ is continuous, we then know that $f(x)$ must cross $f = 0$ at least once between $x^{[k]}$ and $x^{[k+1]}$. Therefore, $f(x)$ must have at least one solution in $[x^{[k]}, x^{[k+1]}]$. Once we have found such a segment that brackets a solution, we can narrow the bracketing range by setting the bisecting point $x^{[k+2]} = (x^{[k]} + x^{[k+1]})/2$ and computing $f(x^{[k+2]})$. We then select two consecutive members of $\{x^{[k]}, x^{[k+1]}, x^{[k+2]}\}$ whose signs of $f(x)$ differ, and repeat the bisection. This approach is robust, but rather slow.
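
A minimal MATLAB sketch of this bisection procedure follows; the function, the bracketing interval, and the stopping tolerance are assumed here purely for illustration.

% Bisection on a bracket [a, b] over which f changes sign
% (the function, bracket, and tolerance are assumed for illustration)
f = @(x) x.^3 - 2;
a = 0;  b = 2;                    % f(0) < 0 and f(2) > 0, so a root is bracketed
fa = f(a);
while (b - a) > 1e-10
    c = (a + b)/2;                % bisecting point
    fc = f(c);
    if sign(fc) == sign(fa)       % sign change lies in [c, b]
        a = c;  fa = fc;
    else                          % sign change lies in [a, c]
        b = c;
    end
end
root = (a + b)/2                  % approximate solution, here 2^(1/3)

Each pass halves the bracket, so the error shrinks by only a factor of two per function evaluation, which is why the method is robust but slow.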
                     Once the bracket becomes sufficiently small that we feel that Newton’s method or the
                   secant method should be able to find the solution, we switch to one of those more efficient
                   procedures. If this fails, we continue with bisection until the initial guess is sufficiently close
                   for the iterative method to succeed. In MATLAB, the routine fzero takes such an approach.
For further discussion of iterative methods to solve a single equation f(x) = 0, consult Press et al. (1992) and Quarteroni et al. (2000).
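
As a usage example (with an arbitrary function and bracket assumed for illustration), fzero may be called either with a bracketing interval over which the function changes sign or with a single initial guess.

% Calling MATLAB's fzero (function and interval assumed for illustration)
f = @(x) x.^3 - 2;
x_sol = fzero(f, [0, 2])     % supply a bracket with a sign change
x_sol = fzero(f, 1.5)        % or supply a single initial guess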



                   Finding complex solutions

The previous discussion focused upon methods for finding solutions to f(x) = 0, where both x and f(x) are real. While this is generally the case in engineering and scientific