

                     one, repeating this action until no further solution improvement is
                     possible. This strategy means that the methods find only local
                     optima, so there is no guarantee of global optimality. Although
                     modern optimization software packages are capable of solving
                     constrained problems, they are mostly based on the search techniques
                     developed for unconstrained optimization.

                      3.7.1   Search Algorithms for Nonlinear Unconstrained Problems
                     Many methods have been developed for performing local search for
                     the minimum of an objective function (Luenberger and Ye, 2008).
                      Deterministic search algorithms mainly assume that the objective
                      function is to be minimized and use the concept of descent: a series
                      of iterations is performed, each aiming to reduce the objective
                      function value. A thorough review of deterministic search algorithms
                      for optimization and their performance as applied to chemical process
                      models has been provided elsewhere, for example by Klemeš and
                      Vašek (1973). Such methods include pattern search, the Rosenbrock
                      method, and conjugate gradients.
                        All descent-based algorithms rely on a common general strategy.
                     First, an initial solution point is chosen. Then, a rule-based direction
                     of movement and a step size are determined and executed. At the
                      new point, a new pair of direction and step size is determined, and
                     the process is repeated until the desired convergence is achieved.
                     The main difference between the various search algorithms is in the
                     rule applied for determining the iteration steps.
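
                         A minimal sketch of this general strategy is given below, using the
                      negative gradient as one possible direction rule together with a fixed
                      step size; the quadratic test function, starting point, and tolerance
                      are illustrative assumptions rather than part of the original text.

```python
import numpy as np

def descent(f, grad, x0, step=0.1, tol=1e-6, max_iter=1000):
    """Generic descent loop: choose a direction and a step size,
    move to the new point, and repeat until convergence."""
    x = np.asarray(x0, dtype=float)
    f_old = f(x)
    for _ in range(max_iter):
        direction = -grad(x)           # direction rule: negative gradient
        x_new = x + step * direction   # step-size rule: fixed step
        f_new = f(x_new)
        if abs(f_old - f_new) < tol:   # desired convergence achieved
            break
        x, f_old = x_new, f_new
    return x

# Illustrative quadratic with its minimum at (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])
print(descent(f, grad, x0=[0.0, 0.0]))
```

                      Different search algorithms are obtained by changing the two rules
                      marked in the comments, while the surrounding loop stays the same.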
                        When the search domain is a given line, the process of determining
                     the function minimum is known as a line search. Higher-dimensional
                     problems are solved by executing a sequence of successive line
                     searches. The two simplest line search algorithms are the Fibonacci
                     method (e.g., Avriel and Wilde, 1966) and the golden section method
                     (e.g., Kiefer, 1953). Both algorithms search over a fixed interval and
                     assume that the function is unimodal within it. The golden section
                      algorithm has a linear convergence rate of d_{k+1}/d_k = 0.618. Another
                      line search algorithm is the Newton method, which is based on the
                     additional assumption of function smoothness; this technique
                     achieves even faster convergence than the golden section one. There
                     are also other line search algorithms that are grouped together as
                     curve-fitting methods.
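
                         As an illustration, a minimal sketch of the golden section line
                      search follows, assuming the function is unimodal on the given
                      interval; the test function and tolerance are hypothetical choices
                      made only for this example.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden section line search: the bracket [a, b] shrinks by the
    factor 0.618 per iteration, assuming f is unimodal on [a, b]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, approximately 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                  # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                        # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# Illustrative one-dimensional problem with its minimum at x = 2
print(golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0))
```

                      Only one new function evaluation is needed per iteration, because one
                      of the two interior points is reused when the interval is reduced.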
                        Another important search algorithm—one that is applicable to
                     multidimensional function domains—is the method of steepest
                      descent or gradient method. Its update rule derives the new step
                      from the gradient at the current point, which identifies the direction
                      of fastest decrease of the objective function. This is
                     one of the oldest and best-known function minimization methods.
                     Hence it is often used as a benchmark against which the performance