
Evolutionary Optimization Methods
Since some problems are difficult to solve by standard numerical optimization methods, which may fail to converge to an acceptable optimum in a reasonable time, new approaches had to be developed. A number of new methods have therefore been designed based on the laws of natural genetics, imitating them to various degrees. These methods employ random generation of input parameters and can thus be regarded as stochastic methods. In general, stochastic optimization algorithms (including virtually all evolutionary algorithms) optimize multi-parameter functions with “wild” behavior, that is, with many minima or with an unknown gradient. Stochastic optimization methods are necessarily slower than heuristic approaches, which take advantage of knowing the type and details of the function to be optimized. Unless the conditions for the global optimum are known in advance, we can never be sure that we have reached the global optimum and can safely terminate the optimization process. However, stochastic optimization methods also bring numerous benefits. They are formulated in general terms and are thus applicable to virtually any problem, and they can escape the trap of a local minimum.
The evolutionary process of searching the space of potential solutions requires a balance between two objectives:
   • to find the nearest (mostly local) minimum as quickly as possible, and
   • to search the space of all potential solutions in an optimal manner.
The methods differ in how strongly they weight these two objectives, and they can be roughly ordered in a sequence from methods tending toward local minima to methods searching a large number of potential solutions:
   1. Stochastic “hill climbing” algorithms,
   2. Tabu search algorithms,
   3. Simulated annealing algorithms, and
   4. Genetic algorithms.
                                 Hill Climbing Algorithm
This is the simplest optimization algorithm, a variant of the gradient method “without a gradient,” in which the direction of steepest descent is determined by searching the neighborhood. The algorithm also has all the drawbacks of gradient methods, in that it is very likely to end up in a local extremum without reaching the global minimum. The starting solution is generated at random. For the current solution, a certain neighborhood is generated using a finite set of transformations, and the best minimum is chosen from this neighborhood. The local solution obtained in this way is then used as the center of a new neighborhood, in which the optimization is repeated. This process is iterated a specified number of times, and the successive best solutions are recorded; the best of them is finally returned as the resulting minimum. The basic drawback of this algorithm is that, after a number of iterations, it may revert to a local minimum that has already been visited in a previous step (the problem of looping). This problem can be mitigated by running the algorithm several times with different randomly generated initial values and choosing the best result achieved, as in the sketch below.
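
The following Python sketch illustrates this scheme. It is a minimal illustration, not the handbook's implementation; the objective function, the search bounds, the neighborhood size, and the step size are all assumed for the example.

import math
import random

def hill_climb(f, dim, steps=200, neighbors=20, step=0.1):
    # Random starting solution within assumed bounds [-5, 5].
    x = [random.uniform(-5.0, 5.0) for _ in range(dim)]
    best_x, best_f = x[:], f(x)
    for _ in range(steps):
        # Generate a finite neighborhood of the current solution.
        candidates = [[xi + random.uniform(-step, step) for xi in x]
                      for _ in range(neighbors)]
        # The best point of the neighborhood becomes the new center.
        x = min(candidates, key=f)
        fx = f(x)
        if fx < best_f:                     # record the best solution seen so far
            best_x, best_f = x[:], fx
    return best_x, best_f

def restarted_hill_climb(f, dim, restarts=10):
    # Several runs from different random initial values; keep the best result.
    return min((hill_climb(f, dim) for _ in range(restarts)), key=lambda r: r[1])

# Example: the multimodal Rastrigin function, which has many local minima.
rastrigin = lambda x: sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
print(restarted_hill_climb(rastrigin, dim=2))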
                                 Tabu Search Algorithm
At the end of the 1980s, Professor Fred Glover designed a new approach to the problem of finding the global minimum, which he called tabu search. At present, this method is among those used most frequently to solve combinatorial problems and global-minimum problems. It is based on the hill-climbing algorithm and tries to eliminate the problem of looping. The hill-climbing algorithm is equipped with a so-called short-term memory, which, over a short recent interval of the algorithm's history, remembers the inverses of the transformations that produced the new centers of the iterations. These inverse transformations are prohibited (tabu) when the new neighborhood of the current solution is created. In this way, the looping caused by falling into the trap of a local minimum can be substantially reduced. A hill-climbing algorithm modified in this way systematically searches the entire area in which the global minimum of the function is to be found.
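
A minimal Python sketch of this idea, assuming binary solution vectors in which the transformations are single bit flips. A bit flip is its own inverse, so the short-term memory can simply store recently flipped positions and forbid them; the objective f, the tabu tenure, and the iteration count are illustrative assumptions.

import random
from collections import deque

def tabu_search(f, n_bits, iters=500, tenure=10):
    x = [random.randint(0, 1) for _ in range(n_bits)]   # random starting solution
    best_x, best_f = x[:], f(x)
    tabu = deque(maxlen=tenure)                         # short-term memory of recent moves

    def flipped(sol, i):
        y = sol[:]
        y[i] = 1 - y[i]
        return y

    for _ in range(iters):
        # Neighborhood: flip each position whose inverse move is not prohibited.
        moves = [i for i in range(n_bits) if i not in tabu]
        if not moves:
            continue
        i_best = min(moves, key=lambda i: f(flipped(x, i)))  # best admissible neighbor
        x = flipped(x, i_best)
        tabu.append(i_best)       # undoing this flip is tabu for the next `tenure` steps
        fx = f(x)
        if fx < best_f:           # record the best solution seen so far
            best_x, best_f = x[:], fx
    return best_x, best_f

Even a short tenure is often enough to push the search out of the basin of a local minimum, because the moves leading back into it remain forbidden while the search moves away.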
