functions. Each of these functions generally attains its local minima at different points of the optimization
parameter space. For this reason a multi-criteria function can have a large number of shallow local
minima or can be insensitive to changes in the optimization parameters, so the selection of an
optimization method is of great importance. The result is averaged in the sense that several criteria may
simultaneously contribute to a reduction of the multi-criteria function while some other criteria increase.
A more suitable approach may be to select a single-criterion objective function and include all other
criteria in the constraints. Only the most significant criterion is chosen as the objective function to be
minimized in the subsequent process; all other criteria, included in the constraints, are kept within
specified limits without being optimized. The results of the optimization therefore depend on how
strongly the admissible set is reduced by the inequality-type constraints.
                         Generally, we specify the constraints in a form similar to the objective function

    $q_i(\mathbf{p}) = f_i(\mathbf{p}) - f_i^{h}, \quad i = 1, 2, \ldots, m^{*}$        (34.1)
Here f_i are suitable functions of the vector variable p and f_i^h are their maximum admissible values.
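
A minimal sketch of this formulation in Python, with hypothetical criteria f0, f1, f2 and invented admissible limits f1_h, f2_h (none of them from the handbook): only the dominant criterion is minimized, while the remaining criteria enter as inequality constraints of the form (34.1), here handled by SciPy's SLSQP solver.

import numpy as np
from scipy.optimize import minimize

# Hypothetical criteria of the design vector p (illustrative only).
def f0(p):                      # dominant criterion -> objective function
    return (p[0] - 1.0)**2 + (p[1] - 2.0)**2

def f1(p):                      # secondary criterion, kept below f1_h
    return p[0]**2 + p[1]**2

def f2(p):                      # secondary criterion, kept below f2_h
    return abs(p[0] - p[1])

f1_h, f2_h = 4.0, 0.5           # assumed maximum admissible values

# q_i(p) = f_i(p) - f_i^h <= 0 is written for SLSQP as f_i^h - f_i(p) >= 0.
constraints = [
    {"type": "ineq", "fun": lambda p: f1_h - f1(p)},
    {"type": "ineq", "fun": lambda p: f2_h - f2(p)},
]

result = minimize(f0, x0=np.array([0.0, 0.0]), method="SLSQP",
                  constraints=constraints)
print(result.x, result.fun)

The secondary criteria are not optimized; they merely restrict the admissible set within which the dominant criterion is minimized.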
The selection of optimization variables is governed by the sensitivity of the objective function to changes
in the individual optimization variables. This sensitivity is described by the gradient vector of the objective
function.

    $\operatorname{grad}\, y(\mathbf{p}) = \left[ \frac{\partial y(\mathbf{p})}{\partial p_1}, \ldots, \frac{\partial y(\mathbf{p})}{\partial p_s} \right]^{T}$        (34.2)
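
As a rough illustration of using (34.2) to judge sensitivity, the sketch below approximates the gradient of a hypothetical objective y(p) by central finite differences; components with small magnitude indicate variables to which the objective is nearly insensitive and which are therefore poor candidates for optimization variables.

import numpy as np

def grad_fd(y, p, h=1e-6):
    """Central-difference approximation of grad y(p) from Eq. (34.2)."""
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = h
        g[k] = (y(p + dp) - y(p - dp)) / (2.0 * h)
    return g

# Hypothetical objective: sensitive to p[0], almost insensitive to p[2].
y = lambda p: 10.0 * p[0]**2 + p[1]**2 + 1e-4 * p[2]**2
print(grad_fd(y, [1.0, 1.0, 1.0]))   # large / moderate / tiny components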

                       Types of Optimization Methods

                       Standard Optimization Methods
Most practical problems lead to nonlinear (transcendental) systems of equations, which can be solved
only by numerical optimization methods. According to the order of the derivatives a method uses,
numerical methods for finding local minima of functions of several variables may be divided into the
following classes (compared in the short sketch after the list):
  1. zero-order methods (comparative)
     • coordinate-comparison methods
     • simplex methods
     • stochastic methods
  2. first-order methods (gradient and quasi-gradient)
     • conjugate-direction methods
     • variable-metric methods
  3. second-order methods (Newton's method)
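
The three classes map onto standard solvers according to how much derivative information they require. A brief sketch, using SciPy's Rosenbrock test function rather than an example from the handbook: a zero-order simplex method compares only function values, a first-order variable-metric (quasi-Newton) method also uses the gradient, and a second-order Newton-type method additionally uses the Hessian.

from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]

# Zero-order: simplex method, only function values are compared.
r0 = minimize(rosen, x0, method="Nelder-Mead")

# First-order: variable-metric (quasi-Newton) method, uses the gradient.
r1 = minimize(rosen, x0, jac=rosen_der, method="BFGS")

# Second-order: Newton-type method, uses gradient and Hessian.
r2 = minimize(rosen, x0, jac=rosen_der, hess=rosen_hess, method="Newton-CG")

for label, r in [("zero-order", r0), ("first-order", r1), ("second-order", r2)]:
    print(f"{label:12s} f = {r.fun:.3e}  evaluations = {r.nfev}")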
                       Stochastic Methods
These methods calculate the values of the objective function at a large number of selected points. The
points are chosen so that each point of the space has an equal probability of being selected, and the best
points are then determined by comparing the function values. It follows from this strategy that the
function values must be computed at a large number of points, which may make the calculation lengthy.
On the other hand, the global optimum of the function to be optimized is reached more easily.
Evolutionary methods also belong to this class, since their first population of solutions is generated
completely at random; they differ only in the strategy used to select better solutions.
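
A minimal sketch of such a pure random search, with an invented multimodal objective and box bounds: points are drawn uniformly so that every point of the box is equally likely, and the best function value found is kept.

import numpy as np

def random_search(y, lower, upper, n_points=20_000, seed=0):
    """Evaluate y at uniformly random points in the box and keep the best."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    points = rng.uniform(lower, upper, size=(n_points, lower.size))
    values = np.array([y(p) for p in points])
    best = values.argmin()
    return points[best], values[best]

# Hypothetical objective with many shallow local minima.
y = lambda p: np.sum(p**2) + 2.0 * np.sum(np.sin(5.0 * p)**2)
p_best, y_best = random_search(y, lower=[-3, -3], upper=[3, 3])
print(p_best, y_best)

The number of sampled points trades off computation time against the chance of landing near the global optimum.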

