Page 109 - Innovations in Intelligent Machines

UAV Path Planning Using Evolutionary Algorithms  99
donor comprises the local information of all members of the triplet, providing a better starting point for the mutation operation and resulting in a better distribution of the trial vectors. As reported in [42], the modified donor scheme accelerates the DE convergence rate without sacrificing the solution precision or the robustness of the DE algorithm.
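The precise donor construction is defined in [42] and is not reproduced here; as a hedged illustration only, the sketch below contrasts the classic DE/rand/1 donor with one plausible reading of a triplet-aware donor, in which the base point is a fitness-weighted mean of all three members (the weighting rule is an assumption for illustration, not the scheme of [42]):

```python
def classic_donor(x1, x2, x3, F=0.8):
    # Standard DE/rand/1 donor: v = x_r1 + F * (x_r2 - x_r3).
    return [a + F * (b - c) for a, b, c in zip(x1, x2, x3)]

def triplet_donor(x1, x2, x3, f1, f2, f3, F=0.8):
    # Illustrative assumption: the base point is a fitness-weighted mean
    # of the whole triplet (minimisation: lower f => larger weight), so
    # the donor carries local information from all three members.
    worst = max(f1, f2, f3)
    ws = [worst - f1 + 1e-12, worst - f2 + 1e-12, worst - f3 + 1e-12]
    s = sum(ws)
    base = [(ws[0] * a + ws[1] * b + ws[2] * c) / s
            for a, b, c in zip(x1, x2, x3)]
    return [b + F * (p - q) for b, p, q in zip(base, x2, x3)]
```

With equal fitness values the weighted base reduces to the plain mean of the triplet, so the scheme degrades gracefully toward a centroid-based donor.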
Random number generation (with uniform probability) is based on the algorithm presented in [43], which computes the remainder of divisions involving integers longer than 32 bits using only 32-bit (including the sign bit) words. Given an initial seed, the algorithm produces a new seed and a random number. Each operation inside the DE algorithm that requires random numbers draws from its own sequence, obtained by assigning a different initial seed to each operation and storing the corresponding produced seeds separately. The specific initial seeds are chosen so that the different sequences are spaced 100,000 numbers apart.
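A common generator of this kind is a Lehmer (multiplicative congruential) generator evaluated with Schrage's factorisation, which keeps the intermediate products within 32-bit words exactly as the text describes. The sketch below uses the well-known Park-Miller "minimal standard" constants; [43] may use different ones, so treat the specific values as assumptions:

```python
A = 16807        # multiplier (Park-Miller minimal standard; an assumption)
M = 2**31 - 1    # modulus: the Mersenne prime 2147483647
Q = M // A       # 127773, Schrage quotient
R = M % A        # 2836,   Schrage remainder

def next_seed(seed):
    # Schrage's trick: a*(seed mod q) - r*(seed div q) is congruent to
    # (a*seed) mod m but never overflows a signed 32-bit word.
    seed = A * (seed % Q) - R * (seed // Q)
    return seed + M if seed <= 0 else seed

def uniform(seed):
    # As in the text: from one seed, produce a new seed and a U(0,1) number.
    seed = next_seed(seed)
    return seed, seed / M

def spaced_seeds(seed0, n_streams, gap=100_000):
    # One starting seed per DE operation, spaced `gap` draws apart so the
    # per-operation sequences do not overlap.
    seeds, s = [], seed0
    for _ in range(n_streams):
        seeds.append(s)
        for _ in range(gap):
            s = next_seed(s)
    return seeds
```

A standard self-check for these particular constants is that starting from seed 1, the 10,000th seed equals 1043618065.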

                           5.2 Radial Basis Function Network for DE Assistance

Despite their advantages, EAs require a considerable number of function evaluations. Several approaches have been proposed to reduce this computational cost, such as parallel processing, special operators, and surrogate models and approximations. Surrogate models are auxiliary simulations that are less physically faithful, but also less computationally expensive, than the expensive simulations regarded as “truth”. Surrogate approximations are algebraic summaries obtained from previous runs of the expensive simulation [44, 45]. Examples of such approximations are the low-order polynomials used in Response Surface Methodology [46, 47], the kriging estimates employed in the design and analysis of computer experiments [48], and various types of Artificial Neural Networks [45]. Once constructed, the approximation is typically inexpensive to use.
DE has been demonstrated to be one of the most promising novel EAs in terms of efficiency, effectiveness and robustness. However, its convergence rate is still far from ideal, especially when it is applied to optimization problems with time-consuming objective functions. In order to enhance the convergence rate of the DE algorithm, an approximation model of the objective function is used, based on a Radial Basis Function Artificial Neural Network (RBFN) [49]. In general, an RBFN (Fig. 5) is a three-layer, fully connected feed-forward network, which performs a nonlinear mapping from the input space to the hidden space (R^L → R^M), followed by a linear mapping (R^M → R^1) from the hidden space to the output space (L is the number of input nodes, M is the number of hidden nodes, while the output layer has a single node).
The corresponding output y(x), for an input vector x = [x_1, x_2, ..., x_L], is given by

        y(x) = Σ_{i=1}^{M} w_i · φ_i(x).                          (26)