Page 1009 - The Mechatronics Handbook
Simulated Annealing Algorithm
Apart from the stochastic methods and methods based on natural evolution, the evolution of a system can also be simulated by analogy with the physical evolution of macroscopic systems. The
annealing of a solid body in order to remove the internal stress is a simple example of this kind of
evolution. For a physical interpretation of this process, consider a body that is heated until it reaches a
high temperature. The temperature is then gradually lowered. The atoms of a body heated to a high
temperature can easily overcome local energy barriers and reach equilibrium states. As the temperature is lowered, the atoms become fixed in these states, and the cooled body is free of internal stress.
This principle was used to design the method of simulated annealing. First, an initial temperature Tmax is set; its value is important for the efficiency of the method. The simulated annealing algorithm then
searches the space of all potential solutions in a strongly stochastic way, accepting even states that correspond to solutions worse than the current one. This characteristic feature of simulated annealing provides a way of escaping from a local minimum trap, thus allowing other areas of the solution space to be searched. As the annealing temperature T is lowered, however, the probability of accepting worse states diminishes; at low temperatures, effectively only solutions better than the current one are accepted.
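The procedure described above can be sketched as follows. This is a minimal illustration, not an implementation from the handbook: the cost function, the neighborhood move, the geometric cooling schedule, and all parameter values are assumptions chosen for the example.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t_max=10.0, t_min=1e-3,
                        alpha=0.95, steps_per_temp=100):
    """Minimize `cost` by simulated annealing with geometric cooling.

    Illustrative sketch: `cost`, `neighbor`, and the parameter defaults
    are assumptions, not taken from the text.
    """
    x, fx = x0, cost(x0)
    best, f_best = x, fx
    t = t_max
    while t > t_min:
        for _ in range(steps_per_temp):
            y = neighbor(x)
            fy = cost(y)
            # Metropolis criterion: always accept an improvement; accept a
            # worse state with probability exp(-(fy - fx) / t), which
            # shrinks as the temperature t is lowered.
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < f_best:
                    best, f_best = x, fx
        t *= alpha  # gradual cooling
    return best, f_best

# Usage: minimize a 1-D function with many local minima.
random.seed(0)
cost = lambda x: x * x + 10 * math.sin(3 * x)
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
x_opt, f_opt = simulated_annealing(cost, neighbor, x0=5.0)
```

At high temperature the acceptance test passes for almost any move, so the search wanders freely; as t falls, it settles into a low-cost basin.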
Genetic Algorithm (GA)
Genetic algorithms (GAs) are most frequently used to optimize the parameters of an unknown system
whose mathematical description is either too complicated or unknown [5]. When applying a GA, it is usually sufficient to know a function assigning a price to each individual in the population. This may be
the error of the solution obtained with the parameters encoded in a given individual. Since a GA searches for a maximum while the error is to be minimized, the error must first be transformed into a quantity to be maximized. This may be done in several ways: by subtracting the error from the maximum error occurring in the population, by taking the inverted value of the error, or by using another transforming function that grows as the error approaches zero. Increased attention should be paid to setting up the
program implementing the pricing function since it consumes the most computing time compared with
the other GA components.
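The three error-to-fitness transformations mentioned above can be sketched as follows; the function name, the epsilon guard on the inverse, and the exact exponential form are illustrative assumptions.

```python
import math

def fitness_from_error(errors, method="offset"):
    """Turn errors (to be minimized) into fitness values (to be maximized).

    The three variants follow the transformations mentioned in the text;
    the names and details here are illustrative assumptions.
    """
    if method == "offset":
        # Subtract each error from the maximum error in the population.
        e_max = max(errors)
        return [e_max - e for e in errors]
    if method == "inverse":
        # Inverted value of the error (guarded against division by zero).
        eps = 1e-12
        return [1.0 / (e + eps) for e in errors]
    if method == "exp":
        # A transforming function that grows as the error approaches zero.
        return [math.exp(-e) for e in errors]
    raise ValueError("unknown method: " + method)

# Usage: whichever variant is chosen, the individual with the smallest
# error receives the largest fitness.
errors = [0.5, 0.1, 2.0]
fitness = fitness_from_error(errors, method="offset")
```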
Apart from general optimization problems, GAs are mostly applied to neural networks. Here the
tendency is to employ GAs at two different levels. First, for finding suitable weights for a neural network
and second, when optimizing the structure of a neural network, that is, when selecting the algorithm,
the number of neurons in the hidden layers, the number of hidden layers, etc. Using a genetic
algorithm to optimize the parameters of another genetic algorithm (the size of the population, the number
of crossover operations, the extent of mutations, the frequency of mutations) is a particularly intriguing idea: the computation time of the inner GA then serves as the pricing function of the outer one. As
far as applications of GAs to problems encountered in research of electric machines are concerned, GAs
have been used to identify the parameters of the equivalent circuit of an induction motor.
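The GA workflow implied in this section (price a population, select, recombine, mutate, repeat) can be illustrated with a minimal binary GA. Everything here is an illustrative assumption rather than the handbook's method: roulette-wheel selection, one-point crossover, bit-flip mutation, the parameter defaults, and the "one-max" toy pricing function.

```python
import random

def run_ga(fitness, n_genes, pop_size=40, generations=60,
           p_cross=0.8, p_mut=0.02):
    """Minimal generational GA over binary chromosomes (illustrative sketch)."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(c) for c in pop]
        total = sum(fits)

        def select():
            # Roulette-wheel selection on nonnegative fitness values.
            r = random.uniform(0, total)
            acc = 0.0
            for c, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return c
            return pop[-1]

        new_pop = []
        while len(new_pop) < pop_size:
            a, b = select()[:], select()[:]
            if random.random() < p_cross:
                # One-point crossover.
                cut = random.randint(1, n_genes - 1)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for c in (a, b):
                # Bit-flip mutation, applied gene by gene.
                for i in range(n_genes):
                    if random.random() < p_mut:
                        c[i] ^= 1
                new_pop.append(c)
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

# Usage: maximize the number of ones in a 20-bit chromosome ("one-max").
random.seed(1)
best = run_ga(fitness=sum, n_genes=20)
```

Note that the pricing function is called once per individual per generation, which is why the text stresses making it efficient.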
By way of conclusion, it may be added that genetic algorithms perform surprisingly well when all
other algorithms fail, such as for NP-complete problems where the computation time is an exponential or
factorial function of the number of variables. There is no point in using GAs to optimize relatively simple
functions, or functions for which specialized algorithms already exist. Considering the necessity
to calculate the function values for tens or hundreds of genetic chains in a population and the necessity
to evaluate hundreds or even thousands of populations during a single run of the program, GAs are
rather time-consuming.
Despite the positive results achieved with GAs, it is clear that nature must use methods that are more intricate and yet, at the same time, conceptually simple. The GAs described above correspond only to very primitive mechanisms observed in nature, particularly those related to asexual reproduction with a single chromosome. Since nature has taken billions of years to test its algorithms, it is highly worthwhile to keep learning from it; interestingly, it needs no mathematics to solve complicated optimization problems. Nevertheless, there are other optimization methods suitable for solving design problems [2–4].
©2002 CRC Press LLC

