Page 172 - Biomimetics: Biologically Inspired Technologies

Bar-Cohen : Biomimetics: Biologically Inspired Technologies DK3163_c005 Final Proof page 158 6.9.2005 12:11pm





                        Some optimization models are based on continuous variables (linear or nonlinear program-
                     ming); that is, the variables are allowed to take any value, whether integer or fractional. Other
                     optimization models require integer variables, that is, variables that must be whole numbers 0, 1,
                     2, . . . (integer programming; Nemhauser and Wolsey, 1988). Still other models require 0 or 1
                     variables, that is, variables that must be binary, taking either the value 0 or 1 (such as "yes" or
                     "no," build or not build, select or not select). For such 0 or 1 (sometimes called Boolean)
                     programming problems, there is a finite number of combinations. For example, 10 binary variables
                     can have 2^10 = 1,024 combinations, which can usually be evaluated within a reasonable computer
                     time. Theoretically, one can calculate the value of the objective function for all possible combin-
                     ations of the independent variables and select the best combination (such a process is called total
                     enumeration). However, the number of possible combinations is often prohibitively large: evaluating
                     all of them can take thousands of years, making the problem intractable. For 100 binary variables,
                     the number of possible combinations is 2^100 = 1,267,650,600,228,229,401,496,703,205,376, which
                     would take over 40,000 trillion years to evaluate even if the evaluation of each combination takes
                     one millionth of a second.
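As a sketch of total enumeration, the following Python fragment evaluates all 2^10 = 1,024 combinations of 10 binary variables and keeps the best one. The weights and the penalized objective function are hypothetical, chosen only for illustration; they do not come from the text.

```python
# Total enumeration: evaluate every combination of n binary variables
# and return the best one. Tractable for 10 variables (2**10 = 1,024),
# hopeless for 100 (2**100 combinations; see text).
from itertools import product

def total_enumeration(n, objective):
    """Return (best_value, best_combination) over all 2**n binary vectors."""
    best_value, best_combo = None, None
    for combo in product((0, 1), repeat=n):   # all 2**n combinations
        value = objective(combo)
        if best_value is None or value > best_value:
            best_value, best_combo = value, combo
    return best_value, best_combo

# Hypothetical objective: weighted sum of the selected variables,
# with a penalty of 10 for each selection beyond four.
weights = [3, 5, 2, 7, 1, 4, 6, 2, 8, 3]

def objective(x):
    return sum(w * xi for w, xi in zip(weights, x)) - 10 * max(0, sum(x) - 4)

best_value, best_combo = total_enumeration(10, objective)
print(best_value, best_combo)   # best combination selects the four largest weights
```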
                       For instance, consider the location of instruments such as cellular telephone transmitters
                    (towers) to cover 1000 sites. There are 100 candidate locations (a preselected set) for placing the
                    instruments. For each candidate location, there is a subset of the 1000 sites that are covered by it.
                    The objective is to find the minimum number of instruments required to cover all 1000 sites. This
                    is the set covering problem (Daskin, 1995; Current et al., 2002). Alternatively, one may wish to
                    maximize the number of covered sites with a given number of instruments (the maximum cover
                    problem, Daskin, 1995; Current et al., 2002). The latter objective is appropriate when there is a
                     limited budget (a constraint). The variables are assigned a value of "0" if the candidate location is
                     not selected and "1" if it is selected. Each possible combination is described by a vector of 100
                     zeroes and ones. If one needs to check all possible subsets of the candidate locations, one has to
                     check 2^100 combinations, which is prohibitive (see above).
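A much smaller, hypothetical instance makes the 0/1 formulation concrete: four candidate locations and six sites, with total enumeration over the 2^4 = 16 selection vectors. The coverage data are invented for illustration.

```python
# Set covering by total enumeration on a hypothetical small instance:
# 4 candidate locations, 6 sites; each location covers a subset of sites.
from itertools import product

covers = {                  # location index -> sites it covers (invented data)
    0: {1, 2, 3},
    1: {3, 4},
    2: {4, 5, 6},
    3: {1, 6},
}
all_sites = {1, 2, 3, 4, 5, 6}

best = None                 # (number of locations, selection vector)
for x in product((0, 1), repeat=len(covers)):   # 2**4 = 16 combinations
    covered = set().union(*(sites for loc, sites in covers.items() if x[loc]))
    if covered == all_sites and (best is None or sum(x) < best[0]):
        best = (sum(x), x)

print(best)   # minimum number of locations and the 0/1 vector selecting them
```

Here locations 0 and 2 together cover all six sites, so two instruments suffice; with 100 candidate locations the same loop would need 2^100 iterations.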
                       In cases where total enumeration is impractical, various optimization techniques have been
                    developed. One approach for finding the optimal solution is the branch and bound approach (Land
                    and Doig, 1960; Nemhauser and Wolsey, 1988). The branch and bound approach implicitly
                    evaluates all possibilities, but a bound is used to eliminate the need to evaluate all of them. The
                     number of remaining combinations is significantly reduced, thereby making it possible to evaluate
                     them in a reasonable computer time.
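A minimal branch-and-bound sketch, applied to a small hypothetical set covering instance: the search branches on each candidate location (select or skip), and the best solution found so far serves as the bound, pruning any partial selection that already uses as many locations as the incumbent. The data and recursion structure are illustrative, not from the text.

```python
# Branch and bound for minimum set cover (hypothetical instance).
# The incumbent (best solution so far) bounds the search: branches that
# cannot improve on it are pruned, so not all 2**4 combinations are built.

covers = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]   # invented coverage data
all_sites = {1, 2, 3, 4, 5, 6}

best = [len(covers) + 1, None]   # incumbent: [location count, selection]

def branch(i, selected, covered):
    if len(selected) >= best[0]:          # bound: cannot beat the incumbent
        return
    if covered == all_sites:              # feasible: record a new incumbent
        best[0], best[1] = len(selected), list(selected)
        return
    if i == len(covers):                  # no candidate locations left
        return
    branch(i + 1, selected + [i], covered | covers[i])   # select location i
    branch(i + 1, selected, covered)                     # skip location i

branch(0, [], set())
print(best)   # minimum count and the selected locations
```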
                        When robust approaches (those that guarantee finding the optimal solution) are not efficient
                     enough to be performed in a reasonable computer time, heuristic algorithms are used. Heuristic
                     algorithms are designed to find a "good" solution but do not guarantee that the optimal solution is
                     found. In the last three decades, researchers have developed "metaheuristic" methods (Salhi, 1998):
                     problem-independent, general heuristic approaches that can be applied to any optimization problem.
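As a sketch of a fast heuristic with no optimality guarantee, consider a greedy rule for set covering on hypothetical data: at each step, select the location that covers the most still-uncovered sites.

```python
# Greedy heuristic for set covering (hypothetical instance): repeatedly
# pick the location covering the most still-uncovered sites. Fast, but
# not guaranteed optimal: here it selects 3 locations, although Y and Z
# alone cover all six sites.

covers = {'X': {1, 2, 3, 4}, 'Y': {1, 2, 5}, 'Z': {3, 4, 6}}
uncovered = {1, 2, 3, 4, 5, 6}

selected = []
while uncovered:
    # the candidate that covers the most uncovered sites
    best_loc = max(covers, key=lambda loc: len(covers[loc] & uncovered))
    if not covers[best_loc] & uncovered:
        break                     # remaining sites cannot be covered
    selected.append(best_loc)
    uncovered -= covers[best_loc]

print(selected)   # greedy picks X first and then needs both Y and Z
```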
                       Following a brief overview of major metaheuristic approaches, we will focus on genetic
                    algorithms, also referred to as evolutionary algorithms.
                        It is interesting to note that many of these metaheuristic methods mimic biological and other
                     natural processes; most notably, genetic algorithms mimic evolutionary processes, natural selection,
                     and survival of the fittest.

                    5.1.1 Common Metaheuristic Methods

                     1.   Steepest descent is the "classic" local search for a minimum. A "neighborhood" of nearby
                          combinations is defined for each particular combination. The number of combinations in the
                          neighborhood is usually quite small. The algorithm starts at one (usually randomly constructed)
                          combination and proceeds iteratively. At each iteration, (i) all combinations in the neighborhood are
                          evaluated, and (ii) if a better combination is found, the search moves to the best combination in
                          the neighborhood and the iteration is completed. The next iteration applies the same process to the