Page 358 - Applied Numerical Methods Using MATLAB
attractive for optimization problems with fuzzy or loose constraints that need not be satisfied to zero tolerance.
Consider the following problem.
$$\min_{\mathbf{x}}\; f(\mathbf{x}) \tag{7.2.5a}$$
$$\text{s.t.}\quad \mathbf{h}(\mathbf{x}) = \begin{bmatrix} h_1(\mathbf{x}) \\ \vdots \\ h_M(\mathbf{x}) \end{bmatrix} = \mathbf{0}, \qquad \mathbf{g}(\mathbf{x}) = \begin{bmatrix} g_1(\mathbf{x}) \\ \vdots \\ g_L(\mathbf{x}) \end{bmatrix} \le \mathbf{0} \tag{7.2.5b}$$
The penalty function method consists of two steps. The first step is to construct
a new objective function
$$\min_{\mathbf{x}}\; l(\mathbf{x}) = f(\mathbf{x}) + \sum_{m=1}^{M} w_m\, h_m^2(\mathbf{x}) + \sum_{m=1}^{L} v_m\, \psi_m(g_m(\mathbf{x})) \tag{7.2.6}$$
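To make the construction of the penalized objective concrete, here is a minimal Python sketch (not the book's MATLAB code; the function and constraint names are illustrative) that assembles l(x) with the exponential penalty ψ_m(g) = exp(e_m g) applied only to violated inequality constraints:

```python
import numpy as np

def penalized_objective(f, h_list, g_list, w, v, e):
    """Return l(x) = f(x) + sum_m w_m*h_m(x)^2 + sum_m v_m*exp(e_m*g_m(x)),
    where each inequality penalty is applied only when g_m(x) > 0."""
    def l(x):
        val = f(x)
        for w_m, h_m in zip(w, h_list):
            val += w_m * h_m(x)**2           # equality-constraint penalty
        for v_m, e_m, g_m in zip(v, e, g_list):
            g_val = g_m(x)
            if g_val > 0:                    # penalize only violations
                val += v_m * np.exp(e_m * g_val)
        return val
    return l

# Illustrative problem: f(x) = (x1 - 1)^2, h(x) = x1 + x2 = 0, g(x) = x1 - 2 <= 0
l = penalized_objective(lambda x: (x[0] - 1.0)**2,
                        [lambda x: x[0] + x[1]],
                        [lambda x: x[0] - 2.0],
                        w=[1.0], v=[1.0], e=[1.0])
```

At a feasible point such as (0, 0) with h(x) = 0, l(x) reduces to f(x); at an infeasible point the penalty terms dominate.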
by including the constraint terms in such a way that violating the constraints is penalized through large values of the constraint terms in the objective function, while satisfying the constraints leaves the objective function unaffected.
The second step is to minimize the new objective function with no constraints by using a method applicable to unconstrained optimization problems, but a non-gradient-based approach such as the Nelder–Mead method. Why don't we use a gradient-based optimization method? Because the inequality constraint terms v_m ψ_m(g_m(x)) attached to the objective function are often defined to be zero as long as x stays inside the (permissible) region satisfying the corresponding constraint (g_m(x) ≤ 0) and to increase very steeply (like ψ_m(g_m(x)) = exp(e_m g_m(x))) as x goes out of the region; consequently, the gradient of the new objective function may not carry useful information about the direction along which the value of the objective function decreases.
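For instance, a penalized objective of this kind can be minimized with a derivative-free routine (a sketch assuming SciPy's `scipy.optimize.minimize` with `method='Nelder-Mead'`; the example problem is made up for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Made-up problem: minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2
# subject to g(x) = x1 + x2 - 2 <= 0,
# via l(x) = f(x) + v*exp(e*g(x)) applied only when g(x) > 0.
v, e = 1.0, 10.0

def l(x):
    f = (x[0] - 2.0)**2 + (x[1] - 1.0)**2
    g = x[0] + x[1] - 2.0
    return f + (v * np.exp(e * g) if g > 0 else 0.0)

res = minimize(l, np.array([0.0, 0.0]), method='Nelder-Mead')
# The unconstrained minimum (2, 1) violates g <= 0, so the Nelder-Mead search
# settles near the constrained minimum (1.5, 0.5) on the boundary x1 + x2 = 2.
```

The discontinuous jump of the penalty at the boundary is exactly what would mislead a gradient-based solver, while the simplex search handles it without derivatives.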
From an application point of view, it is a useful feature of this method that we can make the weighting coefficients (w_m, v_m, and e_m) on each penalizing constraint term either large or small depending on how strictly that constraint should be satisfied.
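For example (a hypothetical one-variable sketch using SciPy's bounded `minimize_scalar`), a small penalty weight lets the minimizer drift past the constraint boundary, while a large weight pins it near the boundary:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Made-up problem: minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0,
# with penalty term v*exp(e*(x - 1)) applied only when x > 1.
def make_l(v, e):
    return lambda x: (x - 2.0)**2 + (v * np.exp(e * (x - 1.0)) if x > 1.0 else 0.0)

loose = minimize_scalar(make_l(0.1, 1.0), bounds=(0.0, 3.0), method='bounded')
strict = minimize_scalar(make_l(100.0, 10.0), bounds=(0.0, 3.0), method='bounded')
# loose.x overshoots the boundary x = 1 considerably; strict.x stays close to it.
```

Treating a constraint as loose thus amounts to choosing small (v_m, e_m), and treating it as strict amounts to choosing large ones.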
Let us see the following example.
Example 7.3. Minimization by the Penalty Function Method.
Consider the following minimization problem subject to several nonlinear
inequality constraints:
$$\min f(\mathbf{x}) = \{(x_1 + 1.5)^2 + 5(x_2 - 1.7)^2\}\{(x_1 - 1.4)^2 + 0.6(x_2 - 0.5)^2\} \tag{E7.3.1a}$$