The solution of this problem, if it exists, can be obtained by setting the derivatives
of this new objective function l(x, λ) with respect to x and λ to zero:
$$\frac{\partial}{\partial x}\,l(x,\lambda) = \frac{\partial}{\partial x}f(x) + \lambda^{T}\frac{\partial}{\partial x}h(x) = \nabla f(x) + \sum_{m=1}^{M}\lambda_{m}\nabla h_{m}(x) = 0 \qquad (7.2.3a)$$

$$\frac{\partial}{\partial \lambda}\,l(x,\lambda) = h(x) = 0 \qquad (7.2.3b)$$
Note that the solutions of this system of equations are the extrema of the objective function. Whether a given solution is a minimum or a maximum can be determined from the positive/negative definiteness of the second derivative (Hessian matrix) of $l(x,\lambda)$ with respect to $x$.
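As a quick illustration of this second-order check, the following MATLAB fragment (a sketch only; the Hessian value is an assumed example, not code from this book) classifies a stationary point by the eigenvalues of the Hessian:

    H = [2 0; 0 2];   % assumed example: the Hessian of x1^2 + x2^2 is 2*eye(2)
    e = eig(H);       % the signs of the eigenvalues decide the definiteness
    if all(e > 0)
        disp('positive definite -> minimum')
    elseif all(e < 0)
        disp('negative definite -> maximum')
    else
        disp('indefinite/semidefinite -> no decision from this test')
    end

Let us now look at the following examples.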
Remark 7.2. Inequality Constraints with the Lagrange Multiplier Method.
Even though the optimization problem involves inequality constraints like $g_{j}(x) \le 0$, we can convert them to equality constraints by introducing the (nonnegative) slack variables $y_{j}^{2}$ as

$$g_{j}(x) + y_{j}^{2} = 0 \qquad (7.2.4)$$

Then, we can use the Lagrange multiplier method to handle it like an equality-constrained problem.
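For instance, here is a minimal MATLAB sketch of this conversion (the objective $f(x) = (x-2)^{2}$ and constraint $g(x) = x - 1 \le 0$ are made-up examples, not from the text): the inequality becomes $x - 1 + y^{2} = 0$, and the stationarity conditions of $l(x,y,\lambda) = (x-2)^{2} + \lambda(x - 1 + y^{2})$ are solved with fsolve.

    % Unknowns z = [x; y; lambda]; solve dl/dx = dl/dy = dl/dlambda = 0
    F = @(z) [2*(z(1)-2) + z(3);    % dl/dx      = 2(x-2) + lambda
              2*z(3)*z(2);          % dl/dy      = 2*lambda*y
              z(1) - 1 + z(2)^2];   % dl/dlambda = x - 1 + y^2 (the constraint)
    z0 = [0; 0.5; 1];               % initial guess
    z = fsolve(F, z0)               % expected: x = 1, y = 0, lambda = 2

That $y = 0$ at the solution indicates the inequality constraint is active there; the unconstrained minimizer $x = 2$ would violate $g(x) \le 0$.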
Example 7.1. Minimization by the Lagrange Multiplier Method.
Consider the following minimization problem subject to a single equality constraint:

$$\text{Min}\; f(x) = x_{1}^{2} + x_{2}^{2} \qquad (E7.1.1a)$$

$$\text{s.t.}\; h(x) = x_{1} + x_{2} - 2 = 0 \qquad (E7.1.1b)$$
We can substitute the equality constraint $x_{2} = 2 - x_{1}$ into the objective function (E7.1.1a) so that this problem becomes an unconstrained optimization problem as

$$\text{Min}\; f(x_{1}) = x_{1}^{2} + (2 - x_{1})^{2} = 2x_{1}^{2} - 4x_{1} + 4 \qquad (E7.1.2)$$
which can be easily solved by setting the derivative of this new objective function with respect to $x_{1}$ to zero:

$$\frac{\partial}{\partial x_{1}}f(x_{1}) = 4x_{1} - 4 = 0, \qquad x_{1} = 1,\; x_{2} \overset{(E7.1.1b)}{=} 2 - x_{1} = 1 \qquad (E7.1.3)$$
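This result is easy to check numerically. A minimal sketch (assumed code, not from the text) minimizes the reduced objective (E7.1.2) with the MATLAB built-in fminbnd:

    f = @(x1) 2*x1.^2 - 4*x1 + 4;   % reduced objective (E7.1.2)
    x1 = fminbnd(f, -10, 10)        % expected: x1 = 1
    x2 = 2 - x1                     % from the constraint (E7.1.1b): x2 = 1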
Alternatively, we can apply the Lagrange multiplier method as follows:
$$\text{Min}\; l(x,\lambda) \overset{(7.2.2)}{=} x_{1}^{2} + x_{2}^{2} + \lambda(x_{1} + x_{2} - 2) \qquad (E7.1.4)$$
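By (7.2.3), the stationarity conditions of (E7.1.4) are $\partial l/\partial x_{1} = 2x_{1} + \lambda = 0$, $\partial l/\partial x_{2} = 2x_{2} + \lambda = 0$, and $\partial l/\partial \lambda = x_{1} + x_{2} - 2 = 0$. A minimal sketch (assumed code, not from the text) solves this linear system with fsolve:

    % Unknowns z = [x1; x2; lambda]
    F = @(z) [2*z(1) + z(3);      % dl/dx1     = 0
              2*z(2) + z(3);      % dl/dx2     = 0
              z(1) + z(2) - 2];   % dl/dlambda = 0 (the constraint)
    z0 = zeros(3, 1);
    z = fsolve(F, z0)             % expected: x1 = 1, x2 = 1, lambda = -2

Both routes give the same minimizer $x_{1} = x_{2} = 1$; the Hessian of $l$ with respect to $x$ is $2I$ (positive definite), confirming that this extremum is indeed a minimum.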