Page 250 - Numerical Methods for Chemical Engineering
Lagrangian methods for constrained optimization 239
A constrained minimum $x_{\min}$ must satisfy
$$
\begin{aligned}
\nabla L\big|_{x_{\min}} &= 0 \\
g_i(x_{\min}) &= 0, & i &= 1, 2, \ldots, n_e \\
h_j(x_{\min}) &\ge 0, & j &= 1, 2, \ldots, n_i \\
\kappa_j &\ge 0, & j &= 1, 2, \ldots, n_i \\
\kappa_j\, h_j(x_{\min}) &= 0, & j &= 1, 2, \ldots, n_i
\end{aligned}
\qquad (5.102)
$$
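As a concrete check of these conditions, consider a small example not taken from the text: minimize $F(x) = x_1^2 + x_2^2$ subject to the single inequality $h_1(x) = x_1 - 1 \ge 0$. The minimum is $x_{\min} = (1, 0)$ with multiplier $\kappa_1 = 2$, and the sketch below verifies stationarity, feasibility, and complementary slackness numerically.

```python
import numpy as np

# Hypothetical example: minimize F(x) = x1^2 + x2^2 subject to h1(x) = x1 - 1 >= 0.
# The constrained minimum is x_min = (1, 0) with multiplier kappa_1 = 2.
x_min = np.array([1.0, 0.0])
kappa = 2.0

grad_F = 2.0 * x_min                  # gradient of F at x_min
grad_h = np.array([1.0, 0.0])         # gradient of h1 (constant here)

# Stationarity of L = F - kappa*h1:  grad L = grad F - kappa * grad h1 = 0
grad_L = grad_F - kappa * grad_h
h_val = x_min[0] - 1.0                # constraint value at x_min

print(np.allclose(grad_L, 0.0))       # stationarity holds
print(h_val >= 0 and kappa >= 0)      # primal and dual feasibility
print(np.isclose(kappa * h_val, 0.0)) # complementary slackness
```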
To find x min , we again use the augmented Lagrangian method, writing each inequality
constraint h j (x) ≥ 0 in an equivalent form similar to an equality constraint by introducing
a slack variable s j ,
$$
h_j(x) - s_j = 0, \qquad s_j \ge 0 \qquad (5.103)
$$
The advantage of this transformation is that the inequality now applies to a “bare” variable $s_j$ rather than to a function $h_j(x)$.
Again, we make initial guesses of the multipliers, define a Lagrangian augmented with
penalty functions that enforce the constraints, find the unconstrained minimum of this
function, and use the results to update the multiplier estimates. At iteration k, the multiplier
estimates $\lambda^{[k]}$ and $\kappa^{[k]}$ and the penalty tolerance $\mu^{[k]} > 0$ define the augmented Lagrangian
$$
L_A^{[k]}\big(x, s; \lambda^{[k]}, \kappa^{[k]}, \mu^{[k]}\big) \equiv F(x) - \sum_{i=1}^{n_e} \lambda_i^{[k]} g_i(x) - \sum_{j=1}^{n_i} \kappa_j^{[k]} \left[h_j(x) - s_j\right] + \frac{1}{2\mu^{[k]}} \left\{ \sum_{i=1}^{n_e} \left[g_i(x)\right]^2 + \sum_{j=1}^{n_i} \left[h_j(x) - s_j\right]^2 \right\} \qquad (5.104)
$$
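A direct translation of (5.104) can be sketched as follows; the objective and constraint functions used here are placeholders chosen for illustration, not from the text.

```python
import numpy as np

def augmented_lagrangian(x, s, lam, kappa, mu, F, g_list, h_list):
    """Evaluate the augmented Lagrangian L_A of (5.104) for a point x,
    slacks s, multiplier estimates lam and kappa, and penalty tolerance mu > 0."""
    g = np.array([gi(x) for gi in g_list])        # equality residuals g_i(x)
    r = np.array([hj(x) for hj in h_list]) - s    # inequality residuals h_j(x) - s_j
    return (F(x) - lam @ g - kappa @ r
            + (g @ g + r @ r) / (2.0 * mu))

# Illustrative problem: F(x) = ||x||^2, one equality g1(x) = x1 + x2 - 1,
# one inequality h1(x) = x1 >= 0 (handled through the slack s1).
F = lambda x: float(x @ x)
g_list = [lambda x: x[0] + x[1] - 1.0]
h_list = [lambda x: x[0]]

x = np.array([0.5, 0.5])
s = np.array([0.5])     # slack chosen equal to h1(x), so that residual is zero
LA = augmented_lagrangian(x, s, np.array([0.0]), np.array([0.0]), 0.1,
                          F, g_list, h_list)
print(LA)               # all residuals vanish here, so L_A reduces to F(x)
```

With zero multipliers and zero residuals, only the objective term survives, which makes this a convenient sanity check before wiring the function into an outer multiplier-update loop.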
We vary x and s to minimize this augmented Lagrangian subject to the conditions s j ≥ 0.
For a specified x, the optimal s j satisfies
$$
0 = \frac{\partial L_A^{[k]}}{\partial s_j} = \kappa_j^{[k]} + \frac{1}{2\mu^{[k]}} \left\{ 2\left[h_j(x) - s_j\right](-1) \right\} \qquad (5.105)
$$
Enforcing s j ≥ 0 yields the constrained optimal s j for a particular x:
$$
s_j = \max\left\{ h_j(x) - \mu^{[k]} \kappa_j^{[k]},\; 0 \right\} \qquad (5.106)
$$
We use (5.106) to remove the slack variables from $L_A^{[k]}$. At each $x$, we define $I_{NA}(x)$ as the subset of $j \in [1, n_i]$ for which $h_j(x) - \mu^{[k]} \kappa_j^{[k]} > 0$. The complement set $I_A(x)$ contains all other $j$, for which $h_j(x) - \mu^{[k]} \kappa_j^{[k]} \le 0$. Thus,
$$
s_j = \begin{cases} h_j(x) - \mu^{[k]} \kappa_j^{[k]}, & \text{if } j \in I_{NA}(x) \\ 0, & \text{if } j \in I_A(x) \end{cases} \qquad (5.107)
$$
As $h_j(x) - \mu^{[k]} \kappa_j^{[k]}$ is the unconstrained optimal $s_j$ for $x$, if $j \in I_{NA}(x)$ we can attain this optimal value without violating the condition $s_j \ge 0$. Note that $s_j \ge 0$ is merely a restatement of $h_j(x) \ge 0$ when we enforce $h_j(x) - s_j = 0$. Thus, if $j \in I_{NA}(x)$, we can optimize $s_j$ without worrying about $h_j(x) \ge 0$, and we associate $I_{NA}(x)$ with the set of inactive constraints at $x$. If $j \in I_A(x)$, we cannot optimize $s_j$ without violating $h_j(x) \ge 0$, and so we associate $I_A(x)$ with the set of active constraints at $x$.
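The slack elimination of (5.106) and the resulting partition into inactive and active sets in (5.107) can be sketched as below; the function name and sample values are illustrative.

```python
import numpy as np

def slacks_and_sets(h_vals, kappa, mu):
    """Given constraint values h_j(x), multiplier estimates kappa_j, and
    penalty tolerance mu, return the optimal slacks s_j of (5.106) and the
    index sets I_NA (inactive) and I_A (active) of (5.107)."""
    trial = h_vals - mu * kappa           # unconstrained optimum of s_j from (5.105)
    s = np.maximum(trial, 0.0)            # enforce s_j >= 0, giving (5.106)
    inactive = np.where(trial > 0.0)[0]   # I_NA(x): treat constraint as inactive
    active = np.where(trial <= 0.0)[0]    # I_A(x): treat constraint as active
    return s, inactive, active

# Illustrative values: h = (0.3, -0.1), kappa = (1.0, 1.0), mu = 0.1.
s, I_NA, I_A = slacks_and_sets(np.array([0.3, -0.1]),
                               np.array([1.0, 1.0]), 0.1)
print(s)     # first slack positive, second clipped to zero
print(I_NA)  # first constraint is inactive
print(I_A)   # second constraint is active
```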