264 ——— MATLAB: An Introduction with Applications
It is required to determine a means of establishing conjugate directions for an optimization function of this
form and to show that they give convergence to the minimum. For this we require A to be a positive-definite
matrix, which means it must be such that all quadratic terms of Eq.(5.8) are positive. This is equivalent to
requiring Eq.(5.8) to be convex so that it has a minimum. A will also be symmetric, since
$$A_{ij} = \frac{\partial^2 U}{\partial x_i \, \partial x_j} = \frac{\partial^2 U}{\partial x_j \, \partial x_i} = A_{ji}$$
and consequently $A^{-1}$ is also symmetric. For convenience, let

$$g(x_k) = g_k \, ; \qquad C(x_k) = C_k$$
Note that the gradient vector of the quadratic (5.8) is

$$g = Ax + B \qquad \ldots(5.9)$$
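Eq.(5.9) can be checked numerically by comparing the analytic gradient $Ax + B$ with a finite-difference estimate. The sketch below uses Python with NumPy and an illustrative positive-definite $A$ and vector $B$ that are not from the text (the same check is straightforward in MATLAB):

```python
import numpy as np

# Illustrative quadratic U = 1/2 x'Ax + B'x + C (A, B, C are assumptions).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([-1.0, 2.0])
Cconst = 5.0

def U(x):
    """Quadratic objective of the form of Eq.(5.8)."""
    return 0.5 * x @ A @ x + B @ x + Cconst

def grad_numeric(x, h=1e-6):
    """Central-difference estimate of the gradient of U."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (U(x + e) - U(x - e)) / (2 * h)
    return g

x = np.array([0.7, -1.2])
g_analytic = A @ x + B           # Eq.(5.9): g = Ax + B
g_fd = grad_numeric(x)
print(np.allclose(g_analytic, g_fd, atol=1e-5))  # the two gradients agree
```

Note that the symmetry of $A$ used above mirrors the symmetry of the Hessian of $U$.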
Equation (5.7) is iterated for successive steps from $i$ to $(n - 1)$, to obtain

$$x_n = x_i + \sum_{k=i}^{n-1} \lambda_k C_k$$
Premultiplying both sides by $A$ and adding $B$, we get

$$Ax_n + B = Ax_i + B + \sum_{k=i}^{n-1} \lambda_k A C_k \qquad \ldots(5.10)$$
Using Eq.(5.9) this becomes

$$g_n = g_i + \sum_{k=i}^{n-1} \lambda_k A C_k$$
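Because $g$ is affine in $x$, this gradient recursion holds for any step sizes and any directions, conjugate or not. A small NumPy sketch (again with an illustrative $A$ and $B$, not from the text) confirms it for arbitrary steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative quadratic (A, B are assumptions, not from the text).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([-1.0, 2.0])

def g(x):
    return A @ x + B  # Eq.(5.9)

# Take arbitrary steps lambda_k along arbitrary directions C_k from x_i.
x_i = rng.standard_normal(2)
lambdas = [0.3, -0.8, 1.1]
Cs = [rng.standard_normal(2) for _ in lambdas]

x_n = x_i + sum(l * c for l, c in zip(lambdas, Cs))
g_direct = g(x_n)
g_formula = g(x_i) + sum(l * (A @ c) for l, c in zip(lambdas, Cs))
print(np.allclose(g_direct, g_formula))  # True: the gradient recursion holds
```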
Finally, premultiplying by $C_{i-1}^T$,

$$C_{i-1}^T g_n = C_{i-1}^T g_i + \sum_{k=i}^{n-1} \lambda_k C_{i-1}^T A C_k \qquad \ldots(5.11)$$
T
It can be demonstrated intuitively that Cg is zero by the following reasoning. In the i – 1 step, C i− 1 is
i−
1 i
followed to a minimum for U, with takes us to x and therefore the gradient g is normal to C i− 1 . Consequently,
i
1
T
Cg = 0
i− 1 i
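Both this orthogonality and the way conjugacy eliminates the remaining terms of Eq.(5.11) can be observed numerically. The sketch below (Python/NumPy, with an illustrative positive-definite $A$ and vector $B$ that are not from the text) performs two exact line minimizations, the second along a direction made $A$-conjugate to the first:

```python
import numpy as np

# Illustrative positive-definite quadratic (A, B chosen for the sketch).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([-1.0, 2.0])

def g(x):
    return A @ x + B  # gradient of the quadratic, Eq.(5.9)

def line_minimize(x, C):
    """Exact line search: minimize U(x + lam*C) over lam (closed form
    for a quadratic: lam = -C'g(x) / C'AC)."""
    lam = -(C @ g(x)) / (C @ A @ C)
    return x + lam * C

x0 = np.array([2.0, -1.0])
C0 = np.array([1.0, 0.0])        # first search direction (illustrative)
x1 = line_minimize(x0, C0)
print(abs(C0 @ g(x1)) < 1e-9)    # True: C_{i-1}' g_i = 0 after the line search

# Build a second direction A-conjugate to C0 (Gram-Schmidt in the A inner product).
v = np.array([0.0, 1.0])
C1 = v - ((v @ A @ C0) / (C0 @ A @ C0)) * C0   # now C1' A C0 = 0
x2 = line_minimize(x1, C1)
print(abs(C0 @ g(x2)) < 1e-9)    # True: Eq.(5.11) then gives C_{i-1}' g_n = 0
```

Since the conjugate term $C_{i-1}^T A C_1$ vanishes, the orthogonality of $g$ to $C_0$ survives the later step, which is the heart of the convergence argument.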
This can be demonstrated more rigorously as follows. Consider $\lambda_{i-1}$ as a variable that is being adjusted to minimize $U$ and bring the search to $x_i$. Consequently,

$$\frac{dU}{d\lambda_{i-1}} = \frac{\partial U}{\partial x_1}\frac{dx_1}{d\lambda_{i-1}} + \frac{\partial U}{\partial x_2}\frac{dx_2}{d\lambda_{i-1}} + \ldots + \frac{\partial U}{\partial x_n}\frac{dx_n}{d\lambda_{i-1}} \qquad \ldots(5.12)$$
Referring to Fig. 5.1, the search vector $C_{i-1}$ at the origin has components $C_{1,\,i-1}, \ldots, C_{n,\,i-1}$, and the search moves along vector $\lambda_{i-1} C_{i-1}$ from $x_{i-1}$ to $x_i$. In general, then, for any value of $\lambda_{i-1}$,