


We thus have the following PDE problem,
$$
\frac{\partial \phi}{\partial \tau} = \frac{C_U}{2}\left[u(\tau,x) - u_{\text{set}}\right]^2 + \left[x - x_{\text{set}}\right]^2 + \left[-(x-1) + u(\tau,x)\right]\frac{\partial \phi}{\partial x}
\qquad (5.167)
$$
with the initial condition $\phi(0, x) = C_H\left[x(t_H) - x_{\text{set}}\right]^2$.

To solve this problem numerically, we use the method of finite differences, explained in further detail in Chapter 6. We restrict the x-domain to $x_{\text{lo}} \le x \le x_{\text{hi}}$, where the limits are chosen so that the domain contains any conceivable x value that could be encountered in practice. Then, we place a grid of N uniformly spaced points in this domain,
$$
x_k = x_{\text{lo}} + (k-1)(\Delta x) \qquad \Delta x = \frac{x_{\text{hi}} - x_{\text{lo}}}{N-1}
\qquad (5.168)
$$
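As a concrete illustration (the variable names and the domain limits below are assumptions for this sketch, not values taken from the text's code), this grid can be built in a few lines of MATLAB:

    % Hypothetical grid set-up following (5.168); x_lo, x_hi, and N are example values.
    x_lo = -2;  x_hi = 4;            % domain limits bracketing all states of interest
    N = 201;                         % number of grid points
    dx = (x_hi - x_lo)/(N - 1);      % uniform spacing
    x = x_lo + (0:N-1)'*dx;          % column vector of grid points x_k, k = 1,...,N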
At each $x_k$, we use a finite difference approximation to estimate $\partial\phi/\partial x$,
$$
\left.\frac{\partial \phi}{\partial x}\right|_{x_k} = \frac{1}{\Delta x}\left[A_{\text{lo}}\,\phi_{k-1} + A_{\text{mid}}\,\phi_k + A_{\text{hi}}\,\phi_{k+1}\right]
\qquad A_{\text{lo}} + A_{\text{mid}} + A_{\text{hi}} = 0
\qquad (5.169)
$$
where $\phi_k(\tau) \equiv \phi(\tau, x_k)$. For reasons that will become clear in our discussion of convection in Chapter 6, we choose here the set of one-sided differences,
$$
\begin{aligned}
&\text{if } f(x,u) \le 0: && A_{\text{lo}} = -1 \quad A_{\text{mid}} = +1 \quad A_{\text{hi}} = 0 \\
&\text{else}: && A_{\text{lo}} = 0 \quad A_{\text{mid}} = -1 \quad A_{\text{hi}} = +1
\end{aligned}
\qquad (5.170)
$$

Note that if $x_{\text{hi}}$ is large enough, $f(x, u) = -(x-1) + u < 0$ there, so the one-sided difference points “into” the grid and (5.169) can be applied without difficulty. Similar reasoning holds at the lower boundary.
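As a sketch of how this upwinded derivative might be evaluated over the whole grid (the helper function and its name are illustrative assumptions, not taken from control_1D_HJB.m):

    function dphi_dx = upwind_dphi_dx(phi, u, x, dx)
    % Upwinded estimate of dphi/dx at every grid point, following (5.169)-(5.170).
    % phi, u, x are column vectors of length N; dx is the uniform grid spacing.
    N = length(x);
    dphi_dx = zeros(N,1);
    f = -(x - 1) + u;                        % f(x,u) determines the upwind direction
    for k = 1:N
        if (k > 1 && f(k) <= 0) || k == N    % backward difference: A_lo = -1, A_mid = +1
            dphi_dx(k) = (phi(k) - phi(k-1))/dx;
        else                                 % forward difference: A_mid = -1, A_hi = +1
            dphi_dx(k) = (phi(k+1) - phi(k))/dx;
        end
    end
    end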
                    We now solve the HJB equation numerically by integrating the set of ODEs


$$
\begin{aligned}
\frac{d\phi_k}{d\tau} &= \frac{C_U}{2}\left[u_k(\tau) - u_{\text{set}}\right]^2 + \left[x_k - x_{\text{set}}\right]^2 + \left.\frac{\partial\phi}{\partial x}\right|_{x_k} f\bigl(x_k, u_k(\tau)\bigr) \\
f\bigl(x_k, u_k(\tau)\bigr) &= -(x_k - 1) + u_k(\tau) \qquad \phi_k(0) = C_H\left[x_k - x_{\text{set}}\right]^2 \\
u_k(\tau) &= u_{\text{set}} - C_U^{-1}\left.\frac{\partial\phi}{\partial x}\right|_{x_k}
\end{aligned}
\qquad (5.171)
$$
The feedback control law $u_{\text{con}}(x)$ is then
$$
u_{\text{con}}(x_k) = u_k(\tau = t_H)
\qquad (5.172)
$$
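The following sketch shows one way to carry out this calculation. It is a simplified, explicit-Euler analogue of what control_1D_HJB.m does, reusing the upwind_dphi_dx helper sketched after (5.170); the parameter values, including $x_{\text{set}}$ and $u_{\text{set}}$, are illustrative assumptions.

    % Explicit-Euler march of (5.171) in backward time tau, then extraction of (5.172).
    C_U = 1;  C_H = 10;  t_H = 10;            % weights and horizon used in the text's example
    x_set = 0.5;  u_set = x_set - 1;          % assumed set point; u_set makes f(x_set,u_set) = 0
    x_lo = -2;  x_hi = 4;  N = 201;           % grid repeated from the set-up above
    dx = (x_hi - x_lo)/(N - 1);  x = x_lo + (0:N-1)'*dx;
    phi = C_H*(x - x_set).^2;                 % initial condition phi_k(0)
    u = u_set*ones(N,1);                      % provisional control, used to pick upwind directions
    dtau = 1e-4;                              % small step for stability of the explicit march
    for m = 1:round(t_H/dtau)
        dphi_dx = upwind_dphi_dx(phi, u, x, dx);
        u = u_set - (1/C_U)*dphi_dx;          % minimizing control at each grid point
        f = -(x - 1) + u;
        phi = phi + dtau*((C_U/2)*(u - u_set).^2 + (x - x_set).^2 + dphi_dx.*f);
    end
    u_con = u_set - (1/C_U)*upwind_dphi_dx(phi, u, x, dx);   % u_con(x_k) = u_k(tau = t_H)

Plotting u_con against x then gives the feedback control law of the type shown in Figure 5.15.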


The optimal control at any point may be computed from (5.166). control_1D_HJB.m solves this HJB equation for specified $t_H$, $C_U$, $C_H$. For $t_H = 10$, $C_U = 1$, and $C_H = 10$, the resulting feedback control law is shown in Figure 5.15. For this simple linear system and quadratic cost functional, the optimal control law is a simple proportional controller with a gain of $K = -0.732$. The advantage of this approach is that it can be extended (though at perhaps great numerical cost) to nonlinear systems and to systems involving input constraints.
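As a consistency check on this value of the gain (an observation not made in the text, but following standard linear-quadratic regulator reasoning), write the problem in deviation variables $x' = x - x_{\text{set}}$, $u' = u - u_{\text{set}}$ with $u_{\text{set}} = x_{\text{set}} - 1$, so that $dx'/dt = -x' + u'$ and the running cost is $\tfrac{C_U}{2}u'^2 + x'^2$. For $C_U = 1$ and a long horizon, the scalar algebraic Riccati equation with $a = -1$, $b = 1$, $q = 1$, $r = C_U/2 = \tfrac{1}{2}$ gives
$$
2ap - \frac{b^2 p^2}{r} + q = 0 \;\Rightarrow\; -2p - 2p^2 + 1 = 0 \;\Rightarrow\; p = \frac{\sqrt{3}-1}{2}
$$
$$
u' = -\frac{bp}{r}\,x' = -\left(\sqrt{3}-1\right)x' \approx -0.732\,x'
$$
in agreement with the gain obtained from the numerical HJB solution.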