


of the last section is sufficient. In a closed-loop control problem, we only implement the
initial input $u^*(t_0) = u(t_H - t_0, x^{[0]})$ for some period $\Delta t$, after which we measure the resulting
new state $x_{\rm new}$. This may not be equal to $x^*(\Delta t)$ due to random error or inadequacy
of the model. We use feedback to compensate for this, computing a new "best current
input" by minimizing the functional again, shifting $t_0 \to t_0 + \Delta t$, $t_H \to t_H + \Delta t$, and
$x^{[0]} \to x_{\rm new}$. Note, however, that if neither the system's time derivative function nor the
cost functional integrand depends upon the absolute value of time, then we have exactly
the same dynamic programming problem that we have just solved from (5.162). In this case,
therefore, we do not need to redo the entire calculation, but simply implement
$u(t_H - t_0, x_{\rm new})$ as the new control input. Thus, we obtain from $u(\tau = t_H - t_0, x)$ the optimal feedback
control law for the system, and this is the primary advantage of the dynamic programming
approach.
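
A minimal sketch of this receding-horizon logic is given below, assuming the dynamic programming problem has already been solved so that a stored feedback law $u(\tau, x)$ is available. Here the function `u_law` is a hypothetical stand-in (a simple proportional law) for the true HJB solution, and all numerical values are illustrative choices, not taken from the text.

```python
import numpy as np

# Hypothetical stand-in for the precomputed feedback law u(tau, x);
# in practice this would interpolate the stored HJB solution.
def u_law(tau, x, x_set=2.0, u_set=1.0, K=0.5):
    return u_set - K * (x - x_set)

def closed_loop(x0, t0=0.0, tH=5.0, dt=0.1, n_steps=50, seed=0):
    """Receding-horizon feedback: because neither f nor the cost integrand
    depends on absolute time, the horizon-to-go tau = tH - t0 is unchanged
    by the shift t0 -> t0 + dt, tH -> tH + dt, so after each measurement
    the stored law is simply re-evaluated at the new state."""
    rng = np.random.default_rng(seed)
    x = x0
    tau = tH - t0                 # constant horizon-to-go at every shift
    history = [x]
    for _ in range(n_steps):
        u = u_law(tau, x)         # new "best current input"
        # Hold u for the period dt; the plant may deviate from the model,
        # mimicked here by a small random disturbance.
        x = x + dt * (-(x - 1.0) + u) + 0.02 * rng.standard_normal()
        history.append(x)         # measured new state x_new
    return np.array(history)

print(closed_loop(x0=0.0)[-5:])   # state settles near x_set = 2
```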
                     For further description of this subject, and its implementation in process control, con-
                   sult Sontag (1990). Of course, to apply this method, we must be able to solve (5.162).
                   The numerical solution of such partial differential equations is the subject of the next
                   chapter.



                   Example. A simple 1-D optimal control problem

We consider again the problem in which we wish to control $x(t)$ at a set point $x_{\rm set} = 2$. The
system is governed by the ODE

$$\dot{x}(t) = f(x, u) = -(x - 1) + u \qquad (5.163)$$


                   We wish to determine a control law for this system by solving the HJB equation. The cost
                   functional that we wish to minimize is

$$F[u(t); x^{[0]}] = \int_0^{t_H} \left\{ \frac{C_U}{2}\,[u(s) - u_{\rm set}]^2 + \frac{1}{2}\,[x(s) - x_{\rm set}]^2 \right\} ds + C_H\,[x(t_H) - x_{\rm set}]^2 \qquad (5.164)$$
Here $u_{\rm set} = 1$ is the steady input value that maintains the system at the set point. Thus, this functional
penalizes both excessive departure from the set point and very large control inputs.
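
To make the functional concrete, the short sketch below evaluates (5.164) along a trajectory generated under a given control law, integrating the state equation (5.163) by explicit Euler and the running cost by the trapezoidal rule. The weights $C_U$, $C_H$, horizon, and initial state are illustrative assumptions.

```python
import numpy as np

x_set, u_set = 2.0, 1.0
C_U, C_H = 1.0, 1.0              # illustrative weights

def cost(u_of_tx, x0=0.0, t_H=5.0, n=2000):
    """Evaluate F[u(t); x0] of (5.164) along an Euler-integrated trajectory."""
    t = np.linspace(0.0, t_H, n + 1)
    dt = t[1] - t[0]
    x = np.empty_like(t)
    u = np.empty_like(t)
    x[0] = x0
    for k in range(n):
        u[k] = u_of_tx(t[k], x[k])
        x[k + 1] = x[k] + dt * (-(x[k] - 1.0) + u[k])  # Euler step of (5.163)
    u[-1] = u_of_tx(t[-1], x[-1])
    run = 0.5 * C_U * (u - u_set) ** 2 + 0.5 * (x - x_set) ** 2
    # trapezoidal rule for the running cost, plus the terminal penalty
    return dt * (run.sum() - 0.5 * (run[0] + run[-1])) \
        + C_H * (x[-1] - x_set) ** 2

# Compare a constant input with a simple proportional law; the latter
# trades extra control effort for a faster approach to the set point.
print(cost(lambda t, x: u_set))
print(cost(lambda t, x: u_set - 0.5 * (x - x_set)))
```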
The HJB equation for this system is

$$\frac{\partial \varphi}{\partial \tau} = \min_{u(\tau, x)} \left\{ \frac{C_U}{2}\,[u - u_{\rm set}]^2 + \frac{1}{2}\,[x - x_{\rm set}]^2 + \frac{\partial \varphi}{\partial x}\,[-(x - 1) + u] \right\} \qquad (5.165)$$
Note that if $C_U = 0$, the stationarity condition $\partial/\partial u\{\cdot\} = 0$ reduces to $\partial \varphi/\partial x = 0$, which in
general cannot be satisfied; the quantity to be minimized is then linear in $u$, so if $\partial \varphi/\partial x > 0$, $u$
should be decreased until it reaches its lower bound, and increased to its upper
bound if $\partial \varphi/\partial x < 0$. Thus, $C_U > 0$ is necessary for an unconstrained minimum to exist. If
so, setting $C_U(u - u_{\rm set}) + \partial \varphi/\partial x = 0$ yields the optimal control input

$$u(\tau, x) = u_{\rm set} - C_U^{-1}\,\frac{\partial \varphi}{\partial x} \qquad (5.166)$$
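
As a concrete illustration, the sketch below substitutes (5.166) back into (5.165) and marches the resulting PDE forward in $\tau$ from the terminal-cost condition $\varphi(0, x) = C_H(x - x_{\rm set})^2$, using explicit Euler with central differences. The grid, time step, weights, and the small artificial viscosity added to stabilize the central-difference march are all illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

x_set, u_set = 2.0, 1.0
C_U, C_H = 1.0, 1.0                 # illustrative weights
x = np.linspace(-2.0, 6.0, 401)     # state grid
dx = x[1] - x[0]
dtau, tau_H = 2e-4, 2.0             # small explicit step for stability
eps = 0.05                          # small artificial viscosity (stabilizer)

# tau = 0 corresponds to the horizon, so phi starts from the terminal cost.
phi = C_H * (x - x_set) ** 2

for _ in range(int(round(tau_H / dtau))):
    phi_x = np.gradient(phi, dx)    # central differences for d(phi)/dx
    # Right-hand side of (5.165) with the minimizing u of (5.166)
    # substituted back in, which eliminates the min over u analytically.
    rhs = (0.5 * (x - x_set) ** 2
           + phi_x * (-(x - 1.0) + u_set)
           - phi_x ** 2 / (2.0 * C_U))
    phi_xx = np.gradient(phi_x, dx)
    phi = phi + dtau * (rhs + eps * phi_xx)   # explicit Euler march in tau

# The optimal feedback law (5.166) at horizon-to-go tau_H:
u_fb = u_set - np.gradient(phi, dx) / C_U
print(u_fb[np.searchsorted(x, 0.0)])          # input applied when x = 0
```

Since $x_{\rm set} = 2$ and $u_{\rm set} = 1$ here, the model drift is $-(x - 1) + u_{\rm set} = -(x - x_{\rm set})$, so $\varphi$ remains quadratic in $x - x_{\rm set}$ and the computed law is affine in the state, as expected for a linear-quadratic problem.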