Page 451 - Mechanical Engineers' Handbook (Volume 2)
Basic Control Systems Design
operating range, and some adaptive controllers use several models of the process, each of which is accurate within a certain operating range. The adaptive controller switches between gain settings that are appropriate for each operating range. Adaptive controllers are difficult to design and are prone to instability. Most existing adaptive controllers change only the gain values, not the form of the control algorithm. Many problems remain to be solved before adaptive control theory becomes widely implemented.
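The gain-switching idea described above can be sketched in a few lines. The following is a minimal illustration, not from the handbook: the operating ranges, gain values, and function names are all assumptions chosen only to show how a scheduled gain replaces a single fixed gain in a proportional control law.

```python
# Sketch of gain scheduling, a simple form of adaptive control: the
# controller switches between proportional gains tuned for different
# operating ranges. All ranges and gain values here are illustrative.

def select_gain(operating_point: float) -> float:
    """Return the proportional gain tuned for the current operating range."""
    if operating_point < 10.0:    # low operating range
        return 2.0
    elif operating_point < 50.0:  # mid operating range
        return 1.2
    else:                         # high operating range
        return 0.6

def control_output(setpoint: float, measurement: float) -> float:
    """Proportional control with the gain scheduled on the measurement."""
    kp = select_gain(measurement)
    return kp * (setpoint - measurement)

# The same error produces different corrections in different ranges:
print(control_output(20.0, 5.0))    # low-range gain applied
print(control_output(65.0, 60.0))   # high-range gain applied
```

Note that the controller form (proportional action) never changes; only the gain value does, which matches the limitation noted above for most existing adaptive controllers.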
           14.4 Optimal Control
                          A rocket might be required to reach orbit using minimum fuel or it might need to reach a
                          given intercept point in minimum time. These are examples of potential applications of
                          optimal-control theory. Optimal-control problems often consist of two subproblems. For the
                          rocket example, these subproblems are (1) the determination of the minimum-fuel (or
                          minimum-time) trajectory and the open-loop control outputs (e.g., rocket thrust as a function
                          of time) required to achieve the trajectory and (2) the design of a feedback controller to
                          keep the system near the optimal trajectory.
Many optimal-control problems are nonlinear, and thus no general theory is available. Two classes of problems that have achieved some practical successes are the bang-bang control problem, in which the control variable switches between two fixed values (e.g., on and off or open and closed), and the linear-quadratic regulator (LQR), discussed in Section 7, which has proven useful for high-order systems.1,6
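A bang-bang controller of the kind just described can be sketched very compactly. The thermostat framing, deadband value, and function names below are illustrative assumptions; the deadband is a common practical addition to prevent rapid switching near the setpoint, not a requirement of the bang-bang formulation itself.

```python
# Minimal sketch of a bang-bang (on-off) controller: the control
# variable takes only two fixed values. A small deadband around the
# setpoint holds the last state to avoid chattering. Values are
# illustrative (a thermostat-style heating actuator).

def bang_bang(setpoint: float, measurement: float,
              state: bool, deadband: float = 0.5) -> bool:
    """Return the new on/off state for the actuator."""
    if measurement < setpoint - deadband:
        return True        # full on
    if measurement > setpoint + deadband:
        return False       # full off
    return state           # inside the deadband: hold the last state

# Drive the controller through a few measurements:
state = False
for temp in [18.0, 19.8, 20.6, 20.2]:
    state = bang_bang(20.0, temp, state)
    print(temp, state)
```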
Closely related to optimal-control theory are methods based on stochastic process theory, including stochastic control theory, estimators, Kalman filters, and observers.1,6,17
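To make the Kalman filter mentioned above concrete, here is a minimal scalar sketch: estimating a roughly constant quantity from noisy measurements. The random-walk model, the noise variances, and the sample data are all illustrative assumptions; a full treatment of the vector-matrix form is given in the references cited above.

```python
# Minimal scalar Kalman filter sketch. The state is modeled as a
# near-constant value (random walk); q is the process-noise variance,
# r the measurement-noise variance. All numbers are illustrative.

def kalman_step(x_est: float, p_est: float, z: float,
                r: float = 1.0, q: float = 0.01):
    """One predict/update cycle for a scalar random-walk model."""
    # Predict: the state model is constant, so only the estimate's
    # uncertainty grows by the process-noise variance.
    p_pred = p_est + q
    # Update: blend the prediction with measurement z, weighted by the
    # Kalman gain k (large when the prediction is uncertain relative
    # to the measurement noise).
    k = p_pred / (p_pred + r)
    x_new = x_est + k * (z - x_est)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# The estimate converges toward the noisy measurements while the
# estimate variance p shrinks:
x, p = 0.0, 1.0
for z in [5.1, 4.9, 5.2, 5.0]:
    x, p = kalman_step(x, p, z)
    print(x, p)
```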
           REFERENCES
                          1. W. J. Palm III, Modeling, Analysis, and Control of Dynamic Systems, 2nd ed., Wiley, New York,
                             2000.
                          2. W. J. Palm III, Control Systems Engineering, Wiley, New York, 1986.
                          3. D. E. Seborg, T. F. Edgar, and D. A. Mellichamp, Process Dynamics and Control, Wiley, New York,
                             1989.
                          4. W. J. Palm III, System Dynamics, McGraw-Hill, New York, 2005.
                          5. D. McCloy and H. Martin, The Control of Fluid Power, 2nd ed., Halsted, London, 1980.
                          6. A. E. Bryson and Y. C. Ho, Applied Optimal Control, Blaisdell, Waltham, MA, 1969.
                          7. F. Lewis, Optimal Control, Wiley, New York, 1986.
                          8. K. J. Astrom and B. Wittenmark, Computer Controlled Systems, Prentice-Hall, Englewood Cliffs,
                             NJ, 1984.
9. Y. Dote, Servo Motor and Motion Control Using Digital Signal Processors, Prentice-Hall, Englewood Cliffs, NJ, 1990.
                          10. W. J. Palm III, Introduction to MATLAB 7 for Engineers, McGraw-Hill, New York, 2005.
                          11. G. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic, Prentice-Hall, Englewood Cliffs, NJ, 1995.
                          12. B. Kosko, Neural Networks and Fuzzy Systems, Prentice-Hall, Englewood Cliffs, NJ, 1992.
                          13. J. Craig, Introduction to Robotics, 3rd ed., Addison-Wesley, Reading, MA, 2005.
                          14. M. W. Spong and M. Vidyasagar, Robot Dynamics and Control, Wiley, New York, 1989.
                          15. J. Slotine and W. Li, Applied Nonlinear Control, Prentice-Hall, Englewood Cliffs, NJ, 1991.
                          16. K. J. Astrom, Adaptive Control, Addison-Wesley, Reading, MA, 1989.
                          17. R. Stengel, Stochastic Optimal Control, Wiley, New York, 1986.
