4 Extensions of the Linear Quadratic Regulator Problem
$$\dot{x}_r(t) = A_r(t)\,x_r(t) \tag{50}$$
$$y_r(t) = C_r(t)\,x_r(t) \tag{51}$$
and a specified initial condition $x_r(t_0)$. The index of performance to be optimized for the finite-time problem is
$$J = \int_{t_0}^{t_1} \Bigl\{ x^T\bigl[I - C^T(CC^T)^{-1}C\bigr]\,Q_1\,\bigl[I - C^T(CC^T)^{-1}C\bigr]\,x + (y - y_r)^T Q_2 (y - y_r) + u^T R_2\, u \Bigr\}\,dt \tag{52}$$
where the time dependencies of the vectors and matrices have been omitted for convenience.
The matrices $Q_1$ and $Q_2$ are positive-semidefinite matrices and $R_2$ is a positive-definite matrix. The weighting on the tracking error $y - y_r$ helps reduce it, whereas the weighting on the state $x$ achieves a smooth response. The optimal-control law involves linear feedback of the system state as well as feedforward of the state of the trajectory model:
$$u(t) = -K(t)\,x(t) + K_r(t)\,x_r(t) \tag{53}$$
where the gain matrices $K(t)$ and $K_r$ are linearly related to solutions of the matrix Riccati differential equations. Conditions for the time-invariant version of this servo problem to reduce to the standard infinite-time LQR problem have also been noted.10
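As an illustration of how such gains might be computed in the time-invariant, infinite-horizon case, the following sketch uses SciPy's Riccati and Sylvester solvers. It is a minimal sketch under stated assumptions, not the handbook's algorithm: the plant matrices A, B, C, the reference-model matrices Ar, Cr, and the weights Q1, Q2, R2 are placeholders, the usual stabilizability and detectability conditions are assumed to hold, and the reference model is assumed asymptotically stable so that the cross-coupling equation for the feedforward gain is well posed.

```python
# Sketch: steady-state gains K and Kr for a servo law of the form of Eq. (53),
# u = -K x + Kr xr, in the time-invariant, infinite-horizon case (assumption).
import numpy as np
from scipy.linalg import solve_continuous_are, solve_sylvester

def lq_servo_gains(A, B, C, Ar, Cr, Q1, Q2, R2):
    n = A.shape[0]
    # Projection onto the null space of C: I - C^T (C C^T)^-1 C, as in Eq. (52)
    P_null = np.eye(n) - C.T @ np.linalg.solve(C @ C.T, C)
    # Effective state weighting from Eq. (52): projected Q1 plus C^T Q2 C
    Qx = P_null @ Q1 @ P_null + C.T @ Q2 @ C
    # Algebraic Riccati equation for the feedback part
    P = solve_continuous_are(A, B, Qx, R2)
    K = np.linalg.solve(R2, B.T @ P)              # feedback gain
    # Cross-coupling (Sylvester) equation for the feedforward part:
    # (A - B K)^T P12 + P12 Ar = C^T Q2 Cr,   Kr = -R2^-1 B^T P12
    P12 = solve_sylvester((A - B @ K).T, Ar, C.T @ Q2 @ Cr)
    Kr = -np.linalg.solve(R2, B.T @ P12)
    return K, Kr
```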
Trankle and Bryson22 have considered the time-invariant servomechanism problem for the case where $y$, $y_r$, and $u$ have the same dimension and have proposed the following index of performance:
$$J = \int_0^{\infty} \bigl[(y - y_r)^T Q_y (y - y_r) + (u - U_1 x_r)^T R_u (u - U_1 x_r)\bigr]\,dt \tag{54}$$
where $Q_y$ is positive semidefinite and $R_u$ is positive definite. A modification of the index of performance to add integral error feedback can also be devised. A matrix $U_1$ and another matrix $X$, which occur later in the development, are defined by
$$CX = C_r \tag{55}$$
$$AX + BU_1 = XA_r \tag{56}$$
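Equations (55) and (56) are linear in the unknowns $X$ and $U_1$, so one way to obtain them numerically is to vectorize both equations and solve the resulting linear system, which is square under the equal-dimension assumption above. The sketch below is only an illustration; the names A, B, C, Ar, Cr and the helper function are placeholders, not part of the original development.

```python
# Sketch: solve the matching conditions (55)-(56), C X = Cr and
# A X + B U1 = X Ar, for X and U1 by vectorization (Kronecker products).
import numpy as np

def matching_matrices(A, B, C, Ar, Cr):
    n, m = B.shape          # state and input dimensions
    nr = Ar.shape[0]        # reference-model state dimension
    p = C.shape[0]          # output dimension (p == m assumed in the text)
    I_nr, I_n = np.eye(nr), np.eye(n)
    # Unknowns stacked as [vec(X); vec(U1)] (column-major vec)
    M = np.block([
        [np.kron(I_nr, C),                      np.zeros((p * nr, m * nr))],
        [np.kron(I_nr, A) - np.kron(Ar.T, I_n), np.kron(I_nr, B)],
    ])
    rhs = np.concatenate([Cr.flatten(order="F"), np.zeros(n * nr)])
    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    X = sol[: n * nr].reshape((n, nr), order="F")
    U1 = sol[n * nr:].reshape((m, nr), order="F")
    return X, U1
```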
The optimal-control law yields an asymptotically stable closed-loop system if the pair $(A, B)$ is completely state controllable and the pair $(A, C)$ is completely observable. The control law is given by
$$u = (U_1 + KX)\,x_r(t) - K\,x(t) \tag{57}$$
where K is related, in the usual manner, to the solution of an algebraic Riccati equation.
The first term on the right-hand side represents feedforward control action, and the second
term represents feedback control action (Fig. 7). The feedforward action yields faster and
more accurate tracking of the desired trajectory than other control schemes that depend more
on integral error feedback. Finally, model and system state feedback is required by both Eqs.
(53) and (57). If these states are not available for measurement, state estimators such as
those described in Section 5 would be needed.
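Under the same placeholder assumptions, Eq. (57) can be assembled by noting that, once Eqs. (55) and (56) hold, the index of Eq. (54) reduces in the error coordinates $e = x - Xx_r$, $v = u - U_1x_r$ to a standard LQR problem with state weight $C^TQ_yC$ and input weight $R_u$. A minimal sketch, reusing $X$ and $U_1$ as computed above:

```python
# Sketch: servo law of the form of Eq. (57), u = (U1 + K X) xr - K x.
# X and U1 are assumed to satisfy Eqs. (55)-(56); all names are placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

def servo_law(A, B, C, Qy, Ru, X, U1):
    # In error coordinates e = x - X xr, v = u - U1 xr, Eq. (54) becomes a
    # standard LQR problem: minimize the integral of e^T C^T Qy C e + v^T Ru v.
    P = solve_continuous_are(A, B, C.T @ Qy @ C, Ru)
    K = np.linalg.solve(Ru, B.T @ P)       # feedback gain from the algebraic Riccati eq.
    Kff = U1 + K @ X                       # feedforward gain on the model state
    return lambda x, xr: Kff @ xr - K @ x  # control of Eq. (57)
```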
A variation on the servomechanism problem already described is that of tracking, where the desired trajectory $y_r$ is known a priori rather than being defined by a model as in Eqs. (50) and (51). Anderson and Moore10 have determined the optimal-control law for the index of performance of Eq. (52):