Page 755 - Mechanical Engineers' Handbook (Volume 2)
State-Space Methods for Dynamic Systems Analysis
dV(x₁, x₂)/dt = 2x₁ẋ₁ + 2x₂ẋ₂ = −2(a₀x₂² + b₀x₂⁴)   (62)
after using the state equations to substitute for ẋ₁ and ẋ₂. If a₀ and b₀ satisfy the inequalities
stated, dV/dt is negative semidefinite in an arbitrarily large region about the origin. The
origin is thus a stable equilibrium point. In fact, using a corollary to the main stability
theorem provided by Kalman and Bertram,⁸ it can be shown that the origin is a globally
asymptotically stable equilibrium point.
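This conclusion can be checked numerically. The state equations themselves precede this passage and are not reproduced here, so the sketch below assumes a plant of the form ẋ₁ = x₂, ẋ₂ = −x₁ − a₀ẋ − b₀ẋ³ (a nonlinearly damped oscillator chosen only because V = x₁² + x₂² then reproduces Eq. (62)); with a₀, b₀ > 0, the simulated V(t) decays toward zero:

```python
# Hedged sketch: the plant below (x1' = x2, x2' = -x1 - a0*x2 - b0*x2**3)
# is an ASSUMED example chosen so that V = x1^2 + x2^2 gives
# dV/dt = -2*(a0*x2**2 + b0*x2**4), matching Eq. (62).

def f(x, a0=1.0, b0=1.0):
    """Assumed state equations (nonlinearly damped oscillator)."""
    x1, x2 = x
    return (x2, -x1 - a0 * x2 - b0 * x2 ** 3)

def V(x):
    """Candidate Lyapunov function V = x1^2 + x2^2."""
    return x[0] ** 2 + x[1] ** 2

def dV_analytic(x, a0=1.0, b0=1.0):
    """Eq. (62): dV/dt = -2*(a0*x2^2 + b0*x2^4)."""
    return -2.0 * (a0 * x[1] ** 2 + b0 * x[1] ** 4)

# The chain rule 2*x1*x1' + 2*x2*x2' should reproduce Eq. (62) exactly.
x = (1.0, 0.5)
dx = f(x)
assert abs(2 * x[0] * dx[0] + 2 * x[1] * dx[1] - dV_analytic(x)) < 1e-12

# Forward-Euler simulation: V is nonincreasing and decays toward zero,
# consistent with global asymptotic stability of the origin.
x, dt = (1.0, 0.0), 1e-3
for _ in range(30000):  # simulate 30 s
    dx = f(x)
    x = (x[0] + dt * dx[0], x[1] + dt * dx[1])
print(V(x))  # near zero after the transient
```

Note that dV/dt in Eq. (62) vanishes wherever x₂ = 0 regardless of x₁, which is why it is only negative *semi*definite and the corollary of Kalman and Bertram is needed to conclude asymptotic stability.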
The limitations of Lyapunov's method are that the Lyapunov function for a system is not
unique and that there are no systematic procedures for finding a suitable Lyapunov function.
Since only sufficient conditions for stability are determined, some choices of Lyapunov
functions are better in that they provide more information about system stability than others.
Also, appropriate choice of the Lyapunov function can lead to an estimate of the system
speed of response. In practice, therefore, the second method of Lyapunov is used primarily
to analyze the stability of systems, such as high-order nonlinear systems, for which other
methods of stability analysis are not available.
6 CONTROLLABILITY AND OBSERVABILITY
The controllability of a linear system is a measure of the coupling between the inputs to the
system and the system state. The concept of state controllability was introduced by Kalman¹¹
in order to clarify conditions for the existence of solutions to specific control problems.
A linear, continuous-time system is said to be state controllable at time t₀ if there exists
a finite time t₁ > t₀ and a control function u(t), t₀ ≤ t ≤ t₁, that can drive the system state
from any initial value to any final value at t = t₁. If the system is controllable for all times
t₀, the system is completely state controllable. A linear, discrete-time system is said to be
state controllable and completely state controllable, respectively, if the sequence numbers k,
k₀, k₁ are substituted for the times t, t₀, t₁ in the two previously given definitions.
An additional form of controllability for continuous-time and discrete-time LTV systems is
that of uniformly complete state controllability. The mathematical definition of this form of
controllability may be found in Kalman.¹¹ This property implies that the control effort and
time interval required to drive the system state to the final value are relatively independent of
the initial time. For LTI systems, of course, complete state controllability is the same as
uniformly complete state controllability.
Though the control problems formulated above are open-loop control problems, the
property of controllability has very significant implications for closed-loop control problems.
Section 2 in Chapter 18 indicates that the closed-loop poles of a completely state-controllable
time-invariant system can be specified and placed arbitrarily in the complex s-plane (or
z-plane for discrete-time systems) by proportional state-variable feedback. Moreover, satisfaction
of the controllability conditions to be defined in this section for time-invariant systems
ensures that the optimal-control law for a quadratic performance index is a proportional
state-variable feedback law and yields an asymptotically stable closed-loop system.¹⁰
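The pole-placement claim can be illustrated with a minimal sketch. The double-integrator plant and the pole locations below are assumptions for illustration, not the handbook's example; because the assumed A is in companion (controllable canonical) form, the feedback gains follow directly from matching characteristic-polynomial coefficients:

```python
# Hedged sketch: pole placement by state feedback u = -K x for an ASSUMED
# double-integrator plant in companion (controllable canonical) form:
#   A = [[0, 1], [0, 0]],  B = [[0], [1]]
# Closed loop: A - B*K = [[0, 1], [-k1, -k2]], whose characteristic
# polynomial is s^2 + k2*s + k1, so matching s^2 + a1*s + a0 gives K.

def place_companion(a1, a0):
    """Gains K = [k1, k2] placing the closed-loop characteristic
    polynomial at s^2 + a1*s + a0 for the companion-form plant."""
    return [a0, a1]

# Desired poles -1 +/- j, i.e., s^2 + 2s + 2.
K = place_companion(2.0, 2.0)
A_cl = [[0.0, 1.0], [-K[0], -K[1]]]

# Verify via trace (= sum of poles = -2) and determinant (= product = 2).
trace = A_cl[0][0] + A_cl[1][1]
det = A_cl[0][0] * A_cl[1][1] - A_cl[0][1] * A_cl[1][0]
print(trace, det)  # -2.0 2.0
```

Complete state controllability is what guarantees such gains exist for any desired pole set; for plants not already in companion form, a similarity transformation to that form is performed first.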
Direct application of the definition of state controllability to LTI systems yields
controllability conditions involving the transition matrices. Simple algebraic conditions are
usually available for such systems and are used more often in practice to evaluate controllability.
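One such algebraic condition for an LTI system is that the controllability matrix [B  AB  ...  Aⁿ⁻¹B] have full rank n. A minimal sketch for two-state, single-input systems follows; both example pairs (A, b) are assumed for illustration:

```python
# Hedged sketch: rank test on the 2x2 controllability matrix [b, A*b]
# for a two-state, single-input LTI system (example matrices are ASSUMED).

def controllable_2x2(A, b):
    """True if [b, A*b] is nonsingular, i.e., the pair (A, b) is
    completely state controllable (n = 2, single input)."""
    Ab = [A[0][0] * b[0] + A[0][1] * b[1],
          A[1][0] * b[0] + A[1][1] * b[1]]
    det = b[0] * Ab[1] - b[1] * Ab[0]
    return det != 0.0

# Double integrator: the input reaches both states through the chain.
print(controllable_2x2([[0.0, 1.0], [0.0, 0.0]], [0.0, 1.0]))   # True

# Diagonal (Jordan) form with a zero row in b: the second mode is
# decoupled from the input, so the system is not completely controllable.
print(controllable_2x2([[-1.0, 0.0], [0.0, -2.0]], [1.0, 0.0]))  # False
```

The second example shows the diagonal-form condition in action: a zero row in the transformed input matrix corresponds to a mode the input cannot influence.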
The controllability condition for LTI systems with distinct eigenvalues may be stated
very simply if the state equations are transformed to the diagonal Jordan canonical form.
Such systems are completely controllable if there are no zero rows in the transformed B

