Page 869 - The Mechatronics Handbook
28.4 Implementation Considerations
It is commonly held among designers of Kalman filters that implementing the formulas listed above
represents only a portion of the effort required to develop an accurate and robust Kalman filter application.
Once the dynamics, measurements, and partial derivatives have been coded, the task remains to tune the
noise magnitudes represented in the process noise covariance Q and the measurement noise covariance R.
While the measurement noise can be based on realistic hardware performance specifications, the process
noise is often used as a tuning parameter to ensure that the filter operates correctly. Tuning the filter
crosses over into the area of design and is nearly an art form, with approaches so myriad that outlining
them is beyond the scope of this work. However, a Kalman filter checklist is provided for the newcomer
to the field, to shorten the implementation and tuning learning curve:
• Because the linear Kalman filter does not change the reference state in the presence of measurement
information, the reference state and partial derivatives for an LKF application may be computed
prior to operation. This makes the LKF more amenable to computationally restricted applications,
or to hypothesis testing in which differing process noise and measurement noise parameters are
evaluated in parallel [8].
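The precomputation described above can be sketched as follows: because the LKF covariance recursion does not depend on the measurement values, the gain sequence can be generated and stored before operation. The matrices F, H, Q, and R below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative constant-velocity model (dt = 1): these matrices are
# assumed for the sketch, not taken from the chapter.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # state transition
H = np.array([[1.0, 0.0]])          # position-only measurement
Q = np.diag([0.0, 0.01])            # process noise covariance
R = np.array([[0.25]])              # measurement noise covariance

# Precompute the gain schedule offline: the covariance recursion is
# independent of the measurement data, so K_k can be stored in advance.
P = np.eye(2)
gains = []
for _ in range(50):
    P = F @ P @ F.T + Q                     # propagate covariance
    W = H @ P @ H.T + R                     # innovations covariance
    K = P @ H.T @ np.linalg.inv(W)          # Kalman gain, stored for run time
    P = (np.eye(2) - K @ H) @ P             # measurement update
    gains.append(K)

# At run time each measurement z_k then needs only the stored gain:
#   x = F @ x;  x = x + gains[k] @ (z_k - H @ x)
```

Note that the gain sequence settles toward a steady-state value, which is why precomputed schedules remain compact even for long runs.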
• Process noise serves to keep the filter from becoming overconfident in its estimate (i.e., a covariance
with near zero diagonal values) and converging prematurely. Examining the propagation equations
for the Kalman filters presented previously, it can easily be seen how the addition of process noise
increases the magnitude of the state error covariance between measurements.
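The growth of the covariance between measurements can be seen in a single propagation step. The model below is an illustrative constant-velocity example; F, Q, and the initial P are assumptions, not values from the text.

```python
import numpy as np

# Illustrative values: a nearly "overconfident" covariance and a small
# process noise on the velocity state.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # constant-velocity transition, dt = 1
Q = np.diag([0.0, 0.01])            # process noise covariance
P = np.diag([1e-6, 1e-6])           # near-zero diagonal: overconfident

# Propagate between measurements: P <- F P F^T + Q.
P_next = F @ P @ F.T + Q

# Without Q the covariance would stay near zero; adding Q inflates it,
# keeping the filter receptive to new measurement information.
```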
• The innovations covariance should ideally converge to describe the variance in the filter measurement
residuals. Adaptive techniques have been implemented where the filter noise parameters are tuned
according to a metric linking residual statistics with the innovations covariance [5]. In an ideal filter,
the innovations covariance should approach the measurement noise covariance as the process noise
magnitude approaches zero.
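One simple consistency metric of the kind such adaptive schemes use can be sketched as follows: compare the sample variance of the residuals against the predicted innovations covariance W = H P H^T + R. All numerical values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar-measurement example (assumed values).
H = np.array([[1.0, 0.0]])
P = np.diag([0.5, 0.1])
R = 0.25
W = (H @ P @ H.T).item() + R        # predicted innovations variance

# Simulate residuals whose true variance matches the prediction.
residuals = rng.normal(0.0, np.sqrt(W), size=5000)
sample_var = residuals.var()

# Ratio of observed to predicted variance: a consistent filter keeps
# this near 1; adaptive schemes adjust Q (or R) when it drifts away.
ratio = sample_var / W
```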
• When multiple measurements are available at the same time, they may be processed as a series of
scalar observations as long as they are uncorrelated (i.e., R is a diagonal matrix). The effect of
processing scalar measurements is that the innovations covariance becomes a scalar, and a numerical
division rather than a matrix inversion is required to calculate the Kalman gain.
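Sequential scalar processing can be sketched as below: each row of H is applied as an independent scalar update, so the gain requires only a division. The function name and matrices are illustrative; the result matches a batch vector update when R is diagonal.

```python
import numpy as np

def sequential_update(x, P, z, H, R_diag):
    """Process each uncorrelated scalar measurement in turn.

    Because the innovations covariance W is a scalar, the Kalman gain
    needs only a numerical division, never a matrix inversion.
    """
    for i in range(len(z)):
        h = H[i:i + 1, :]                         # 1 x n measurement row
        W = (h @ P @ h.T).item() + R_diag[i]      # scalar innovations cov.
        K = (P @ h.T) / W                         # gain via division
        r = z[i] - (h @ x).item()                 # scalar residual
        x = x + (K * r).ravel()                   # state update
        P = P - K @ h @ P                         # covariance update
    return x, P

# Illustrative two-measurement example.
x = np.zeros(2)
P = np.eye(2)
z = np.array([1.0, 2.0])
H = np.eye(2)
R_diag = np.array([0.1, 0.1])
x_new, P_new = sequential_update(x, P, z, H, R_diag)
```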
• Measurement editing may be employed in a number of ways to prevent spurious data from causing
filter divergence. One of the most common is to reject a measurement when the ratio of the squared
measurement residual to the scalar innovations covariance,

$$\frac{r_k^2}{W_k} \qquad (28.47)$$

is above a user-defined threshold. The threshold value may be a constant, or it may vary in time
after long propagation periods to allow a smooth transition to a steady-state innovations
covariance.
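The residual ratio test of Eq. (28.47) reduces to a one-line check. The function name and threshold below are illustrative; a threshold of 9 corresponds to a roughly 3-sigma gate, since the ratio is the squared residual in units of its predicted variance.

```python
def accept_measurement(r_k, W_k, threshold=9.0):
    """Residual edit test of Eq. (28.47): accept when r_k^2 / W_k
    is at or below the user-defined threshold (illustrative default)."""
    return (r_k * r_k) / W_k <= threshold

# A residual within the gate is accepted; a gross outlier is rejected.
ok = accept_measurement(r_k=1.5, W_k=1.0)     # ratio = 2.25
bad = accept_measurement(r_k=10.0, W_k=1.0)   # ratio = 100.0
```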
• The covariance should always be positive definite. If filter divergence is a chronic problem in a
particular application, the numerical integrity of the covariance may provide insight into the
nature of the divergence. There are also several numerical implementations of the covariance
update equation that take advantage of its symmetry and positive definiteness to enhance its
stability while reducing computational load [9].
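One widely used numerically stable form of the covariance update is the Joseph form, which preserves symmetry and positive definiteness by construction (the source cites such implementations generically [9]; the specific form and matrices below are illustrative).

```python
import numpy as np

def joseph_update(P, K, H, R):
    """Joseph-form covariance update:
    P+ = (I - K H) P (I - K H)^T + K R K^T.
    Symmetric and positive definite for any gain K, unlike the bare
    P - K H P form, which can lose symmetry through round-off."""
    n = P.shape[0]
    A = np.eye(n) - K @ H
    return A @ P @ A.T + K @ R @ K.T

# Illustrative scalar-measurement example.
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
W = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(W)      # optimal Kalman gain
P_new = joseph_update(P, K, H, R)
```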
• Process noise modeling may be enhanced by adding time-correlated states, such as first-order
Gauss–Markov processes, to the filter to account for specific dynamic effects. The biases associated
with these processes can be included in the filter state for estimation.
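Appending a first-order Gauss–Markov bias state can be sketched as below. The continuous model db/dt = -b/tau + w discretizes to b_{k+1} = phi b_k + w_k with phi = exp(-dt/tau); the time constant, noise strength, and base model here are assumed values.

```python
import numpy as np

# Illustrative Gauss-Markov parameters (assumed, not from the text).
dt, tau, sigma = 1.0, 100.0, 0.5
phi = np.exp(-dt / tau)             # discrete-time decay factor
q = sigma**2 * (1.0 - phi**2)       # driving noise variance that yields
                                    # steady-state variance sigma^2

# Augment a 2-state (position, velocity) model with the bias state.
F = np.array([[1.0, dt],
              [0.0, 1.0]])
F_aug = np.block([[F, np.zeros((2, 1))],
                  [np.zeros((1, 2)), np.array([[phi]])]])
Q_aug = np.diag([0.0, 0.01, q])     # process noise now drives the bias too
```

The choice of q above keeps the bias variance bounded at sigma^2, which is what distinguishes a Gauss–Markov state from a random walk.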
As a final note, it should be stressed that the Kalman filter is not the state observer algorithm best
suited for all applications. Its strengths lie in its light computational requirements and real-time availability
©2002 CRC Press LLC

