Using the matrix inversion lemma (B.10) the expression for the error
covariance matrix can be given in an alternative form:

$$C_e = C_x - KSK^T \quad\text{with:}\quad S = HC_xH^T + C_n \qquad (3.45)$$

The matrix $S$ is called the innovation matrix because it is the covariance
matrix of the innovations $z - H\mu_x$. The factor $K(z - H\mu_x)$ is a correction
term for the prior expectation vector $\mu_x$. Equation (3.45) shows that the
prior covariance matrix $C_x$ is reduced by the covariance matrix $KSK^T$ of
the correction term.
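As an illustration, the update (3.45) can be evaluated in MATLAB as in the sketch below; the matrices H, Cx and Cn are assumptions chosen for demonstration, and the gain matrix is taken as $K = C_xH^TS^{-1}$ from the preceding derivation:

% Error covariance update of (3.45) for an assumed example
H  = [1 0; 0 1; 1 1];   % measurement matrix (3 measurements, 2 parameters)
Cx = [2 0.5; 0.5 1];    % prior covariance matrix of x
Cn = 0.1*eye(3);        % covariance matrix of the measurement noise
S  = H*Cx*H' + Cn;      % innovation matrix
K  = (Cx*H')/S;         % gain matrix, K = Cx*H'*inv(S)
Ce = Cx - K*S*K';       % posterior error covariance, equation (3.45)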
3.3 DATA FITTING
In data-fitting techniques, the measurement process is modelled as:
$$z = h(x) + v \qquad (3.46)$$
where h(:) is the measurement function that models the sensory system,
and v are disturbing factors, such as sensor noise and modelling errors.
The purpose of fitting is to find the parameter vector x which ‘best’ fits
the measurements z.
Suppose that $\hat{x}$ is an estimate of $x$. Such an estimate is able to 'predict'
the modelled part of $z$, but it cannot predict the disturbing factors. Note
that $v$ represents both the noise and the unknown modelling errors. The
prediction of the estimate $\hat{x}$ is given by $h(\hat{x})$. The residuals $\varepsilon$ are defined
as the difference between observed and predicted measurements:

$$\varepsilon = z - h(\hat{x}) \qquad (3.47)$$
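For instance, the residuals of a candidate estimate can be evaluated in MATLAB as in the following sketch; the measurement function h(.) and the data are hypothetical:

% Residuals of a candidate estimate (hypothetical example)
h    = @(x) [x(1); x(1)+x(2); x(2)^2];  % assumed measurement function h(.)
z    = [1.1; 2.9; 4.2];                 % observed measurements
xhat = [1; 2];                          % candidate estimate of x
res  = z - h(xhat);                     % residuals, equation (3.47)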
Data fitting is the process of finding the estimate $\hat{x}$ that minimizes some
error norm $\|\varepsilon\|$ of the residuals. Different error norms (see Appendix
A.1.1) lead to different data fits. We briefly discuss two error norms.
3.3.1 Least squares fitting
The most common error norm is the squared Euclidean norm, also called
the sum of squared differences (SSD), or simply the LS norm (least
squared error norm):
$$\|\varepsilon\|_2^2 = \sum_{n=0}^{N-1} \varepsilon_n^2 = \sum_{n=0}^{N-1} \big(z_n - h_n(\hat{x})\big)^2 = \big(z - h(\hat{x})\big)^T\big(z - h(\hat{x})\big) \qquad (3.48)$$
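For a linear measurement function, $h(x) = Hx$, minimizing (3.48) yields the closed-form LS solution $\hat{x} = (H^TH)^{-1}H^Tz$. A minimal MATLAB sketch, with $H$ and the simulated measurements assumed purely for illustration:

% LS fit for the linear case z = H*x + v (assumed example)
H    = [1 0; 1 1; 1 2; 1 3];        % measurement matrix
x    = [0.5; 2];                    % true parameter vector (for simulation)
z    = H*x + 0.05*randn(4,1);       % simulated noisy measurements
xhat = (H'*H)\(H'*z);               % LS estimate; equivalently xhat = H\z
SSD  = (z - H*xhat)'*(z - H*xhat);  % minimized sum of squared differences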