Being a sum of Gaussian random variables, this estimate has a Gaussian
distribution too. The expectation of the estimate is:
E[\hat{\mu}_k] = \frac{1}{N_k} \sum_{n=1}^{N_k} E[z_n] = \frac{1}{N_k} \sum_{n=1}^{N_k} \mu_k = \mu_k \qquad (5.10)
where μ_k is the true expectation of z. Hence, the estimate is unbiased.
The covariance matrix of the estimation error is found as:
C_{\hat{\mu}_k} = E\left[ (\hat{\mu}_k - \mu_k)(\hat{\mu}_k - \mu_k)^T \right] = \frac{1}{N_k} C_k \qquad (5.11)
The proof is left as an exercise for the reader.
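As a quick numerical illustration, the following sketch (plain MATLAB, no toolboxes) repeatedly draws a training set for a single class and checks (5.10) and (5.11) empirically. The values of mu_k, C_k, N_k and the number of trials are assumptions chosen for illustration only.

% Minimal sketch: empirical check of (5.10) and (5.11) for one class.
% The parameters mu_k, C_k, N_k below are illustrative assumptions.
mu_k   = [1; 2];                      % true class mean (assumed)
C_k    = [2 0.5; 0.5 1];              % true class covariance (assumed)
N_k    = 50;                          % samples per training set
trials = 10000;                       % number of repeated training sets

A = chol(C_k, 'lower');               % factor for drawing Gaussian samples
mu_hat = zeros(2, trials);
for t = 1:trials
    Z = mu_k + A * randn(2, N_k);     % N_k samples of class omega_k
    mu_hat(:, t) = mean(Z, 2);        % sample mean (ML estimate of mu_k)
end

disp(mean(mu_hat, 2))                 % close to mu_k: unbiased, cf. (5.10)
disp(cov(mu_hat'))                    % close to C_k / N_k, cf. (5.11)
disp(C_k / N_k)

Note that the covariance of the estimation error equals the covariance of μ̂_k itself, since μ_k is a constant.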
5.2.2 Gaussian distribution, covariance matrix unknown
Next, we consider the case where, under class ω_k, the measurement vector
z is a Gaussian random vector with unknown covariance matrix C_k. For
the moment we assume that the expectation vector μ_k is known. No
prior knowledge is available. The purpose is to find an estimator for C_k.
The maximum likelihood estimate follows from (5.5) and (5.7):
\hat{C}_k = \operatorname*{argmax}_{C} \left\{ \sum_{n=1}^{N_k} \ln\bigl( p(z_n \mid \omega_k, C) \bigr) \right\} = \frac{1}{N_k} \sum_{n=1}^{N_k} (z_n - \mu_k)(z_n - \mu_k)^T \qquad (5.12)
The last step in (5.12) is non-trivial. The proof is rather technical and
will be omitted. However, the result is plausible since the estimate is the
average of the N_k matrices (z_n − μ_k)(z_n − μ_k)^T, whereas the true covariance
matrix is the expectation of (z − μ_k)(z − μ_k)^T.
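In code, the last expression of (5.12) is simply the average of the outer products of the deviations from the known mean. A minimal plain-MATLAB sketch, with the samples of class ω_k stored as the rows of Z and all parameter values assumed for illustration:

% Minimal sketch of the ML covariance estimate (5.12) with known mean.
% mu_k, C_k and N_k are illustrative assumptions used to simulate data.
mu_k = [1; 2];                        % known class mean
C_k  = [2 0.5; 0.5 1];                % true covariance (used only to simulate)
N_k  = 100;
Z = (mu_k + chol(C_k, 'lower') * randn(2, N_k))';   % N_k x 2 training samples

D = Z - mu_k';                        % deviations z_n - mu_k as rows
C_hat_k = (D' * D) / N_k;             % average of (z_n - mu_k)(z_n - mu_k)^T
disp(C_hat_k)

Writing the sum of outer products as D'*D avoids an explicit loop over the training samples.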
The probability distribution of the random variables in Ĉ_k is a Wishart
distribution. The estimator is unbiased. The variances of the elements of
Ĉ_k are:

\operatorname{Var}\bigl[ \hat{C}_{k\,i,j} \bigr] = \frac{1}{N_k} \left( C_{k\,i,j}^2 + C_{k\,i,i}\, C_{k\,j,j} \right) \qquad (5.13)
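A short simulation in the same style can be used to check (5.13) for a single element (i, j) of Ĉ_k; the class parameters below are again illustrative assumptions.

% Minimal sketch: empirical check of (5.13) for one element (i,j) of C_hat_k.
mu_k = [0; 0];  C_k = [2 0.5; 0.5 1];        % assumed class parameters
N_k = 50;  trials = 20000;  i = 1;  j = 2;   % element (i,j) under study

A = chol(C_k, 'lower');
c_ij = zeros(trials, 1);
for t = 1:trials
    Z = mu_k + A * randn(2, N_k);               % N_k samples of class omega_k
    C_hat_k = (Z - mu_k) * (Z - mu_k)' / N_k;   % ML estimate (5.12), mean known
    c_ij(t) = C_hat_k(i, j);
end

disp(var(c_ij))                                  % empirical variance
disp((C_k(i,j)^2 + C_k(i,i)*C_k(j,j)) / N_k)     % prediction of (5.13)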