$$\mathrm{Var}\{\hat{m}_y\} = E\{(\hat{m}_y - m_y)^2\} = \frac{1}{N^2}\sum_{k=1}^{N}\sum_{\ell=1}^{N} E\{(y_k - m_y)(y_\ell - m_y)\} = \frac{1}{N}\,\mathrm{Var}\{y\} \tag{2.29}$$

Since $y_1, \ldots, y_N$ are mutually independent, $E\{(y_k - m_y)(y_\ell - m_y)\} = E\{y_k - m_y\} E\{y_\ell - m_y\} = 0$ for $k \neq \ell$. The variance of the estimate is seen to be $1/N$ times the variance of $y$. Thus, $\mathrm{Var}\{\hat{m}_y\}$ can be reduced to zero by letting $N$ go to $\infty$. An estimate that satisfies this condition is called a consistent estimate. All sample estimates are unbiased and consistent regardless of the functional form of $f$.
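The behavior in (2.29) is easy to verify numerically. The following sketch is a minimal NumPy check, with an illustrative choice of distribution and of $f$ (here $f(x) = x^2$), neither of which is specified by the text: it draws many independent sets of $N$ samples, computes $\hat{m}_y$ for each set, and compares the empirical variance of the estimate against $\mathrm{Var}\{y\}/N$.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Illustrative choice only; (2.29) holds for any f, since
    # y_1, ..., y_N are i.i.d. whenever the samples are.
    return x ** 2

trials = 10_000                         # independent repetitions of the experiment
for N in (10, 100, 1000):
    x = rng.normal(size=(trials, N))    # 'trials' sets of N i.i.d. samples
    y = f(x)
    m_y_hat = y.mean(axis=1)            # the estimate m_y_hat for each set
    # Empirical Var{m_y_hat} should track Var{y}/N as N grows.
    print(N, m_y_hat.var(), y.var() / N)
```

As $N$ increases tenfold, the printed variance drops by roughly a factor of ten, which is the consistency property stated above.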
The above discussion can be extended to the covariance between two different estimates. Let us introduce another random variable $z = g(x_1, \ldots, x_n)$. Subsequently, $\hat{m}_y$ and $\hat{m}_z$ are obtained by (2.26) and (2.27), respectively. The covariance of $\hat{m}_y$ and $\hat{m}_z$ is

$$\mathrm{Cov}\{\hat{m}_y, \hat{m}_z\} = E\{(\hat{m}_y - m_y)(\hat{m}_z - m_z)\} = \frac{1}{N^2}\sum_{k=1}^{N}\sum_{\ell=1}^{N} E\{(y_k - m_y)(z_\ell - m_z)\} = \frac{1}{N}\,\mathrm{Cov}\{y, z\} . \tag{2.30}$$

Again, $E\{(y_k - m_y)(z_\ell - m_z)\} = E\{y_k - m_y\} E\{z_\ell - m_z\} = 0$ for $k \neq \ell$, because $y_k$ and $z_\ell$ are independent due to the independence between $\mathbf{X}_k$ and $\mathbf{X}_\ell$.
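Equation (2.30) can be checked the same way. The sketch below again uses hypothetical choices, $f(x) = x^2$ and $g(x) = |x|$ on a scalar $x$, purely for illustration, and compares the empirical covariance of the two estimates with $\mathrm{Cov}\{y, z\}/N$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 100, 50_000

x = rng.normal(size=(trials, N))   # 'trials' independent sets of N samples
y = x ** 2                         # y = f(x), illustrative choice
z = np.abs(x)                      # z = g(x), illustrative choice

m_y_hat = y.mean(axis=1)           # estimate of m_y from each set
m_z_hat = z.mean(axis=1)           # estimate of m_z from each set

cov_estimates = np.cov(m_y_hat, m_z_hat)[0, 1]   # empirical Cov{m_y_hat, m_z_hat}
cov_yz = np.cov(y.ravel(), z.ravel())[0, 1]      # empirical Cov{y, z}
print(cov_estimates, cov_yz / N)                 # the two should agree, per (2.30)
```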
In most applications, our attention is focused on the first and second order moments, the sample mean and sample autocorrelation matrix, respectively. These are defined by

$$\hat{\mathbf{M}} = \frac{1}{N}\sum_{k=1}^{N} \mathbf{X}_k \tag{2.31}$$

and

$$\hat{S} = \frac{1}{N}\sum_{k=1}^{N} \mathbf{X}_k \mathbf{X}_k^T . \tag{2.32}$$

Note that all components of (2.31) and (2.32) are special cases of (2.25). Therefore, $\hat{\mathbf{M}}$ and $\hat{S}$ are unbiased and consistent estimates of $\mathbf{M}$ and $S$, respectively.
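In code, (2.31) and (2.32) reduce to a vector mean and an averaged outer product. A minimal NumPy sketch follows, with an arbitrary synthetic distribution standing in for the data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 3, 10_000                   # dimensionality and sample size (arbitrary)

X = rng.normal(size=(N, n)) + 1.0  # N samples of an n-dimensional random vector

M_hat = X.mean(axis=0)             # (2.31): sample mean vector
S_hat = (X.T @ X) / N              # (2.32): sample autocorrelation matrix,
                                   # the average of the outer products X_k X_k^T

print(M_hat)                       # approaches M = E{X} as N grows
print(S_hat)                       # approaches S = E{X X^T} as N grows
```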