The covariance matrix C can be expressed compactly as the sum of the direct
products of the difference vectors of x from the mean m by their transpose:
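Writing the n available patterns as x_k, k = 1, ..., n, this compact expression takes the form (the normalization factor, 1/n or 1/(n-1), depends on the convention adopted):

    C = \frac{1}{n-1} \sum_{k=1}^{n} (x_k - m)(x_k - m)^T .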
Suppose now that the feature vectors x undergo a linear transformation as in
Figure 2.11. The transformed patterns will be characterized by a new mean vector
and a new covariance matrix:
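For a linear transformation y = A x these are given by the standard results (notation as above):

    m_y = A m , \qquad C_y = A C A^T .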
Applying these formulas to the example shown in Figure 2.11 (matrix A
presented in 2-12a), we obtain:
The result (2-18c) was already obtained in (2-12c). The result (2-18d) shows
the variances of the transformed feature vectors along y_1 and y_2. It also shows
that, whereas in the original feature space the feature vectors were uncorrelated,
there is now a substantial correlation in the transformed space.
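As a numerical illustration of these formulas (with made-up values for A, m and C, not the ones of Figure 2.11 and (2-12a)), one may compute the transformed mean and covariance directly; note how a diagonal (uncorrelated) covariance becomes non-diagonal (correlated) after the transformation:

    import numpy as np

    # Illustrative values only (not those of Figure 2.11 / (2-12a)).
    A   = np.array([[1.0, 1.0],
                    [-1.0, 1.0]])     # hypothetical linear transformation y = A x
    m_x = np.array([2.0, 1.0])        # hypothetical mean of the original patterns
    C_x = np.diag([1.0, 4.0])         # diagonal covariance: uncorrelated features

    m_y = A @ m_x                     # transformed mean:       m_y = A m
    C_y = A @ C_x @ A.T               # transformed covariance: C_y = A C A'

    print(m_y)                        # [ 3. -1.]
    print(C_y)                        # [[5. 3.]
                                      #  [3. 5.]]  -> off-diagonal terms appear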
In general, except for simple rotations or reflections, the Euclidean distances
\|x - m_x\| and \|y - m_y\| will be different from each other. In order to maintain the
distance relations before and after a linear transformation, we will have to
generalize the idea of scaling presented at the beginning of this section, using the
metric:
\|x - m\|_m = (x - m)^T C^{-1} (x - m) .    (2-19)
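A small sketch (reusing the illustrative values above, not the book's) of the metric (2-19), showing that it is indeed preserved when the pattern, the mean and the covariance are all transformed by y = A x:

    import numpy as np

    A   = np.array([[1.0, 1.0], [-1.0, 1.0]])   # same illustrative transformation
    m_x = np.array([2.0, 1.0])
    C_x = np.diag([1.0, 4.0])
    m_y, C_y = A @ m_x, A @ C_x @ A.T

    def mahalanobis(v, m, C):
        """Metric of (2-19): (v - m)' C^{-1} (v - m)."""
        d = v - m
        return d @ np.linalg.solve(C, d)

    x = np.array([3.0, 2.0])                    # an arbitrary pattern
    y = A @ x                                   # its image under the transformation

    print(mahalanobis(x, m_x, C_x))             # 1.25
    print(mahalanobis(y, m_y, C_y))             # 1.25 -> the metric is preserved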
Expression (2-19) is the Mahalanobis distance already introduced in the preceding section.
Notice that for the particular case of a diagonal matrix C one obtains the scaled
distance formula (2-14a). The Mahalanobis distance is invariant to scaling