Page 39 - Classification Parameter Estimation & State Estimation An Engg Approach Using MATLAB

28                               DETECTION AND CLASSIFICATION

As an example we consider the classifications shown in Figure 2.5. In fact, the probability densities shown in Figure 2.4(b) are normal. Therefore, the decision boundaries shown in Figure 2.5 must be quadratic curves.



Class-independent covariance matrices
In this subsection, we discuss the case in which the covariance matrices do not depend on the classes, i.e. $C_k = C$ for all $\omega_k \in \Omega$. This situation occurs when the measurement vector of an object equals the (class-dependent) expectation vector corrupted by sensor noise, that is $z = \mu_k + n$. The noise $n$ is assumed to be class-independent with covariance matrix $C$. Hence, the class information is brought forth by the expectation vectors only.
  The quadratic decision function of (2.19) degenerates into:

$$\hat{\omega}(z) = \omega_i \quad \text{with}$$
$$i = \arg\max_{k=1,\ldots,K} \left\{ 2 \ln P(\omega_k) - (z - \mu_k)^T C^{-1} (z - \mu_k) \right\}$$
$$\;\;= \arg\min_{k=1,\ldots,K} \left\{ -2 \ln P(\omega_k) + (z - \mu_k)^T C^{-1} (z - \mu_k) \right\} \qquad (2.24)$$
Since the covariance matrix $C$ is self-adjoint and positive definite (Appendix B.5), the quantity $(z - \mu_k)^T C^{-1} (z - \mu_k)$ can be regarded as a distance measure between the vector $z$ and the expectation vector $\mu_k$. The measure is called the squared Mahalanobis distance. The function of (2.24) decides for the class whose expectation vector is nearest to the observed measurement vector (with a correction factor $-2 \ln P(\omega_k)$ to account for prior knowledge). Hence the name minimum Mahalanobis distance classifier.
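The book works in MATLAB; as a minimal illustrative sketch, the decision rule (2.24) could be written in NumPy as follows (function and variable names are hypothetical, not from the book):

```python
import numpy as np

def mahalanobis_classify(z, means, C, priors):
    """Minimum Mahalanobis distance classifier, cf. eq. (2.24).

    Assigns z to the class k minimising
        -2 ln P(w_k) + (z - mu_k)^T C^{-1} (z - mu_k).
    `means` is a (K, N) array of class expectation vectors mu_k,
    `C` the shared (N, N) covariance matrix, `priors` the P(w_k).
    """
    C_inv = np.linalg.inv(C)
    costs = []
    for mu_k, P_k in zip(means, priors):
        d = z - mu_k  # deviation from the class expectation vector
        costs.append(-2.0 * np.log(P_k) + d @ C_inv @ d)
    return int(np.argmin(costs))
```

With equal priors and unit covariance the rule reduces to nearest-mean classification: a measurement vector is assigned to whichever expectation vector lies closest in Euclidean distance.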
  The decision boundaries between compartments in the measurement space are linear (hyper)planes. This follows from (2.20) and (2.21):

$$\hat{\omega}(z) = \omega_i \quad \text{with} \quad i = \arg\max_{k=1,\ldots,K} \left\{ w_k + z^T \mathbf{w}_k \right\} \qquad (2.25)$$
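The linear form follows because the term $z^T C^{-1} z$ in the expansion of (2.24) is class-independent and can be dropped. A sketch of this reduction in NumPy, under the assumed scaling $\mathbf{w}_k = 2 C^{-1} \mu_k$ and $w_k = 2 \ln P(\omega_k) - \mu_k^T C^{-1} \mu_k$ (this particular normalisation is derived here for illustration and may differ by a constant factor from the book's (2.22) and (2.23)):

```python
import numpy as np

def linear_weights(means, C, priors):
    """Per-class linear discriminant parameters for eq. (2.25).

    Expanding the quadratic cost of (2.24) and dropping the
    class-independent term z^T C^{-1} z leaves, per class k,
        g_k(z) = w_k + z^T wvec_k
    with (one consistent scaling, assumed here):
        wvec_k = 2 C^{-1} mu_k
        w_k    = 2 ln P(w_k) - mu_k^T C^{-1} mu_k
    """
    C_inv = np.linalg.inv(C)
    wvecs = [2.0 * C_inv @ mu for mu in means]
    ws = [2.0 * np.log(P) - mu @ C_inv @ mu
          for mu, P in zip(means, priors)]
    return ws, wvecs

def linear_classify(z, ws, wvecs):
    """Decide via the linear discriminants of eq. (2.25)."""
    return int(np.argmax([w + z @ wv for w, wv in zip(ws, wvecs)]))
```

Because each discriminant is affine in $z$, the boundary between any two classes is the set where two such functions are equal, i.e. a hyperplane.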