

matrices of the feature vector, i.e. $W\mathbf{m}_k$ and $WC_kW^T$, respectively. For the sake of brevity, let $\mathbf{m}$ be the difference between the expectations of $\mathbf{z}$:

$$\mathbf{m} = \mathbf{m}_1 - \mathbf{m}_2 \qquad (6.34)$$

Then, substitution of $\mathbf{m}$, $W\mathbf{m}$ and $WC_kW^T$ in (6.19) gives the Bhattacharyya distance of the feature vector:

$$J_{\mathit{BHAT}}(W\mathbf{z}) = \frac{1}{4}(W\mathbf{m})^T\left(WC_1W^T + WC_2W^T\right)^{-1}W\mathbf{m} + \frac{1}{2}\ln\left[\frac{\left|WC_1W^T + WC_2W^T\right|}{2^D\sqrt{\left|WC_1W^T\right|\,\left|WC_2W^T\right|}}\right] \qquad (6.35)$$
The first term corresponds to the discriminatory information of the expectation vectors; the second term corresponds to that of the covariance matrices.
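As an illustration, (6.35) can be evaluated directly in MATLAB. The fragment below is a minimal sketch; the function name jbhat_w and its calling convention are ours, not those of any toolbox:

   function J = jbhat_w(W, m1, m2, C1, C2)
   % JBHAT_W  Bhattacharyya distance of the extracted feature vector Wz,
   % evaluated according to (6.35). (Illustrative sketch, not toolbox code.)
   %   W      - D x N feature extraction matrix
   %   m1, m2 - N x 1 conditional expectation vectors
   %   C1, C2 - N x N conditional covariance matrices
   m  = m1 - m2;                 % difference of expectations, (6.34)
   G1 = W*C1*W';                 % covariance of Wz for class 1
   G2 = W*C2*W';                 % covariance of Wz for class 2
   D  = size(W,1);               % dimension of the feature space
   J  = 0.25*(W*m)'*((G1 + G2)\(W*m)) ...
        + 0.5*log(det(G1 + G2)/(2^D*sqrt(det(G1)*det(G2))));

For large D, a log-determinant formulation (e.g. via a Cholesky factorization) would be numerically safer, but the direct form suffices for illustration.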
              Equation (6.35) is in such a form that an analytic solution of (6.32) is
            not feasible. However, if one of the two terms in (6.35) is dominant, a
            solution close to the optimal one is attainable. We consider the two
            extreme situations first.

            Equal covariance matrices

In the case where the conditional covariance matrices are equal, i.e. $C = C_1 = C_2$, we have already seen that classification based on the Mahalanobis distance (see (2.41)) is optimal. In fact, this classification uses a $1 \times N$ dimensional feature extraction matrix given by:

$$W = \mathbf{m}^T C^{-1} \qquad (6.36)$$

The proof that this matrix also maximizes the Bhattacharyya distance is left as an exercise for the reader.
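As a check, the following MATLAB fragment (again a minimal sketch, with m1, m2 and C assumed given) computes (6.36) and the resulting Bhattacharyya distance:

   % Equal-covariance case; m1, m2 (N x 1) and C (N x N) assumed given.
   m = m1 - m2;               % difference of expectations, (6.34)
   W = m'/C;                  % W = m'*inv(C), i.e. (6.36); a 1 x N matrix
   % With C1 = C2 = C the second term of (6.35) vanishes, and the first
   % term reduces to one eighth of the squared Mahalanobis distance:
   J = (W*m)^2/(8*W*C*W');    % equals m'*inv(C)*m/8 for this choice of W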


            Equal expectation vectors
If the expectation vectors are equal, $\mathbf{m} = \mathbf{0}$, the first term in (6.35) vanishes and the second term simplifies to:

$$J_{\mathit{BHAT}}(W\mathbf{z}) = \frac{1}{2}\ln\left[\frac{\left|WC_1W^T + WC_2W^T\right|}{2^D\sqrt{\left|WC_1W^T\right|\,\left|WC_2W^T\right|}}\right] \qquad (6.37)$$
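In MATLAB, (6.37) is simply the second term of (6.35); the sketch below assumes C1, C2 and a candidate D x N matrix W are given:

   % Evaluation of (6.37) for a candidate extraction matrix W.
   G1 = W*C1*W';              % covariance of Wz for class 1
   G2 = W*C2*W';              % covariance of Wz for class 2
   D  = size(W,1);            % dimension of the feature space
   J  = 0.5*log(det(G1 + G2)/(2^D*sqrt(det(G1)*det(G2))));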