
4.1 Linear Discriminants

                         4.1.3 Mahalanobis Linear Discriminants

We know already from chapter 2 that the Mahalanobis metric is a generalization of the Euclidean metric, suitable for dealing with unequal variances and correlated features. Let us assume that all classes have an identical covariance matrix C, reflecting a similar hyperellipsoidal shape of the corresponding feature vector distributions. The generalization of (4-3) is then written as:

    dk(x) = (x - mk)'C⁻¹(x - mk),                                  (4-4)

or,  dk(x) = x'C⁻¹x - mk'C⁻¹x - x'C⁻¹mk + mk'C⁻¹mk .

  Grouping, as we have done before, the terms dependent on mk, we obtain:

    dk(x) = x'C⁻¹x - 2( mk'C⁻¹x - 0.5 mk'C⁻¹mk ).                  (4-5a)
  The decision functions are:

    gk(x) = wk'x + wk0 ,                                           (4-5b)
with   wk = C⁻¹mk ;   wk0 = -0.5 mk'C⁻¹mk .                        (4-5c)
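As a minimal numerical sketch (not from the text), the coefficients wk = C⁻¹mk and wk0 = -0.5 mk'C⁻¹mk can be computed directly and used to classify a point by the largest decision function value; the class means, covariance matrix, and test point below are made-up illustration values.

```python
import numpy as np

# Hypothetical class means and a shared covariance matrix (made-up values).
means = [np.array([1.0, 1.0]), np.array([3.0, 2.0])]
C = np.array([[1.0, 0.3],
              [0.3, 2.0]])
C_inv = np.linalg.inv(C)

# Coefficients of the linear decision functions:
#   wk = C^-1 mk,   wk0 = -0.5 mk' C^-1 mk
w = [C_inv @ mk for mk in means]
w0 = [-0.5 * (mk @ C_inv @ mk) for mk in means]

def g(k, x):
    """Linear decision function gk(x) = wk' x + wk0."""
    return w[k] @ x + w0[k]

# Classify a point: assign it to the class with the largest gk(x),
# which is the class with the smallest Mahalanobis distance.
x = np.array([1.2, 0.8])
label = max(range(len(means)), key=lambda k: g(k, x))
```

Maximizing gk(x) is equivalent to minimizing (x - mk)'C⁻¹(x - mk), since the term x'C⁻¹x is common to all classes and drops out of the comparison.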

  We again obtain linear discriminant functions in the form of hyperplanes passing through the middle point of the line segment linking the means. The only difference from the results of the previous section is that the hyperplanes separating class ωi from class ωj are now orthogonal to the vector C⁻¹(mi - mj).
  In the particular case of C = s²I (uncorrelated features with the same variance for all classes), the Mahalanobis classifier is identical to the Euclidean classifier (as it should be).
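This reduction can be checked numerically: with C = s²I, the inverse covariance merely rescales every distance by the same factor 1/s², so both classifiers rank the classes identically. The means and trial points below are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)
means = [rng.normal(size=3) for _ in range(4)]   # made-up class means
C_inv = np.linalg.inv(2.5 * np.eye(3))           # C = s^2 I, with s^2 = 2.5

def nearest(x, dist):
    """Index of the class mean nearest to x under the given metric."""
    return min(range(len(means)), key=lambda k: dist(x - means[k]))

# C^-1 = (1/s^2) I rescales every distance by the same factor,
# so the Mahalanobis and Euclidean classifiers pick the same class.
agree = all(
    nearest(x, lambda d: d @ C_inv @ d) == nearest(x, lambda d: d @ d)
    for x in rng.normal(size=(200, 3))
)
```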
  In practice, it is impossible to guarantee that all class covariance matrices are equal. Fortunately, the decision surfaces are usually not very sensitive to mild deviations from this condition; therefore, in normal practice, one uses a pooled covariance matrix computed as an average of the individual covariance matrices. This is also done in all statistical software applications. We now exemplify the previous results for the cork stoppers problem with two classes.
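As a sketch of the pooling step, with made-up samples for two classes: the common implementation averages the per-class covariance matrices with degrees-of-freedom weights (nk - 1), which reduces to the plain average mentioned above when the class sizes are equal.

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up samples for two classes (rows are feature vectors).
X1 = rng.normal(size=(60, 2))
X2 = rng.normal(size=(40, 2)) @ np.array([[1.0, 0.4],
                                          [0.0, 1.2]])

# Per-class sample covariance matrices.
C1 = np.cov(X1, rowvar=False)
C2 = np.cov(X2, rowvar=False)

# Pooled covariance: per-class matrices averaged with weights (nk - 1);
# for equal class sizes this is the plain average of C1 and C2.
n1, n2 = len(X1), len(X2)
C_pooled = ((n1 - 1) * C1 + (n2 - 1) * C2) / (n1 + n2 - 2)
```

Being a convex combination, every entry of the pooled matrix lies between the corresponding entries of the individual class covariances.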

                           One feature, N

Given the similarity of both distributions, already pointed out in 4.1.1, the Mahalanobis classifier produces the same classification results as the Euclidean classifier.
  The decision function coefficients, as computed by Statistica, are shown in Figure 4.8.