
                                "hyperplane" leans along the regression direction of the features (see Figure 4.3 for
                                comparison).
                                  Let us suppose that the previously mentioned cork stopper with 65 defects, and
                                assigned to class wl based on this feature only, had a total perimeter of the defects
                                of  520 pixels.  To which  class  will  it be  assigned now?  As g1([65 5211)=5.78 is
                                smaller than g2([60 48Iv)=6.84, it  is assigned to class  q. This cork  stopper has a
                                total perimeter of the defects that is too big to be assigned to class 8,.
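
As a concrete illustration of this decision rule, the following Python sketch evaluates linear (Mahalanobis) discriminants of the standard form g_k(x) = m_k' C^-1 x - 0.5 m_k' C^-1 m_k, assuming equal priors and a pooled covariance matrix shared by both classes. The class means and covariance below are made-up placeholders rather than the cork-stopper statistics used in the text, so the resulting scores will not reproduce the values 5.78 and 6.84.

```python
import numpy as np

# Sketch of a linear (Mahalanobis) discriminant with a pooled covariance C.
# CAUTION: the means and covariance are illustrative placeholders, not the
# actual cork-stopper statistics of the data set discussed in the text.
m = [np.array([55.0, 380.0]),    # hypothetical mean of class omega_1
     np.array([80.0, 600.0])]    # hypothetical mean of class omega_2
C = np.array([[350.0, 1800.0],
              [1800.0, 12000.0]])
C_inv = np.linalg.inv(C)

def g(k, x):
    """Linear discriminant g_k(x) = m_k' C^-1 x - 0.5 m_k' C^-1 m_k."""
    return m[k] @ C_inv @ x - 0.5 * m[k] @ C_inv @ m[k]

x = np.array([65.0, 520.0])      # 65 defects, total defect perimeter 520 pixels
scores = [g(0, x), g(1, x)]
print("g1 =", scores[0], "g2 =", scores[1])
print("assigned to class omega_%d" % (int(np.argmax(scores)) + 1))
```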
  Notice that if the distributions of the feature vectors in the classes correspond to different hyperellipsoidal shapes, they will be characterized by unequal covariance matrices. The distance formula (4-5) will then be influenced by these different shapes in such a way that we obtain quadratic decision boundaries, as shown in Figure 2.12b. Table 4.1 summarizes the different types of minimum distance classifiers, depending on the covariance matrix.


Table 4.1. Summary of minimum distance classifier types.

Covariance   Classifier               Equiprobability surfaces   Discriminants
Ci = s²I     Linear, Euclidean        Hyperspheres               Hyperplanes orthogonal to the segment linking the means
Ci = C       Linear, Mahalanobis      Hyperellipsoids            Hyperplanes leaning along the regression
Ci           Quadratic, Mahalanobis   Hyperellipsoids            Quadratic surfaces
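
The last row of Table 4.1 corresponds to classes that keep their own covariance matrices. A minimal sketch of such a quadratic (Mahalanobis) classifier, assuming equal priors and using made-up class statistics, is:

```python
import numpy as np

# Sketch of the quadratic (Mahalanobis) classifier of Table 4.1, last row:
# each class keeps its own covariance matrix C_k. Equal priors are assumed
# and all numbers are illustrative placeholders.
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.array([[1.0, 0.0], [0.0, 1.0]]),    # class 1: spherical
        np.array([[2.0, 1.2], [1.2, 2.0]])]    # class 2: tilted ellipsoid

def g_quad(k, x):
    """Quadratic discriminant: -0.5 (x-m_k)' C_k^-1 (x-m_k) - 0.5 ln|C_k|."""
    d = x - means[k]
    return -0.5 * d @ np.linalg.inv(covs[k]) @ d \
           - 0.5 * np.log(np.linalg.det(covs[k]))

x = np.array([1.5, 2.0])
print("assigned to class", 1 + int(np.argmax([g_quad(0, x), g_quad(1, x)])))
```

Because the inverse covariance now differs per class, the term quadratic in x no longer cancels when comparing g1 with g2, which is precisely why the decision boundary becomes a quadratic surface.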






4.1.4 Fisher's Linear Discriminant

In the previous chapter the problem of dimensionality reduction was addressed in an unsupervised context. In a supervised context, we are able to use the classification information of the training set in order to produce an optimised mapping into a lower dimensional space, easing the classification task and obtaining further insight into the class separability. The Fisher linear discriminant provides the necessary tool for this mapping.
  Consider two classes with sample means m1 and m2 and an overall sample mean m. We can measure the class separability in a way similar to the well-known ANOVA statistical test, by evaluating the volume of the pooled covariance matrix of the classes relative to the separation of their means. To get a more concrete idea of this, let us consider:

$S_W = \sum_{k=1}^{2} \sum_{\mathbf{x} \in C_k} (\mathbf{x} - \mathbf{m}_k)(\mathbf{x} - \mathbf{m}_k)'$, the within-class scatter matrix, and   (4-9a)
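
To make definition (4-9a) concrete, the following sketch computes S_W for two labelled samples, and also forms the classical Fisher direction w ∝ S_W^-1 (m1 - m2) towards which this derivation leads; the data are randomly generated and purely illustrative.

```python
import numpy as np

# Sketch of the within-class scatter matrix S_W of equation (4-9a) for two
# classes, plus the classical Fisher direction proportional to
# S_W^-1 (m1 - m2). The two samples are randomly generated for illustration.
rng = np.random.default_rng(0)
X1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2))   # class 1 sample
X2 = rng.normal(loc=[3.0, 2.0], scale=1.0, size=(50, 2))   # class 2 sample

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

def scatter(X, m):
    """Sum over the class of (x - m)(x - m)'."""
    D = X - m
    return D.T @ D

S_w = scatter(X1, m1) + scatter(X2, m2)     # within-class scatter, (4-9a)
w = np.linalg.solve(S_w, m1 - m2)           # Fisher projection direction
print("S_W =\n", S_w)
print("Fisher direction w =", w)
```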