Once the structure of the data is thoroughly understood, the data dictates which classifier must be adopted. Our choice is normally a linear, quadratic, or piecewise classifier, and rarely a nonparametric classifier. Nonparametric techniques are necessary in off-line analyses to carry out many important operations, such as the estimation of the Bayes error and data structure analysis. However, because of their complexity, they are seldom used for on-line operation.
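As a concrete illustration of the quadratic case (a sketch in Python, not taken from the text; the means, covariances, and a priori probabilities below are hypothetical), a quadratic classifier for two Gaussian classes compares discriminant functions of the form h_i(X) = -(1/2)(X - M_i)^T S_i^{-1}(X - M_i) - (1/2) ln|S_i| + ln P_i:

    import numpy as np

    # Hypothetical two-class Gaussian problem; these parameter values
    # are illustrative assumptions, not values from the text.
    M1, M2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
    S1 = np.eye(2)
    S2 = np.array([[2.0, 0.5], [0.5, 1.5]])
    P1, P2 = 0.5, 0.5          # a priori probabilities

    def h(X, M, S, P):
        # Quadratic discriminant: -1/2 (X-M)'S^{-1}(X-M) - 1/2 ln|S| + ln P
        d = X - M
        return (-0.5 * d @ np.linalg.solve(S, d)
                - 0.5 * np.log(np.linalg.det(S)) + np.log(P))

    def classify(X):
        # Assign X to the class whose discriminant value is larger.
        return 1 if h(X, M1, S1, P1) >= h(X, M2, S2, P2) else 2

    print(classify(np.array([1.0, 1.0])))   # a point near the boundary

Note that when the two covariance matrices are equal, the quadratic terms cancel and the same rule reduces to a linear classifier.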
After a classifier is designed, it must be evaluated by the procedures discussed in Chapter 5. The resulting error is compared with the Bayes error in the feature space. The difference between these two errors indicates how much the error is increased by adopting the classifier. If the difference is unacceptably high, we must reevaluate the design of the classifier.
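A minimal sketch of this comparison, again under hypothetical Gaussian assumptions: the Bayes error is estimated by Monte Carlo from the true densities, and the error of a linear classifier built from a pooled covariance (a Fisher-type rule, chosen here only for illustration) is measured on the same samples; the gap between the two errors is the quantity discussed above.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)

    # Hypothetical two-class Gaussian problem (illustrative assumptions).
    M1, M2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
    S1, S2 = np.eye(2), np.array([[2.0, 0.5], [0.5, 1.5]])
    P1 = P2 = 0.5
    N = 20000
    X1 = rng.multivariate_normal(M1, S1, N)   # class 1 samples
    X2 = rng.multivariate_normal(M2, S2, N)   # class 2 samples

    # Bayes error by Monte Carlo: a sample is misclassified by the Bayes
    # rule when the other class's weighted density is the larger one.
    p1 = lambda X: multivariate_normal.pdf(X, M1, S1)
    p2 = lambda X: multivariate_normal.pdf(X, M2, S2)
    bayes_err = (P1 * np.mean(P2 * p2(X1) > P1 * p1(X1))
                 + P2 * np.mean(P1 * p1(X2) > P2 * p2(X2)))

    # A linear classifier: w'X > t assigns class 2 (pooled covariance).
    S = 0.5 * (S1 + S2)
    w = np.linalg.solve(S, M2 - M1)
    t = 0.5 * w @ (M1 + M2)
    lin_err = P1 * np.mean(X1 @ w > t) + P2 * np.mean(X2 @ w <= t)

    print(f"Bayes error  ~ {bayes_err:.3f}")
    print(f"Linear error ~ {lin_err:.3f}")   # the gap is the cost of linearity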
Finally, the classifier is tested in the field. If the classifier does not perform as expected, the database used for designing the classifier differs from the test data in the field. Therefore, we must expand the database and design a new classifier.

                    Notation


n                          Dimensionality
L                          Number of classes
N                          Number of total samples
N_i                        Number of class i samples
ω_i                        Class i
P_i = Pr{ω_i}              A priori probability of ω_i
X = [x_1 ... x_n]^T        Vector
X                          Random vector
p_i(X) = p(X | ω_i)        Conditional density function of ω_i
p(X) = Σ_i P_i p_i(X)      Mixture density function
q_i(X) = Pr{ω_i | X}       A posteriori probability of ω_i, given X
M_i = E{X | ω_i}           Expected vector of ω_i
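To make the notation concrete, the following sketch (assuming Gaussian conditional densities and hypothetical parameter values, neither of which is specified by the table) evaluates p_i(X), the mixture p(X), and the a posteriori probabilities q_i(X) for a single test vector by Bayes' theorem:

    import numpy as np
    from scipy.stats import multivariate_normal

    # Hypothetical L = 2 class problem in n = 2 dimensions; all values
    # below are illustrative assumptions.
    P = [0.6, 0.4]                            # P_i = Pr{omega_i}
    M = [np.zeros(2), np.array([3.0, 1.0])]   # M_i = E{X | omega_i}
    S = [np.eye(2), 2.0 * np.eye(2)]          # class covariances (Gaussian model)

    X = np.array([1.0, 0.5])                  # a test vector X

    # p_i(X): conditional density functions (Gaussian by assumption)
    p = [multivariate_normal.pdf(X, M[i], S[i]) for i in range(2)]

    # p(X) = sum_i P_i p_i(X): the mixture density
    pX = sum(Pi * pi for Pi, pi in zip(P, p))

    # q_i(X) = P_i p_i(X) / p(X): a posteriori probabilities
    q = [Pi * pi / pX for Pi, pi in zip(P, p)]
    print("q_1(X), q_2(X) =", q, " sum =", sum(q))   # posteriors sum to one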