
4.2 Bayesian Classification   99
Using these decision functions, which clearly depend on the Mahalanobis metric, one can design a minimum-risk Bayes classifier, usually called an optimum classifier. In general, one obtains quadratic discriminants.
Notice that formula (4-23b) uses the true value of the Mahalanobis distance, whereas earlier we used estimates of this distance.
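Formula (4-23b) itself is not reproduced on this page; purely as an illustration, the following numpy sketch implements the standard quadratic Bayes discriminant for normal class models with known means m_i, covariances Sigma_i and prevalences P(w_i) (the function names are ours, not the book's):

```python
import numpy as np

def quadratic_discriminant(x, mean, cov, prior):
    """Quadratic Bayes discriminant for a normal class model:
    ln P(w_i) - 0.5*ln|Sigma_i| - 0.5*(x - m_i)' Sigma_i^{-1} (x - m_i)."""
    diff = x - mean
    mahal2 = diff @ np.linalg.inv(cov) @ diff   # squared Mahalanobis distance
    return np.log(prior) - 0.5 * np.log(np.linalg.det(cov)) - 0.5 * mahal2

def classify(x, means, covs, priors):
    """Minimum-risk rule with 0/1 losses: pick the class with the
    largest discriminant value."""
    scores = [quadratic_discriminant(x, m, c, p)
              for m, c, p in zip(means, covs, priors)]
    return int(np.argmax(scores))
```

With equal covariances the quadratic terms cancel between classes, which is what leads to the linear discriminants derived next.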
For the situation of equal covariance for all classes (\Sigma_i = \Sigma), neglecting constant terms (in particular the quadratic term x^T \Sigma^{-1} x, which is common to all classes), one obtains:

    h_i(x) = m_i^T \Sigma^{-1} x - \tfrac{1}{2} m_i^T \Sigma^{-1} m_i + \ln P(\omega_i)
For a two-class problem, the discriminant d(x) = h_1(x) - h_2(x) is easily computed as:

    d(x) = w^T x + w_0, with
    w = \Sigma^{-1}(m_1 - m_2),
    w_0 = -\tfrac{1}{2}(m_1 + m_2)^T \Sigma^{-1}(m_1 - m_2) + \ln\frac{P(\omega_1)}{P(\omega_2)}
                              Therefore,  one  obtains  linear  decision  functions  (notice  the  similarity  to
                            formulas (4-5b) and (4-5c)).
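As a minimal sketch (our own illustration, not code from the text), the coefficients of such a linear two-class discriminant can be computed with numpy, assuming known class means, a shared covariance and the two prevalences:

```python
import numpy as np

def linear_discriminant(m1, m2, cov, p1, p2):
    """Coefficients of d(x) = w'x + w0 for two normal classes sharing
    covariance Sigma:
        w  = Sigma^{-1} (m1 - m2)
        w0 = -0.5 (m1 + m2)' Sigma^{-1} (m1 - m2) + ln(p1/p2)
    Decide for class 1 when d(x) > 0, class 2 otherwise."""
    cinv = np.linalg.inv(cov)
    w = cinv @ (m1 - m2)
    w0 = -0.5 * (m1 + m2) @ cinv @ (m1 - m2) + np.log(p1 / p2)
    return w, w0
```

With equal prevalences the logarithmic term vanishes and d(x) = 0 is the hyperplane that bisects, in the Mahalanobis metric, the segment joining the two class means.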
                             Figure 4.19.  Error probability of a Bayesian two-class discrimination with normal
                             distributions and equal prevalences and covariance.