
4.2 Bayesian Classification   95


We are obviously interested in minimizing an average risk computed for an arbitrarily large number of cork stoppers. The Bayes rule for minimum risk achieves this through the minimization of the individual conditional risks R(αi | x).
Let us assume first that wrong decisions imply the same loss, which can be scaled to a unitary loss:

   λii = 0;   λij = 1,   ∀ j ≠ i .
In this situation, since all posterior probabilities add up to one, we have to minimize:

   R(αi | x) = Σ(j≠i) P(ωj | x) = 1 − P(ωi | x) .
This corresponds to maximizing P(ωi | x), i.e., the Bayes decision rule for minimum risk corresponds to the generalized version of (4-13a):

   Decide ωi if P(ωi | x) > P(ωj | x), ∀ j ≠ i .        (4-19)


In short:

   The Bayes decision rule for minimum risk, when correct decisions have zero loss and wrong decisions have equal losses, corresponds to selecting the class with maximum posterior probability.

The decision function for class ωi is therefore:

   gi(x) = P(ωi | x) .
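As a minimal sketch of this maximum-posterior rule, the following computes posteriors from class-conditional likelihoods and prevalences via Bayes' law and picks the maximizing class; the numeric values are illustrative, not taken from the cork-stopper data.

```python
# Hypothetical likelihoods p(x|wi) evaluated at a given x, and prevalences P(wi).
likelihoods = [0.30, 0.10, 0.05]   # p(x|w1), p(x|w2), p(x|w3)
prevalences = [0.50, 0.30, 0.20]   # P(w1),   P(w2),   P(w3)

# Bayes' law: P(wi|x) is proportional to p(x|wi) P(wi);
# dividing by p(x) = sum of the joints normalizes the posteriors.
joint = [l * p for l, p in zip(likelihoods, prevalences)]
posteriors = [j / sum(joint) for j in joint]

# Equal losses for wrong decisions: decide the class of maximum posterior.
decision = max(range(len(posteriors)), key=lambda i: posteriors[i]) + 1
print(decision)   # -> 1
```

Note that the normalization by p(x) does not change which class attains the maximum, so in practice one may compare the products p(x|ωi) P(ωi) directly.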
Let us now consider the situation of different losses for wrong decisions, assuming first, for the sake of simplicity, that c = 2. Taking into account expressions (4-17a) and (4-17b), it is readily concluded that we will decide ω1 if:

   λ21 P(ω1 | x) > λ12 P(ω2 | x),   i.e.,   p(x | ω1) / p(x | ω2) > λ12 P(ω2) / (λ21 P(ω1)) .
Therefore, the decision threshold with which the likelihood ratio is compared is inversely weighted by the losses (compare with (4-15a)). This decision rule can be implemented as shown in Figure 4.16 (see also Figure 4.6).
Equivalently, we can use the following adjusted prevalences: