
5.3 The Perceptron Concept   161


   Let us consider formula (5-7e) for the particular case of η = ½ and use the
step activation function applied to the linear discriminant:

      w(k+1) = w(k) + ½ (t_i − h(w'x_i)) x_i ,

with h the step function producing the class labels ±1.
   For correct decisions the output h(w'x_i) is identical to t_i and no increment is
obtained. For wrong decisions the factor (t_i − h(w'x_i)) equals ±2, so the increment
obtained is exactly the same as in (5-16). Hence, the perceptron rule is identical to
the LMS rule, using the step function and a learning rate η = ½.
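The equivalence of the two rules can be checked numerically. The sketch below (not from the book; function names are illustrative) implements both the classic perceptron update and the LMS-style update with the step activation, assuming class labels ±1 and h(0) = +1 by convention:

```python
import numpy as np

def step(z):
    """Step activation producing class labels +1 / -1 (h(0) = +1 by convention)."""
    return 1.0 if z >= 0 else -1.0

def perceptron_update(w, x, t):
    """Classic perceptron rule: no change for a correct decision,
    w + t*x for a wrong one."""
    return w + t * x if step(w @ x) != t else w

def lms_step_update(w, x, t, eta=0.5):
    """LMS-style increment eta*(t - h(w'x))*x with the step activation.
    For eta = 1/2: (t - h) is 0 when correct and +-2 when wrong,
    so the increment is exactly t*x, i.e. the perceptron rule."""
    return w + eta * (t - step(w @ x)) * x
```

Applying both updates to the same random weight vectors and patterns produces identical results, which is the point made in the text.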
   Note that what the perceptron produces are class labels; it is therefore adequate
for classification problems. It is interesting to see how the perceptron performs in
the case of the one-dimensional data of Figure 5.2a. In this case the error energy
for the perceptron learning rule can be written:

      E(a, b) = −a + b      if  −a + b ≥ 0
              = −(a + b)    if  a + b < 0
              = 0           in other cases
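This piecewise energy can be sketched directly in code. The version below assumes (as an illustrative placement, not stated explicitly here) two one-dimensional patterns x = −1 with target −1 and x = +1 with target +1, and a discriminant d(x) = a·x + b; each misclassified pattern contributes −t·d(x) to the energy:

```python
def energy(a, b):
    """Perceptron criterion for two 1-D patterns (hypothetical placement:
    x = -1, t = -1 and x = +1, t = +1), discriminant d(x) = a*x + b.
    Each misclassified pattern contributes -t*d(x); correct ones add 0."""
    e = 0.0
    if -a + b >= 0:      # pattern at x = -1 is classified as +1: wrong
        e += -a + b
    if a + b < 0:        # pattern at x = +1 is classified as -1: wrong
        e += -(a + b)
    return e
```

For a = 1, b = 0 both patterns are correctly classified and the energy is zero; flipping the sign of a misclassifies both and the energy becomes positive.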

                              Figure 5.12. Energy surface for the perceptron learning rule in  a one-dimensional
                              two-class situation.



  Figure 5.12 shows this error surface, exemplifying the piecewise linear nature of
the error energy surface for the perceptron learning rule, with jumps whenever a
pattern's classification changes.
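As a small numerical illustration (not from the book), the perceptron rule applied to two such one-dimensional patterns, using the augmented input (x, 1) so that d(x) = a·x + b, reaches a zero-error solution in a few epochs:

```python
def train_perceptron_1d(patterns, epochs=20):
    """Cycle through (x, t) pairs, applying the perceptron rule on the
    augmented input (x, 1); stops when an epoch produces no update,
    i.e. all patterns are correctly classified."""
    a, b = 0.0, 0.0
    for _ in range(epochs):
        changed = False
        for x, t in patterns:
            h = 1.0 if a * x + b >= 0 else -1.0   # step activation
            if h != t:
                a += t * x                         # increment t*x on the weight
                b += t                             # and t*1 on the bias
                changed = True
        if not changed:
            break
    return a, b
```

Running it on the two assumed patterns (x = −1, t = −1) and (x = +1, t = +1) yields weights that classify both correctly, i.e. a point at the bottom of the flat zero-energy region of Figure 5.12.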