Figure 5.13. Regression error S(x) and classification error E(x) with a linear discriminant.



   Figure 5.13 illustrates a discriminant function for a two-dimensional problem. The weights have been written a, b and c for simplicity. A feature vector x is shown with a solid bar representing the target value. For an LMS regression problem, we are interested in minimizing the squared distance S(x) to the discriminant function. For a classification problem using the perceptron, we are interested in the distance E(x) to the discriminant surface, i.e., in assessing whether or not the pattern is on the "right side of the border".
   The simple perceptron learning rule will drive the perceptron to convergence in a finite number of iteration steps if the classes are linearly separable. The reader can find a proof of this interesting result in Bishop (1995), for instance. If the classes are not linearly separable, the perceptron learning rule produces a discriminant that oscillates around the borderline patterns.
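   A minimal sketch of this learning rule follows (not the book's code), assuming NumPy and class labels coded as -1/+1; the function name perceptron_train and the toy data are illustrative. On linearly separable data such as the example below, the loop stops after a finite number of epochs, exactly as the convergence result states.

```python
import numpy as np

def perceptron_train(X, y, eta=1.0, max_epochs=100):
    """Simple perceptron learning rule: for each misclassified pattern x
    with label t in {-1, +1}, update w <- w + eta * t * x.
    Converges in a finite number of steps when the classes are linearly
    separable; otherwise it keeps oscillating around borderline patterns."""
    # Augment each pattern with a constant 1 so the bias is learned as a weight.
    Xa = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xa.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(Xa, y):
            if t * (w @ x) <= 0:   # pattern on the wrong side of the border
                w += eta * t * x   # move the discriminant toward the pattern
                errors += 1
        if errors == 0:            # all patterns correctly classified
            return w
    return w                       # not separable (or max_epochs too small)

# Illustrative linearly separable data: the rule converges in a few epochs.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
print(perceptron_train(X, y))
```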
Figure 5.14. Examples of handwritten   and V on a 7x8 grid, with the respective projections.