
5.10 Support Vector Machines

   Notice that the preceding linear approaches can be viewed as particular cases of the kernel approach, using a linear kernel, K(x_i, x) = x'x_i.
   The connectionist structure of a generalized support vector machine, using a kernel K(x_i, x), is similar to the RBF network structure (see Figure 5.42) for a two-class problem: the hidden-layer neurons compute the kernels, which are then linearly combined (5-107a) to provide the output.
   Figure 5.49 exemplifies an SVM quadratic discrimination, K(x_i, x) = (x'x_i + 1)^2, using two different values of C. The influence of C on the margin width and the quadratic shape is evident. Using C=1000 for the same data, a quadratic solution similar to the C=∞ solution was obtained, with a similar margin and misclassified samples, a clear demonstration that there may be many different "optimal" values for C. The generalization properties of non-linear SVMs are still an open issue.
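The influence of C on a quadratic-kernel SVM can be sketched as follows. This is a minimal illustration using scikit-learn (not a tool referenced in the text), where the kernel K(x_i, x) = (x'x_i + 1)^2 corresponds to a polynomial kernel with degree=2 and coef0=1; the two-cluster data set is an assumption, not the book's data.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical two-class 2-D data (NOT the book's dataset).
rng = np.random.default_rng(0)
class0 = rng.normal(loc=[-1.5, 0.0], scale=0.6, size=(40, 2))
class1 = rng.normal(loc=[1.5, 0.0], scale=0.6, size=(40, 2))
X = np.vstack([class0, class1])
y = np.array([0] * 40 + [1] * 40)

# Quadratic kernel K(x_i, x) = (x'x_i + 1)^2 -> kernel='poly', degree=2, coef0=1.
# A larger C penalizes margin violations more heavily (narrower margin).
for C in (1.0, 100.0):
    clf = SVC(kernel="poly", degree=2, coef0=1.0, C=C).fit(X, y)
    n_sv = clf.support_vectors_.shape[0]
    print(f"C={C:g}: {n_sv} support vectors, "
          f"training accuracy {clf.score(X, y):.2f}")
```

Comparing the support-vector counts and decision boundaries for the two values of C reproduces, on synthetic data, the kind of comparison shown in Figure 5.49.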
   The Support Vector Machine approach can also be applied to regression problems. A description of this topic can be found in Gunn (1997) and Haykin (1999).
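As a brief illustration of SVM regression (epsilon-SVR, the formulation described in the references above), the following sketch fits a noiseless sine curve; the kernel choice, C, and epsilon values are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical 1-D regression data: a sine curve.
X = np.linspace(0, 2 * np.pi, 60).reshape(-1, 1)
y = np.sin(X).ravel()

# epsilon-SVR: training points lying inside the epsilon-tube around
# the regression function incur no loss and are not support vectors.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
pred = svr.predict(X)
print(f"{len(svr.support_)} support vectors, "
      f"max abs error {np.max(np.abs(pred - y)):.3f}")
```

Only the points on or outside the epsilon-tube become support vectors, which is what makes the SVR solution sparse.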
Figure 5.49. SVM quadratic discrimination. (a) C=100, eleven support vectors, 4 misclassified patterns; (b) C=∞, fifteen support vectors, 6 misclassified patterns.




5.11 Kohonen Networks


All the previous neural networks performed supervised classification or regression tasks. Unlike these, Kohonen's self-organising feature map, or Kohonen network for short, constitutes a neural net approach to data clustering. As shown in Figure 5.50, these networks consist of just one layer of output neurons, arranged as a two-dimensional grid. The main goal is to iteratively adjust the weights connecting inputs to outputs, such that in the end these reflect the distance relations among the input patterns.
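The iterative weight adjustment just described can be sketched in NumPy. This is a minimal self-organising map under assumed settings (grid size, learning-rate and neighbourhood schedules are illustrative choices, not the book's): at each step the best-matching neuron is found, and it and its grid neighbours are pulled toward the input pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
grid_h, grid_w, dim = 5, 5, 2                  # assumed 5x5 output grid, 2-D inputs
W = rng.random((grid_h, grid_w, dim))          # one weight vector per output neuron
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

X = rng.random((500, dim))                     # input patterns in [0, 1)^2

n_iter = 2000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    # Best-matching unit: the neuron whose weight vector is closest to x.
    d = np.linalg.norm(W - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Learning rate and neighbourhood radius both decay over time.
    lr = 0.5 * (1 - t / n_iter)
    sigma = 1 + 2 * (1 - t / n_iter)
    # Gaussian neighbourhood on the GRID pulls neurons near the BMU toward x,
    # so grid distance comes to reflect input-space distance.
    g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
    W += lr * g[..., None] * (x - W)
```

After training, neighbouring neurons on the grid hold similar weight vectors, so the map preserves the distance relations among the input patterns, which is the clustering behaviour described above.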