
This linear discriminant is shown in Figure 5.19, together with the discriminant derived in section 4.1.3 using a statistical classifier. The similarity is striking: both discriminants have nearly the same slope. The slight deviation reflects the perceptron's tendency to equalize the misclassifications of the two classes.
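To make the perceptron's behaviour concrete, the following is a minimal training sketch using the classical Rosenblatt update rule; the synthetic Gaussian data, the learning rate eta and the epoch count are illustrative assumptions, not values taken from the text.

import numpy as np

# Minimal perceptron training sketch. The Gaussian data, the learning
# rate eta and the epoch count are illustrative assumptions only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.7, (50, 2)),   # class -1 samples
               rng.normal(+1.0, 0.7, (50, 2))])  # class +1 samples
y = np.hstack([-np.ones(50), np.ones(50)])

w, b, eta = np.zeros(2), 0.0, 0.1
for _ in range(100):                  # fixed number of epochs
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:    # misclassified sample
            w += eta * yi * xi        # Rosenblatt update rule
            b += eta * yi

print("learned discriminant: w =", w, "b =", b)

Because the update only fires on misclassified samples of either class, training tends to pull the boundary until the errors on the two classes balance out, which is the equalizing behaviour observed in Figure 5.19.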



                              5.4  Neural Network Types


The neural networks that we have seen in the preceding sections are very simple discriminant devices, yet capable of performing some interesting tasks, as the examples showed. In fact, with these devices one could in principle succeed in any classification or regression task, provided that one could determine the appropriate transformation functions of the input features as well as the appropriate activation functions. This, however, is a difficult task, and for many problems it may also imply having to work in a very high-dimensional space. What we need is a generic type of network that can be easily trained to solve any task. This is achieved by cascading discriminant units, as shown in the multi-layer perceptron (MLP) structure of Figure 5.20.

Figure 5.20. Multilayer perceptron structure with input features xi and output values zk. An open circle indicates a processing neuron; a solid circle is simply a terminal.




The term multi-layer refers to the existence of several levels, or layers, of weights in the network. In Figure 5.20 there are two layers of weights: one connecting the input neurons (feature vector x) to the so-called hidden neurons (the hidden layer), and another connecting the hidden neurons to the output neurons.
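As an illustration of this two-weight-layer structure, here is a minimal sketch of an MLP forward pass; the sigmoid activation and the layer sizes (two inputs, three hidden neurons, one output) are assumptions chosen for the example, not taken from Figure 5.20.

import numpy as np

def sigmoid(a):
    # Logistic activation, applied elementwise.
    return 1.0 / (1.0 + np.exp(-a))

def mlp_forward(x, W1, b1, W2, b2):
    # First weight layer: input features -> hidden neurons.
    hidden = sigmoid(W1 @ x + b1)
    # Second weight layer: hidden neurons -> output values z.
    return sigmoid(W2 @ hidden + b2)

# Hypothetical sizes: 2 inputs, 3 hidden neurons, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
print(mlp_forward(np.array([0.5, -0.2]), W1, b1, W2, b2))

Each hidden neuron is itself a discriminant unit of the kind seen earlier; cascading them through the second weight layer is what gives the MLP its generic modelling power.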