


vector y), and another connecting these to the output neurons (output vector z). The previous devices (sections 5.1 and 5.3), with only one layer of weights, are also known as single-layer networks. Some authors refer to the input level as the "input layer", and in this sense the network of Figure 5.20 has three layers instead of two. We prefer, however, to associate the idea of layers with the concept of processing levels, and therefore adopt the above convention. It is customary to denote a neural net by the number of units at each level, from input to output, separated by colons. Thus, a net with 6 inputs, 4 hidden neurons in the first layer and 2 output neurons is denoted an MLP6:4:2 net.
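By way of illustration, a minimal NumPy sketch of the forward pass of such an MLP6:4:2 net might look as follows; the sigmoid activation, the random weight values and the variable names are chosen here purely for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# An MLP6:4:2 net: d = 6 inputs, h = 4 hidden neurons, c = 2 output neurons.
d, h, c = 6, 4, 2

# One weight matrix and one bias vector per processing layer (two layers of
# weights; the input level carries no weights of its own).
W1, b1 = rng.standard_normal((h, d)), np.zeros(h)   # input  -> hidden
W2, b2 = rng.standard_normal((c, h)), np.zeros(c)   # hidden -> output

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x):
    y = sigmoid(W1 @ x + b1)   # hidden vector y
    z = sigmoid(W2 @ y + b2)   # output vector z
    return z

x = rng.standard_normal(d)     # an arbitrary input pattern
print(forward(x))              # two outputs in (0, 1)
```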
Given an arbitrary neuron with a d-dimensional input vector s and output r_j, the computation performed at this neuron is:

$$ r_j = f(a_j), \qquad a_j = \sum_{i=1}^{d} w_{ji}\, s_i = \mathbf{w}_j \cdot \mathbf{s}, $$
where w_ji denotes the weight corresponding to the connection of output neuron j to input neuron i (see Figure 5.21). The function f is any activation function, typically one of those described in section 5.2. Note, however, that a linear activation function is of no interest for hidden layers, since a composition of linear functions is itself a linear function, and the whole net would therefore be reducible to the single-layer network of Figure 5.1. The quantity a_j is the so-called neuron activation or post-synaptic potential.
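The computation of a single neuron follows this formula directly; the short sketch below assumes NumPy, a tanh activation and example values chosen only for illustration:

```python
import numpy as np

def neuron_output(w, s, f=np.tanh):
    """Output r_j = f(a_j) of one neuron with weight vector w and input s."""
    a = np.dot(w, s)    # activation a_j: dot product of weight and input vectors
    return f(a)

s = np.array([0.5, -1.0, 2.0])    # a d = 3 dimensional input vector
w = np.array([0.1, 0.4, -0.2])    # the weights w_ji feeding neuron j
print(neuron_output(w, s))        # r_j
```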
For a multi-layer network it is also possible to have the neurons at each layer performing quite distinct tasks, as we will see in the radial basis functions (RBF) and support vector machine (SVM) approaches.















Figure 5.21. A general processing neuron, computing a transformation of the dot product of the weight and input vectors.



Assuming a two-layered network with d-dimensional inputs, h hidden neurons and c outputs, the number of weights, w, that have to be computed, including the biases, is:

$$ w = (d+1)\,h + (h+1)\,c. $$
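This count is easy to verify numerically; for example, a small Python check (with illustrative values) gives 38 weights for the MLP6:4:2 net mentioned earlier:

```python
def n_weights(d, h, c):
    # Each of the h hidden neurons has d weights plus one bias;
    # each of the c output neurons has h weights plus one bias.
    return (d + 1) * h + (h + 1) * c

print(n_weights(6, 4, 2))   # 38 weights for an MLP6:4:2 net
```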