[Figure 3.1: (a) schematic of a single neuron with inputs $x_1, x_2, x_3$, weights $w_{i1}, w_{i2}, w_{i3}$, summation $\Sigma$, and output $y_i$; (b) a Multi-Layer Perceptron with input layer, hidden layer, and output layer.]


Figure 3.1: (a) The McCulloch-Pitts neuron “fires” (output $y_i = 1$, else 0) if the weighted sum $\sum_j w_{ij} x_j$ of its inputs $x_j$ reaches or exceeds a threshold $w_i$. If this binary threshold function is generalized to a non-linear sigmoidal transfer function $g\left(\sum_j w_{ij} x_j - w_i\right)$ (also called activation or squashing function, e.g. $g(\cdot) = \tanh(\cdot)$), the neuron becomes a suitable processing element of the standard (b) Multi-Layer Perceptron (MLP). The input values $x_i$ are made available at the “input layer”. The output of each neural unit is fed forward as input to all neurons of the next layer. In contrast to the standard or single-layer perceptron, the MLP typically has one or several so-called hidden layers of neurons between the input and the output layer.
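
To make the forward computation concrete, the following short Python sketch (not part of the original text; the function names, the choice $g = \tanh$, and the example layer sizes are illustrative assumptions) implements the binary threshold unit, its sigmoidal generalization, and the layer-by-layer feed-forward pass of an MLP.

    import numpy as np

    def mcculloch_pitts(x, w, theta):
        # Binary threshold unit: fires (1) iff the weighted
        # input sum reaches or exceeds the threshold theta.
        return 1 if np.dot(w, x) >= theta else 0

    def sigmoidal_unit(x, w, theta):
        # Graded unit: the hard threshold is replaced by a
        # smooth squashing function, here g = tanh.
        return np.tanh(np.dot(w, x) - theta)

    def mlp_forward(x, layers):
        # Feed-forward pass: each layer's output is fed as input
        # to all units of the next layer. `layers` is a list of
        # (W, theta) pairs, one per hidden/output layer.
        y = x
        for W, theta in layers:
            y = np.tanh(W @ y - theta)
        return y

    # Hypothetical example: 3 inputs -> 4 hidden units -> 2 outputs,
    # with random weights and zero thresholds.
    rng = np.random.default_rng(0)
    layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
              (rng.normal(size=(2, 4)), np.zeros(2))]
    print(mlp_forward(np.array([1.0, 0.5, -0.3]), layers))

Note that stacking layers is only useful with a non-linear $g$: if $g$ were the identity, the composition of layers would collapse into a single linear map.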