
Figure 5.50. Connectionist structure of a Kohonen self-organizing map. The output neurons form a two-dimensional grid.


We will denote the output neurons by z_jk, the index j denoting the position along the horizontal direction of the grid and the index k the position along the vertical direction. The distance d_jk between an input vector x and an output neuron z_jk is computed as:

   d_{jk} = \sum_i (x_i - w_{jki})^2 ,

where w_jki is the weight of the connection of input x_i to output z_jk.
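As a small illustration of this distance computation, the following Python sketch evaluates d_jk for every neuron of the grid at once; the grid dimensions, weight array and input vector are hypothetical stand-ins chosen here, not taken from the book.

   import numpy as np

   # Hypothetical example: a J x K grid of output neurons, each connected
   # to a d-dimensional input; weights[j, k, i] plays the role of w_jki.
   J, K, d = 10, 10, 3
   rng = np.random.default_rng(0)
   weights = rng.uniform(0.0, 1.0, size=(J, K, d))
   x = rng.uniform(0.0, 1.0, size=d)           # one input pattern

   # d_jk = sum_i (x_i - w_jki)^2, evaluated for the whole grid at once.
   dist = np.sum((x - weights) ** 2, axis=-1)  # shape (J, K)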
   Basically, the network training consists of adjusting, at each iteration step, the weights of the neuron that is nearest to the input pattern, called the winning neuron, so that it becomes more similar to the input pattern: the so-called winner-takes-all learning rule. At the same time, in the initial iterations, a set of neighbours of the winning neuron also have their weights adjusted in a similar way.
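Continuing the arrays from the previous sketch, the winning neuron is simply the grid position with minimum distance; the adjustment shown, which moves the winner's weights a fraction eta of the way towards the input, is the usual Kohonen update and is given here only as an assumed placeholder for the formula of step 3 of the algorithm below.

   # The winning neuron is the grid position with minimum distance d_jk.
   j_win, k_win = np.unravel_index(np.argmin(dist), dist.shape)

   # Winner-takes-all adjustment (standard Kohonen rule, assumed here):
   # the winner becomes more similar to the input pattern x.
   eta = 0.1                                   # learning rate
   weights[j_win, k_win] += eta * (x - weights[j_win, k_win])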
   Therefore, the weight adjustment takes place in a neighbourhood of the winning output neuron. The neighbourhood can be large at the beginning of the process and then decreases as the process progresses. A square or hexagonal grid centred at the winning neuron can be used as neighbourhood, the square grid being the more popular. It is usual to use a radius measure to define the neighbourhood size; for a square grid the radius is simply the city-block distance to its centre. During the learning process the neurons compete in order to arrive at the one that most resembles a given input.
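The sketch below, again reusing the arrays defined above, selects the neighbours of the winner by city-block distance and lets the radius shrink over the iterations; the linear radius schedule and the random stand-in inputs are assumptions made for illustration only.

   def neighbourhood(j_win, k_win, r, J, K):
       """Grid positions within city-block distance r of the winner."""
       return [(j, k)
               for j in range(J) for k in range(K)
               if abs(j - j_win) + abs(k - k_win) <= r]

   # Radius large at the start, decreasing as training progresses
   # (a simple linear schedule, assumed for illustration).
   r0, n_iter, eta = 5, 1000, 0.1
   for t in range(n_iter):
       r = max(0, round(r0 * (1 - t / n_iter)))
       x = rng.uniform(0.0, 1.0, size=d)       # stand-in input pattern
       dist = np.sum((x - weights) ** 2, axis=-1)
       j_win, k_win = np.unravel_index(np.argmin(dist), dist.shape)
       for j, k in neighbourhood(j_win, k_win, r, J, K):
           weights[j, k] += eta * (x - weights[j, k])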
   The learning algorithm can be described as follows:
   1. Initialise the weights, w_jki, with random values in a certain interval, and select the neighbourhood radius, r, and the learning rate, η.
   2. Compute d_jk for each output neuron and determine the winning neuron (the one with minimum distance).
   3. For all neurons in the neighbourhood of the winning neuron, adjust the weights as: