
            2. Iterate:

               2.1 Find, for each object z_n in the training set, the most similar
                   neuron w_k^{(i)}:

                        k(z_n) = \arg\min_j \, \| z_n - w_j^{(i)} \|                    (7.31)

                   This is called the best-matching or winning neuron for this input
                   vector.
               2.2 Update the winning neuron and its neighbours using the update
                   rule:

                        w_j^{(i+1)} = w_j^{(i)} + \eta^{(i)} \, h^{(i)}(|k(z_n) - j|) \, (z_n - w_j^{(i)})    (7.32)
               2.3 Repeat 2.1 and 2.2 for all samples z_n in the data set.
               2.4 If the weights in the previous steps did not change significantly,
                   then stop. Else, increment i and go to step 2.1.
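              The iteration above maps directly onto a short program. The following is a
            minimal sketch of the training loop in plain MATLAB, not the implementation
            used elsewhere in this book; the function name som_train, the data convention
            (z is an N-by-D matrix with one sample per row), the initialisation, and the
            decay schedules for the learning rate eta and the scale sigma are illustrative
            assumptions. For brevity, a fixed number of iterations replaces the
            convergence test of step 2.4.

            function w = som_train(z, K, maxiter)
            % Minimal SOM training loop for a one-dimensional grid of K neurons.
            % z: N-by-D data matrix (one sample per row); w: K-by-D neuron weights.
              [N, D] = size(z);
              idx = randperm(N);
              w = z(idx(1:K), :);                    % initialise with K random samples
              for i = 1:maxiter
                eta   = 0.5*(1 - (i-1)/maxiter);            % assumed learning-rate schedule
                sigma = max(K/3*(1 - (i-1)/maxiter), 0.5);  % assumed scale schedule
                for n = 1:N                          % step 2.3: visit every sample
                  d = sum((w - repmat(z(n,:), K, 1)).^2, 2);
                  [~, k] = min(d);                   % step 2.1: winning neuron, Eq. (7.31)
                  gd = abs((1:K)' - k);              % grid distance |k(z_n) - j|
                  h = exp(-gd.^2 / sigma^2);         % Gaussian weighting, Eq. (7.34)
                  w = w + eta*repmat(h, 1, D).*(repmat(z(n,:), K, 1) - w);   % Eq. (7.32)
                end
              end
            end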

            Here \eta^{(i)} is the learning rate and h^{(i)}(|k(z_n) - j|) is a weighting
            function. Both can depend on the iteration number i. This weighting function
            weighs how much a particular neuron in the grid is updated. The term
            |k(z_n) - j| indicates the distance between the winning neuron k(z_n) and
            neuron j, measured over the grid. The winning neuron (for which j = k(z_n))
            will get the maximal weight, because h^{(i)}(\cdot) is chosen such that:

                        h^{(i)}(\cdot) \leq 1 \quad \text{and} \quad h^{(i)}(0) = 1      (7.33)

            Thus, the winning neuron will get the largest update. This update moves
            the neuron in the direction of z_n by the term (z_n - w_j).
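              As a concrete illustration (the value of the learning rate here is an
            assumption chosen for the example), for the winning neuron j = k(z_n) we have
            h^{(i)}(0) = 1, so update (7.32) reduces to

                        w_{k(z_n)}^{(i+1)} = w_{k(z_n)}^{(i)} + \eta^{(i)} \, (z_n - w_{k(z_n)}^{(i)})

            and with, say, \eta^{(i)} = 0.5 the winning neuron moves halfway from its
            current position towards z_n.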
              The other neurons in the grid will receive smaller updates. Since we
            want to preserve the neighbourhood relations only locally, the further a
            neuron is from the winning neuron, the smaller its update should be.
            A commonly used weighting function that satisfies these requirements
            is the Gaussian function:

                        h^{(i)}(x) = \exp\!\left( - \frac{x^2}{\left(\sigma^{(i)}\right)^{2}} \right)      (7.34)
            For this function a suitable scale \sigma^{(i)} over the map should be
            defined. This weighting function can be used for a grid of any dimension
            (not just one-dimensional), when we realize that |k(z_n) - j| means in
            general the distance between the winning neuron k(z_n) and neuron j over
            the grid.
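              To make this concrete for a two-dimensional map, the grid distance and the
            resulting weights of (7.34) can be computed as in the following sketch in
            plain MATLAB (the map size, the scale sigma and the position of the winning
            neuron are assumptions chosen for illustration):

            K1 = 5; K2 = 5;                          % a 5-by-5 map of neurons
            sigma = 1.5;                             % assumed scale sigma^(i)
            [r, c] = ndgrid(1:K1, 1:K2);             % grid coordinates of all neurons
            rk = 3; ck = 4;                          % grid position of the winner k(z_n)
            gd = sqrt((r - rk).^2 + (c - ck).^2);    % |k(z_n) - j| measured over the grid
            h = exp(-gd.^2 / sigma^2);               % Eq. (7.34): 1 at the winner

            The matrix h then contains, for every neuron j, the weight with which it is
            pulled towards z_n in update (7.32).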