Hebbian Learning Rule
The Hebb (1949) learning rule is based on the assumption that if two neighboring neurons are activated and deactivated at the same time (i.e., they operate in the same phase), then the weight connecting these neurons should increase. For neurons operating in the opposite phase, the weight between them should decrease. If there is no signal correlation, the weight should remain unchanged. This assumption can be described by the formula

    ∆w_ij = c x_i o_j                                                    (32.9)

where
  w_ij = weight from the ith to the jth neuron,
  c    = learning constant,
  x_i  = signal on the ith input,
  o_j  = output signal.
The training process usually starts with all weights set to zero. This learning rule can be used for both soft and hard threshold neurons. Since the desired responses of neurons are not used in the learning procedure, this is an unsupervised learning rule. The absolute values of the weights are usually proportional to the learning time, which is undesirable.
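
As a concrete illustration, the following Python sketch applies Eq. (32.9) to a single hard-threshold (bipolar) neuron; the training patterns and the learning constant c = 0.1 are illustrative assumptions, not values from the text.

    import numpy as np

    def hebbian_update(w, x, c=0.1):
        # Hard bipolar threshold neuron: o = +1 if net >= 0, otherwise -1
        net = np.dot(w, x)
        o = 1.0 if net >= 0 else -1.0
        # Eq. (32.9): delta_w_i = c * x_i * o  (unsupervised; no desired response used)
        return w + c * x * o

    # Training usually starts with all weights set to zero
    w = np.zeros(3)
    patterns = [np.array([1.0, -1.0, 1.0]), np.array([1.0, 1.0, -1.0])]
    for x in patterns:
        w = hebbian_update(w, x)
    print(w)   # |w| keeps growing with the number of training steps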

Correlation Learning Rule
The correlation learning rule is based on a principle similar to that of the Hebbian learning rule. It assumes that weights between simultaneously responding neurons should be large and positive, and weights between neurons with opposite reactions should be large and negative. Contrary to the Hebbian rule, the correlation rule is a supervised learning rule: instead of the actual response, o_j, the desired response, d_j, is used for the weight change calculation

    ∆w_ij = c x_i d_j                                                    (32.10)


This training algorithm usually starts with the weights initialized to zero.
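
A minimal sketch of the correlation rule, Eq. (32.10), assuming bipolar training patterns with given desired responses; the training pairs and c = 0.1 are illustrative.

    import numpy as np

    def correlation_update(w, x, d, c=0.1):
        # Eq. (32.10): delta_w_i = c * x_i * d  (supervised; d replaces the actual output o)
        return w + c * x * d

    # Weights are initialized to zero, as in the Hebbian rule
    w = np.zeros(3)
    training_set = [(np.array([1.0, -1.0, 1.0]), +1.0),
                    (np.array([-1.0, 1.0, 1.0]), -1.0)]
    for x, d in training_set:
        w = correlation_update(w, x, d)
    print(w)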

Instar Learning Rule
If input vectors and weights are normalized, or if they have only binary bipolar values (−1 or +1), then the net value has its largest positive value when the weights and the input signals are the same. Therefore, weights should be changed only if they differ from the signals

    ∆w_i = c (x_i − w_i)                                                 (32.11)

Note that the information required for the weight change is taken only from the input signals. This is a very local and unsupervised learning algorithm.
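
The instar update can be sketched in a few lines of Python; the normalized input vector and the learning constant c = 0.5 are illustrative choices.

    import numpy as np

    def instar_update(w, x, c=0.5):
        # Eq. (32.11): delta_w_i = c * (x_i - w_i); only the input signals are needed
        return w + c * (x - w)

    w = np.zeros(3)
    x = np.array([1.0, -1.0, 1.0]) / np.sqrt(3.0)   # normalized input vector
    for _ in range(5):
        w = instar_update(w, x)
    print(w)   # the weight vector moves toward x, maximizing the net value w . x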

Winner Takes All (WTA)
The WTA algorithm is a modification of the instar algorithm in which weights are modified only for the neuron with the highest net value; the weights of the remaining neurons are left unchanged. Sometimes this algorithm is modified in such a way that a few neurons with the highest net values are updated at the same time. Although this is an unsupervised algorithm, because the desired outputs are not known, a "judge" or "supervisor" is still needed to find the winner with the largest net value. The WTA algorithm, developed by Kohonen (1982), is often used for automatic clustering and for extracting statistical properties of input data.
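
A minimal sketch of a WTA layer: the "judge" is an argmax over the net values, and only the winning neuron receives the instar update of Eq. (32.11). The layer size, the data, and the learning constant are illustrative assumptions.

    import numpy as np

    def wta_update(W, x, c=0.5):
        # W is a (neurons x inputs) weight matrix
        winner = np.argmax(W @ x)            # "judge": neuron with the largest net value
        W[winner] += c * (x - W[winner])     # instar update applied to the winner only
        return W

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 2))              # four competing neurons, two inputs
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    data = rng.normal(size=(100, 2))
    data /= np.linalg.norm(data, axis=1, keepdims=True)
    for x in data:
        W = wta_update(W, x)                 # weight vectors drift toward the data directions they win
    print(W)

After training, each weight vector tends to point in the average direction of the inputs it has won, which is the clustering behavior described above.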

