Outstar Learning Rule
The outstar learning rule requires that the weights connected to a certain node be equal to the desired outputs of the neurons connected through those weights:
    \Delta w_{ij} = c(d_j - w_{ij})                                      (32.12)
where d_j is the desired neuron output and c is a small learning constant, which is further decreased during the learning procedure. This is a supervised training procedure, because the desired outputs must be known. Both the instar and outstar learning rules were developed by Grossberg (1969).
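
As an illustration, the following is a minimal Python sketch of the outstar update of Eq. (32.12); the example weights, desired outputs, and the decreasing schedule for c are assumptions made only for this illustration.

import numpy as np

# Outstar rule, Eq. (32.12): the weights fanning out from one node are pulled
# toward the desired outputs d_j of the neurons they feed.
def outstar_update(w, d, c):
    # w: weights from the source node to each target neuron
    # d: desired outputs of those target neurons
    # c: small learning constant, decreased during training
    return w + c * (d - w)

w = np.array([0.2, -0.4, 0.1])   # example initial weights (assumed values)
d = np.array([1.0,  0.0, 0.5])   # desired outputs of the target neurons (assumed)
for step in range(50):
    c = 0.5 / (1.0 + step)       # one possible decreasing schedule for c (assumption)
    w = outstar_update(w, d, c)
# after training, w is close to d, as the rule requires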


Widrow–Hoff LMS Learning Rule
Widrow and Hoff (1960, 1962) developed a supervised training algorithm that allows a neuron to be trained for a desired response. The rule was derived so that the square of the difference between the net value and the desired output value is minimized:

    \mathrm{Error}_j = \sum_{p=1}^{P} (\mathrm{net}_{jp} - d_{jp})^2      (32.13)
where
  Error_j = error for the jth neuron,
  P       = number of applied patterns,
  d_jp    = desired output of the jth neuron when the pth pattern is applied,
  net_jp  = net value given by Eq. (32.2).
This rule is also known as the least mean square (LMS) rule. By calculating the derivative of Eq. (32.13) with respect to w_ij, a formula for the weight change can be found:
    \Delta w_{ij} = c x_i \sum_{p=1}^{P} (d_{jp} - \mathrm{net}_{jp})     (32.14)
Note that the weight change ∆w_ij is the sum of the changes due to each of the individual applied patterns. Therefore, it is possible to correct the weights after each individual pattern is applied. This process is known as incremental updating; in cumulative updating, the weights are changed only after all patterns have been applied. Incremental updating usually leads to a solution faster, but it is sensitive to the order in which the patterns are applied. If the learning constant c is chosen to be small, both methods give the same result. The LMS rule works well for all types of activation functions. The rule tries to force the net value to be equal to the desired value, which is not always what is wanted. It is usually not important what the net value is; what matters is whether the net value is positive or negative. For example, a very large net value with the proper sign produces the correct output yet a large error as defined by Eq. (32.13), even though it may be the preferred solution.
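
A minimal sketch of the LMS update of Eq. (32.14), contrasting cumulative and incremental updating; the input patterns, desired outputs, and learning constant below are assumed values chosen only for illustration, with net computed as the weighted sum of Eq. (32.2).

import numpy as np

# LMS (Widrow-Hoff) rule, Eq. (32.14): dw_ij = c * x_i * sum_p (d_jp - net_jp)
# for a single neuron: X is a P x n matrix of input patterns, d holds the
# desired outputs, w is the weight vector, and net_p = X[p] @ w (Eq. (32.2)).

def lms_cumulative(w, X, d, c):
    # weights are changed once, after all P patterns have been applied
    net = X @ w
    return w + c * X.T @ (d - net)

def lms_incremental(w, X, d, c):
    # weights are corrected after each individual pattern
    for x_p, d_p in zip(X, d):
        net_p = x_p @ w
        w = w + c * x_p * (d_p - net_p)
    return w

X = np.array([[1.0, 0.5], [0.2, -1.0], [0.8, 0.3]])   # assumed input patterns
d = np.array([1.0, -1.0, 1.0])                        # assumed desired outputs
c = 0.05                                              # small learning constant
w = np.zeros(2)
for _ in range(200):                                  # repeated presentation of all patterns
    w = lms_incremental(w, X, d, c)
# with a small c, lms_cumulative converges to essentially the same weights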

Linear Regression
The LMS learning rule requires hundreds or thousands of iterations of Eq. (32.14) before it converges to the proper solution. Using the linear regression rule, the same result can be obtained in only one step.
  Considering one neuron and using vector notation for the set of input patterns X applied through the weight vector w, the vector of net values net is calculated using

    \mathbf{X}\mathbf{w} = \mathbf{net}                                   (32.15)
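
The one-step idea can be sketched as follows: substituting the desired values for net in Eq. (32.15) gives an overdetermined linear system, which can be solved for w by a least-squares (pseudoinverse) computation. The arrays below are assumed example data, not values from the text.

import numpy as np

# Linear regression: solve X w = d for w in a single step instead of
# iterating the LMS update of Eq. (32.14).
X = np.array([[1.0, 0.5], [0.2, -1.0], [0.8, 0.3]])   # P x n matrix of input patterns (assumed)
d = np.array([1.0, -1.0, 1.0])                        # desired net values (assumed)

# least-squares solution; equivalent to w = pinv(X) @ d
w, *_ = np.linalg.lstsq(X, d, rcond=None)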


