
Table 5.2. Neural net learning rules, with the weight adjustment formulas and examples of networks using the rules.

    Rule                 Weight adjustment                              Network type
    Least Mean Square    $\Delta w = -\eta(w^T x_i - t_i)\,x_i$         MLP
    Perceptron           $\Delta w = -\eta(h(w^T x_i) - t_i)\,x_i$      Perceptron
    Hebb                 $\Delta w_{ij} = \eta\, x_{k,i}\, x_{k,j}$     Hopfield
    Winner-takes-all     $\Delta w_{ij} = -\eta(w_{ij} - x_j)$          Kohonen

As there is a large diversity of neural nets, with various architectures and different types of processing neurons, it is no surprise that there are also many types of learning rules for the weight adjustment process. Table 5.2 shows some of these learning rules.
The LMS and perceptron learning rules, as previously described, consist in adding a corrective increment proportional to the wrongly classified pattern and to its deviation (error) from the target value.
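As a minimal sketch (not from the book), the two updates can be written in Python with NumPy; the function names, the learning rate value, and the hard-limit activation h with outputs in {-1, +1} are illustrative assumptions:

```python
import numpy as np

def lms_update(w, x, t, eta=0.1):
    # LMS rule: delta w = -eta * (w^T x - t) * x
    return w - eta * (w @ x - t) * x

def perceptron_update(w, x, t, eta=0.1):
    # Perceptron rule: delta w = -eta * (h(w^T x) - t) * x,
    # with h a hard-limit activation yielding -1 or +1 (assumed form)
    h = 1.0 if w @ x >= 0 else -1.0
    return w - eta * (h - t) * x

# Illustrative one-step usage on a single training pattern
w = np.zeros(3)
x = np.array([1.0, 0.5, -1.0])  # input pattern (bias folded into x)
t = 1.0                         # target value
w = lms_update(w, x, t)
```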
The Hebb learning rule, one of the earliest and simplest learning rules, is based on the idea of reinforcing the connection weight between two neurons if they are both "on" (+1) at the same time. Using a corrective increment proportional to the product of the respective neuron outputs reinforces the connection weight when the neurons are both "on" (+1) or both "off" (-1) at the same time.
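A minimal sketch of this rule in outer-product form, assuming NumPy and bipolar (+1/-1) patterns; zeroing the diagonal (no self-connections) follows the usual Hopfield convention:

```python
import numpy as np

def hebb_weights(patterns):
    # Hebb rule in outer-product form: w_ij = sum_k x_{k,i} * x_{k,j}
    # patterns: (p, n) array of bipolar (+1/-1) patterns to store
    W = patterns.T @ patterns
    np.fill_diagonal(W, 0)  # Hopfield convention: no self-connections
    return W

# Illustrative usage: store two bipolar patterns
patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]])
W = hebb_weights(patterns)
```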
The winner-takes-all rule is characteristic of a class of networks exhibiting competition among the neurons in order to arrive at a decision. In the case of the Kohonen network, the decision is made by determining which neuron best "represents" a certain input pattern. The weight increment reflects the "distance" of the current weight value from the input value.
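A minimal sketch of one winner-takes-all step, assuming NumPy, a Euclidean distance competition, and an illustrative learning rate; only the winning neuron's weights move toward the input, matching the table entry for the Kohonen network:

```python
import numpy as np

def wta_update(W, x, eta=0.1):
    # Each row of W holds one neuron's weight vector; the winner is the
    # neuron whose weights are closest (Euclidean) to the input x.
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    # Only the winner is updated: delta w = -eta * (w - x)
    W[winner] -= eta * (W[winner] - x)
    return W

# Illustrative usage: three competing neurons, one 2-D input pattern
W = np.random.rand(3, 2)
x = np.array([0.2, 0.8])
W = wta_update(W, x)
```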
An introductory taxonomy and description of basic architectures and learning rules can be found in Lippmann (1987). A detailed description of these matters can be found in Fausett (1994).
There is a close resemblance and relation between some neural network approaches and the statistical approaches described in the previous chapter, as summarized in Table 5.3. This resemblance will become clear when we present, in the following sections, the neural nets listed in this table.



Table 5.3. Relations between neural net and statistical approaches.

    NN approach                    MLP                   RBF              KFM
    Related statistical approach   Bayesian classifier   Parzen window    k-means clustering