6.1 TRAINING A NETWORK OF HEBBIAN-LMS NEURONS
The method for training a single neuron and its synapses described above can also be used to train neural networks. The networks could be layered structures, or they could be interconnected in random configurations like a "rat's nest." Hebbian-LMS will work with all such configurations. For simplicity, consider a layered network like the one shown in Fig. 1.12, where the Hebbian-LMS neurons and their synapses are represented by double circles.
The example of Fig. 1.12 is a fully connected feedforward network. A set of input vectors is applied repetitively, periodically, or in random sequence. All of the synaptic weights are initially set to random values, and adaptation commences by applying the Hebbian-LMS algorithm independently to every neuron and its input synapses. The learning process is totally decentralized. All of the synapses could be adapted simultaneously, so the speed of convergence for the entire network would be the same as that of a single neuron and its input synapses. If, on the other hand, the first layer were trained until convergence, then the second layer, then the third, the convergence time would be roughly three times that of a single neuron and its synapses. Training the network all at once, with totally parallel operation, would therefore be faster.
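To make the decentralized procedure concrete, here is a minimal Python sketch of a layered network in which every neuron adapts by a Hebbian-LMS rule on each presentation. It is a sketch, not the authors' implementation: the error form e = SGM(SUM) − γ·(SUM) and the update W ← W + 2μeX are assumed from the single-neuron algorithm described above, tanh stands in for the sigmoid SGM, and the layer sizes, μ, and γ (chosen below the unit initial slope of tanh) are arbitrary illustrative choices.

import numpy as np

def sgm(s):
    # Sigmoidal function; tanh is assumed as the sigmoid here.
    return np.tanh(s)

class HebbianLMSLayer:
    # One layer of Hebbian-LMS neurons. Each neuron adapts using only its
    # own inputs and its own error: no teacher, no signal from other layers.
    def __init__(self, n_in, n_out, mu=0.01, gamma=0.5, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.W = rng.uniform(-0.1, 0.1, (n_out, n_in))  # random initial weights
        self.mu, self.gamma = mu, gamma

    def adapt(self, x):
        s = self.W @ x                          # (SUM) of every neuron
        e = sgm(s) - self.gamma * s             # Hebbian-LMS error (assumed form)
        self.W += 2 * self.mu * np.outer(e, x)  # LMS-style weight update
        return sgm(s)                           # output feeds the next layer

# Three fully connected feedforward layers, as in Fig. 1.12 (sizes arbitrary).
rng = np.random.default_rng(0)
layers = [HebbianLMSLayer(8, 8, rng=rng),
          HebbianLMSLayer(8, 8, rng=rng),
          HebbianLMSLayer(8, 4, rng=rng)]

# A fixed set of input vectors, fewer than their dimension so that they are
# almost surely linearly independent, applied repetitively.
patterns = rng.uniform(-1.0, 1.0, (6, 8))
for _ in range(2000):
    for x in patterns:
        for layer in layers:        # every layer adapts on every presentation
            x = layer.adapt(x)

Since adapt uses only quantities local to the layer, nothing in the algorithm forces the sequential loop above; in parallel hardware all synapses could adapt simultaneously, which is what makes whole-network convergence as fast as single-neuron convergence.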
If the input patterns were linearly independent vectors, the outputs of the first-layer neurons would be binary after convergence. Since the input synapses of each of the first-layer neurons were set randomly and independently, the outputs would differ from neuron to neuron. After convergence, the outputs of the second-layer neurons would also be binary, but different from those of the first layer. The outputs of the third layer would likewise be binary after convergence.
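Continuing the sketch above, this binarization can be checked by propagating the training patterns through the converged network without adapting. With the assumed tanh sigmoid and γ = 0.5, the stable operating points are the two nonzero intersections of the line γ·(SUM) with the sigmoid, at (SUM) ≈ ±1.92, so the outputs cluster near the two values ±0.96: binary in the sense of taking one of only two values.

def forward(layers, x):
    # Propagate without adapting.
    for layer in layers:
        x = sgm(layer.W @ x)
    return x

outputs = np.array([forward(layers, x) for x in patterns])
print(np.round(outputs, 2))   # entries cluster near the two stable
                              # sigmoid values, roughly +/-0.96 here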
FIGURE 1.12
An example of a layered neural network.