                         fired together. This might seem strange. What purpose would nature fulfill with such
                         a learning algorithm?
                            In his book, Hebb actually said: “When an axon of cell A is near enough to excite
                         a cell B and repeatedly or persistently takes part in firing it, some growth process or
                         metabolic change takes place in one or both cells such that A’s efficiency, as one of
                         the cells firing B, is increased.”
"Fire together, wire together" is a simplification of this. "Wire together" means increase the synaptic weight. "Fire together" is not exactly what Hebb said, but some researchers have taken it literally and believe that information is carried in the timing of each activation pulse. Some believe that the precise timing of presynaptic and postsynaptic firings has an effect on synaptic weight changes. There is some evidence for these ideas [2-4], but they remain controversial.
                            Neuron-to-neuron signaling in the brain is done with pulse trains. This is AC
                         coupling and is one of nature’s “good ideas,” avoiding the effects of DC level drift
                         that could be caused by the presence of fluids and electrolytes in the brain. We
                         believe that the output signal of a neuron is the neuron’s firing rate as a function
                         of time.
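As a concrete, if simplified, illustration of this firing-rate view, the short sketch below (Python) treats the output signal as a pulse count in a sliding window. The window length and spike times are illustrative assumptions of ours, not values taken from this chapter.

    # Sketch: a neuron's output signal taken as its firing rate over time.
    # The 100 ms window and the spike times are illustrative assumptions.

    def firing_rate(spike_times, t, window=0.1):
        """Spikes per second in the `window` seconds ending at time t."""
        count = sum(1 for s in spike_times if t - window < s <= t)
        return count / window

    spikes = [0.01, 0.03, 0.07, 0.12, 0.15, 0.21]  # hypothetical spike times (s)
    print(firing_rate(spikes, t=0.15))             # 3 pulses in 0.1 s -> 30.0 spikes/s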
Neuron-to-neuron signaling in computer-simulated artificial neural networks is, in most cases, done with DC levels. If a static input pattern vector is presented, the neuron's output is an analog DC level that remains constant as long as the input pattern vector is applied. That analog output can be weighted by a synapse and applied as an input to another, "postsynaptic" neuron in a layered or otherwise interconnected network.
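To make the contrast with pulse-train signaling concrete, here is a minimal sketch of DC-level signaling between two simulated neurons. The weights, input pattern, and choice of sigmoid are our own illustrative assumptions, not values from the chapter.

    import numpy as np

    def sigmoid(s):
        return np.tanh(s)  # illustrative choice of squashing function

    x = np.array([0.5, -1.0, 0.25])     # static input pattern vector
    w_pre = np.array([0.8, 0.3, -0.6])  # presynaptic neuron's weights
    y_pre = sigmoid(w_pre @ x)          # constant analog "DC level" output

    w_syn = 0.4                         # synaptic weight to the postsynaptic neuron
    post_input = w_syn * y_pre          # one weighted input line of that neuron

As long as x is held fixed, y_pre and post_input stay constant, which is the DC-level behavior described above.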
The purpose of this chapter is to review a new learning algorithm that we call Hebbian-LMS [5]. It is an implementation of Hebb's teaching by means of the LMS algorithm of Widrow and Hoff. With the Hebbian-LMS algorithm, unsupervised or autonomous learning takes place locally, in the individual neuron and its synapses, and when many such neurons are connected in a network, the entire network learns autonomously. One might ask, "What does it learn?" This question is taken up below, where applications are presented.
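For orientation, the sketch below follows one published form of the Hebbian-LMS update, in which the neuron's local "error" is the difference between its sigmoidal output and a scaled copy of its linear sum. The step size mu, the scaling gamma, the choice of sigmoid, and the random stand-in data are our assumptions, not values specified on this page.

    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, mu, gamma = 8, 0.01, 0.5
    w = rng.normal(scale=0.1, size=n_inputs)   # initial synaptic weights

    def sgm(s):
        return np.tanh(s)                      # sigmoid producing the neuron's output

    for _ in range(1000):                      # unsupervised: no desired response
        x = rng.normal(size=n_inputs)          # input pattern (stand-in data)
        s = w @ x                              # linear sum at the neuron
        error = sgm(s) - gamma * s             # local "error" of Hebbian-LMS
        w += 2 * mu * error * x                # LMS-style weight update

Every quantity in the update is available at the neuron itself, which is what makes the learning local and autonomous in the sense described above.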
There is another question that can be asked: "Should we believe in Hebbian learning? Did Hebb arrive at this idea by doing definitive biological experiments, by 'getting his hands wet'?" The answer is no. The idea came to him by intuitive reasoning. Like Newton's theory of gravity, Einstein's theory of relativity, and Darwin's theory of evolution, it was a thought experiment propounded long before modern knowledge and instrumentation could challenge it, refute it, or verify it. Hebb described synapses and synaptic plasticity, but how synapses and neurotransmitters worked was unknown in his time. So far, no one has contradicted Hebb, except on some details. For example, learning with a literal "fire together, wire together" rule would cause the synaptic weights only to increase, until all of them reached saturation. That would make an uninteresting neural network, and nature would not do this. Gaps in the Hebbian learning rule will need to be filled while keeping Hebb's basic idea in mind; well-working adaptive algorithms will be the result.
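The saturation argument can be illustrated with a toy simulation of a literal, increase-only "fire together, wire together" rule; all constants here are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    w = np.full(8, 0.1)                          # initial synaptic weights
    eta, w_max = 0.05, 1.0                       # learning rate, saturation level

    for _ in range(500):
        x = rng.random(8)                        # nonnegative presynaptic firing rates
        y = max(w @ x, 0.0)                      # nonnegative postsynaptic activity
        w = np.minimum(w + eta * y * x, w_max)   # weights can only grow

    print(w)                                     # every weight ends up pinned at w_max

In the Hebbian-LMS sketch above, by contrast, the error term can take either sign, so the weights can both grow and shrink rather than drifting to saturation.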