was always rung before the dog was fed, the salivation response became associated with the new stimulus, the acoustic signal. This fundamental form of associative learning has become known under the name classical conditioning. At the beginning of the twentieth century it was debated whether the conditioned reflex in Pavlov's dogs was a stimulus–response (S-R) or a stimulus–stimulus (S-S) association between the perceptual stimuli, here taste and sound. Later it became apparent that at the level of the nervous system this distinction fades away, since both cases refer to associations between neural representations.
The fine structure of the nervous system could be investigated after staining techniques for brain tissue had become established (Golgi and Ramón y Cajal). They revealed that neurons are highly interconnected with other neurons through their tree-like extremities, the dendrites and axons (comparable to input and output structures). D.O. Hebb (1949) postulated that the synaptic junction from neuron A to neuron B is strengthened each time A is activated simultaneously with, or shortly before, B. Hebb's rule explained conditioned learning at a qualitative level and has since influenced many other, mathematically formulated learning models. The most prominent ones are probably the perceptron, the Hopfield model, and the Kohonen map. They are, among other neural network approaches, characterized in chapter 3, which discusses learning from the standpoint of an approximation problem: how can an efficient mapping be found that solves the desired learning task? Chapter 3 also explains Kohonen's “Self-Organizing Map” procedure and techniques to improve the learning of continuous, high-dimensional output mappings.
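As an illustration, Hebb's qualitative postulate is commonly cast in the following textbook form (a standard formulation, not Hebb's original wording):

\[ \Delta w_{AB} \;=\; \eta \, a_A \, a_B , \]

where \(a_A\) and \(a_B\) denote the activities of neurons A and B, \(w_{AB}\) the strength of the synaptic junction from A to B, and \(\eta\) a small learning rate. The weight grows whenever both neurons are active together, which captures the associative character of conditioning described above.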
The appearance and growing availability of computers became a further major influence on the understanding of learning. Several main reasons can be identified:
First, the computer made it possible to isolate the mechanisms of learning from the wet, biological substrate. This enabled learning algorithms to be tested and developed in simulation.
Second, the computer helped to carry out and evaluate neurophysiological, psychophysical, and cognitive experiments, which revealed many more details about information processing in the biological world.
Third, the computer facilitated bringing the principles of learning to technical applications. This helped to attract even more interest and opened up important resources, resources which set up a broad interdisci-