Chapter 3
Artificial Neural Networks
This chapter discusses several issues that are pertinent to the PSOM algo-
rithm (which is described more fully in Chap. 4). Much of its motivation
derives from the field of neural networks. After a brief historical overview
of this rapidly expanding field, we attempt to order some of the prominent
network types in a taxonomy of important characteristics. We then pro-
ceed to discuss learning from the perspective of an approximation prob-
lem and identify several problems that are crucial for rapid learning. Fi-
nally we focus on the so-called “Self-Organizing Maps”, which emphasize
the use of topology information for learning. Their discussion paves the
way for Chap. 4 in which the PSOM algorithm will be presented.
3.1 A Brief History and Overview
of Neural Networks
The field of artificial neural networks has its roots in the early work of
McCulloch and Pitts (1943). Fig. 3.1a depicts their proposed model of an
idealized biological neuron with a binary output. The neuron “fires” if the
weighted sum $\sum_j w_{ij} x_j$ (synaptic weights $w$) of the inputs $x_j$ (dendrites)
reaches or exceeds a threshold $w_i$. In the sixties, the Adaline (Widrow
and Hoff 1960), the Perceptron, and the Multi-Layer Perceptron (“MLP”,
see Fig. 3.1b) were developed (Rosenblatt 1962). Rosenblatt demon-
strated the convergence conditions of an early learning algorithm for the
one-layer Perceptron. The learning algorithm prescribed a rule for itera-
tively adjusting the weights.
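To make the threshold rule above concrete, a minimal Python sketch of such a binary threshold unit could look as follows; the function name, weights, and threshold value are illustrative choices and not taken from the text:

    def threshold_neuron(x, w, theta):
        # Weighted sum of the inputs, sum_j w_j * x_j
        s = sum(w_j * x_j for w_j, x_j in zip(w, x))
        # The unit "fires" (output 1) if the sum reaches or exceeds the threshold
        return 1 if s >= theta else 0

    # Example: with weights [1, 1] and threshold 2 the unit realizes a logical AND
    print(threshold_neuron([1, 1], [1.0, 1.0], 2.0))  # -> 1
    print(threshold_neuron([1, 0], [1.0, 1.0], 2.0))  # -> 0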