where
x = input vector,
s_i = stored pattern representing the center of the ith cluster,
σ_i = radius of the ith cluster.
Note that the behavior of this “neuron” differs significantly from that of the biological neuron. In this “neuron,” excitation is not a function of the weighted sum of the input signals. Instead, the distance between the input and the stored pattern is computed. If this distance is zero, the neuron responds with a maximum output magnitude equal to one. This neuron is capable of recognizing certain patterns and generating output signals that are functions of the similarity between the input and the stored pattern. Such a neuron is much more powerful than the neurons used in backpropagation networks, and as a consequence a network made of such neurons is also more powerful.
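To make this concrete, the following minimal sketch computes such a neuron's output, assuming a Gaussian radial function; the function name, the factor of 2 in the exponent, and the test values are illustrative, not taken from the text:

```python
import numpy as np

def rbf_neuron(x, s_i, sigma_i):
    """Distance-based 'neuron': responds with 1 when the input x exactly
    matches the stored pattern s_i and decays toward 0 as x moves away."""
    d = np.linalg.norm(x - s_i)              # distance between input and stored pattern
    return np.exp(-d**2 / (2 * sigma_i**2))  # Gaussian radial function

s = np.array([1.0, -1.0, 0.5])
print(rbf_neuron(s, s, sigma_i=0.5))         # exact match -> maximum output 1.0
```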
If the input signal is the same as a pattern stored in a neuron, then this neuron responds with 1 while the remaining neurons respond with 0, as illustrated in Fig. 32.16. Thus, the output signals are exactly equal to the weights coming out of the active neuron. In this way, if the number of neurons in the hidden layer is large, any input–output mapping can be obtained. Unfortunately, it may also happen that for some patterns several neurons in the first layer respond with a nonzero signal. For a proper approximation, the sum of all signals from the hidden layer should be equal to one. To meet this requirement, the output signals are often normalized, as shown in Fig. 32.16.
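A short sketch of this normalization step, reusing the hypothetical rbf_neuron above with illustrative centers, radii, and output weights:

```python
import numpy as np

def rbf_neuron(x, s_i, sigma_i):
    d = np.linalg.norm(x - s_i)
    return np.exp(-d**2 / (2 * sigma_i**2))

# Illustrative stored patterns (cluster centers), radii, and output weights
centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 1.0])]
radii   = [0.5, 0.5, 0.5]
w_out   = np.array([0.2, 0.9, -0.4])      # weights coming out of the hidden neurons

x = np.array([0.6, 0.8])
h = np.array([rbf_neuron(x, s, r) for s, r in zip(centers, radii)])
h = h / h.sum()                           # normalize: hidden signals now sum to one
y = w_out @ h                             # output is a weighted sum of the normalized signals
print(h.sum(), y)
```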
Radial basis networks can be designed or trained. Training is usually carried out in two steps. In the first step, the hidden layer is trained, usually in the unsupervised mode, by choosing the best patterns for cluster representation. An approach similar to that used in the WTA architecture can be used. Also in this step, the radii σ_i must be found so that the clusters overlap properly.
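One possible sketch of this first step, substituting plain k-means clustering for the WTA-style competitive selection mentioned in the text, and setting each radius from the distance to the nearest other center; the overlap factor is an assumption:

```python
import numpy as np

def choose_centers_and_radii(X, k, iters=50, overlap=1.0, seed=0):
    """Unsupervised step: pick k cluster centers (plain k-means here, as a
    stand-in for the WTA-style competitive selection described in the text)
    and choose each radius so that neighboring clusters overlap."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every sample to its nearest center ...
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        # ... and move each center to the mean of its cluster
        for i in range(k):
            if np.any(labels == i):
                centers[i] = X[labels == i].mean(axis=0)
    # radius = overlap factor times the distance to the nearest other center
    dist = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    np.fill_diagonal(dist, np.inf)
    radii = overlap * dist.min(axis=1)
    return centers, radii
```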
The second step of training is the error backpropagation algorithm carried out only for the output layer. Since this is a supervised algorithm for one layer only, the training is very rapid, 100–1000 times faster than in the backpropagation multilayer network. This makes the radial basis function network very attractive. Also, this network can be easily modeled using computers; however, its hardware implementation would be difficult.
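A sketch of this second, supervised step, assuming a linear output layer trained by the delta rule on precomputed hidden-layer activations H and targets T; all names are illustrative:

```python
import numpy as np

def train_output_layer(H, T, lr=0.1, epochs=200):
    """Supervised step: gradient descent on the output weights only.
    H : (n_samples, n_hidden) normalized hidden-layer activations
    T : (n_samples,) desired network outputs"""
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        y = H @ w                          # linear output layer
        w += lr * H.T @ (T - y) / len(T)   # delta-rule (LMS) weight update
    return w
```

Because only one linear layer is adapted, the same weights could also be obtained in closed form by least squares, which is one reason this step is so much faster than full multilayer backpropagation.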
32.6 Recurrent Neural Networks
In contrast to feedforward neural networks, recurrent networks allow neuron outputs to be connected back to their inputs. Thus, signals in the network can circulate continuously. Until recently, only a limited number of recurrent neural networks had been described.
Hopfield Network
The single-layer recurrent network was analyzed by Hopfield (1982). This network, shown in Fig. 32.17, has unipolar hard-threshold neurons with outputs equal to 0 or 1. Weights are given by a symmetric square matrix W with zero elements on the main diagonal (w_ij = 0 for i = j). The stability of the system is usually analyzed by means of the energy function
E = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} w_{ij} v_i v_j    (32.41)
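As a small illustration of Eq. (32.41) and of the asynchronous updating discussed next, here is a sketch for unipolar 0/1 neurons; the symmetric weight matrix and the zero firing threshold are illustrative assumptions:

```python
import numpy as np

def energy(W, v):
    """Energy of Eq. (32.41): E = -1/2 * sum_i sum_j w_ij * v_i * v_j."""
    return -0.5 * v @ W @ v

def async_step(W, v, rng):
    """Asynchronous mode: change only one randomly chosen output per cycle."""
    i = rng.integers(len(v))
    v[i] = 1.0 if W[i] @ v > 0 else 0.0   # unipolar hard-threshold neuron (threshold 0 assumed)
    return v

rng = np.random.default_rng(0)
W = np.array([[ 0.,  1., -2.],            # symmetric weights, zero main diagonal
              [ 1.,  0.,  1.],
              [-2.,  1.,  0.]])
v = np.array([1., 0., 1.])
for _ in range(10):
    v = async_step(W, v, rng)             # energy is non-increasing under these updates
print(v, energy(W, v))
```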
It has been proved that during signal circulation the energy E of the network decreases and the system converges to stable points. This is especially true when the outputs of the network are updated in the asynchronous mode, meaning that at a given cycle only one randomly chosen output is changed to its required value. Hopfield also proved that the stable points to which the system converges can be