Page 955 - The Mechatronics Handbook
[Figure: a single neuron with inputs x1 and x2 and a +1 bias input; its weights are w1 = 1/x10, w2 = 1/x20, and w3 = −1, so its decision line crosses the axes at (x10, 0) and (0, x20).]
                       FIGURE 32.6  Illustration of the property of linear separation of patterns in the two-dimensional space by a single
                       neuron.
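The separating line in Fig. 32.6 can be sketched in code. This is a minimal illustration (the hard-threshold activation and the sample values of x10 and x20 are assumptions, not from the text): with weights w1 = 1/x10, w2 = 1/x20 and w3 = −1 on the +1 bias input, the neuron fires exactly when x1/x10 + x2/x20 − 1 > 0, i.e., for points above the line through (x10, 0) and (0, x20).

```python
def neuron(x1, x2, x10=2.0, x20=4.0):
    """Hard-threshold neuron: output 1 above the separating line, 0 otherwise.

    The weights implement the line x1/x10 + x2/x20 = 1 from Fig. 32.6;
    x10 and x20 are its axis intercepts (values here are illustrative).
    """
    w1, w2, w3 = 1.0 / x10, 1.0 / x20, -1.0
    net = w1 * x1 + w2 * x2 + w3 * 1.0  # weighted sum, including the +1 bias input
    return 1 if net > 0 else 0

print(neuron(3.0, 3.0))  # point above the line -> 1
print(neuron(0.5, 0.5))  # point below the line -> 0
```

Any single neuron of this kind can only carve the plane with one straight line, which is the linear-separation property the figure illustrates.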

[Figure: a network with INPUTS feeding HIDDEN LAYER #1, then HIDDEN LAYER #2 (AND neurons), then an OR neuron producing the OUTPUT; +1 bias inputs feed the layers.]
FIGURE 32.7  An example of a three-layer neural network with two inputs for classification of three different clusters into one category. This network can be generalized and used to solve all classification problems.

Neurons in the second hidden layer perform the AND operation, as shown in Fig. 32.1(b), and output neurons perform the OR operation, as shown in Fig. 32.1(a), for each category. The linear separation property of neurons makes some problems especially difficult for neural networks, such as the exclusive OR, parity computation over several bits, or separating patterns lying on two neighboring spirals.
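The exclusive-OR case shows concretely why a hidden layer is needed. A minimal sketch (the particular weights and thresholds are illustrative, not taken from the text): no single threshold neuron can separate XOR's classes with one line, but two hidden neurons each cut off one corner of the unit square, and an OR output neuron combines them, exactly the layered AND/OR construction described above.

```python
def step(net):
    """Hard-threshold activation: 1 if the weighted sum is positive."""
    return 1 if net > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 - x2 - 0.5)   # hidden neuron: fires only for (1, 0)
    h2 = step(x2 - x1 - 0.5)   # hidden neuron: fires only for (0, 1)
    return step(h1 + h2 - 0.5)  # output neuron: OR of the two hidden neurons

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Each hidden neuron is still only a linear separator; it is the composition of layers that produces the nonconvex decision region XOR requires.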
The feedforward neural network is also used for nonlinear transformation (mapping) of a multidimensional input variable into another multidimensional output variable. In theory, any input–output mapping should be possible if the neural network has enough neurons in its hidden layers (the size of the output layer is set by the number of required outputs). In practice, this is not an easy task. Presently, there is no satisfactory method for deciding how many neurons should be used in the hidden layers; usually, this is found by trial and error. In general, the more neurons used, the more complicated the shapes that can be mapped. On the other hand, networks with large numbers of neurons lose their ability to generalize, and it is more likely that such networks will also map the noise supplied to the input.
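The mapping view can be sketched as a forward pass. This is a minimal illustration (random, untrained weights; the tanh activation, layer sizes, and names are assumptions for the sketch): the input dimension is fixed by the problem, the output dimension by the number of required outputs, and the hidden-layer size is the free parameter chosen by trial and error.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

def mlp_forward(x, n_hidden=8, n_out=2):
    """Forward pass of a one-hidden-layer feedforward network.

    n_hidden is the trial-and-error knob; n_out is dictated by the task.
    Weights are random here -- training them is the subject of Section 32.4.
    """
    n_in = x.shape[-1]
    W1 = rng.standard_normal((n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.standard_normal((n_hidden, n_out))
    b2 = np.zeros(n_out)
    h = np.tanh(x @ W1 + b1)  # nonlinear hidden layer
    return h @ W2 + b2        # linear output layer

y = mlp_forward(np.array([0.3, -1.2, 0.7]), n_hidden=8, n_out=2)
print(y.shape)  # maps a 3-dimensional input to a 2-dimensional output
```

Increasing `n_hidden` lets the network represent more complicated mappings, at the cost of the overfitting risk described above.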


                       32.4 Learning Algorithms for Neural Networks

As in biological neurons, the weights in artificial neurons are adjusted during a training procedure. Many learning algorithms have been developed, but only a few are suitable for multilayer neural networks. Some use only local signals in the neurons, while others require information from the outputs; some require a supervisor who knows what the outputs should be for the given patterns, while other, unsupervised algorithms need no such information. Common learning rules are described in the following sections.
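The supervised case can be sketched before the specific rules are introduced. This is a minimal, generic illustration (a perceptron-style update with an illustrative learning rate; it is not any one of the rules described in the following sections): the supervisor supplies the desired output for each pattern, and each weight is nudged in proportion to the output error.

```python
def train_perceptron(patterns, targets, n_epochs=20, lr=0.5):
    """Supervised weight adjustment for a single threshold neuron.

    For each training pattern the supervisor's desired output d is compared
    with the actual output y, and the weights move to reduce the error.
    """
    n = len(patterns[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(n_epochs):
        for x, d in zip(patterns, targets):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = d - y  # supervisor provides the desired output d
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the (linearly separable) AND function from labeled examples.
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
```

An unsupervised rule, by contrast, would update the weights from the input patterns alone, with no desired-output signal in the update.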


                       ©2002 CRC Press LLC