Figure 5.42. Radial basis functions network, with kernel functions φi.

The second layer weights are determined using the pseudo-inverse technique described in section 5.1.1:

   $\mathbf{w} = (\Phi'\Phi)^{-1}\Phi'\mathbf{T} = \Phi^{*}\mathbf{T}$.                                      (5-84)

Note that formula (5-82a) does not apply, as there are fewer functions than points.
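As a minimal sketch of this step, assuming NumPy and a design matrix Phi whose columns hold the kernel activations plus a bias column (the function name is only illustrative), the second-layer weights of (5-84) can be obtained directly from the Moore-Penrose pseudo-inverse:

```python
import numpy as np

def rbf_output_weights(Phi, T):
    """Second-layer weights by the pseudo-inverse, as in (5-84):
    w = (Phi'Phi)^(-1) Phi'T = Phi* T."""
    # np.linalg.pinv returns the Moore-Penrose pseudo-inverse, which is
    # numerically safer than forming (Phi'Phi)^(-1) explicitly.
    return np.linalg.pinv(Phi) @ T
```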
The weights of the two layers of an RBF neural net are therefore trained independently, given their different roles. As a consequence, RBF nets generally train much faster than equivalent MLP nets.
Instead of using the Euclidean distance in (5-83), it is also possible to use a Mahalanobis distance. It is, however, usually preferable to use more kernels with the Euclidean distance than fewer kernels with the Mahalanobis distance, which would require a large number of additional covariance parameters to be estimated.
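To make the parameter trade-off concrete, the following sketch (illustrative only; the function names are not from the text) contrasts a Gaussian kernel based on the Euclidean distance, which needs a centre and a single width, with one based on the Mahalanobis distance, which additionally needs a full covariance matrix per kernel:

```python
import numpy as np

def gaussian_euclidean(x, c, sigma):
    """Gaussian kernel with Euclidean distance:
    exp(-||x - c||^2 / (2 sigma^2)); one width parameter per kernel."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

def gaussian_mahalanobis(x, c, cov):
    """Gaussian kernel with Mahalanobis distance; the covariance matrix
    adds d(d+1)/2 extra parameters per kernel to estimate."""
    diff = x - c
    return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)
```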
An important advantage of the RBF approach compared with the MLP approach has been elucidated by Girosi and Poggio (1991). They showed that for an RBF network it is (at least theoretically) possible to find the weights that yield the minimum approximation error for any function. This best approximation property does not apply to MLPs.
Further details on RBF properties can be found in Bishop (1995) and Haykin (1999).
Using the Statistica Intelligent Problem Solver for the foetal weight data, an RBF4:4:1 solution was found with inputs BPD, CP, AP and FL. This solution had RMS errors of 287 g, 286.1 g and 305.5 g for the training, validation and test sets, respectively, when trained with a Gaussian kernel, k-means centroid adjustment and 6 nearest neighbours for the evaluation of the smoothing factor. Figure 5.43 shows this solution, which performs similarly to the one obtained with the Levenberg-Marquardt algorithm (section 5.7.2).
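The training scheme quoted above (k-means placement of the centres, a nearest-neighbour smoothing factor and pseudo-inverse output weights) can be sketched as follows. This is an assumed reconstruction of the general recipe, not the exact procedure implemented in Statistica; in particular, the width heuristic is only one plausible choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, t, n_kernels=4, k_neighbours=6, random_state=0):
    """Gaussian RBF regressor: k-means centres, widths from the k nearest
    training points, output weights by the pseudo-inverse (5-84)."""
    centres = KMeans(n_clusters=n_kernels, n_init=10,
                     random_state=random_state).fit(X).cluster_centers_
    # Smoothing factor of each kernel: mean distance from its centre to the
    # k nearest training points (assumed heuristic).
    d_cx = np.linalg.norm(centres[:, None, :] - X[None, :, :], axis=2)
    sigmas = np.sort(d_cx, axis=1)[:, :k_neighbours].mean(axis=1)
    Phi = _design_matrix(X, centres, sigmas)
    w = np.linalg.pinv(Phi) @ t            # second-layer weights, eq. (5-84)
    return centres, sigmas, w

def _design_matrix(X, centres, sigmas):
    """Gaussian activations for every (point, kernel) pair, plus a bias column."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    Phi = np.exp(-d ** 2 / (2.0 * sigmas ** 2))
    return np.hstack([Phi, np.ones((X.shape[0], 1))])

def predict_rbf(X, centres, sigmas, w):
    return _design_matrix(X, centres, sigmas) @ w
```

A call such as train_rbf(X_train, t_train) followed by predict_rbf on the validation and test sets mirrors the RBF4:4:1 setup; the exact error figures quoted above also depend on Statistica's own initialisation and adjustment details.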