Fig. 5. A Radial Basis Function Artificial Neural Network
where $\phi_i(\mathbf{x})$ is the output of the $i$-th hidden unit

$\phi_i(\mathbf{x}) = G\left(\left\| \mathbf{x} - \mathbf{c}_i \right\|\right), \quad i = 1, \ldots, M.$  (27)
The connections (weights) to the output unit, $w_i$, $i = 1, \ldots, M$, are the only
adjustable parameters. The RBFN centers of the hidden units, $\mathbf{c}_i$, $i = 1, \ldots, M$,
are selected so as to maximize the generalization properties of the network.
The nonlinear activation function G in our case is chosen to be the Gaussian
radial basis function
$G(u, \sigma) = \exp\left( -u^2 / \sigma^2 \right),$  (28)
where σ is the standard deviation of the basis function.
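As an illustration of Eqs. (27) and (28), the following minimal sketch (in Python with NumPy) evaluates the hidden-unit responses and the network output for given centers, and fits the output weights by linear least squares, since they are the only adjustable parameters. The function names, the choice of least squares, and the toy data are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def gaussian_rbf(u, sigma):
    """Gaussian radial basis function of Eq. (28): G(u, sigma) = exp(-u^2 / sigma^2)."""
    return np.exp(-(u ** 2) / sigma ** 2)

def hidden_outputs(X, centers, sigma):
    """Hidden-unit outputs phi_i(x) = G(||x - c_i||) of Eq. (27) for each row of X."""
    # Pairwise Euclidean distances between samples (rows of X) and the M centers.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return gaussian_rbf(dists, sigma)            # shape (n_samples, M)

def fit_output_weights(X, y, centers, sigma):
    """Least-squares fit of the output weights w_i (the only adjustable parameters)."""
    Phi = hidden_outputs(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, sigma, w):
    """Network output: weighted sum of the hidden-unit responses."""
    return hidden_outputs(X, centers, sigma) @ w

# Hypothetical usage: M = 5 centers picked at random from a toy training set,
# i.e. the "random selection of fixed centers" strategy discussed below.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(50, 2))
y_train = np.sin(X_train[:, 0]) + X_train[:, 1] ** 2
centers = X_train[rng.choice(len(X_train), size=5, replace=False)]
w = fit_output_weights(X_train, y_train, centers, sigma=0.5)
y_hat = rbf_predict(X_train, centers, sigma=0.5, w=w)
```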
The selection of the RBFN centers plays an important role in the predictive
capabilities and the generalization of the network. Several strategies can be
adopted for selecting the radial-basis function centers of the hidden layer
when designing an RBFN. Haykin refers to the following [49]: a) Random
selection of fixed centers, which is the simplest approach; selecting the centers
from the training data set is a sensible choice, provided that the latter is
adequately representative of the problem at hand. b) Self-organized selection
of centers, where appropriate locations for the centers are estimated with the
use of a clustering algorithm whose task is to partition the training set into
homogeneous subsets. c) Supervised