This variant of the EKF is more stable in computational terms and robust to rounding errors, which has a positive effect on the computational stability of the learning process of the ANN model as a whole.
As can be seen from the relations defining the EKF, the key point is again the computation of the Jacobian $J(t_k)$ of the network errors with respect to the adjusted parameters.
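As a purely illustrative aid, the following minimal sketch computes such an error Jacobian by finite differences; the forward function `net(params, x)` and all names here are hypothetical, and in practice this Jacobian would be obtained analytically, e.g., by backpropagation:

```python
import numpy as np

def error_jacobian(net, params, x, y_target, eps=1e-6):
    """Finite-difference Jacobian of the error e = y_target - net(params, x)
    with respect to the adjustable parameters (illustrative sketch only;
    analytic differentiation is used in practice)."""
    e0 = y_target - net(params, x)               # baseline error vector
    J = np.zeros((e0.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += eps                               # perturb one parameter
        J[:, j] = ((y_target - net(p, x)) - e0) / eps
    return J
```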
When training a neural network, it is impossible to use only the current measurement in the EKF because of the unacceptably low accuracy of the search (the effect of the noises $\zeta$ and $\eta$); it is necessary to form a vector estimate on the observation interval, after which the update of the matrix $P(t_k)$ is more correct.
As the vector of observations, we can take a sequence of values on a certain sliding interval, i.e.,

$$\hat{y}(t_k) = \left[ \hat{y}(t_{i-l}), \hat{y}(t_{i-l+1}), \ldots, \hat{y}(t_i) \right]^T,$$

where $l$ is the length of the sliding interval, the index $i$ refers to the time point (sampling step), and the index $k$ indicates the number of the estimate.
The error of the ANN model will also be a vector value, i.e.,

$$e(t_k) = \left[ e(t_{i-l}), e(t_{i-l+1}), \ldots, e(t_i) \right]^T.$$
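To make the role of these stacked vectors concrete, here is a hedged sketch, not the exact scheme of this section, of one EKF correction step using the sliding-window quantities; the helper `stack_window`, the measurement noise covariance `R`, and all names are assumptions introduced for illustration:

```python
import numpy as np

def stack_window(y, y_hat, i, l):
    """Stacked observation and error vectors on the sliding interval
    [t_{i-l}, t_i], following the relations above."""
    idx = range(i - l, i + 1)
    y_k = np.concatenate([y[m] for m in idx])
    e_k = np.concatenate([y[m] - y_hat[m] for m in idx])
    return y_k, e_k

def ekf_window_update(params, P, J, e, R):
    """One EKF-style correction of the network parameters.

    P : parameter error covariance P(t_k)
    J : Jacobian of the stacked model outputs w.r.t. the parameters
        (the Jacobian of the stacked errors differs from it only in sign)
    e : stacked error vector e(t_k)
    R : measurement noise covariance of the stacked observations (assumed)
    """
    S = J @ P @ J.T + R                   # innovation covariance
    K = P @ J.T @ np.linalg.inv(S)        # Kalman gain
    params = params + K @ e               # corrected parameter estimate
    P = P - K @ J @ P                     # updated covariance P(t_k)
    return params, P
```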
2.3.2 ANN Models With Interneurons

From the point of view of ensuring the adaptability of ANN models, the idea of an intermediate neuron (interneuron) and of a subnetwork of such neurons (intersubnet) is very fruitful.
2.3.2.1 The Concept of an Interneuron and an ANN Model With Such Neurons

An effective approach to the implementation of adaptive ANN models, based on the concepts of an interneuron and a pretuned network, was proposed by A.I. Samarin [88]. As noted in that paper, one of the main properties of ANN models, which makes them an attractive tool for solving various applied problems, is that the network can change, adapting to the problem being solved. Such an adjustment can be carried out in the following directions:
• the neural network can be trained, i.e., it can change the values of its tuning parameters (as a rule, the synaptic weights of the neural network connections);
• the neural network can change its structural organization by adding or removing neurons and rebuilding the interneuron connections;
• the neural network can be dynamically tuned to the solution of the current task by replacing some of its constituent parts (subnets) with previously prepared fragments, or by changing the values of the network settings and its structural organization on the basis of previously prepared relationships linking the task to the required changes in the ANN model.

The first of these options leads to the traditional learning of ANN models, the second to the class of growing networks, and the third to networks with pretuning.

The most important limitation of the first of these approaches (ANN training) to the adjustment of ANN models is that, before it is trained, the network is potentially suitable for a wide class of problems, but after the completion of the learning process it can solve only a specific task; to handle another task, the network has to be retrained, during which the skill of solving the previous task is lost.

The second approach (growing networks) makes it possible to cope with this problem only partially. Namely, if new training examples appear that do not fit into the ANN model obtained according to the first approach, this model is built up with new elements, with the addition of the appropriate links, after which the network undergoes additional training that does not affect the previously constructed part of it.
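As an illustration of this growing-network idea, the following sketch (with hypothetical layer sizes and names) enlarges the hidden layer of a simple one-hidden-layer network and marks only the newly added weights as trainable, so that the subsequent additional training leaves the previously constructed part intact:

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_hidden_layer(W1, W2, n_new):
    """Add n_new neurons to the hidden layer of a network
    y = W2 @ tanh(W1 @ x). Returns the enlarged weight matrices and
    boolean masks marking the newly added (trainable) entries."""
    n_hidden, n_in = W1.shape
    n_out = W2.shape[0]
    W1g = np.vstack([W1, 0.1 * rng.standard_normal((n_new, n_in))])
    W2g = np.hstack([W2, 0.1 * rng.standard_normal((n_out, n_new))])
    mask1 = np.zeros(W1g.shape, dtype=bool)
    mask1[n_hidden:, :] = True            # only the new input weights train
    mask2 = np.zeros(W2g.shape, dtype=bool)
    mask2[:, n_hidden:] = True            # only the new output weights train
    return W1g, W2g, mask1, mask2

# During the additional training, gradient updates are applied only where
# the masks are True, e.g. W1g -= lr * grad_W1 * mask1, which freezes the
# previously trained part of the network.
```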