Since the inputs of the predictor network include, in addition to the control values, the measured (observed) values of the outputs for the process implemented by the dynamical system, the output of the model of the considered type can be calculated only one time step ahead (accordingly, predictors of this type are usually called one-step-ahead predictors). If the generated model should reflect the behavior of the dynamical system on a time horizon exceeding one time step, we will have to feed back the outputs of the predictor at the previous time instants to its inputs at the current time step. In this case, the predictor will no longer have the properties of the ideal model due to the accumulation of the prediction error.

The second type of noise impact on a system that requires consideration corresponds to the case when noise affects the output of the dynamical system. In this case, the corresponding description of the process implemented by the dynamical system has the following form:

    x_p(k) = ϕ(x_p(k − 1), ..., x_p(k − n), u(k − 1), ..., u(k − m)),
    y_p(k) = x_p(k) + ξ(k).                                            (3.6)

This structural organization of the model implies that additive noise is added directly to the output signal of the dynamical system (this is a parallel version of the NARX-type model architecture; see Fig. 3.1A). Thus, the noise signal at some time step k affects only the dynamical system output at the same time instant k.
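The process (3.6) is straightforward to reproduce in a short simulation: the deterministic recursion generates x_p, and the noise is added only when forming the observed output y_p. The following is a minimal sketch; the particular nonlinearity phi, the orders n = 2 and m = 1, and the noise level are illustrative assumptions, not taken from the text.

    import numpy as np

    rng = np.random.default_rng(0)

    def phi(x1, x2, u1):
        # Hypothetical nonlinear map standing in for the true dynamics ϕ(·)
        # (illustrative only; the text does not specify this function).
        return 0.6 * x1 - 0.2 * x2 + 0.5 * np.tanh(u1)

    N = 200
    u = rng.uniform(-1.0, 1.0, N)          # control sequence u(k)
    xi = 0.05 * rng.standard_normal(N)     # additive output noise ξ(k)

    x_p = np.zeros(N)                      # deterministic component x_p(k)
    for k in range(2, N):
        x_p[k] = phi(x_p[k - 1], x_p[k - 2], u[k - 1])

    y_p = x_p + xi                         # observed output, Eq. (3.6)

Note that the noise enters only the observation equation, so y_p(k) is distorted by ξ(k) alone and the underlying recursion for x_p remains noise free.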
Since the output of the model at time step k depends on the noise only at the same instant of time, the optimal model does not require the values of the outputs of the dynamical system at the preceding instants; it is sufficient to use their estimates generated by the model itself. Therefore, an “ideal model” for this case is represented by a recurrent neural network that implements a mapping of the following form:

    g(k) = ϕ_NN(g(k − 1), ..., g(k − n), u(k − 1), ..., u(k − m), w),   (3.7)

where, as in (3.5), w is a vector of parameters and ϕ_NN(·) is a function implemented by a feedforward network.

Again, let us suppose that the values of the parameters w of the network are computed by training it in such a way that ϕ_NN(·) = ϕ(·). We also assume that for the first n time points the prediction error is equal in magnitude to the noise affecting the dynamical system. In this case, for all time instants k = 0, ..., n − 1, the relation

    y_p(k) − g(k) = ξ(k),  ∀k ∈ {0, ..., n − 1},

will be satisfied. Therefore, the simulation error will be numerically equal to the noise affecting the output of the dynamical system, i.e., this model might be considered optimal in the sense that it accurately reflects the deterministic components of the process of the dynamical system operation and does not reproduce the noise that distorts the output signal of the system.
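This claim can be checked numerically on the toy process above. The following sketch continues the previous fragment (it reuses the hypothetical phi, the sequences u, x_p, y_p, xi, and N) and assumes the idealized case ϕ_NN = ϕ with exact (noise-free) values used for the first n = 2 model outputs.

    # Closed-loop (parallel) model, Eq. (3.7): feed back the model's own outputs.
    phi_nn = phi                           # idealized assumption: ϕ_NN coincides with ϕ

    g = np.zeros(N)
    g[:2] = x_p[:2]                        # exact initial values for the first n steps
    for k in range(2, N):
        g[k] = phi_nn(g[k - 1], g[k - 2], u[k - 1])

    residual = y_p - g                     # simulation error
    print(np.allclose(residual, xi))       # -> True: the error equals the output noise

Because the model repeats the deterministic recursion exactly, g(k) = x_p(k) at every step, and the residual y_p(k) − g(k) reduces to the noise ξ(k), as stated above.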
If the initial modeling conditions are not satisfied (exact output values at the initial time steps are unavailable), but the condition ϕ_NN(·) = ϕ(·) is satisfied and the model is stable with respect to the initial conditions, then the simulation error will decrease as the time step k increases.

As we can see from the above relations, the ideal model under the additive output noise assumption is a closed-loop recurrent network, as opposed to the case of state noise, when the ideal model is represented by a static feedforward network.
Accordingly, in order to train a parallel-type model, in general, it is required to apply methods designed for dynamic networks, which, of course, are more difficult in comparison with the learning methods for static networks. However, for the models of the type in question, learning