To illustrate this theoretical result, let us consider the asymptotic error for some specific combination of numerical methods. For instance, the initial value problem may be solved using the family of explicit one-step Runge–Kutta methods, i.e.,

$$
x(t_{k+1}) = x(t_k) + \Delta t \sum_{i=1}^{s} b_i r^{k,i}, \qquad
r^{k,i} = f^u\!\left(t_k + c_i \Delta t,\; x(t_k) + \Delta t \sum_{j=1}^{i-1} a_{i,j} r^{k,j}\right),
\qquad (5.37)
$$

where $s$ is the number of stages, $c_i$ are the coefficients that define the locations of the intermediate nodes, $a_{i,j}$ and $b_i$ are the corresponding weights, and $f^u(t, x(t)) \equiv f(x(t), u(t))$.
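As a minimal sketch (not code from the book), a single step of an explicit Runge–Kutta scheme of the form (5.37) could be implemented as follows, assuming the right-hand side $f^u(t, x)$ is available as a Python callable and the Butcher coefficients $a_{i,j}$, $b_i$, $c_i$ are supplied as NumPy arrays; the name rk_step and its signature are illustrative.

import numpy as np

def rk_step(f_u, t_k, x_k, dt, a, b, c):
    # One step of an explicit s-stage Runge-Kutta scheme, cf. Eq. (5.37).
    # f_u : callable f_u(t, x) with the control folded in, f_u(t, x) = f(x, u(t))
    # a   : (s, s) strictly lower-triangular array of stage weights a_{i,j}
    # b   : (s,) array of output weights b_i
    # c   : (s,) array of intermediate node locations c_i
    s = len(b)
    r = [None] * s
    for i in range(s):
        # intermediate state assembled from the previously computed stages
        x_stage = x_k + dt * sum(a[i, j] * r[j] for j in range(i))
        r[i] = f_u(t_k + c[i] * dt, x_stage)
    return x_k + dt * sum(b[i] * r[i] for i in range(s))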
The fourth-order explicit Runge–Kutta method has the following form:

$$
\begin{aligned}
r^{k,1} &= f^u(t_k,\, x(t_k)),\\
r^{k,2} &= f^u\!\left(t_k + \tfrac{\Delta t}{2},\; x(t_k) + \tfrac{\Delta t}{2}\, r^{k,1}\right),\\
r^{k,3} &= f^u\!\left(t_k + \tfrac{\Delta t}{2},\; x(t_k) + \tfrac{\Delta t}{2}\, r^{k,2}\right),\\
r^{k,4} &= f^u(t_k + \Delta t,\, x(t_k) + \Delta t\, r^{k,3}),\\
x(t_{k+1}) &= x(t_k) + \tfrac{\Delta t}{6}\left(r^{k,1} + 2 r^{k,2} + 2 r^{k,3} + r^{k,4}\right).
\end{aligned}
\qquad (5.38)
$$
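Under the same assumptions, the classical scheme (5.38) corresponds to the Butcher tableau below; the hypothetical rk_step helper from the previous sketch is reused.

# Butcher tableau of the classical explicit fourth-order Runge-Kutta method, Eq. (5.38)
a4 = np.array([[0.0, 0.0, 0.0, 0.0],
               [0.5, 0.0, 0.0, 0.0],
               [0.0, 0.5, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])
b4 = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0
c4 = np.array([0.0, 0.5, 0.5, 1.0])

# example: one step of size dt = 0.1 for dx/dt = -x, starting from x = 1
x_next = rk_step(lambda t, x: -x, 0.0, 1.0, 0.1, a4, b4, c4)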
Interpolation may be performed using splines; in particular, cubic splines with not-a-knot end conditions provide the fourth order of accuracy.
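For instance, such a spline could be constructed with SciPy as in the sketch below; the node placement and the interpolated quantity are assumptions made only for illustration.

import numpy as np
from scipy.interpolate import CubicSpline

t_nodes = np.linspace(0.0, 1.0, 11)      # interpolation nodes (assumed)
u_nodes = np.sin(2.0 * np.pi * t_nodes)  # sampled values to interpolate (assumed)

# cubic spline with not-a-knot end conditions (SciPy's default boundary condition)
u_spline = CubicSpline(t_nodes, u_nodes, bc_type='not-a-knot')
u_mid = u_spline(0.05)                   # evaluate between the nodes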
Definite integrals can be computed using the family of composite Newton–Cotes rules, which approximate the integral on each subsegment $[a, b]$:

$$
\int_a^b f(x)\,dx \approx \sum_{m=0}^{M} \omega^I_m f(x_m),
\qquad (5.39)
$$

where $x_m = a + m\frac{b-a}{M}$ and $\omega^I_m$ are the corresponding weights. For example, the trapezoidal rule

$$
\int_a^b f(x)\,dx \approx \frac{b-a}{2}\bigl(f(a) + f(b)\bigr)
\qquad (5.40)
$$

has the second order of accuracy. Thus, the estimates of the error function and its derivatives provided by the combination of the explicit fourth-order Runge–Kutta method, cubic spline interpolation, and the composite trapezoidal rule have the asymptotic error

$$
O\bigl(\Delta t^{\min\{4-1,\,4-1,\,2\}}\bigr) = O\bigl(\Delta t^{2}\bigr).
$$
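A minimal sketch of the composite trapezoidal rule written in the form (5.39), i.e. with equidistant nodes $x_m = a + m\frac{b-a}{M}$ and explicit weights $\omega^I_m$; the function name is illustrative.

import numpy as np

def composite_trapezoid(f, a, b, M):
    # Composite trapezoidal rule, a member of the family (5.39)
    h = (b - a) / M
    x = a + np.arange(M + 1) * h     # nodes x_m = a + m (b - a) / M
    w = np.full(M + 1, h)            # interior weights equal to h
    w[0] = w[-1] = h / 2.0           # endpoint weights equal to h / 2
    return float(np.sum(w * f(x)))

# example: the approximation error decreases as O(h^2) as M grows
approx = composite_trapezoid(np.exp, 0.0, 1.0, 100)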
5.4 HOMOTOPY CONTINUATION TRAINING METHOD FOR SEMIEMPIRICAL ANN-BASED MODELS

We have already discussed in Chapter 2 some of the reasons why recurrent neural networks are difficult to train. These difficulties include the vanishing and exploding gradients problem, bifurcations in the recurrent neural networks, and the presence of spurious valleys in the error function landscape. Thus, traditional gradient-based methods often fail to find a sufficiently good solution unless the initial guess for the parameter values lies very close to it.

But what if we consider the problem of finding such an initial guess, one that lies close to a good solution? We might further assume that this initial guess is itself an exact solution to another optimization problem which closely resembles the original one. Following this logic, we can construct a sequence of optimization problems such that: the first problem is trivial to solve; each subsequent problem resembles the previous one, so that their solutions lie close to each other; and the sequence converges to the original, difficult optimization problem.
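A minimal sketch of this warm-start scheme, under assumed notation: an error function E(w, λ) is chosen so that λ = 0 gives an easy problem and λ = 1 the original training problem; the names and the inner optimizer below are illustrative, not the method developed in this section.

import numpy as np
from scipy.optimize import minimize

def continuation_train(error_fn, w0, n_steps=10):
    # Solve a sequence of problems min_w E(w, lam) for lam = 0, ..., 1,
    # warm-starting each subproblem from the solution of the previous one.
    w = np.asarray(w0, dtype=float)
    for lam in np.linspace(0.0, 1.0, n_steps + 1):
        result = minimize(lambda w_: error_fn(w_, lam), w, method='BFGS')
        w = result.x                 # initial guess for the next subproblem
    return w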
In the limit, for infinitesimal perturbations of optimization