Page 51 - Neural Network Modeling and Identification of Dynamical Systems
2.1 ARTIFICIAL NEURAL NETWORK STRUCTURES 39
FIGURE 2.5 An element of an FB that transforms the n-dimensional input x = (x_1, x_2, ..., x_n) into a scalar output y. (A) A one-stage mapping R^n → R. (B) A two-stage (compositional) mapping R^n → R → R. From [109], used with permission from Moscow Aviation Institute.

• The one-stage mapping R^n → R is an element of functional networks.
• The two-stage mapping R^n → R → R is an element of neural networks.

The element of compositional type, i.e., the two-stage mapping of the n-dimensional input to the scalar output, is a neuron; it is specific to functional expansions of the neural network type and is a "brand feature" of such expansions, in other words, of ANN models of all kinds.

2.1.2 Layered Structure of Neural Network Models

2.1.2.1 Principles of Layered Structural Organization for Neural Network Models

We assume that ANN models in the general case have a layered structure. This assumption means that we divide the entire set of neurons constituting the ANN model into disjoint subsets, which we call layers. For these layers we introduce the notation L^(0), L^(1), ..., L^(p), ..., L^(N_L).

The layered organization of the ANN model determines the activation logic of its neurons. This logic differs for different structural variants of the network. The operation of the layered ANN model has the following specificity³: the neurons of which the ANN model consists operate layer by layer, i.e., until all the neurons of the pth layer have worked, the neurons of the (p + 1)th layer do not come into operation. Below we consider the general variant that defines the rules for activating neurons in ANN models.

In the simplest variant of the structural organization of layered networks, all the layers L^(p), numbered from 0 to N_L, are activated in the order of their numbers. This means that until all the neurons in the layer with number p have worked, the neurons of the (p + 1)th layer are waiting; in turn, the pth layer can start operating only if all the neurons of the (p − 1)th layer have already worked.

Visually, we can represent such a structure as a "stack of layers," ordered by their numbers. In the simplest version, this "stack" looks as shown in Fig. 2.6A. Here L^(0) is the input layer, whose elements are the components of the ANN input vector.

Any layer L^(p), 1 ≤ p < N_L, is connected with two adjacent layers: it gets its inputs from the previous layer L^(p−1), and it transfers its outputs to the subsequent layer L^(p+1). The exception is the layer L^(N_L), the last in the ANN (the output layer), which has no layer following it. The outputs of the layer L^(N_L) are the outputs of the network as a whole. The layers L^(p) with numbers 0 < p < N_L are called hidden.

Since the ANN shown in Fig. 2.6A is a feedforward network, all links between its layers go strictly sequentially from the layer L^(0) to the layer L^(N_L), without "hopping" (bypassing) over adjacent layers and without backward (feedback) links. A more complicated version of the ANN structure, with a bypass connection, is shown in Fig. 2.6B.

³ For the case when the layers operate in the order of their numbers and there are no feedbacks between the layers. In this case, the layers will operate sequentially and only once.
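The two-stage (compositional) structure of a neuron can be made concrete with a short sketch. The following Python fragment is illustrative only and not from the book: the weighted sum for the first stage and the logistic function for the second stage are assumed choices. The point it demonstrates is the composition R^n → R → R: the input is first aggregated to a scalar, and a scalar function is then applied to that value.

```python
import math

def aggregate(x, w, b):
    # First stage, R^n -> R: here assumed to be a weighted sum plus a bias.
    return sum(w_i * x_i for w_i, x_i in zip(w, x)) + b

def activation(s):
    # Second stage, R -> R: here assumed to be the logistic sigmoid.
    return 1.0 / (1.0 + math.exp(-s))

def neuron(x, w, b):
    # The neuron is the composition of the two stages: R^n -> R -> R.
    return activation(aggregate(x, w, b))

# A one-stage element of a functional network would instead map
# the input directly to the output, e.g. `aggregate` alone.
```

For example, `neuron([1.0, 2.0], [0.5, -0.25], 0.0)` aggregates to 0.0 and then applies the sigmoid, giving 0.5.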
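The sequential, layer-by-layer activation rule for the simplest "stack of layers" can likewise be sketched in code. This is a minimal Python sketch under assumed conventions (each layer is represented as a list of (weights, bias) pairs and a logistic activation is assumed; the names are illustrative), not the book's notation:

```python
import math

def forward(layers, x):
    # The input vector x plays the role of the input layer L^(0):
    # its elements are the components of the ANN input.
    signal = list(x)
    for layer in layers:
        # Each neuron of layer L^(p) reads the outputs of L^(p-1);
        # layer L^(p+1) waits until every neuron of L^(p) has worked,
        # so the layers operate strictly in the order of their numbers.
        signal = [
            1.0 / (1.0 + math.exp(-(sum(w_i * s_i for w_i, s_i in zip(w, signal)) + b)))
            for (w, b) in layer
        ]
    # The outputs of the last layer L^(N_L) (the output layer)
    # are the outputs of the network as a whole.
    return signal
```

In this feedforward sketch every layer feeds only its immediate successor; a bypass connection of the kind in Fig. 2.6B would additionally pass an earlier layer's outputs to a non-adjacent layer.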