FIGURE 2.16 Tapped delay lines (TDLs) as ANN structural elements. (A) TDL of order n. (B) TDL of order 1. D is the delay (memory) element.
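Only the caption of Fig. 2.16 survives on this page, but the behavior it describes is simple to state in code: a TDL of order $n$ acts as a fixed-length buffer exposing the $n$ most recent past samples of a signal. The following sketch is illustrative; the class name and interface are assumptions, not from the book.

```python
from collections import deque

class TappedDelayLine:
    """Sketch of a TDL of order n (cf. Fig. 2.16A): a chain of n delay
    elements D storing the n most recent past samples of a signal.
    Name and interface are illustrative assumptions."""

    def __init__(self, order, initial=0.0):
        self.buffer = deque([initial] * order, maxlen=order)

    def step(self, x):
        """Return the taps (x[k-1], ..., x[k-n]), newest first,
        then shift the current sample x into the line."""
        taps = list(self.buffer)[::-1]  # buffer is stored oldest-first
        self.buffer.append(x)
        return taps

tdl = TappedDelayLine(order=3)
tdl.step(1.0)   # -> [0.0, 0.0, 0.0]  (initial memory contents)
tdl.step(2.0)   # -> [1.0, 0.0, 0.0]
```

A TDL of order 1 (Fig. 2.16B) is the special case of a single delay element.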


FIGURE 2.17 General structure of a neuron within operational (hidden and output) ANN layers. The input mapping $(x, w) \mapsto v$, $\mathbb{R}^n \to \mathbb{R}^1$ (aggregating mapping), is parametrized with the synaptic weights $w$; the output mapping $v \mapsto y$, $\mathbb{R}^1 \to \mathbb{R}^1$, is the activation function; $x = (x_1, \ldots, x_n)$ are the neuron inputs; $w = (w_1, \ldots, w_n)$ are the synaptic weights; $v$ is the output of the aggregating mapping; $y$ is the output of the neuron.

to the following equations:
$$
\left.
\begin{aligned}
n_i^l &= b_i^l + \sum_{j=1}^{S^{l-1}} w_{i,j}^l \, a_j^{l-1},\\
a_i^l &= \varphi_i^l\!\left(n_i^l\right)
\end{aligned}
\right\}
\quad l = 1, \ldots, L, \; i = 1, \ldots, S^l.
\tag{2.8}
$$
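To make (2.8) concrete, here is a minimal NumPy sketch of a single forward pass through such a layered network. The function and variable names (`forward_pass`, `weights`, `biases`) are illustrative assumptions, not the book's notation.

```python
import numpy as np

def forward_pass(x, weights, biases, activations):
    """One forward pass through an L-layer network per Eq. (2.8).

    weights[l-1]     : (S^l, S^{l-1}) matrix of w_{i,j}^l
    biases[l-1]      : (S^l,) vector of b_i^l
    activations[l-1] : elementwise activation phi^l of layer l
    Names and shapes are illustrative assumptions.
    """
    a = x                      # a^0 = network input
    for W, b, phi in zip(weights, biases, activations):
        n = b + W @ a          # n_i^l = b_i^l + sum_j w_{i,j}^l a_j^{l-1}
        a = phi(n)             # a_i^l = phi_i^l(n_i^l)
    return a                   # a^L = network output

# Example: tanh hidden layer, identity output layer (cf. (2.9), (2.11))
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases  = [np.zeros(3), np.zeros(1)]
y = forward_pass(np.array([0.5, -1.0]), weights, biases,
                 [np.tanh, lambda n: n])
```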
The $L$th layer is called the output layer, while all the rest are called the hidden layers, since they are not directly connected to network outputs.

Common examples of activation functions for the hidden layer neurons are the hyperbolic tangent function,
$$
\varphi_i^l(n_i^l) = \operatorname{th}(n_i^l) = \frac{e^{n_i^l} - e^{-n_i^l}}{e^{n_i^l} + e^{-n_i^l}}, \qquad l = 1, \ldots, L-1, \; i = 1, \ldots, S^l, \tag{2.9}
$$
and the logistic sigmoid function,
$$
\varphi_i^l(n_i^l) = \operatorname{logsig}(n_i^l) = \frac{1}{1 + e^{-n_i^l}}, \qquad l = 1, \ldots, L-1, \; i = 1, \ldots, S^l. \tag{2.10}
$$
The hyperbolic tangent is more suitable for function approximation problems, since it has a symmetric range $[-1, 1]$. On the other hand, the logistic sigmoid is frequently used for classification problems, due to its range $[0, 1]$. Identity functions are frequently used as activation functions for output layer neurons, i.e.,
$$
\varphi_i^L(n_i^L) = n_i^L, \qquad i = 1, \ldots, S^L. \tag{2.11}
$$
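For reference, the three activation functions (2.9)–(2.11) translate directly into NumPy. This sketch reuses the book's function names `th` and `logsig` as Python identifiers; `np.tanh` stands in for the explicit exponential form of (2.9), to which it is mathematically equivalent but numerically more stable.

```python
import numpy as np

def th(n):
    """Hyperbolic tangent activation, Eq. (2.9); symmetric range [-1, 1]."""
    return np.tanh(n)

def logsig(n):
    """Logistic sigmoid activation, Eq. (2.10); range [0, 1]."""
    return 1.0 / (1.0 + np.exp(-n))

def identity(n):
    """Identity activation for output layer neurons, Eq. (2.11)."""
    return n
```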
2.1.2.3 Input–Output and State Space ANN-Based Models for Deterministic Nonlinear Controlled Discrete Time Dynamical Systems

Nonlinear AutoRegressive network with eXogenous inputs [44] (NARX). One popular class of models for deterministic nonlinear controlled discrete time dynamical systems is the class of input–output nonlinear autoregressive neural network based models, i.e.,
$$
\hat{y}(t_k) = F\big(\hat{y}(t_{k-1}), \ldots, \hat{y}(t_{k-l_y}), u(t_{k-1}), \ldots, u(t_{k-l_u}), W\big), \qquad k \geq \max(l_u, l_y), \tag{2.12}
$$
where $F(\cdot, W)$ is a static neural network, and $l_u$ and $l_y$ are the numbers of past controls and past outputs used for prediction. (See Fig. 2.18.)
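As an illustrative sketch of how (2.12) is iterated, the loop below runs the model in simulation (parallel) mode, feeding predictions $\hat{y}$ back through the delay lines in place of measured outputs. The interface (`narx_simulate`, argument order, seeding with `y_init`) is an assumption; `F` stands for any trained static network returning a scalar prediction, e.g., the `forward_pass` sketch above with its weights fixed.

```python
import numpy as np

def narx_simulate(F, u, l_u, l_y, y_init):
    """Iterate the NARX model (2.12) in simulation (parallel) mode.

    F      : trained static network; maps a regressor vector to the
             scalar prediction y_hat(t_k)
    u      : controls u(t_0), ..., u(t_{N-1})
    y_init : at least max(l_u, l_y) initial outputs seeding the feedback
    Interface is an illustrative assumption, not the book's API.
    """
    k0 = max(l_u, l_y)
    y_hat = list(y_init[:k0])
    for k in range(k0, len(u)):
        # Regressor of Eq. (2.12): past predictions, then past controls
        past_y = y_hat[k - l_y:k][::-1]    # y_hat(t_{k-1}), ..., y_hat(t_{k-l_y})
        past_u = list(u[k - l_u:k][::-1])  # u(t_{k-1}), ..., u(t_{k-l_u})
        y_hat.append(F(np.array(past_y + past_u)))
    return np.array(y_hat)
```

In one-step-ahead (series-parallel) prediction one would instead place the measured outputs $y(t_{k-1}), \ldots, y(t_{k-l_y})$ in the regressor; the feedback loop above is what distinguishes simulation mode.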