Page 86 - Neural Network Modeling and Identification of Dynamical Systems

74                2. DYNAMIC NEURAL NETWORKS: STRUCTURES AND TRAINING METHODS

combination of \langle x, u\rangle. By a reaction of this kind we will understand the state x(t_{k+1}) to which the dynamical system (2.99) passes from the state x(t_k) under the value u(t_k) of the control action, written as

    \langle x(t_k), u(t_k)\rangle \xrightarrow{F(x,u,t)} x(t_{k+1}).  (2.103)

Accordingly, an example p from the training set P includes two parts, namely, the input (the pair \langle x(t_k), u(t_k)\rangle) and the output (the reaction x(t_{k+1})) of the dynamical system.

2.4.2.2 Informativity of the Training Set

The training set should (ideally) show the dynamical system responses to any combination of \langle x, u\rangle satisfying the condition (2.102). Then, according to the Basic Identification Rule (see page 73), the training set will be informative, that is, it will allow us to reproduce in the model all the specific behavior of the simulated DS.^5

Let us clarify this situation. We introduce the notation

    p_i = \{\langle x^{(i)}(t_k), u^{(i)}(t_k)\rangle, x^{(i)}(t_{k+1})\},  (2.104)

where p_i \in P is the ith example from the training set P. In this example

    x^{(i)}(t_k) = (x_1^{(i)}(t_k), \ldots, x_n^{(i)}(t_k)),
    u^{(i)}(t_k) = (u_1^{(i)}(t_k), \ldots, u_m^{(i)}(t_k)).  (2.105)

The response x^{(i)}(t_{k+1}) of the considered dynamical system to the example p_i is

    x^{(i)}(t_{k+1}) = (x_1^{(i)}(t_{k+1}), \ldots, x_n^{(i)}(t_{k+1})).  (2.106)

In a similar way we introduce one more example p_j \in P:

    p_j = \{\langle x^{(j)}(t_k), u^{(j)}(t_k)\rangle, x^{(j)}(t_{k+1})\}.  (2.107)

The source data of the examples p_i and p_j are considered not to coincide, i.e.,

    x^{(i)}(t_k) \ne x^{(j)}(t_k),  u^{(i)}(t_k) \ne u^{(j)}(t_k).

In the general case, the responses of the dynamical system to the source data of these examples do not coincide either, i.e.,

    x^{(i)}(t_{k+1}) \ne x^{(j)}(t_{k+1}).

We introduce the concept of \varepsilon-proximity for a pair of examples p_i and p_j. Namely, we will consider the examples p_i and p_j \varepsilon-close if the following condition is satisfied:

    \|x^{(i)}(t_{k+1}) - x^{(j)}(t_{k+1})\| \leqslant \varepsilon,  (2.108)

where \varepsilon > 0 is a predefined real number.

We select from the set of examples P = \{p_i\}_{i=1}^{N_p} a subset consisting of those examples p_s for which the \varepsilon-proximity relation to the example p_i is satisfied, i.e.,

    \|x^{(i)}(t_{k+1}) - x^{(s)}(t_{k+1})\| \leqslant \varepsilon,  \forall s \in I_s \subset I.  (2.109)

Here I_s is the set of indices (numbers) of those examples for which \varepsilon-proximity with respect to the example p_i is satisfied, while I_s \subset I = \{1, \ldots, N_p\}.^6

We call the example p_i an \varepsilon-representative if the condition of \varepsilon-proximity is satisfied for the whole collection of examples p_s, \forall s \in I_s, that is, for every example p_s, s \in I_s. Accordingly, we can now replace the collection of examples \{p_s\}, s \in I_s, by the single \varepsilon-representative p_i, and the error introduced by such a replacement will not exceed \varepsilon. Input parts of collections of examples

---
^5 It should be noted that the availability of an informative training set provides a potential opportunity to obtain a model that is adequate to the simulated dynamical system. However, this potential opportunity must still be taken advantage of, which is a separate nontrivial problem, the successful solution of which depends on the chosen class of models and learning algorithms.

^6 This means that the example p_i itself is included in the set of examples \{p_s\}, s \in I_s.
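The replacement of each ε-close collection of examples by a single ε-representative can be sketched in code. The following is a minimal illustration, not the book's algorithm: the function name `select_representatives` and the greedy scan order are assumptions, and the book only specifies the ε-proximity condition (2.108) on the responses x(t_{k+1}); any strategy satisfying it would do.

```python
import numpy as np

def select_representatives(x_next: np.ndarray, eps: float) -> list[int]:
    """Greedy sketch of eps-representative selection (illustrative only).

    x_next has shape (N_p, n): row i is the response x^(i)(t_{k+1})
    of example p_i.  Each kept example p_i absorbs every not-yet-covered
    example p_s whose response lies within eps of its own response,
    i.e. the eps-proximity condition (2.108); the error introduced by
    dropping the absorbed examples does not exceed eps.
    """
    n_examples = x_next.shape[0]
    covered = np.zeros(n_examples, dtype=bool)
    representatives = []
    for i in range(n_examples):
        if covered[i]:
            continue  # p_i already absorbed by an earlier representative
        # I_s: indices of all uncovered examples eps-close to p_i
        # (p_i itself is included, cf. footnote 6).
        dist = np.linalg.norm(x_next - x_next[i], axis=1)
        covered |= dist <= eps
        representatives.append(i)
    return representatives
```

For instance, with responses [0.0], [0.05], [1.0] and eps = 0.1 the first two examples collapse into the single representative p_0, while the third is kept separately.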