
In this model, ω_rm = 1 and ζ_rm = 0.9, the vector of state variables is x = [n_{x_a,rm}, ṅ_{x_a,rm}], and r is the reference signal.
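The meaning of these constants is fixed earlier in the chapter; assuming the usual second-order reference-model form (a reconstruction, since only ω_rm and ζ_rm appear on this page), the reference dynamics in terms of x = [x1, x2] = [n_{x_a,rm}, ṅ_{x_a,rm}] read

    ẋ1 = x2,
    ẋ2 = −2 ζ_rm ω_rm x2 − ω_rm² x1 + ω_rm² r,

so that the reference output y_rm = x1 follows the reference signal r with unit natural frequency and a damping ratio of 0.9.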
The neurocontroller is configured to minimize the error y_rm − ŷ, i.e., to approximate the behavior of the reference model with the response of the plant model coupled with the controller. For a good ANN model, this also means minimizing the “real” error y_rm − y to a certain level.
Although the neurocontroller is static, it works as part of a dynamical system, so we need to configure it as a part of the whole recurrent network. This configurable network consists of two subnets (the neurocontroller itself and the closed-loop object model), closed by the external feedback loop. During the configuration, the parameters of the model subnet do not change, i.e., the ANN model only serves to close the external feedback loop and to represent the entire system in neural network form (to estimate the sensitivities of the outputs of the controlled object with respect to the parameters of the neurocontroller).
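As an illustration of this arrangement, the following sketch (not the book’s implementation; the names, layer sizes, and the simple linear stand-in for the frozen model are assumptions made for brevity) composes a static controller subnet with a frozen plant-model subnet and closes the external feedback loop; only the controller parameters would be updated during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer static controller: u = W2 @ tanh(W1 @ [r, y] + b1) + b2.
ctrl = {"W1": 0.1 * rng.standard_normal((8, 2)), "b1": np.zeros(8),
        "W2": 0.1 * rng.standard_normal((1, 8)), "b2": np.zeros(1)}

# Stand-in for the pretrained ANN model of the closed-loop plant; its parameters
# stay frozen while the controller is being configured.
model = {"A": np.array([[0.95, 0.05], [-0.05, 0.90]]), "B": np.array([0.0, 0.1])}

def controller(r_k, y_k, p):
    """Trainable static subnet: maps (reference, fed-back output) to control."""
    z = np.tanh(p["W1"] @ np.array([r_k, y_k]) + p["b1"])
    return (p["W2"] @ z + p["b2"]).item()

def model_step(x_k, u_k):
    """Frozen model subnet: one-step prediction of the plant state."""
    return model["A"] @ x_k + model["B"] * u_k

def simulate(r_seq, x0, p):
    """External feedback loop: the model output is fed back to the controller."""
    x, y_seq = np.asarray(x0, dtype=float), []
    for r_k in r_seq:
        y_k = x[0]                     # model output = first state component
        u_k = controller(r_k, y_k, p)  # controller subnet (parameters to be tuned)
        x = model_step(x, u_k)         # model subnet (parameters fixed)
        y_seq.append(y_k)
    return np.array(y_seq)

# The tracking error against the reference-model output would then be formed as
# e = y_rm_seq - simulate(r_seq, x0, ctrl) and minimized over ctrl only.
```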
In the batch mode, such a network can be trained using the same Levenberg–Marquardt method. However, this requires the computation of dynamic derivatives; hence, to compute the Jacobian, we have to apply either backpropagation through time or the real-time recurrent learning method. The recurrent form of the network presents additional difficulties in the process of ANN learning: the larger the sample, the higher the chance that the learning process will get stuck in one of the local minima, and this chance grows catastrophically with the length of the sample. Therefore, we divide the entire sample into segments.
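To make the notion of dynamic derivatives concrete, the sketch below shows the real-time recurrent learning recursion for a toy scalar recurrent map (the map and its single weight are purely illustrative, not the network used in the book): the sensitivity of the state to the weight is propagated forward together with the state and yields the Jacobian column needed by the Levenberg–Marquardt step.

```python
import numpy as np

def f(x, w, u):
    """Toy recurrent model x_{k+1} = tanh(w*x_k + u_k)."""
    return np.tanh(w * x + u)

def rtrl_jacobian(w, u_seq, x0=0.0):
    """Return dx_k/dw for every step k, computed forward in time (RTRL)."""
    x, S, sens = x0, 0.0, []
    for u in u_seq:
        a = w * x + u
        dfdx = (1.0 - np.tanh(a) ** 2) * w   # partial derivative w.r.t. the state
        dfdw = (1.0 - np.tanh(a) ** 2) * x   # partial derivative w.r.t. the weight
        S = dfdx * S + dfdw                  # sensitivity recursion S_{k+1} = f_x S_k + f_w
        x = f(x, w, u)
        sens.append(S)
    return np.array(sens)

# The resulting sensitivities form the Jacobian column for this single weight.
J_col = rtrl_jacobian(w=0.5, u_seq=np.sin(np.linspace(0.0, 6.0, 200)))
```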
To configure the parameters uniquely, we require the closed-loop network with the controller to start from the reference trajectory on each segment, since the neurocontroller cannot affect the initial conditions.
Thus it is necessary to consider the following factors:

1. Learning the network on small segments (less than 500–1000 points) leads to a situation where the network learns only this particular segment, forgetting about all the others.
2. Learning the network on large segments always leads to a bad local minimum.
3. Learning the network on medium-sized segments also leads to a bad local minimum; however, the rotation of these segments allows circumventing this problem to some extent.

For these reasons, it is necessary to use medium-sized segments, to perform training with three to seven epochs for each of them, to loop over the segments several times, and finally to consolidate the segments to improve the training performance; a training schedule of this kind is sketched below. As a result, the learning process of the ANN becomes very computationally intensive (up to several hours, depending on the implementation details).
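The following is a minimal sketch of such a segment-rotation schedule. The loss and the parameter update are placeholders (a crude finite-difference gradient step stands in for the Levenberg–Marquardt step with a dynamically computed Jacobian), and the segment sizes, epoch counts, and number of sweeps are only indicative.

```python
import numpy as np

def loss_on_segment(params, segment):
    # Placeholder tracking loss; in the text this is the reference-model
    # tracking error ||y_rm - y_hat||^2 accumulated over the segment.
    return float(np.sum((segment - params[0]) ** 2))

def train_epochs(params, segment, n_epochs, lr=1e-4, eps=1e-6):
    # Placeholder update: a finite-difference gradient step instead of the
    # Levenberg-Marquardt step used in the book.
    for _ in range(n_epochs):
        grad = np.array([(loss_on_segment(params + eps * e, segment)
                          - loss_on_segment(params, segment)) / eps
                         for e in np.eye(len(params))])
        params = params - lr * grad
    return params

rng = np.random.default_rng(1)
data = rng.normal(size=3000)                        # the whole training sample
segments = np.array_split(data, 6)                  # medium-sized segments (~500 points)
params = np.zeros(1)

for sweep in range(4):                              # loop over the segments several times
    for seg in segments:                            # "rotation" of the segments
        params = train_epochs(params, seg, n_epochs=5)   # three to seven epochs each
params = train_epochs(params, data, n_epochs=5)     # final pass on the consolidated data
```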
According to the previous considerations, it is advantageous to use the sequential training mode mentioned in the last section for the batch-mode training of the neurocontroller (i.e., for its pretraining); the only difference is that we need to use dynamic backpropagation to compute the Jacobian.

In this case, the Kalman filter acts as a “stapler” that joins the individual segments into one data array. Moreover, the segments can be chosen to be small (30–100 points, which saves considerable computational time), as long as the dynamics of the controlled object is reflected on this interval. Although sequential methods generally achieve lower accuracy, it is more important here to circumvent the problem of local minima and to decrease the training time.
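The sketch below illustrates, on a deliberately trivial one-parameter example, how an extended-Kalman-filter-style update carries the error covariance across short segments and thereby “staples” them together. The measurement model and all numerical values are assumptions; for the neurocontroller, the measurement Jacobian H would come from dynamic backpropagation rather than from the linear model used here.

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = 0.7                                        # parameter to be recovered
w, P = np.array([0.0]), np.array([[10.0]])          # weight estimate and its covariance
Q, R = 1e-6 * np.eye(1), 0.05                       # process and measurement noise

def ekf_update(w, P, x_k, y_k):
    H = np.array([[x_k]])                           # d(prediction)/d(weights)
    P = P + Q                                       # covariance prediction
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T / S                                 # Kalman gain
    w = w + (K * (y_k - w[0] * x_k)).ravel()        # innovation = measured - predicted
    P = (np.eye(1) - K @ H) @ P                     # covariance update
    return w, P

segments = [rng.normal(size=50) for _ in range(8)]  # small segments (30-100 points each)
for seg in segments:                                # processed one after another; P carries
    for x_k in seg:                                 # information from segment to segment
        y_k = w_true * x_k + rng.normal(scale=np.sqrt(R))
        w, P = ekf_update(w, P, x_k, y_k)
print(w)                                            # approaches w_true across the segments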
Thus, the procedure of the neurocontroller configuration is as follows:

1. Set the initial conditions on the reference trajectory. Usually, the first few points of the segment are assigned to the initial conditions.
2. Simulation of the coupled network on this segment (prediction of the behavior of the controlled object with the current parameters of the neural controller), estimation of the error of the reference model tracking, compu-