Page 102 - Neural Network Modeling and Identification of Dynamical Systems

2. DYNAMIC NEURAL NETWORKS: STRUCTURES AND TRAINING METHODS

[64] Griewank A, Walther A. Evaluating derivatives: Principles and techniques of algorithmic differentiation. 2nd ed. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics. ISBN 0898716594, 2008.
[65] Griewank A. On automatic differentiation. In: Mathematical programming: Recent developments and applications. Kluwer Academic Publishers; 1989. p. 83–108.
[66] Bishop C. Exact calculation of the Hessian matrix for the multilayer perceptron. Neural Comput 1992;4(4):494–501. https://doi.org/10.1162/neco.1992.4.4.494.
[67] Werbos PJ. Backpropagation through time: What it does and how to do it. Proc IEEE 1990;78(10):1550–60.
[68] Chauvin Y, Rumelhart DE, editors. Backpropagation: Theory, architectures, and applications. Hillsdale, NJ, USA: L. Erlbaum Associates Inc. ISBN 0-8058-1259-8, 1995.
[69] Jesus OD, Hagan MT. Backpropagation algorithms for a broad class of dynamic networks. IEEE Trans Neural Netw 2007;18(1):14–27.
[70] Williams RJ, Zipser D. A learning algorithm for continually running fully recurrent neural networks. Neural Comput 1989;1(2):270–80.
[71] Bengio Y, Simard P, Frasconi P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Netw 1994;5(2):157–66. https://doi.org/10.1109/72.279181.
[72] Hochreiter S, Bengio Y, Frasconi P, Schmidhuber J. Gradient flow in recurrent nets: The difficulty of learning long-term dependencies. In: Kolen J, Kremer S, editors. A field guide to dynamical recurrent networks. IEEE Press; 2001. p. 15.
[73] Kremer SC. A field guide to dynamical recurrent networks. 1st ed. Wiley-IEEE Press. ISBN 0780353692, 2001.
[74] Pascanu R, Mikolov T, Bengio Y. On the difficulty of training recurrent neural networks. In: Proceedings of the 30th International Conference on International Conference on Machine Learning, vol. 28. JMLR.org; 2013. p. III-1310–III-1318.
[75] Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput 1997;9:1735–80.
[76] Gers FA, Schmidhuber J, Cummins F. Learning to forget: Continual prediction with LSTM. Neural Comput 1999;12:2451–71.
[77] Gers FA, Schmidhuber J. Recurrent nets that time and count. In: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural computing: new challenges and perspectives for the New Millennium, vol. 3; 2000. p. 189–94.
[78] Gers FA, Schraudolph NN, Schmidhuber J. Learning precise timing with LSTM recurrent networks. J Mach Learn Res 2003;3:115–43. https://doi.org/10.1162/153244303768966139.
[79] Graves A, Schmidhuber J. Framewise phoneme classification with bidirectional LSTM networks. In: Proceedings. 2005 IEEE International Joint Conference on Neural Networks, vol. 4; 2005. p. 2047–52.
[80] Greff K, Srivastava RK, Koutník J, Steunebrink BR, Schmidhuber J. LSTM: A search space odyssey. CoRR 2015;abs/1503.04069. http://arxiv.org/abs/1503.04069.
[81] Wang Y. A new concept using LSTM neural networks for dynamic system identification. In: 2017 American Control Conference (ACC); 2017. p. 5324–9.
[82] Doya K. Bifurcations in the learning of recurrent neural networks. In: Proceedings of 1992 IEEE International Symposium on Circuits and Systems, vol. 6; 1992. p. 2777–80.
[83] Pasemann F. Dynamics of a single model neuron. Int J Bifurc Chaos Appl Sci Eng 1993;03(02):271–8. http://www.worldscientific.com/doi/abs/10.1142/S0218127493000210.
[84] Haschke R, Steil JJ. Input space bifurcation manifolds of recurrent neural networks. Neurocomputing 2005;64(Supplement C):25–38. https://doi.org/10.1016/j.neucom.2004.11.030.
[85] Jesus OD, Horn JM, Hagan MT. Analysis of recurrent network training and suggestions for improvements. In: Neural Networks, 2001. Proceedings. IJCNN '01. International Joint Conference on, vol. 4; 2001. p. 2632–7.
[86] Horn J, Jesus OD, Hagan MT. Spurious valleys in the error surface of recurrent networks: Analysis and avoidance. IEEE Trans Neural Netw 2009;20(4):686–700.
[87] Phan MC, Hagan MT. Error surface of recurrent neural networks. IEEE Trans Neural Netw Learn Syst 2013;24(11):1709–21. https://doi.org/10.1109/TNNLS.2013.2258470.
[88] Samarin AI. Neural networks with pre-tuning. In: VII All-Russian Conference on Neuroinformatics. Lectures on neuroinformatics. Moscow: MEPhI; 2005. p. 10–20 (in Russian).
[89] Jategaonkar RV. Flight vehicle system identification: A time domain methodology. Reston, VA: AIAA; 2006.
[90] Morozov NI, Tiumentsev YV, Yakovenko AV. An adjustment of dynamic properties of a controllable object using artificial neural networks. Aerosp MAI J 2002;(1):73–94 (in Russian).
[91] Krasovsky AA. Automatic flight control systems and their analytical design. Moscow: Nauka; 1973 (in Russian).
[92] Krasovsky AA, editor. Handbook of automatic control theory. Moscow: Nauka; 1987 (in Russian).
[93] Graupe D. System identification: A frequency domain approach. New York, NY: R.E. Krieger Publishing Co.; 1976.