Page 101 - Neural Network Modeling and Identification of Dynamical Systems

REFERENCES
[33] Chen S, Wang SS, Harris C. NARX-based nonlinear system identification using orthogonal least squares basis hunting. IEEE Trans Control Syst Technol 2008;16(1):78–84.
[34] Sahoo HK, Dash PK, Rath NP. NARX model based nonlinear dynamic system identification using low complexity neural networks and robust H∞ filter. Appl Soft Comput 2013;13(7):3324–34.
[35] Hidayat MIP, Berata W. Neural networks with radial basis function and NARX structure for material lifetime assessment application. Adv Mater Res 2011;277:143–50.
[36] Wong CX, Worden K. Generalised NARX shunting neural network modelling of friction. Mech Syst Signal Process 2007;21:553–72.
[37] Potenza R, Dunne JF, Vulli S, Richardson D, King P. Multicylinder engine pressure reconstruction using NARX neural networks and crank kinematics. Int J Eng Res 2017;8:499–518.
[38] Patel A, Dunne JF. NARX neural network modelling of hydraulic suspension dampers for steady-state and variable temperature operation. Veh Syst Dyn: Int J Veh Mech Mobility 2003;40(5):285–328.
[39] Gaya MS, Wahab NA, Sam YM, Samsudin SI, Jamaludin IW. Comparison of NARX neural network and classical modelling approaches. Appl Mech Mater 2014;554:360–5.
[40] Siegelmann HT, Horne BG, Giles CL. Computational capabilities of recurrent NARX neural networks. IEEE Trans Syst Man Cybern, Part B, Cybern 1997;27(2):208–15.
[41] Kao CY, Loh CH. NARX neural networks for nonlinear analysis of structures in frequency domain. J Chin Inst Eng 2008;31(5):791–804.
[42] Billings SA. Nonlinear system identification: NARMAX methods in the time, frequency and spatio-temporal domains. New York, NY: John Wiley & Sons; 2013.
[43] Pearson RK. Discrete-time dynamic models. New York–Oxford: Oxford University Press; 1999.
[44] Nelles O. Nonlinear system identification: From classical approaches to neural networks and fuzzy models. Berlin: Springer; 2001.
[45] Sutton RS, Barto AG. Reinforcement learning: An introduction. Cambridge, Massachusetts: The MIT Press; 1998.
[46] Busoniu L, Babuška R, De Schutter B, Ernst D. Reinforcement learning and dynamic programming using function approximators. London: CRC Press; 2010.
[47] Kamalapurkar R, Walters P, Rosenfeld J, Dixon W. Reinforcement learning for optimal feedback control: A Lyapunov-based approach. Berlin: Springer; 2018.
[48] Lewis FL, Liu D. Reinforcement learning and approximate dynamic programming for feedback control. Hoboken, New Jersey: John Wiley & Sons; 2013.
[49] Gill PE, Murray W, Wright MH. Practical optimization. London, New York: Academic Press; 1981.
[50] Nocedal J, Wright S. Numerical optimization. 2nd ed. Springer; 2006.
[51] Fletcher R. Practical methods of optimization. 2nd ed. New York, NY, USA: Wiley-Interscience; 1987. ISBN 0-471-91547-5.
[52] Dennis J, Schnabel R. Numerical methods for unconstrained optimization and nonlinear equations. Society for Industrial and Applied Mathematics; 1996.
[53] Gendreau M, Potvin J. Handbook of metaheuristics. International series in operations research & management science. US: Springer; 2010. ISBN 9781441916655.
[54] Du K, Swamy M. Search and optimization by metaheuristics: Techniques and algorithms inspired by nature. Springer International Publishing; 2016. ISBN 9783319411927.
[55] Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In: Teh YW, Titterington M, editors. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Proceedings of machine learning research, vol. 9. Chia Laguna Resort, Sardinia, Italy: PMLR; 2010. p. 249–56. http://proceedings.mlr.press/v9/glorot10a.html.
[56] Nocedal J. Updating quasi-Newton matrices with limited storage. Math Comput 1980;35:773–82.
[57] Conn AR, Gould NIM, Toint PL. Trust-region methods. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics; 2000. ISBN 0-89871-460-5.
[58] Steihaug T. The conjugate gradient method and trust regions in large scale optimization. SIAM J Numer Anal 1983;20(3):626–37.
[59] Martens J, Sutskever I. Learning recurrent neural networks with Hessian-free optimization. In: Proceedings of the 28th International Conference on International Conference on Machine Learning. USA: Omnipress; 2011. p. 1033–40. ISBN 978-1-4503-0619-5. http://dl.acm.org/citation.cfm?id=3104482.3104612.
[60] Martens J, Sutskever I. Training deep and recurrent networks with Hessian-free optimization. In: Neural networks: Tricks of the trade. Springer; 2012. p. 479–535.
[61] Moré JJ. The Levenberg–Marquardt algorithm: Implementation and theory. In: Watson G, editor. Numerical analysis. Lecture notes in mathematics, vol. 630. Springer Berlin Heidelberg; 1978. p. 105–16. ISBN 978-3-540-08538-6.
[62] Moré JJ, Sorensen DC. Computing a trust region step. SIAM J Sci Stat Comput 1983;4(3):553–72. https://doi.org/10.1137/0904038.
[63] Bottou L, Curtis F, Nocedal J. Optimization methods for large-scale machine learning. SIAM Rev 2018;60(2):223–311. https://doi.org/10.1137/16M1080173.