REFERENCES

[22] Pearlmutter BA. Learning state space trajectories in recurrent neural networks. In: International 1989 Joint Conference on Neural Networks, vol. 2; 1989. p. 365–72.
[23] Sato MA. A real time learning algorithm for recurrent analog neural networks. Biol Cybern 1990;62(3):237–41.
[24] Özyurt DB, Barton PI. Cheap second order directional derivatives of stiff ODE embedded functionals. SIAM J Sci Comput 2005;26(5):1725–43.
[25] Griewank A, Walther A. Evaluating derivatives: Principles and techniques of algorithmic differentiation. 2nd ed. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics. ISBN 0898716594, 2008.
[26] CppAD, a package for differentiation of C++ algorithms. https://www.coin-or.org/CppAD/.
[27] Walther A, Griewank A. Getting started with ADOL-C. In: Naumann U, Schenk O, editors. Combinatorial scientific computing. Chapman & Hall/CRC computational science; 2012. p. 181–202. Chap. 7.
[28] Allgower E, Georg K. Introduction to numerical continuation methods. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics. ISBN 089871544X, 2003.
[29] Shalashilin VI, Kuznetsov EB. Parametric continuation and optimal parametrization in applied mathematics and mechanics. Dordrecht, Boston, London: Kluwer Academic Publishers; 2003.
[30] Chow SN, Mallet-Paret J, Yorke JA. Finding zeros of maps: Homotopy methods that are constructive with probability one. Math Comput 1978;32:887–99.
[31] Watson LT. Theory of globally convergent probability-one homotopies for nonlinear programming. SIAM J Optim 2000;11(3):761–80.
[32] Chow J, Udpa L, Udpa SS. Homotopy continuation methods for neural networks. In: IEEE International Symposium on Circuits and Systems, vol. 5; 1991. p. 2483–6.
[33] Lendl M, Unbehauen R, Luo FL. A homotopy method for training neural networks. Signal Process 1998;64(3):359–70.
[34] Gorse D, Shepherd AJ, Taylor JG. The new era in supervised learning. Neural Netw 1997;10(2):343–52.
[35] Coetzee FM. Homotopy approaches for the analysis and solution of neural network and other nonlinear systems of equations. Ph.D. thesis, Carnegie Mellon University; 1995.
[36] Allgower EL, Georg K. Numerical path following. In: Techniques of scientific computing (Part 2). Handbook of numerical analysis, vol. 5. Elsevier; 1997. p. 3–207.
[37] Elman JL. Learning and development in neural networks: the importance of starting small. Cognition 1993;48(1):71–99.
[38] Ludik J, Cloete I. Incremental increased complexity training. In: Proc. ESANN 1994, 2nd European Sym. on Artif. Neural Netw.; 1994. p. 161–5.
[39] Suykens JAK, Vandewalle J. Learning a simple recurrent neural state space model to behave like Chua's double scroll. IEEE Trans Circuits Syst I, Fundam Theory Appl 1995;42(8):499–502.
[40] Bengio Y, Louradour J, Collobert R, Weston J. Curriculum learning. In: Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09. New York, NY, USA: ACM. ISBN 978-1-60558-516-1, 2009. p. 41–8.
[41] Fedorov VV. Theory of optimal experiments. New York: Academic Press; 1972.
[42] MacKay DJC. Information-based objective functions for active data selection. Neural Comput 1992;4(4):590–604.
[43] Cohn DA. Neural network exploration using optimal experiment design. Neural Netw 1996;9(6):1071–83.
[44] Póczos B, Lörincz A. Identification of recurrent neural networks by Bayesian interrogation techniques. J Mach Learn Res 2009;10:515–54.
[45] Shewry MC, Wynn HP. Maximum entropy sampling. J Appl Stat 1987;14(2):165–70.
[46] Wynn HP. Maximum entropy sampling and general equivalence theory. In: Di Bucchianico A, Läuter H, Wynn HP, editors. mODa 7 – Advances in model-oriented design and analysis. Heidelberg: Physica-Verlag HD; 2004. p. 211–8.
[47] Kozachenko L, Leonenko N. Sample estimate of the entropy of a random vector. Probl Inf Transm 1987;23:95–101.
[48] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN'95 – IEEE International Conference on Neural Networks, vol. 4. ISBN 0-7803-2768-3, 1995. p. 1942–8.
[49] van den Bergh F, Engelbrecht A. A new locally convergent particle swarm optimiser. In: IEEE International Conference on Systems, Man and Cybernetics, vol. 3; 2002. p. 6.
[50] Peer ES, van den Bergh F, Engelbrecht AP. Using neighbourhoods with the guaranteed convergence PSO. In: Proceedings of the SIS '03 – IEEE Swarm Intelligence Symposium; 2003. p. 235–42.
[51] Clerc M. Particle swarm optimization. Newport Beach, CA, USA: ISTE. ISBN 9781905209040, 2010.
[52] Hansen N, Ostermeier A. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In: Proceedings of the IEEE Conference on Evolutionary Computation. ISBN 0-7803-2902-3, 1996. p. 312–7.
[53] Hansen N, Ostermeier A. Completely derandomized self-adaptation in evolution strategies. Evol Comput 2001;9(2):159–95.
[54] Jastrebski G, Arnold D. Improving evolution strategies through active covariance matrix adaptation. In: Proceedings of the IEEE Congress on Evolutionary Computation; 2006. p. 2814–21.