Page 184 - Artificial Intelligence in the Age of Neural Networks and Brain Computing

174    CHAPTER 8 The New AI: Basic Concepts, and Urgent Risks




property applies to dynamical systems over time. The TLRN format here, assuming Gaussian noise at each time interval, therefore gives us a strict universal generalization of the vectorized general form of the standard ARMA models and methods of time-series analysis.
Despite the strict dominance of the TLRN over traditional and widely known designs for prediction, for dynamic modeling, and for reconstructing the true state of the world (like autoencoders, but more rigorous and powerful), there are extensions of the general TLRN which have performed much better in tests in simulation and on real data from the chemical process industry [17], and there is room for further development of even more powerful extensions [25]. The company DeepMind (a major leader in today's deep learning, now owned by Google) has been developing ways to approximate backpropagation through time in feedforward real-time computations, but as the figure indicates, a more general approximation was described in detail in the Handbook of Intelligent Control in 1992 (and shown by Prokhorov to work well enough in tests for control applications).
But what of recurrence without a time delay or a clock? Is it good for applications other than associative memory (an area studied to great exhaustion decades ago, building on classic work by Grossberg and Kohonen)?
Indeed it is. See Fig. 8.10 for an example.
The recurrence in Fig. 8.10 looks exactly like the recurrence in Fig. 8.9 to the untrained eye. However, notice that there are two time indices here, "t" and "i." This kind of design implies a kind of cinematic processing, in which new input vectors arrive from time to time (indexed by the integer t, the big clock index) but in which many iterations of processing (indexed by i, a kind of inner loop index) try to attain some kind of convergence in the inner loop. Simultaneous recurrence is

                         FIGURE 8.10
                         The CSRN, an example of simultaneous recurrence [27].
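The two-index scheme described above can be sketched in a few lines. This is a generic inner-loop relaxation, not the actual CSRN of [27]; the function name `settle`, the weight names, and the tanh update are illustrative assumptions. At each outer tick t a new input frame arrives, and the inner loop over i iterates the same recurrent map until the state stops changing, i.e., reaches a fixed point.

```python
import numpy as np

def settle(x_t, W, U, b, tol=1e-6, max_iter=200):
    """Inner-loop relaxation for one outer time step t (illustrative).

    Iterates y_{i+1} = tanh(W @ x_t + U @ y_i + b) over the inner
    index i until the change falls below tol, i.e., until the loop
    converges, before the next input frame arrives.
    """
    y = np.zeros(b.shape)
    for i in range(max_iter):
        y_next = np.tanh(W @ x_t + U @ y + b)
        if np.max(np.abs(y_next - y)) < tol:
            return y_next, i + 1
        y = y_next
    return y, max_iter

# Usage: outer "big clock" loop over t, inner loop over i inside settle().
rng = np.random.default_rng(1)
n_in, n_hid = 3, 5
W = rng.normal(size=(n_hid, n_in))
U = 0.1 * rng.normal(size=(n_hid, n_hid))  # small gain makes the map a contraction
b = rng.normal(size=n_hid)

for t in range(3):                  # outer index t: new frames arrive
    x = rng.normal(size=n_in)
    y, iters = settle(x, W, U, b)   # many inner iterations i per frame
```

The small scaling on U is what guarantees the inner loop converges here; a trained simultaneous recurrent network must likewise keep its inner-loop map well enough behaved that the iteration settles between input frames.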