consideration when processing the current input (see e.g., Ref. [4]). Another approach is to include recurrent connections, that is, connections that form a loop. More sophisticated methods exist, such as Long Short-Term Memory (LSTM), but they generally all fall under the banner of recurrent neural networks. Finally, there is a third category that can serve as a memory mechanism, which is to allow feedforward neural networks to drop and detect token-like objects in the environment. We have shown that this strategy can be used in tasks that require memory, using only feedforward neural networks [24]. From an evolutionary point of view, reactive feedforward neural networks may have appeared first, and subsequently, delay and the ability to utilize external materials may have evolved (note that this is different from systems that have an integrated external memory, for example, differentiable neural computers [25]). Further development or internalization of some of these methods (especially the external material interaction mechanism) may have led to a fully internalized memory. These memory mechanisms all involve some kind of recurrent loop (except, perhaps, for the delayed-input case), thus giving rise to a dynamic internal state. As we have seen in Section 3, in such a system, networks with predictive dynamics have a fitness advantage, and thus will proliferate.
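To make the first two mechanisms concrete, the following is a minimal sketch in Python/NumPy (all sizes and weights are hypothetical and untrained, chosen only for illustration) contrasting a feedforward step fed a delayed copy of the input with a recurrent step whose loop maintains a dynamic internal state:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the weights are random and untrained.
n_in, n_hid = 3, 8
W_in  = rng.standard_normal((n_hid, n_in)) * 0.5   # weights for the current input
W_d   = rng.standard_normal((n_hid, n_in)) * 0.5   # weights for the delayed input
W_rec = rng.standard_normal((n_hid, n_hid)) * 0.1  # recurrent-loop weights

def delayed_input_step(x_now, x_prev):
    # Memory via delayed inputs: the previous input is fed in alongside
    # the current one, so the network remains feedforward (no loop).
    return np.tanh(W_in @ x_now + W_d @ x_prev)

def recurrent_step(x, h):
    # Memory via a recurrent loop: the hidden state h persists between
    # time steps, giving rise to a dynamic internal state.
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(n_hid)
x_prev = np.zeros(n_in)
for t in range(5):
    x = rng.standard_normal(n_in)
    y_ff = delayed_input_step(x, x_prev)  # depends only on the last two inputs
    h = recurrent_step(x, h)              # depends on the entire input history
    x_prev = x

The contrast in the final loop is the point: the delayed-input network forgets anything older than its fixed delay window, whereas the recurrent state h accumulates the whole input history, which is what makes predictive internal dynamics possible.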
Continuing with the discussion of prediction, I would like to turn to how recent advances in machine learning are benefiting from the use of prediction as a learning objective/criterion. In machine-learning settings where explicit target values are rare or task-specific rewards are very sparse, it is a challenge to train the learning model effectively. Recent work by Finn and Levine [26] (and others) showed that learning motor tasks in a completely self-supervised manner is possible without a detailed reward, by using a deep predictive model trained on a large dataset of robotic pushing experiments. This is a concrete example of how prediction can be helpful to the agent. See Ref. [26] for more references on related approaches that utilize prediction.
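As an illustration of prediction serving as a self-supervised objective (this is only a schematic of the idea, not the deep video-prediction architecture of Ref. [26]; the toy dynamics, dimensions, and learning rate below are all hypothetical), consider a small forward model trained to predict the next observation from the current observation and action, where the training "label" is simply the next observation itself:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for a toy sensorimotor stream.
obs_dim, act_dim, hid = 4, 2, 16
W1 = rng.standard_normal((hid, obs_dim + act_dim)) * 0.1
W2 = rng.standard_normal((obs_dim, hid)) * 0.1
lr = 1e-2

def predict(obs, act):
    # Forward model: predict the next observation from (obs, action).
    h = np.tanh(W1 @ np.concatenate([obs, act]))
    return W2 @ h, h

for step in range(2000):
    obs = rng.standard_normal(obs_dim)                 # stand-in sensor data
    act = rng.standard_normal(act_dim)                 # stand-in motor command
    next_obs = obs + 0.1 * np.concatenate([act, act])  # toy world dynamics
    pred, h = predict(obs, act)
    err = pred - next_obs                              # prediction error
    # Gradient descent on the squared prediction error; no reward
    # signal or human labeling is involved anywhere.
    W2 -= lr * np.outer(err, h)
    dh = (W2.T @ err) * (1 - h**2)
    W1 -= lr * np.outer(dh, np.concatenate([obs, act]))

No reward function appears in the loop: the environment's own future supplies the training signal, which is what makes this kind of objective usable when rewards are sparse or absent.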
Finally, let us consider question-asking. As briefly hinted in Section 4 (citing Ref. [10]), generating questions or posing problems can be viewed as generating new goals. Similar in spirit to Schmidhuber's PowerPlay [12], Held et al. proposed an algorithm for automatic goal generation in a reinforcement learning setting [27]. The algorithm generates a range of tasks at the appropriate level of difficulty for the agent's current ability. A generator network is used to propose a new task to the agent, where the task is drawn from a parameterized subset of the state space. A significant finding based on this approach is that the agent can efficiently and automatically learn a large range of different tasks without much prior knowledge. These results show the powerful role of question-asking in learning agents.
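The algorithm of Ref. [27] trains a generative adversarial network to propose goals of intermediate difficulty; the toy sketch below (with an entirely hypothetical stand-in agent and generator, so only the curriculum loop itself is faithful to the idea) shows how proposing goals at the frontier of the agent's competence can drive open-ended learning:

import numpy as np

rng = np.random.default_rng(0)

def success_prob(goal, skill):
    # Stand-in for the agent's policy: goals near the origin are easy,
    # and the reachable radius grows with the skill level.
    return float(np.exp(-np.linalg.norm(goal) / skill))

def propose_goals(center, spread, n=64):
    # Stand-in goal generator: samples candidate goals from a
    # parameterized region of a 2-D state space.
    return center + spread * rng.standard_normal((n, 2))

skill, center, spread = 0.5, np.zeros(2), 0.5
for it in range(20):
    goals = propose_goals(center, spread)
    rates = np.array([success_prob(g, skill) for g in goals])
    # Keep goals of intermediate difficulty: neither trivial nor
    # (currently) impossible; this is the curriculum frontier.
    frontier = goals[(rates > 0.1) & (rates < 0.9)]
    if len(frontier) > 0:
        center = frontier.mean(axis=0)   # refocus the generator
        spread = float(frontier.std()) + 0.1
        skill *= 1.1                     # toy stand-in for training on the frontier
print(f"final skill level: {skill:.2f}")

Each iteration nudges the generator toward goals the agent can almost, but not yet, achieve; as the agent improves, the frontier moves outward, and the sequence of self-posed "questions" becomes a curriculum.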



6. CONCLUSION
In this chapter, I talked about meaning versus information, prediction versus memory, and question versus answer. These ideas challenge our ingrained views of brain function and intelligence (information, memory, and problem solving), and we saw