5. Discussion
As we saw in Section 2, the motor aspect is important for semantic grounding. In a related context, the philosopher Ludwig Wittgenstein proposed that the meaning of language lies in its use [17]. This is a departure from meaning based on what a word (or other symbol) represents. A more recent thesis in this general direction comes from Glenberg and Robertson [18], who emphasized that “what gives meaning to a situation is grounded in actions particularized for that situation,” thus taking an action-oriented view of grounding. Also see O’Regan and Noë’s sensorimotor contingency theory, which is organized around a similar argument [19].
One interesting question is whether the range of possible motor behavior somehow limits the degree of understanding. That is, can organisms with a higher degree of freedom and a richer repertoire of actions gain a higher level of understanding? I believe this is the case. For example, recall the orientation perception thought experiment in Section 2. If the visuomotor agent were only able to move horizontally or vertically, but not diagonally, it would never be able to figure out what the 45-degree and 135-degree light bulbs mean. Intelligence is generally associated with brain size or brain-to-body ratio, but how rich the animal's behavioral repertoire is may be just as important. For example, all the animals we consider intelligent show such flexibility in behavior: primates, elephants, dolphins, and even octopuses. An extension of this idea is to ask whether an agent can extend its behavioral repertoire. This is possible by learning new moves, but also by using tools. The degree of understanding can grow exponentially if the agent can also construct increasingly complex tools. This, I believe, is one of the keys to humans' superior intelligence. See Ref. [20] for our latest work on tool construction and tool use in a simple neuroevolution agent, and our earlier work on tool use referenced within.
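To make the thought experiment concrete, here is a minimal sketch in Python (my own toy construction, not the simulation from Section 2; the names line_image, sense, and invariant_moves are hypothetical). An agent senses a small patch of an oriented line stimulus and tests which moves leave its sensory input unchanged. Only an agent capable of diagonal moves can discover the invariance that characterizes a 45-degree stimulus:

import numpy as np

def line_image(angle_deg, size=21, thickness=1.0):
    """Render a line of the given orientation through the image center."""
    ys, xs = np.mgrid[0:size, 0:size] - size // 2
    theta = np.deg2rad(angle_deg)
    # Perpendicular distance of each pixel from the line.
    dist = np.abs(xs * np.sin(theta) - ys * np.cos(theta))
    return (dist <= thickness).astype(float)

def sense(image, pos, fov=3):
    """Return the agent's sensory patch (its internal state) at pos."""
    y, x = pos
    return image[y - fov:y + fov + 1, x - fov:x + fov + 1]

def invariant_moves(image, pos, moves):
    """Which moves leave the sensory state unchanged?"""
    s0 = sense(image, pos)
    return [m for m in moves
            if np.array_equal(sense(image, (pos[0] + m[0], pos[1] + m[1])), s0)]

img45 = line_image(45)
center = (10, 10)
axis_moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]      # horizontal/vertical only
diag_moves = axis_moves + [(1, 1), (-1, -1), (1, -1), (-1, 1)]

print(invariant_moves(img45, center, axis_moves))    # [] -- invariance never found
print(invariant_moves(img45, center, diag_moves))    # moves along the line preserve the input

An agent restricted to axis_moves never observes an invariance for the 45-degree line, and so, by the argument above, it can never ground what that orientation means.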
In Section 2, I also proposed the internal state invariance criterion within the context of reinforcement learning. This raises an interesting idea regarding rewards in reinforcement learning. In traditional reinforcement learning, the reward comes from the external environment. However, research in reinforcement learning has started to explore the importance of rewards generated from within the learning agent. This is called “intrinsic motivation” [21], and the internal state invariance criterion is a good candidate for such an intrinsic reward. In this view, intrinsic motivation also seems to be an important ingredient for meaning that is intrinsic to the learning system. Another related work in this direction is Ref. [22], based on the criterion of independently controllable features. The main idea is to look for good internal representations, where “good” is defined by whether actions can independently control these representations. In this case, both the perceptual representations and the motor policy are learned. Such a criterion can be internal to the agent, thus keeping things intrinsic, while still allowing the agent to understand the external environment. Also see Ref. [23] for our work on the codevelopment of visual receptive fields (perceptual representations) and the motor policy.
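As a minimal sketch of this idea (my own illustration on a toy 5-by-5 grid world, not the formulation in Refs. [21,22]), the reward below is generated inside the agent by the internal state invariance criterion, and a tabular Q-learner is trained on that internal reward alone:

import numpy as np

N_STATES, N_ACTIONS = 25, 4          # 5x5 grid; moves: up/down/left/right
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(state, action):
    """Deterministic grid transition; walls keep the agent in place."""
    y, x = divmod(state, 5)
    dy, dx = MOVES[action]
    y2, x2 = min(max(y + dy, 0), 4), min(max(x + dx, 0), 4)
    return y2 * 5 + x2

def internal_state(state):
    """Toy 'sensory' reading: here, the column the agent occupies."""
    return state % 5

def intrinsic_reward(s, s_next):
    """Internal state invariance criterion: reward actions that leave
    the agent's internal state unchanged (no external reward at all)."""
    return 1.0 if internal_state(s) == internal_state(s_next) else 0.0

Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)
s = 12                                # start at the grid center
for _ in range(5000):
    a = rng.integers(N_ACTIONS) if rng.random() < 0.1 else int(Q[s].argmax())
    s_next = step(s, a)
    r = intrinsic_reward(s, s_next)   # reward comes from within the agent
    Q[s, a] += 0.1 * (r + 0.9 * Q[s_next].max() - Q[s, a])
    s = s_next

# At the center, the learned values favor the vertical moves, which keep
# the internal (column) state invariant -- a regularity discovered with
# no externally supplied reward.
print(Q[12].round(2))

The point of the sketch is only that the reward function is computed from the agent's own internal state, so the criterion stays intrinsic while the learned policy still reflects structure in the external environment.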
Next, I would like to discuss various mechanisms that can serve as memory, and
how, in the end, they all lead to prediction. In neural networks, there are several ways
to make the network responsive to input from the past. Delayed input lines are one such mechanism, allowing a reactive feedforward network to take input from the past into