semantic level. Philosopher John Searle's paper on the "Chinese room argument"²
made clear the limitations of the computational/information-processing view of
cognition. Inside the Chinese room there is a monolingual English speaker with
all the instructions needed to process the Chinese language. The room has two mail slots,
one for input, and one for output. A Chinese speaker standing outside the room
writes down something on a piece of paper and deposits it in the input slot, and
the English speaker inside will process the information based on the instructions
present in the room and draw (not write) the answer on a piece of paper and return it
through the output slot. From the outside, the Chinese room appears to speak and understand
perfect Chinese, but there is no true understanding of Chinese in this system. The
main problem is that the information within the room lacks meaning, and this is
why grounding is necessary; grounding in the sense that information is grounded
in reality, not hovering above in an abstract realm of symbols (see Stevan Harnad's
concept of symbol grounding³). Many current artificial intelligence systems,
including deep learning, tend to lack such grounding, and this can lead to brittleness,
since these systems simply learn the input-output mapping without understanding.
Thus, we need to think in terms of the meaning of the information and how semantic
grounding is to be achieved: How does the brain ground information within itself? How
can artificial intelligence systems ground information within themselves? What is
the nature of such grounding? Perceptual? Referential? This can be a very complex
problem, so let us consider a greatly simplified version. Suppose you are sitting
inside a totally dark room, and you only observe the occasional blinking of some light
bulbs. You count the bulbs, and it looks like there are four of them. Each of these
bulbs seems to represent some information, but you are unsure what they mean.
mean. So, here you have a classic symbol grounding problem. The light bulbs are
like symbols, and they represent something. However, sitting inside this room, it
seems that there is no way you can figure out the meaning of these blinking lights.
Now consider that the dark room is the primary visual cortex (V1), that the four light
bulbs are neurons that represent something, and that in your place the downstream
visual areas sit inside the room. By the reasoning above, the downstream visual areas
would have no way to understand the meaning of V1 activity, which seems absurd:
if that were so, we would effectively be blind.
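To see why passive observation alone cannot resolve the ambiguity, consider the following minimal Python sketch of the dark room. It is a toy model under stated assumptions: the uniform random stimulus, the bulb-to-orientation table, and all names are illustrative choices of ours, and the four orientation labels anticipate the example introduced in the next paragraph. The observer logs only which anonymous bulb blinks, and every one of the 24 possible assignments of orientations to bulbs remains consistent with the log.

    import itertools
    import random
    from collections import Counter

    # Toy model of the dark room (illustrative assumptions only).
    # Hidden ground truth: which orientation (in degrees) each bulb stands for.
    TRUE_MEANING = {0: 0, 1: 45, 2: 90, 3: 135}

    def observe(n_steps=1000, seed=0):
        # The world presents a random oriented stimulus; the matching bulb blinks.
        # The person in the room sees only the bulb index, never the stimulus.
        rng = random.Random(seed)
        orientation_to_bulb = {o: b for b, o in TRUE_MEANING.items()}
        return [orientation_to_bulb[rng.choice([0, 45, 90, 135])]
                for _ in range(n_steps)]

    def surviving_hypotheses(log):
        # Try every assignment of orientations to bulbs. Relabeling the blink
        # stream under a hypothesis changes nothing the observer can measure,
        # so passive observation alone never rules any assignment out.
        base_counts = sorted(Counter(log).values())
        survivors = []
        for perm in itertools.permutations([0, 45, 90, 135]):
            meaning = dict(zip(range(4), perm))
            relabelled_counts = sorted(Counter(meaning[b] for b in log).values())
            if relabelled_counts == base_counts:
                survivors.append(meaning)
        return survivors

    if __name__ == "__main__":
        log = observe()
        print(f"Observed {len(log)} blinks from 4 anonymous bulbs.")
        print(f"Orientation assignments still consistent with the data: "
              f"{len(surviving_hypotheses(log))} of 24")

As the next paragraph argues, it is only by coupling observation to action that any of these competing hypotheses becomes testable.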
It turns out that this problem can only be solved if we allow motor interaction
from within the system. Inside the dark room, we can install a joystick, and the
person sitting inside can move it around and see how the joystick movement relates
to changes in the blinking lights in a systematic manner. Consider the case where
the four light bulbs represent four different orientations 0, 45, 90, and 135 degrees,
respectively. How can movement of the joystick reveal the meaning of these light
2. "Chinese room argument - Scholarpedia." August 26, 2009. http://www.scholarpedia.org/article/Chinese_room_argument.
3. "Symbol grounding problem - Scholarpedia." May 6, 2007. http://www.scholarpedia.org/article/Symbol_grounding_problem.