

electric cell," "The adding machine knows how (understands how, is able) to do addition and subtraction but not division," and "The thermostat perceives changes in the temperature." The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality;³ our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples. The sense in which an automatic door "understands instructions" from its photoelectric cell is not at all the sense in which I understand English. If the sense in which Schank's programmed computers understand stories is supposed to be the metaphorical sense in which the door understands, and not the sense in which I understand English, the issue would not be worth discussing. But Newell and Simon (1963) write that the kind of cognition they claim for computers is exactly the same as for human beings. I like the straightforwardness of this claim, and it is the sort of claim I will be considering. I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.

Now to the replies.


5.1 The Systems Reply (Berkeley)
"While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."
My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. Still, I think many people who are committed to the ideology of strong AI will in the end be inclined to say something very much like this; so let us pursue it a bit further. According to one version of this view, while the man in the internalized systems example doesn't understand Chinese in the sense that a native Chinese speaker does (because, for example,