
systems reply simply begs the question by insisting without argument that the system must understand Chinese.
Furthermore, the systems reply would appear to lead to consequences that are independently absurd. If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive. For example, there is a level of description at which my stomach does information processing, and it instantiates any number of computer programs, but I take it we do not want to say that it has any understanding (cf. Pylyshyn, 1980). But if we accept the systems reply, then it is hard to see how we avoid saying that stomach, heart, liver, and so on, are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that the stomach understands. It is, by the way, not an answer to this point to say that the Chinese system has information as input and output and the stomach has food and food products as input and output, since from the point of view of the agent, from my point of view, there is no information in either the food or the Chinese—the Chinese is just so many meaningless squiggles. The information in the Chinese case is solely in the eyes of the programmers and the interpreters, and there is nothing to prevent them from treating the input and output of my digestive organs as information if they so desire.
This last point bears on some independent problems in strong AI, and it is worth digressing for a moment to explain it. If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental. And the mental–nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes, "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy, 1979). Anyone who thinks strong AI has a chance as a theory of the mind ought to ponder the implications of that remark. We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs, and furthermore that "most" of the other machines in the room—telephone, tape recorder, adding machine, electric light switch—also have beliefs in this literal sense. It is not the aim of this article to argue against McCarthy's point, so I will simply assert the following without argument. The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false. One gets the impression that people in AI who write this