Page 94 - Foundations of Cognitive Psychology : Core Readings

Chapter 5

               Minds, Brains, and Programs

               John R. Searle




               What psychological and philosophical significance should we attach to recent
               efforts at computer simulations of human cognitive capacities? In answering
               this question, I find it useful to distinguish what I will call ‘‘strong’’ AI from
               ‘‘weak’’ or ‘‘cautious’’ AI (Artificial Intelligence). According to weak AI, the
               principal value of the computer in the study of the mind is that it gives us a
               very powerful tool. For example, it enables us to formulate and test hypotheses
               in a more rigorous and precise fashion. But according to strong AI, the com-
               puter is not merely a tool in the study of the mind; rather, the appropriately
               programmed computer really is a mind, in the sense that computers given the
               right programs can be literally said to understand and have other cognitive
               states. In strong AI, because the programmed computer has cognitive states,
               the programs are not mere tools that enable us to test psychological explan-
               ations; rather, the programs are themselves the explanations.
                 I have no objection to the claims of weak AI, at least as far as this article is
               concerned. My discussion here will be directed at the claims I have defined as
               those of strong AI, specifically the claim that the appropriately programmed
               computer literally has cognitive states and that the programs thereby explain
               human cognition. When I hereafter refer to AI, I have in mind the strong ver-
               sion, as expressed by these two claims.
                 I will consider the work of Roger Schank and his colleagues at Yale (Schank
               and Abelson, 1977), because I am more familiar with it than I am with any
               other similar claims, and because it provides a very clear example of the sort of
               work I wish to examine. But nothing that follows depends upon the details of
               Schank’s programs. The same arguments would apply to Winograd’s SHRDLU
               (Winograd, 1973), Weizenbaum’s ELIZA (Weizenbaum, 1965), and indeed any
               Turing machine simulation of human mental phenomena.
                 Very briefly, and leaving out the various details, one can describe Schank’s
               program as follows: the aim of the program is to simulate the human ability to
               understand stories. It is characteristic of human beings’ story-understanding
               capacity that they can answer questions about the story even though the infor-
               mation that they give was never explicitly stated in the story. Thus, for exam-
               ple, suppose you are given the following story: ‘‘A man went into a restaurant
               and ordered a hamburger. When the hamburger arrived it was burned to a
               crisp, and the man stormed out of the restaurant angrily, without paying for
               the hamburger or leaving a tip.’’ Now, if you are asked ‘‘Did the man eat the


               From The Behavioral and Brain Sciences 3 (1980): 140–152. Reprinted with permission.