A case study in the philosophy of mind and cognitive science
John R. Searle launched a remarkable discussion about the foundations of artificial intelligence and cognitive science with his well-known Chinese room argument in 1980 (Searle 1980). It is presented here as an example of and introduction to the philosophy of mind. No background knowledge is needed to read this presentation. However, for Finnish readers there is a small introductory text on the mind-body problem. You may also read Stevan Harnad’s commentary on Searle’s arguments and related issues (1989, 1990, 1992, 1995) and Searle’s replies to them. Selmer Bringsjord offers a commentary (1994) on Searle’s philosophy, defending cognitive science.
The mind-body problem
For thousands of years, thinkers have pondered the following question: how can we reconcile mind and brain - two quite distinct entities? At first, this question might appear meaningless, but trust me: it is a hard one, perhaps one of the biggest problems in science today.
Many proposals have been made. Descartes suggested that there exists an independent entity - or substance - which is the human soul. The soul interacts with the brain. Today this is not a widely accepted view, but it still has its defenders, such as the neurobiologist John Eccles. This view is called interactionist dualism.
Materialism is the doctrine that there are no such things as souls; only material things exist. However, there are many forms of materialism. In this context we will examine only one of them, called strong artificial intelligence, which states that a computer running the right program would be mental. The emphasis is on the word ‘program’, since any machine can implement a given program. This is an answer to the mind-body problem: what matters is not the matter but its organisation, i.e. that it ‘runs’ the right software.
The human mind is viewed as a piece of software which the human brain implements. Therefore it would be possible - in principle - to code this program for a von Neumann computer, and we would have a mental machine. This is quite a fascinating position. The whole of cognitive science was originally based on this paradigm (classical cognitive science, cognitivism).
Chinese room argument
Searle’s Chinese room argument tries to show that strong AI is false. But how can anyone show it to be false if we don’t know what the human mind’s program is? How can one know it a priori - before any empirical tests have been run? This is the ingenious part of Searle’s argument. The idea is to construct a machine which would be a zombie (i.e. not mental) under any program. If such a machine existed, strong AI would be false, since no program would ever make it mental.
But how does one construct such a machine? And worse, how would we actually know whether it has thoughts or not? This is the second problem, which Searle solves by having us implement the machine ourselves. If we implement the program, we will know whether it is mental or not. This is why the Chinese room argument contains a thought experiment, presented next.
Suppose you are in a closed room which has two slots. Through slot 1 somebody gives you Chinese characters which you don’t recognize as words, i.e. you don’t know what these small characters mean. You also have a huge rulebook which you use to construct new Chinese characters from those that were given to you, and finally you push these new characters out through slot 2. In short:
1. Chinese characters come in, 2. you use the rulebook to construct more Chinese characters, and 3. you push those new characters out.
In essence, this is just like a computer program: it takes an input, computes something and finally produces an output. Suppose further that the rulebook is such that people outside the room can converse with you in Chinese. For example, they send you the question ‘how are you’ and you, following the rulebook, give a meaningful answer. So far, the computer program simulates a human being who understands Chinese.
One can even ask the room ‘do you understand Chinese?’ and it can answer ‘yes, of course’, despite the fact that you, inside the room, do not understand a word of what is going on. You are just following rules, not understanding Chinese.
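To make the room-as-program analogy concrete, here is a minimal sketch in Python. The rulebook is reduced to a toy lookup table (the particular Chinese phrases, the function name and the fallback reply are hypothetical illustrations, not part of Searle’s argument); a real rulebook would have to be vastly more complex to sustain a conversation, but the point is the same: the procedure matches and emits symbol shapes without ever consulting their meanings.

```python
# A minimal sketch of the Chinese room viewed as a program.
# The rulebook, the phrases and the fallback reply below are hypothetical
# illustrations; they only stand in for Searle's "huge rulebook".

# Purely formal rules: input strings are mapped to output strings.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗？": "当然懂。",      # "Do you understand Chinese?" -> "Of course."
}

def chinese_room(characters_from_slot_1: str) -> str:
    """Apply the rulebook to the incoming symbols and return new symbols.

    This is pure symbol manipulation: shapes are matched and other shapes
    are emitted, with no access to what any of them mean.
    """
    return RULEBOOK.get(characters_from_slot_1, "对不起，我不明白。")  # fallback reply

if __name__ == "__main__":
    # Characters come in through slot 1; new characters go out through slot 2.
    print(chinese_room("你懂中文吗？"))  # prints "当然懂。"
```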
The crucial part is this: given any rulebook (= program), you would never understand the meanings of the characters you manipulate. Searle has constructed a machine which can never be mental. Changing the program only means changing the rulebook, and you can clearly see that this does not increase your understanding. Remember that strong artificial intelligence states that, given the right program, any machine running it would be mental. Well, says Searle, this Chinese room would not understand anything... there must be something wrong with strong AI.
Criticism
Searle has presented his views; now it is time for other philosophers and cognitive scientists to offer their comments and criticism. The criticism is presented in the form of a dialogue. The cognitive scientist’s comments are grounded in the many commentaries on Searle’s argument, and Searle’s replies are based on his responses to that criticism. However, the dialogue itself is fictitious.
Cognitive Scientist (CS from now on): I’m impressed. You have certainly given an exceptional argument which raises many profound questions concerning the foundations of artificial intelligence. But how can you insist that we can never come up with thinking machines? Could it be that our present computers and programs are still too simple (Sloman & Croucher 1980)? Maybe our present computers are just too slow (Dennett 1987)?
Searle: This is not a matter of particular machines, future prospects or the speed of your computers. It has nothing to do with the hardware. Strong artificial intelligence says that all that matters is the software.
CS: I see your point. But I still find that your Chinese room is not analogous to computers, as you claimed. In fact, you have later written that there is no such thing as intrinsic syntax in nature (Searle 1993, Jacquette 1990): why do you assume that such an entity exists in computers? Sure, computers are programmed with syntactic programs, but in essence they are just hardware. A program is transformed into electrical impulses, which is hardware in your vocabulary. So I think the Chinese room argument has nothing to do with computers.
Searle: On that point I was wrong when I first introduced the argument in 1980 (note that this is my interpretation; I don’t think Searle has ever admitted this). I compared the Chinese room to Roger Schank’s computer simulations (1977). However, as I said, my argument has nothing to do with hardware or computers; it is about programs. But it still refutes strong artificial intelligence, cognitivism and cognitive science.
CS: But this ‘intrinsic intentionality’ (mentality) you are talking about... it is a private experience... why would we want to introduce objective criteria for such a subjective experience (Wilensky 1980, Jacquette 1989)? In fact, we have the ‘other minds problem’, which states that it is ultimately impossible to know that someone else has any subjective experiences. We cannot observe other people’s thoughts. I think you ask for too much - the Turing test (1950) would certainly be enough! I think this casts doubt on the importance of your argument. Whether any machine will ever be mental, we can never know for sure.
Searle: Agreed. But what is the point of strong AI then? If strong AI claims that any system with the right program would be mental, it is clearly a metaphysical hypothesis in the same sense... However, I can present my Chinese room argument without the other minds problem. My argument shows that the person in the Chinese room doesn’t understand Chinese. We rely on our own experience when we verify this fact. So the argument holds whether there are conscious minds other than mine (common sense) or not (solipsism). In that sense, it is about the ontology of mentality, not about epistemology (Searle 1993b). And in cognitive science, one simply presupposes that minds exist.
CS: Curiously, we still have a feeling that your argument is just an ‘intuition pump’ of some kind (Dennett, Block, Hofstadter 1980). You have just constructed a purposeful and intuitive situation, aiming at a false conclusion. Think about the Earth: people were once quite convinced that it is flat. Nobody believes that today. There we have an example of a wrong intuition - maybe that is all the Chinese room is. Anyway, why should we - as scientists - believe in any intuitions or thought experiments?
Searle: It is a plain fact that I don’t understand Chinese in the Chinese room. There is no intuition about that. The argument relies on that fact.
CS: Hmmm... You refer to this concept of ‘intentionality’ in your argument. You are claiming that the man in the room does not have intentionality?
Searle: Right. Since I don’t understand Chinese, I don’t know what those Chinese characters mean. This means essentially the same as lacking intentionality. Intentionality, in turn, is one essential form of mentality (Brentano).
CS: But isn’t it quite problematic to try to distinguish between intentional and non-intentional creatures? What kind of mental states would you attribute to monkeys, cats, insects... (Menzel 1980)? Perhaps there is no point in saying that one thing is mental and another is not?
Searle: My argument has nothing to do with that. I am not trying to find any criteria for mentality.
CS: Yes, but what I meant was: what if intentionality exists in some other form (Jacquette 1989, Carleton 1984)? It might turn out that human intentionality is not the only possibility. For example, in your original 1980 article you said that 1) there are some processes P which produce intentionality and 2) X is intentional, and derived from this that X has those processes P. This is plainly an invalid logical inference.
Searle: Can you show where I did say so?
CS: You wrote (1980) that “it says simply that certain brain processes are sufficient for intentionality” and “any mechanism capable of producing intentionality must have causal powers equal to those of the brain.”
Searle: ...well... you are right about that. I made a logical mistake. However, it does not destroy my argument (1990b), since it has nothing to do with the strong AI thesis. The strong AI thesis states that all that matters is the program. It does not distinguish between alien and human intelligence. In fact, I think its advocates are not trying to build a machine with an alien intentionality; it is human mentality they are after.
CS: What if the whole ‘feeling’ of intentionality is only some sort of illusion (Bridgeman 1980, Ringe 1980)?
Searle: Illusion? Don’t you have a concrete feeling of being intentional and mental?
CS: Of course, but I am asking whether it matters: perhaps it is just an illusion and does not exist in the way we think?
Searle: So? Then strong AI advocates are also talking about some kind of illusion... it doesn’t matter whether it is an illusion or not... we are debating about it all the same. Who cares if it is an illusion? And most of all, what a marvellous and wonderful illusion it is! Let us simply say that intentionality is in fact an illusion, and continue our debate keeping in mind that we are talking about illusions.
CS: Well, OK, it was just a thought... but you said that this rulebook is so complex and huge that the room can answer any Chinese question meaningfully?
Searle: Yes.
CS: So it must be possible for the room to learn?
Searle: Of course.
CS: I think that if the room is able to answer any question meaningfully, we must simply say that it understands (Rorty 1980). Cheating completely is not cheating anymore! The problem vanishes.
Searle: Going back to Turing’s proposal? If you want, you can define the concept of ‘understanding’ behaviorally. Then understanding means the same as behaving as if you understood. I have called this as-if intentionality, or observer-relative intentionality. That is exactly why we must distinguish between as-if intentionality and intrinsic intentionality.
CS: Oh, I see... so Turing was talking about observer-relative understanding?
Searle: Precisely. However, Turing never mentioned anything about understanding. He was talking about intelligence.
CS: These conceptual issues again... is it necessary to say that consciousness and mentality are necessary for intelligence, semantics and intentionality (Korb 1991, Dretske 1990)? I think that even semantics can exist without any conscious ‘intrinsic’ experience.
Searle: You can say so. But what I mean... I simply don’t use these words in that sense...
CS: Yes, I know. Well, I think a killer counterargument has just come to mind... you said that it is a plain fact that the man in the room does not understand Chinese and that he therefore has no intentionality?
Searle: Right.
CS: But isn’t it also a fact that he is intentional when he uses his rulebook and carries out the orders? He is clearly another human being (Boden 1988, Chrisley 1995, Rapaport 1986), and one can conclude that even pure syntactic symbol manipulation would then require intentionality! The Chinese room would not work at all if the man died or became unconscious.
Searle: You made a good point. The person is not intentional with respect to Chinese, but of course he understands something about the rulebook. You are right about that.
CS: So isn’t there something quite wrong with your argumentation then? One could conclude, in the same manner as you, that computers need to understand the rules they use in their computations...
Searle: You cannot infer a universal claim from one example. If the man in the Chinese room has intentionality, it does not follow that all computers must have intentionality.
CS: But now you are saying that the man in the room is intentional, aren’t you?
Searle: In a sense. But he or she does not understand Chinese, and that is the fact we must concentrate on. OK, I have to admit that the thought experiment is naturally such that a person is there with his or her intentionality. But you must realize that this argument is about Chinese, not about understanding the rulebook.
CS: Can you make your point more explicit? It is no longer clear to me what your argument says.
Searle: My argument says... that if there is an entity which does computation, such as a human being or a computer, it cannot understand the meanings of the symbols it uses. It understands neither the input it is given nor the output it produces.
CS: Fine. I can accept that, at least temporarily. But how can one be so sure that the room as a whole does not understand? How can you, being only a part of the system, know about the system as a whole (Weiss 1990, Copeland 1993; see also Searle 1980)? Think about a single neuron in your head. Do you think it is conscious? Yet, quite miraculously, the system of many neurons becomes conscious (Dyer 1990). Maybe the understanding is not in the manipulator but in the rulebook (Abelson 1980), or in the researcher who made the rulebook (Schank 1980)?
Searle: If the room keeps bothering you, we can leave it out. Suppose that the man memorizes the whole rulebook and starts working outdoors. There is no difference, and there are no other subparts where the intentionality could mysteriously hide (Searle 1980). And if the understanding is in the rulebook or in the researcher, you are talking about weak artificial intelligence.
CS: What about indexicals? If you ask the room “What is this?”, he cannot answer. He cannot find the answer in the rulebook (Carleton 1984, Ben-Yami 1993).
Searle: That assumes that the only input to the machine is language.
CS: So, instead of a passive room, we must imagine a robot of some kind, with perception, the ability to move, sensory modalities and so on (Bridgeman 1980, Fodor 1980, Boden 1988, Dyer 1990)? A Chinese robot?
Searle: That does not make any difference. When you are sitting in the room and getting meaningless input, you can’t even know whether it is language or something else. It is so meaningless. It does not matter whether the system is a language speaker or a humanoid Star Wars robot.
CS: I have to admit that. But what if we simulate - by means of a computer program - every neuron in our heads? I mean every neuron, at any arbitrary level of detail (Pylyshyn 1980, Bridgeman 1980, Haugeland 1980, Lycan 1980, Hofstadter 1980, Wilensky 1980, Savitt 1981, Jacquette 1989, 1989b, 1990, Dyer 1990)?
Searle: We might get a robot behaving just like us.
CS: But how can you be so sure that it does not have a mental life? Think! If we simulate every neuron - what could possibly be left out? Nothing! Everything we could ever need is in that program.
Searle: First, I have to say that this idea is quite far from cognitivism and cognitive science, where we are supposed to simulate not brains but minds. It has nothing to do with the original physical symbol system hypothesis.
CS: Of course, but in your original article you said that the Chinese room argument applies to any Turing equivalent computer simulation. Surely this brain-simulation can be Turing equivalent.
Searle: So I said...
CS: Sorry to interrupt, but let me formulate another thought experiment (it comes from Savitt 1982). Suppose there are n neurons in someone’s head; call him agent_n. Take one neuron from his head, formalize its functioning, and replace the missing neuron with its simulation. Call this agent_(n-1). And you can go on: take another neuron, write down its program and make the replacement. Your Chinese room is agent_0! No neurons are left; there is only one huge program - a simulation of every one of his neurons. What happens in between? Does his intentionality vanish? Does it transfer to the program?
Searle: A nice story. But first, we must distinguish two systems: one with many small demons, each simulating a single neuron, and one where a single demon runs the whole program. My Chinese room is about the latter; the former is another issue. And in the end, there is no difference. If you make the rulebook simulate all the neurons, this does not give the manipulator any sense of intentionality. He cannot even know whether the rules concern stock prices, language production, neural simulation or anything else. We have to admit that the man in the room cannot understand, whatever the program is.
The cognitive scientist has failed
Time goes by. The cognitive scientist retires to her chambers and starts to ponder the issue. The man in the Chinese room does not understand... is there any way around this plain fact? She is feeling depressed, being a defender of strong artificial intelligence and cognitivism. She has spent a lifetime working on computer simulations, thinking that some day they would produce mentality. The Chinese room seems to be a system which can never be mental.
Then suddenly everything becomes clear. She knows what’s wrong with Searle’s argument, and why Searle is wrong.
Revenge
Searle: You told me you have another counter-argument dealing with my Chinese room?
CS: Yes I do! And this time you cannot beat me.
Searle: We’ll see...
CS: Previously I asked: what if the whole room understands Chinese, and not the person inside it? You replied that the room is unnecessary and that the man can memorize the rulebook. Was that correct?
Searle: Yes, that’s what I said.
CS: You also said that then there are no other systems where the intentionality could hide? There is only one person, manipulating the rules from his memory?
Searle: Right.
CS: There is only one person... however, as we know, persons and brains are not the same thing, are they?
Searle: No, they are not. Brains do not have sensations but persons do. Persons see red colors but brains do not. This is just a conceptual point, quite an obvious one indeed. It has nothing to do with the question of whether brains cause persons or whether they are identical with them; in our language we must simply distinguish between persons and brains. I have argued earlier (1992) that these mental, personal phenomena are also irreducible.
CS: Now, you said that the person does not understand, but what about his brain?
Searle: ?
CS: Who says that there must be only one person per brain (McDermott 1980, Dennett 1980, McCarthy 1980, Maloney 1987, Cole 1991)? A person cannot report on his own brain. For example, you are quite an unreliable source of information about your own neurons or synapses. Moreover, as you previously replied, if the person memorizes the rulebook and goes outdoors, there would be no place where the intentionality could hide. My answer is: there is such a place, namely the person’s brain.
Searle: There is no reason to think that there would ever be any person in my brain other than me!
CS: But in principle it is possible, isn’t it?
Searle: Yes, but...
CS: In philosophy, anything that is possible in principle will do.
Searle: What are you suggesting? Are you saying that we cannot rely on the Chinese room because we cannot be sure - as persons - whether there is someone *else* in our... head... who understands Chinese?
CS: Precisely. You are making a sort of category mistake (Boden 1988) in this argument, conflating the brain and the mind. In computers, for example, you do not have the person level as a starting point; it might only emerge in the right circumstances. The person in the Chinese room is simply not a reliable source of information about this matter.
Searle: But still... I find your speculations quite unconvincing, even if I have to admit that it is possible in principle.
CS: I know. But the best part is still coming.
Searle: Hit me.
CS: There is one way for the man inside the room to learn Chinese.
Searle: What are you waiting for? Out with it!
CS: The man can go outside and start learning, like anybody else. He can simply break down the walls and meet his programmers.
Searle: What?? But that would violate my argument..! You cannot be serious.
CS: I am deadly serious. OK, I have to admit that it violates your original argument, but in what way