In this paper, I will explain why Turing proposes an “imitation game” as a reformulation of the question “Can computers think?” However, I will argue that passing the “Turing test” constitutes neither a sufficient nor a necessary condition for answering this question. Based on Searle’s argument, I will explain that judging whether a computer has original intentional mental states is a better test. Finally, I will show that although Searle does not provide a “test” to measure intentionality, he in fact answers the question: computers cannot think.

By asking “Can computers pass the Turing test?”, Turing innovatively converts a difficult theoretical question into an operational one that is open to experimental research. Directly answering “Can computers think?” is hard, given the ambiguity of the terms “thinking” and “computer”; a clear definition of both is a prerequisite for judging whether computers can think. According to Turing, such definitions might be framed to reflect the common understanding among people and the normal use of the words. On that approach, however, the meaning of, and the answer to, the question would have to be sought in a statistical survey, which is absurd, since it draws the answer from imprecise public opinion rather than from scientific study. Instead of attempting a definition that satisfies everyone, Turing replaces the question with another that is unambiguous and operationalizable.

Turing proposes an “imitation game”, which we now call the “Turing test”. According to Turing, passing the test can be regarded as equivalent to having the capacity to think. The test involves a human interrogator and two respondents: one a computer, the other a human. The interrogator uses a keyboard and screen to conduct a natural-language conversation with the unseen respondents. After many trials, if the interrogator cannot reliably tell the computer from the human, the computer is said to have passed the Turing test.
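
To make the protocol concrete, here is a minimal sketch of the test loop in Python. The participant objects and their methods (`ask`, `answer`, `identify`) are hypothetical interfaces invented for illustration, and the pass criterion at the end is one simple operationalization rather than Turing’s own figure.

```python
import random

def run_imitation_game(interrogator, computer, human,
                       n_trials=100, n_questions=10):
    """A minimal sketch of Turing's imitation game protocol.

    The three participants are assumed to expose hypothetical
    interfaces: respondents have an ``answer(question)`` method, and
    the interrogator has ``ask(transcript)`` and
    ``identify(transcript)`` methods. None of this comes from Turing's
    paper; it is just one way to operationalize his setup.
    """
    correct = 0
    for _ in range(n_trials):
        # Hide the respondents behind anonymous labels in random order.
        labels = ["A", "B"]
        random.shuffle(labels)
        respondents = dict(zip(labels, [computer, human]))

        transcript = []
        for _ in range(n_questions):
            question = interrogator.ask(transcript)
            answers = {lbl: r.answer(question)
                       for lbl, r in respondents.items()}
            transcript.append((question, answers))

        # The interrogator names the label believed to be the computer.
        guess = interrogator.identify(transcript)
        if respondents[guess] is computer:
            correct += 1

    # Assumed pass criterion: the computer passes if identification
    # is no better than chance across the trials.
    return correct / n_trials <= 0.5
```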

However, I will argue that even if a computer passes the Turing test, this does not show that the computer can think. The Chinese room experiment proposed by John Searle provides a counterexample. In the experiment, a monolingual English speaker isolated in a room is given English instructions for manipulating Chinese symbols, although he neither understands nor can even recognize Chinese. Someone outside the room hands in a set of Chinese symbols. The person applies the rules, writes down a different set of Chinese symbols as the rules specify, and hands the result back out. The person thus mimics a computer program, and the whole system, with Chinese symbols as both input and output, mimics a human being engaged in a Chinese conversation with a real human. It is logically possible that the person outside the room is convinced that he or she is interacting with a real Chinese speaker. Therefore, if we replace the person with a computer that executes the same rules, the computer can likewise pass the Turing test. However, Searle argues that thinking requires understanding. Since manipulating Chinese symbols according to formal rules is not sufficient for the person to understand Chinese, it is not sufficient for a computer to understand Chinese either. Therefore, a computer’s passing the Turing test does not imply that it can think.
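
The rule-following in the room can be pictured as nothing more than a lookup over uninterpreted symbols. The toy sketch below, with an invented two-entry rule book standing in for Searle’s English rule book, shows that the procedure matches symbol shapes to symbol shapes and contains nothing corresponding to meaning.

```python
# A toy "Chinese room": purely formal symbol manipulation.
# The rule book below is an invented two-entry stand-in for Searle's
# rule book; the Chinese strings are illustrative data only.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Can you speak Chinese?" -> "Of course."
}

def chinese_room(symbols_in: str) -> str:
    """Return the output symbols that the rule book dictates.

    The lookup matches symbol shapes to symbol shapes; nothing in it
    represents what the symbols mean. This is Searle's point: formal
    (syntactic) manipulation is not sufficient for semantics.
    """
    # Fallback reply means "Sorry, please say it again."
    return RULE_BOOK.get(symbols_in, "对不起，请再说一遍。")

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # Fluent-looking reply, zero understanding.
```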

According to Searle, understanding is a criterion for thinking, and “understanding” implies both the possession of original intentional mental states and the truth of those states. Searle characterizes intentional mental states as having propositional contents that are “directed at or about objects and states of affairs in the world.” In other words, an intentional state is about something. For example, the belief that dogs are humans’ friends is about dogs, and the desire to have a cat is about a cat, so both are intentional states; sensations such as pains and itches, by contrast, are not about anything and thus are not intentional states. Therefore, testing whether a computer has original intentional mental states is a better way to judge whether computers can think. However, how to conduct such a test is an open question, given that intentionality is introspective: one knows only privately that one has intentionality and cannot access others’ minds. If we evaluate others by interacting with them and observing their behavior, we again run into the Turing test.

Although Searle does not provide a test of whether computers can think, he does dispose of the question. He denies that computers can think, because he rejects the analogy that the mind stands to the brain as a program or software stands to hardware. First, he argues that programs can be realized without any understanding, because programs have no intentionality. In the Chinese room, the English speaker could memorize the rules and the Chinese symbols, but memorizing them would not give him any understanding of Chinese. Likewise, a computer can mimic human beings, but “mimicking” is not “duplicating”: the computer only mechanically executes programs that produce human-like behavior or language, without any understanding of the corresponding meanings. Second, Searle shows that passing the Turing test is neither a sufficient nor a necessary condition for understanding, and hence for thinking. Passing the Turing test requires only running a computer program. As shown above, programs have no original intentionality, but understanding requires intentionality, so a program that lets a computer pass the Turing test does not show that the computer can think. On the other hand, it is possible that original intentionality arises through channels other than programs. Therefore, even if a computer could think, it would not necessarily pass the Turing test.

To conclude, given the state of computing technology and the ambiguity of the terms “thinking” and “computer”, Turing reformulates the question “Can computers think?” as “Can computers pass the Turing test?” However, this reformulation still does not answer whether computers can think, since passing the Turing test is not necessarily related to understanding, which requires intentional mental states. A test designed to detect original intentionality could in principle answer this question, but such a test is empirically difficult, since intentional mental states can only be privately experienced and one has no access to others’ mental states. Furthermore, although Searle does not provide such a test, he does answer the question, denying that computers can think.