John Searle (1932 – ) is currently the Slusser Professor of Philosophy at the University of California, Berkeley. He received his PhD from Oxford University. He is a prolific author and one of the most important living philosophers.
According to Searle, Kurzweil’s book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, is an extensive reflection on the implications of Moore’s law.[i] The essence of Kurzweil’s argument is that smarter-than-human computers will arrive, and that we will download ourselves into this smart hardware, thereby guaranteeing our immortality. Searle attacks this fantasy by focusing on the chess-playing computer “Deep Blue” (DB), which defeated world chess champion Garry Kasparov in 1997.
Kurzweil thinks DB is a good example of the way that computers have begun to exceed human intelligence. But DB’s brute-force method of searching through possible moves differs dramatically from how human brains play chess. To clarify, Searle offers his famous Chinese Room Argument. If I am in a room with a program that answers questions in Chinese even though I do not understand Chinese, the fact that I can output the answers in Chinese does not mean I understand the language. Similarly, DB does not understand chess, and Kasparov was really playing a team of programmers, not a machine. Thus Kurzweil is mistaken if he believes that DB was thinking.
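The brute-force search Searle describes can be sketched as minimax: exhaustively evaluating every sequence of moves to some depth and choosing the best outcome. The sketch below is a drastic simplification, assuming a toy game and a hypothetical `moves`/`apply_move`/`score` interface invented for illustration (Deep Blue itself used custom hardware and evaluated some 200 million positions per second):

```python
def minimax(state, depth, maximizing, moves, apply_move, score):
    """Exhaustively search all move sequences to the given depth.

    This is the 'brute force' idea: no understanding of the game,
    just mechanical evaluation of every possibility.
    """
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    results = [minimax(apply_move(state, m), depth - 1, not maximizing,
                       moves, apply_move, score) for m in options]
    return max(results) if maximizing else min(results)

# Toy game (hypothetical, for illustration): players alternately add 1
# or 2 to a running total; the maximizer wants it high, the minimizer low.
moves = lambda total: [1, 2] if total < 10 else []
apply_move = lambda total, m: total + m
score = lambda total: total

best = minimax(0, 3, True, moves, apply_move, score)  # best reachable score
```

Searle’s point is visible in the code itself: the procedure manipulates values according to formal rules and “knows” nothing about the game it is winning.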
According to Searle, Kurzweil confuses a computer’s seeming to be conscious with its actually being conscious, something we should worry about if we are proposing to download ourselves into it! Just as a computer simulation of digestion cannot eat pizza, so too a computer simulation of consciousness is not conscious. Computers manipulate symbols or simulate brains through neural nets, but this is not the same as duplicating what the brain is doing. To duplicate what the brain does, the artificial system would have to act like the brain. Thus Kurzweil confuses simulation with duplication.
Another confusion is between observer-independent (OI) features of the world and observer-dependent (OD) features of the world. The former include features of the world studied by, for example, physics and chemistry, while the latter are things like money, property, and governments: things that exist only because there are conscious observers of them. (Paper has objective physical properties, but paper is money only because persons relate to it that way.)
Searle says that he is more intelligent than his dog and his computer in some absolute, OI sense because he can do things his dog and computer cannot. It is only in the OD sense that you could say that computers and calculators are more intelligent than we are. You can use intelligence in the OD sense provided that you remember it does not mean that a computer is more intelligent in the OI sense. The same goes for computation. Machines compute analogously to the way we do, but they don’t compute intrinsically at all; they know nothing of human computation.
The basic problem with Kurzweil’s book, according to Searle, is its assumption that increased computational power leads to consciousness. But he says that increased computational power of machines gives us no reason to believe machines are duplicating consciousness. The only way to build conscious machines would be to duplicate the way brains work and we don’t know how they work. In sum, behaving like one is conscious is not the same as actually being conscious.
Summary – Computers cannot be conscious.
[i] John Searle, “I Married A Computer,” review of The Age of Spiritual Machines, by Ray Kurzweil, New York Review of Books, April 8, 1999.
6 thoughts on “John Searle’s Critique of Ray Kurzweil”
What would Searle say about the computer that just beat one of the world’s leading Go players using a program that supposedly taught itself how to make the best moves instead of the brute force (all possible combinations) method of Deep Blue beating Kasparov? –Sylvia Jane Wojcik
I don’t think he would have a good reply because I think he is mistaken.
Playing Go (and other games) is just executing a learning algorithm and creating possible solutions based on previous experiences. Computers are better than we are because of super-accurate memory and a faster gathering of experience. You can compare these supercomputers to a human being who could learn a game for millions of years and has a perfect memory of previous plays. That’s still computational, but a lot more efficient than brute force.
The conclusion does not follow from the premises. It might not even be true that we have to know what consciousness is, or how it works, in order to reproduce it; after all, that’s what animals do. Certainly there’s no reason to believe that, if we do fully understand consciousness, machine hosts for it cannot be created.
I agree. JGM
Just a couple of thoughts after reading Sylvia’s comment: did the computer enjoy playing that game of Go? Did it feel any satisfaction after its win?