(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 8, 2016.)
John Searle (1932 – ) is currently the Slusser Professor of Philosophy at the University of California, Berkeley. He received his PhD from Oxford University. He is a prolific author and one of the most important living philosophers.
According to Searle, Kurzweil’s book is an extensive reflection on the implications of Moore’s law.[i] The essence of Kurzweil’s argument is that smarter-than-human computers will arrive, and that we will download ourselves into this smart hardware, thereby guaranteeing our immortality. Searle attacks this fantasy by focusing on the chess-playing computer “Deep Blue” (DB), which defeated world chess champion Garry Kasparov in 1997.
Kurzweil thinks DB is a good example of the way that computers have begun to exceed human intelligence. But DB’s brute-force method of searching through possible moves differs dramatically from how human brains play chess. To clarify, Searle offers his famous Chinese Room Argument. If I am in a room with a program that answers questions in Chinese even though I do not understand Chinese, the fact that I can output the answers in Chinese does not mean I understand the language. Similarly, DB does not understand chess, and Kasparov was really playing a team of programmers, not a machine. Thus Kurzweil is mistaken if he believes that DB was thinking.
According to Searle, Kurzweil confuses a computer’s seeming to be conscious with its actually being conscious, something we should worry about if we are proposing to download ourselves into it! Just as a computer simulation of digestion cannot eat pizza, so too a computer simulation of consciousness is not conscious. Computers manipulate symbols or simulate brains through neural nets, but this is not the same as duplicating what the brain is doing. To duplicate what the brain does, an artificial system would have to work the way the brain works. Thus Kurzweil confuses simulation with duplication.
Another confusion is between observer-independent (OI) features of the world and observer-dependent (OD) features of the world. The former include features of the world studied by, for example, physics and chemistry, while the latter are things like money, property, and governments: all things that exist only because there are conscious observers of them. (Paper has objective physical properties, but paper is money only because persons relate to it that way.)
Searle says that he is more intelligent than his dog and his computer in some absolute, OI sense because he can do things his dog and computer cannot. It is only in the OD sense that you could say that computers and calculators are more intelligent than we are. You can use intelligence in the OD sense provided that you remember it does not mean that a computer is more intelligent in the OI sense. The same goes for computation. Machines compute analogously to the way we do, but they don’t compute intrinsically at all; they know nothing of computation.
The basic problem with Kurzweil’s book is its assumption that increased computational power leads to consciousness. Searle says that the increased computational power of machines gives us no reason to believe machines are duplicating consciousness. The only way to build conscious machines would be to duplicate the way brains work, and we don’t know how they work. In sum, behaving as if one is conscious is not the same as actually being conscious.
Summary – Computers cannot be conscious.
[i] John Searle, “I Married A Computer,” review of The Age of Spiritual Machines, by Ray Kurzweil, New York Review of Books, April 8, 1999.