Category Archives: Mind Uploading

John Searle’s Critique of Ray Kurzweil

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 8, 2016.)

John Searle (1932 – ) is currently the Slusser Professor of Philosophy at the University of California, Berkeley. He received his PhD from Oxford University. He is a prolific author and one of the most important living philosophers.

According to Searle, Kurzweil’s book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, is an extensive reflection on the implications of Moore’s law.[i] The essence of that argument is that smarter-than-human computers will arrive, that we will download ourselves into this smart hardware, and that this will guarantee our immortality. Searle attacks this fantasy by focusing on the chess-playing computer “Deep Blue” (DB), which defeated world chess champion Garry Kasparov in 1997.

Kurzweil thinks DB is a good example of the way that computers have begun to exceed human intelligence. But DB’s brute-force method of searching through possible moves differs dramatically from how human brains play chess. To clarify, Searle offers his famous Chinese Room Argument. If I’m in a room with a program that answers questions in Chinese even though I do not understand Chinese, the fact that I can output the answers in Chinese does not mean I understand the language. Similarly, DB does not understand chess, and Kasparov was playing a team of programmers, not a machine. Thus Kurzweil is mistaken if he believes that DB was thinking.

According to Searle, Kurzweil confuses a computer’s seeming to be conscious with its actually being conscious, something we should worry about if we are proposing to download ourselves into it! Just as a computer simulation of digestion cannot eat pizza, so too a computer simulation of consciousness is not conscious. Computers manipulate symbols or simulate brains through neural nets—but this is not the same as duplicating what the brain is doing. To duplicate what the brain does, the artificial system would have to act like the brain. Thus Kurzweil confuses simulation with duplication.

Another confusion is between observer-independent (OI) features of the world, and observer-dependent (OD) features of the world. The former include features of the world studied by, for example, physics and chemistry; while the latter are things like money, property, governments and all things that exist only because there are conscious observers of them. (Paper has objective physical properties, but paper is money only because persons relate to it that way.)

Searle says that he is more intelligent than his dog and his computer in some absolute, OI sense because he can do things his dog and computer cannot. It is only in the OD sense that you could say that computers and calculators are more intelligent than we are. You can use intelligence in the OD sense provided that you remember it does not mean that a computer is more intelligent in the OI sense. The same goes for computation. Machines compute analogously to the way we do, but they don’t compute intrinsically at all—they know nothing of human computation.

The basic problem with Kurzweil’s book, according to Searle, is its assumption that increased computational power leads to consciousness. But he says that increased computational power of machines gives us no reason to believe machines are duplicating consciousness. The only way to build conscious machines would be to duplicate the way brains work and we don’t know how they work. In sum, behaving like one is conscious is not the same as actually being conscious.

Summary – Computers cannot be conscious.


[i] John Searle, “I Married A Computer,” review of The Age of Spiritual Machines, by Ray Kurzweil, New York Review of Books, April 8, 1999.

Ray Kurzweil’s Basic Ideas


I and many other scientists now believe that in around twenty years we will have the means to reprogram our bodies’ stone-age software so we can halt, then reverse, aging. Then nanotechnology will let us live forever. ~ Ray Kurzweil

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 4, 2016.)

Ray Kurzweil (1948 – ) is an author, inventor, futurist, and currently Director of Engineering at Google. He is involved in fields such as optical character recognition, text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments; he is the author of several books on health, artificial intelligence, transhumanism, the technological singularity, and futurism; and he may be the most prominent spokesman in the world today for advocating the use of technology to transform humanity.

In his book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Kurzweil argues that in the next one hundred years machines will surpass human intelligence. Computers already surpass humans in playing chess, diagnosing certain medical conditions, buying and selling stocks, guiding missiles, and solving complex mathematical problems. However, unlike human intelligence, machine intelligence cannot describe objects on a table, write a term paper, tie shoes, distinguish a dog from a cat, or appreciate humor. One reason for this is that computers are simpler than the human brain, about a million times simpler. However, this difference will go away as computers continue to double in speed every twelve months, achieving the memory capacity and computing speed of the human brain around 2020.
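The arithmetic behind this estimate is easy to check. Here is a minimal sketch; the million-fold gap and yearly doubling come from the paragraph above, while the 1999 starting year (the book’s publication date) is an assumption added for illustration:

```python
# If computers are roughly a million times simpler than the brain and
# double in capability every twelve months, closing the gap takes about
# log2(1,000,000) yearly doublings.
import math

doublings_needed = math.log2(1_000_000)          # ~19.93 doublings
years_to_parity = math.ceil(doublings_needed)    # ~20 years

print(f"Doublings needed: {doublings_needed:.2f}")
print(f"Starting from 1999, parity arrives around {1999 + years_to_parity}")
```

Twenty doublings from 1999 lands right around the book’s estimate of 2020.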

Still, this won’t allow computers to match the flexibility of human intelligence because the software of intelligence is as important as the hardware. One way to mirror the brain’s software is by reverse engineering—scanning a human brain and copying its neural circuitry into a neural computer of sufficient capacity. If computers reach a human level of intelligence through such technologies, they will then go beyond it. They already remember and process information better than we do, remembering trillions of facts perfectly while we have a tough time with a few phone numbers. The combination of human-level intelligence along with greater speed, accuracy, and memory capabilities will push computers beyond human intelligence. A main reason for this is that our neurons are slow compared with electronic circuits, and most of their complexity supports life processes, not computation and information analysis. Thus, while many of us think of evolution as a billion-year drama that leads to human intelligence, the creation of greater-than-human intelligence will quickly dispel that notion.

Kurzweil supports his case with a number of observations about cosmic evolution and human history. Consider that for most of the history of the universe, cosmologically significant events took eons of time—the interval between significant events was quite long for most of cosmic evolution. But as the universe aged the interval between significant events grew shorter, and cosmically significant events now happen at increasingly shorter intervals. We can see this in the pace of cosmic evolution: ten billion years until the earth’s formation; a few billion more for life to evolve; hundreds of millions of years till the emergence of primates; millions of years till the emergence of humanoids; and the emergence of Homo sapiens a mere 200 thousand years ago. In short, transformation is speeding up; the interval between salient events is shrinking.

Now technology is driving this process. Technology—fashioning and using ever more sophisticated tools—is simply another means of evolution, one which expedites the process of change considerably. Consider that Homo sapiens sapiens appeared only 90 thousand years ago and became the lone hominids a mere 30,000 years ago. Still, it took tens of thousands of years to figure out how to sharpen both ends of stones to make them effective! Needless to say, the pace of technological change has accelerated remarkably since then. For example, technology in the 19th century increased at a dramatic rate compared to the 18th century, and unbelievably fast compared to the 12th century. In the 20th century, major shifts in technology began to happen in decades or, in some cases, in a few years. A little more than a hundred years ago there was no flight or radio; a mere fifty years ago there were no wireless phones or personal computers, much less cell phones or the internet. Today your phone and computer seem obsolete in a matter of months.

Technology has enabled our species to dominate the earth, exercise some control over our environment, and survive. Perhaps the most important of these technological innovations has been computation, the ability of machines to remember, compute, and solve problems. So far computers have been governed by Moore’s law: every two years or so the surface area of a transistor is reduced by fifty percent, putting twice as many transistors on an integrated circuit. The implication is that every two years you get twice the computing power for the same amount of money. This trend should continue for another fifteen years or so, after which it will break down when transistor insulators are but a few atoms wide. (At that point quantum computing may move the process forward in fantastic ways.) To really understand what will happen in the 21st century and beyond, we need to look at the exponential growth of the technology that will bring about vast changes in the near future.
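Stated as code, the doubling claim above is just an exponential. A toy sketch, assuming the two-year doubling period from the paragraph:

```python
def moores_law_multiplier(years, doubling_period=2.0):
    """Computing power per dollar relative to today, assuming one
    doubling every `doubling_period` years (Moore's law)."""
    return 2 ** (years / doubling_period)

# Two years buys 2x the power, ten years 32x, thirty years 32,768x.
for years in (2, 10, 30):
    print(f"{years:2d} years -> {moores_law_multiplier(years):,.0f}x")
```

The point of the exercise is how quickly the multiplier dwarfs any linear trend: the gains in the last two years always equal all the gains that came before.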

Crucial to Kurzweil’s analysis is what he calls “the law of time and chaos.” He asks why some processes begin fast and then slow down—salient events in cosmic evolution or in the biological development of an organism—and why others start slowly and then speed up—the evolution of life forms or technology. The law of time and chaos explains this relationship. If there is a lot of chaos or disorder in a system, the time between salient events is great; as the chaos decreases and the order increases, the time between salient events gets smaller. The “law of accelerating returns” describes the latter phenomenon and is essential to Kurzweil’s argument. (You might say that his entire philosophy is a meditation on accelerating returns or exponential growth.) He argues that though the universe as a whole increases in disorder or entropy, evolution leads to increasing pockets of order (information for the purpose of survival) and complexity. Technological evolution is evolution by means other than biology, and it constantly speeds up as it builds upon itself.

We might reconstruct his basic argument as follows: a) evolution builds on itself, thus; b) in an evolutionary process order increases exponentially, thus; c) the returns accelerate. This law of accelerating returns drives cultural and technological evolution forward, with the returns building on themselves to create higher returns. Thus the entire process changes and grows exponentially, meaning that the near future will be radically different from the present.
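The shape of this argument can be made concrete with a toy recurrence. The growth rate below is an arbitrary assumption; only the compounding pattern matters:

```python
def accelerating_returns(steps, rate=0.5, order=1.0):
    """Each step's gain is proportional to the order already accumulated,
    so returns build on themselves and growth is exponential."""
    history = [order]
    for _ in range(steps):
        order += rate * order    # returns feed back into the process
        history.append(order)
    return history

# The gaps between successive values keep widening: growth accelerates.
print(accelerating_returns(5))
```

Contrast this with linear growth, where each step adds a fixed amount; the compounding curve is what makes Kurzweil’s near future “radically different from the present.”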

… evolution has found a way around the computational limitations of neural circuitry. Cleverly, it has created organisms that in turn invented a computational technology a million times faster than carbon-based neurons … Ultimately, the computing conducted on extremely slow mammalian neural circuits will be ported to a far more versatile and speedier electronic (and photonic) equivalent.[i]

This will eventually lead to reverse engineering the human brain—scanning it, mapping it, and eventually downloading our minds into computers. This means that your mind (software) would no longer be dependent on your body (hardware). Moreover, your evolving mind file would not be stuck with the circuitry of the brain; it could be transferred from one medium to another, just as files are transferred from one computer to another. Then “our immortality will be a matter of being sufficiently careful to make frequent backups. If we’re careless about this, we’ll have to load an old backup copy and be doomed to repeat our recent past.”[ii]

We could download our personal evolving mind files into our original bodies, upgraded bodies, nanoengineered bodies, or virtual bodies. As we are currently further along with body transformation than with brain transformation—titanium devices, artificial skin, heart valves, pacemakers—we might want to first completely rebuild our bodies using genetic therapies. But this will only go so far because of the limitations of DNA-based cells that depend on protein synthesis. No matter how well we enhance our bodies, they would still just be second-rate robots.

Instead, Kurzweil suggests we use nanotechnology to rebuild the world atom by atom. The holy grail of nanotechnology would be intelligent self-replicating nanomachines capable of manipulating things at the nanolevel. (The great physicist Richard Feynman originally explained the possibility of nanotechnology in the 1950s. Today, important theorists like Eric Drexler and Ralph Merkle have shown the feasibility of self-replicating nanobots. Nanotechnology programs are now common in major universities.) The possibilities for nanotechnology to transform the world are extraordinary. It could build inexpensive solar cells to replace fossil fuels, or be launched into our bloodstream to improve the immune system, destroy pathogens, eradicate cancer cells, and reconstruct bodily organs and systems. It even has the potential to reverse engineer human neurons or any cell in the human body. Will people use this technology?

There is a clear incentive to go down this path. Given a choice, people will prefer to keep their bones from crumbling, their skin supple, and their life systems strong and vital. Improving our lives through neural implants on the mental level, and nanotech-enhanced bodies on the physical level, will be popular and compelling. It is another one of those slippery slopes—there is no obvious place to stop this progression until the human race has largely replaced the brains and bodies that evolution first provided.[iii]

Kurzweil also argues that “the law of accelerating returns” applies to the entire universe. He conjectures that life may exist elsewhere in the universe and proceed through various thresholds: the evolution of life forms; of intelligence; of technology; of computation; and finally the merging of a species with its technology—all driven by accelerating returns. Of course, there are many things that can go wrong—nuclear war, climate change, asteroids, bacteria, self-replicating nanobots, and software viruses. Still, he remains optimistic.

Kurzweil ends his book by arguing that intelligence is not impotent against the mighty forces of the universe. Intelligence thwarts gravity and manipulates other physical phenomena despite its density being vanishingly small in a vast cosmos. If intelligence increases exponentially with time, then it will become a worthy competitor for big universal forces. He concludes: “The laws of physics are not repealed by intelligence, but they effectively evaporate in its presence… the fate of the Universe is a decision yet to be made, one which we will intelligently consider when the time is right.”[iv]


[i] Ray Kurzweil, The Age of Spiritual Machines (New York: Penguin, 1999), 101-102.
[ii] Kurzweil, The Age of Spiritual Machines, 129.
[iii] Kurzweil, The Age of Spiritual Machines, 141.
[iv] Kurzweil, The Age of Spiritual Machines, 260.

What Will Life Be Like Inside A Computer?

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, December 7, 2014.)

Many scientists believe that we will soon be able to preserve our consciousness indefinitely. There are a number of scenarios by which this might be accomplished, but so-called mind uploading is one of the most prominent. Mind uploading refers to a hypothetical process of copying the contents of a consciousness from a brain to a computational device. This could be done by copying and transferring these contents into a computer, or by piecemeal replacement with parts of the brain gradually replaced by hardware. Either way, consciousness would no longer be running on a biological brain.

I am in no position to judge the feasibility of mind uploading; experts have both praised and pilloried its viability. Nor can I judge what it would be like to live in a virtual reality, given that I don’t even know what it’s like to be a dog or another person. And I don’t know if I would have subjective experiences inside a computer; in fact, we don’t even know how the brain gives rise to subjective experiences. So I certainly don’t know what it would be like to exist as a simulated mind inside a computer or a robotic body. What I do know is that the Oxford philosopher and futurist Nick Bostrom has argued that there is a good chance that we live in a simulation now. And if he’s right, then you are having subjective experiences inside a computer simulation as you read this.

But does it make sense to think a mind program could run on something other than a brain? Isn’t subjective consciousness rooted in the biological brain? Yes, for the moment our mental software runs on the brain’s hardware. But there is no necessary reason that this has to be the case. If I had told you a hundred years ago that integrated silicon circuits would come to play chess better than grandmasters, model future climate change, recognize faces and voices, and solve famous mathematical problems, you would have been astonished. Today you might reply, “but computers still can’t feel emotions or taste a strawberry.” And you’re right, they can’t—for now. But what about a thousand years from now? What about ten thousand or a million years from now? Do you really think that in a million years the best minds will run on carbon-based brains?

If you still find it astounding that minds could run on silicon chips, consider how absolutely remarkable it is that our minds run on meat! Imagine beings from another planet with cybernetic brains discovering that human brains are made of meat. That we are conscious and communicate by means of our meat brains. They would be amazed. They would find this as implausible as many of us do the idea that minds could run on silicon.

The key to understanding how mental software can run on non-biological hardware is to think of mental states not in terms of physical implementation but in terms of functions. Consider, for example, that one of the functions of the pancreas is to produce insulin, which regulates the level of sugar in the blood. It is easy to see that something else could perform this function, say a mechanical or silicon pancreas. Or consider an hourglass and an atomic clock. The function of both is to keep time, yet they do so quite differently.

Analogously, if mental states are identified by their functional role then they too could be realized on other substrates, as long as the system performs the appropriate functions. In fact, once you have jettisoned the idea that your mind is a ghostly soul or a mysterious, impenetrable, non-physical substance, it is relatively easy to see that your mind program could run on something besides a brain. It is certainly easy enough to imagine self-conscious computers or intelligent aliens whose minds run on something other than biological brains. Now there’s no way for us to know what it would be like to exist without a brain and body, but there’s no convincing reason to think one couldn’t have subjective experiences without physicality. Perhaps our experiences would be even richer without a brain and body.

We have so far ignored important philosophical questions like whether the consciousness transferred is you or just a copy of you. But I doubt that such existential worries will stop people from using technology to preserve their consciousness when oblivion is the alternative. We are changing every moment and few worry that we are only a copy of ourselves from ten years ago. We wake up every day as little more than a copy of what we were yesterday and few fret about that.

Perhaps an even more pressing concern is what one does inside a simulated reality for an indefinitely long time. This is the question recently raised by the Princeton neuroscientist Michael Graziano. He argues that the question is not whether we will be able to upload our brains into a computer—he says we will—but what will we do afterward.

I suppose that some may get bored with eons of time and prefer annihilation. Some would get bored with the heaven they say they desire. Some are bored now. So who wants to extend their consciousness so that they can love better and know more? Who wants to live long enough to have experiences that surpass our current ones in unimaginable ways? The answer is … many of us do. Many of us aren’t bored so easily. And if we get bored we can always delete the program.

More About Mind Uploading

Here’s a brief follow-up to yesterday’s post. While perusing the subreddit cogsci, I noticed a number of comments about my recent blog post.

I read all of the comments and learned much from them. I wanted to comment briefly about a few issues that arose. Again thanks to all who read the post and commented on it.

To Cyberbyte, filterspam & Sockso – Yes, we should be careful, as we don’t want to be tortured infinitely; and yes, there are fates a LOT worse than death. The same issue comes up when considering cryonic preservation. Is it better to die and receive nothing, or to utilize technology and receive anything from heaven to hell? Wow, what a gamble! I suppose each will have to choose for themselves when such technologies are available. But faced with oblivion, I would probably gamble.

To egypturnash – We will need laws governing the procedure. Personally, I think only one transfer should be allowed.

To Simulation Brain & reddell – You captured my basic premise clearly and succinctly. When confronted with death, most persons will copy their mind file to an AI and put it in a body or virtual reality. They will ignore philosophical issues about whether this copy is the real them.

To noggin-scratcher – Yes, this will take some getting used to. And your comments about whether the uploading transition happens gradually or quickly are important. I think you are especially insightful when you say:

“… as I don’t believe in a soul or an essence or any other kind of magic “me” fluid that needs to be carefully poured between containers… doing a thing gradually ought to be equivalent to doing the same thing in a single step, since it results in the same physical arrangement at the final step. Smearing the transition out makes it difficult to pinpoint any single moment where you cease to be “Meat Me” and start to be “Android Me”, but I think that if replacing your brain with an identical synthetic substitute is problematic when done in one go, it should still be considered problematic when done piecemeal.”

I’d have to think about this more but off the top of my head I think I agree.

To Haydork & psiphre – In the transporter your body is disassembled—annihilated if you will—and then your body is recreated. Is this a copy? Yes, and most people have no problem with this. However, Lawrence Krauss, in his book The Physics of Star Trek, said this almost certainly won’t work.

To eudaimondaimon – I don’t think the ship of Theseus is a good analogy. The brain doesn’t have to be replaced piecemeal, and the ships never were conscious. Think of it like this. You walk through a machine, your consciousness transfers to a robotic body, your old body falls down dead, emptied of its mind content, and “you” are alive in your new body. That’s not dying in a significant sense, although it’s obviously not the same as living forever in your old body. And in principle it could work, as Kurzweil, Moravec, Kaku, Marshall Brain, and others suggest.

To throweraccount – You see the possibility of living in a virtual reality. Remember the last line of the original Star Trek television pilot? The keeper says to Captain Kirk: “Captain Pike has an illusion, and you have reality. May you find your way as pleasant.” Let none of us forget how much better a virtual reality could be.

To andero – Another Star Trek fan. You worry about the copy not having the experience of dying. It wouldn’t have to have this experience if it doesn’t die but is transferred.

To Moarbrains – You are right that there may be better things to invest in, but I disagree that you can know beforehand whether the attempt is useless. Who knows what we might become?

To filterspam – We do have some reason to think that cryonics will work: if nanotechnology fulfills its promise, cryonics might very well succeed.

To vernes1978 – I like your idea that it is better to have some chance of immortality than no chance. And you are correct that some people will never be satisfied with their copy. I also think you make an excellent point about the richness of our experiences. They may be rich compared to non-human animal experiences, but they may pale in comparison to the experiences of intelligent aliens or posthumans.

To craigiest – If you can copy your mind file that will be good enough for most people. If it isn’t, you can still die and hope your soul is transferred to heaven.

To nukefudge – You are correct; first we must overcome the technical problems, and they are no doubt enormous.

Thanks again to all who contributed and from whom I learned so much. But keep the image in your mind. Assume you are old and want to live forever. You can die and hope or you can use technology and transfer or copy your mind file to another substrate. If you decide to do the latter, I doubt philosophical concerns about copies versus transfers will deter you.

Most Will Upload Their Minds If They Can

Professor Susan Schneider has written an important piece in today’s New York Times: The Philosophy of ‘Her.’ I applaud her for recognizing that uploading should be pursued, and for writing a timely piece about this topic. But while she places great emphasis on the distinction between whether mind uploading is a copy or a transfer of consciousness—as did nearly all my students through the years—I don’t find the distinction as important.

The primary reason is that when persons consider uploading, if and when it’s available, they won’t worry about whether they are copying or transferring their consciousness. Whether they upload into a genetically engineered body, a robotic body, or a virtual reality, most will gladly do so rather than die.

Professor Schneider is correct that whether or not the original you survives the copying makes a difference. If the original you survives, then there are as many “yous” as there are copies of you, assuming the copies are perfect. Going forward each copy will change as it has new experiences, and multiple persons will have been created. The copies would continue to change just as your current self does. This leads to the problem of personal identity—how and whether we remain identical over time. It is a philosophical conundrum, but it exists independently of uploading technology. There is always a problem of explaining how “you” persist through time.

If the original “you” is destroyed in the uploading process, then we have transferred your consciousness into one or more new substrates. But there is no important distinction between being copied or transferred. If you want to hold on to essentialism—the idea that humans have an essence—then you could say that your true self was only copied and not transferred. But if you reject an essentialist theory, then copying yourself will be good enough, especially if you have no other options. Note too that the same problem arises for religious believers who die and wake up in heaven. Is the body that wakes up just a copy, or has your soul been transferred to heaven? But no one worries about this—they just want to wake up in heaven!

Now suppose you are facing death with a decrepit body. A new technology promises to upload your memories, experiences and all your other psychological characteristics to a robotic body, an AI or a virtual reality.  Suppose further that the technology has been well-tested and many friends tell you of the wondrous experiences available to uploaded minds. Should you try it? You may decide to die and hope that Jesus or Mohammed will save you. But most will not. They’ll take the sure thing. Philosophical concerns about whether this new you is a copy or a transfer will not stop you from uploading. Not if you want to live forever.