Category Archives: Mind Uploading

The Impossibility of Mind Uploading


My most recent post, “Living in a Computer Simulation,” elicited some insightful comments from a reader skeptical of the possibility of mind uploading. Here is his argument with my own brief response to it below.

My comment concerns a reductive physicalist theory of the mind, which is the view that all mental states and properties of the mind will eventually be explained by scientific accounts of physiological processes and states … Basically, my argument is that for this view of the mind, mind uploading into a computer is completely impractical due to accumulation of errors.

In order to replicate the functioning of a “specific” human mind within a computer, one needs to replicate the functioning of all parts of that specific brain within the computer. [In fact, the whole human body needs to be represented because the mind is a product of all sensations of all parts of the body coalescing within the brain. But, for the sake of argument, let’s just consider replicating only the brain.] In order to represent a specific human brain in the computer, each neuron in the brain would need a digital or analog representation, instantiated in hardware, software or a combination of the two. Unless this representation is an exact biological copy (clone), it will have some inherent “error” associated with it. So, let’s do a sort of “error analysis” (admittedly non-rigorous).

Suppose that the initial conditions of the mind being uploaded are implanted in the computer with no errors (which is highly unlikely in its own right). When the computer executes its simulation, it starts with that initial condition and then “marches in time”. The action potential duration for a single firing of a neuron is on the order of one millisecond, which implies that the computer time step would need to be no larger than that (and probably much smaller or else additional computational errors are induced). So the computer would be recalculating the state of the brain at least 1,000 times per second as it marches in time (and probably more like 10,000 times per second).

Since the computer representation of the brain is not perfect, errors will accumulate. For example, suppose that the computer representation of one neuron was only 90% accurate. After that neuron “fired”, its interaction with connected neurons would have roughly a 10% error. Now consider that the human brain has roughly 86 billion neurons, each with multiple connections to other neurons. The computer does not know which of those 86 billion neurons are needed at each time step, so all would need to be included in each calculation. One can see that 10% errors in the functioning of individual neurons within the millisecond duration will quickly accumulate to produce a completely erroneous representation of the functioning of the brain a short time after the computer started its simulation. The resulting “mind” that gets created in that computer would probably bear no similarity to the original human mind (or to probably any “human” mind). It would probably be “fuzzy” and unable to function.

Would 99% accuracy in the representation of a neuron be any better? Not really. 99.9% accuracy? Still no good. 86 billion neurons is a large number (and remember, the computer is recalculating the entire brain state 1,000 to 10,000 times per second). In order for accumulated errors to not overwhelm the simulation of the brain in the computer, the accuracy in representing each neuron would need to be extremely high and the amount of information needing to be stored for each of the 86 billion neurons would be huge, leading to an impractical data storage and retrieval problem. The only practical “computer” would be a biological clone, which is not the topic here.

Consequently, if one believes in a reductive physicalist theory of the mind, then uploading the specific mind of an individual human into a computer is, for all intents and purposes, impossible.
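The commenter's arithmetic can be made concrete with a toy calculation. This is a deliberately crude sketch of his argument, not a neuroscientific model: it assumes per-step errors compound multiplicatively and independently, which real error propagation in a brain simulation need not do. The per-millisecond time step is the figure from the argument above; the accuracy values are illustrative.

```python
# Toy model of the error-accumulation argument above.
# Assumption (crude): per-step errors compound multiplicatively,
# so fidelity after n steps is accuracy ** n.

def fidelity(per_step_accuracy: float, steps: int) -> float:
    """Fraction of the original signal remaining after `steps` updates."""
    return per_step_accuracy ** steps

STEPS_PER_SECOND = 1_000  # one update per millisecond, as in the argument

for accuracy in (0.90, 0.99, 0.999, 0.999999):
    f = fidelity(accuracy, STEPS_PER_SECOND)
    print(f"{accuracy:.6f} per-step accuracy -> {f:.2e} fidelity after 1 simulated second")
```

Under this (admittedly simplistic) compounding assumption, even 99.9% per-step accuracy leaves only about a third of the original signal after a single simulated second, which is the intuition behind the commenter's demand for extremely high per-neuron accuracy.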

My Response 

Let me say briefly that I wouldn’t call mind uploading impossible, as many experts (Ray Kurzweil, Marvin Minsky, Randal A. Koene, Nick Bostrom, Michio Kaku, and others) attest to its possibility. And even skeptics like Kenneth Miller don’t reject the idea in principle. My view is that, with enough time for future innovation, something like it is almost inevitable. Of course we may not have that time.

Living In A Computer Simulation

Many scientists believe that we will soon be able to preserve our consciousness indefinitely. There are a number of scenarios by which this might be accomplished, but so-called mind uploading is one of the most prominent. Mind uploading refers to a hypothetical process of copying the contents of a consciousness from a brain to a computational device. This could be done by copying and transferring these contents into a computer, or by piecemeal replacement with parts of the brain gradually replaced by hardware. Either way, consciousness would no longer be running on a biological brain.

I am in no position to judge the feasibility of mind uploading; experts have both praised and pilloried its viability. Nor can I judge what it would be like to live in a virtual reality, given that I don’t even know what it’s like to be a dog or another person. And I don’t know whether I would have subjective experiences inside a computer; in fact, we don’t know how the brain gives rise to subjective experiences at all. So I certainly don’t know what it would be like to exist as a simulated mind inside a computer or a robotic body. What I do know is that the Oxford philosopher and futurist Nick Bostrom has argued that there is a good chance we are living in a simulation now. And if he’s right, then you are having subjective experiences inside a computer simulation as you read this.

But does it make sense to think a mind program could run on something other than a brain? Isn’t subjective consciousness rooted in the biological brain? Yes, for the moment our mental software runs on the brain’s hardware. But there is no necessary reason this must always be the case. If I had told you a hundred years ago that integrated silicon circuits would come to play chess better than grandmasters, model future climate change, recognize faces and voices, and solve famous mathematical problems, you would have been astonished. Today you might reply, “but computers still can’t feel emotions or taste a strawberry.” And you are right, they can’t—for now. But what about a thousand years from now? What about ten thousand or a million years from now? Do you really think that in a million years the best minds will run on carbon-based brains?

If you still find it astounding that minds could run on silicon chips, consider how absolutely remarkable it is that our minds run on meat! Imagine beings from another planet with cybernetic brains discovering that human brains are made of meat. That we are conscious and communicate by means of our meat brains. They would be amazed. They would find this as implausible as many of us do the idea that minds could run on silicon.

The key to understanding how mental software can run on non-biological hardware is to think of mental states not in terms of their physical implementation but in terms of their functions. Consider, for example, that one of the functions of the pancreas is to produce insulin, which regulates the level of sugar in the blood. It is easy to see that something else could perform this function, say a mechanical or silicon pancreas. Or consider an hourglass and an atomic clock. The function of both is to keep time, yet they do this quite differently.

Analogously, if mental states are identified by their functional role then they too could be realized on other substrates, as long as the system performs the appropriate functions. In fact, once you have jettisoned the idea that your mind is a ghostly soul or a mysterious, impenetrable, non-physical substance, it is relatively easy to see that your mind program could run on something besides a brain. It is certainly easy enough to imagine self-conscious computers or intelligent aliens whose minds run on something other than biological brains. Now there’s no way for us to know what it would be like to exist without a brain and body, but there’s no convincing reason to think one couldn’t have subjective experiences without physicality. Perhaps our experiences would be even richer without a brain and body.

We have so far ignored important philosophical questions like whether the consciousness transferred is you or just a copy of you. But I doubt that such existential worries will stop people from using technology to preserve their consciousness when oblivion is the alternative. We are changing every moment and few worry that we are only a copy of ourselves from ten years ago. We wake up every day as little more than a copy of what we were yesterday and few fret about that.

Perhaps an even more pressing concern is what one does inside a simulated reality for an indefinitely long time. This is the question recently raised by the Princeton neuroscientist Michael Graziano. He argues that the question is not whether we will be able to upload our brains into a computer—he says we will—but what will we do afterward.

I suppose that some may get bored with eons of time and prefer annihilation. Some would get bored with the heaven they say they desire. Some are bored now. So who wants to extend their consciousness so that they can love better and know more? Who wants to live long enough to have experiences that surpass our current ones in unimaginable ways? The answer is … many of us do. Many of us aren’t bored so easily. And if we get bored we can always delete the program.

John Searle’s Critique of Ray Kurzweil

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 8, 2016.)

John Searle (1932 – ) is currently the Slusser Professor of Philosophy at the University of California, Berkeley. He received his PhD from Oxford University. He is a prolific author and one of the most important living philosophers.

According to Searle, Kurzweil’s book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, is an extensive reflection on the implications of Moore’s law.[i] The essence of the argument is that smarter-than-human computers will arrive and that we will download ourselves into this smart hardware, thereby guaranteeing our immortality. Searle attacks this fantasy by focusing on the chess-playing computer “Deep Blue” (DB), which defeated world chess champion Garry Kasparov in 1997.

Kurzweil thinks DB is a good example of the way that computers have begun to exceed human intelligence. But DB’s brute-force method of searching through possible moves differs dramatically from how human brains play chess. To clarify, Searle offers his famous Chinese Room argument. If I am in a room with a program that answers questions in Chinese even though I do not understand Chinese, the fact that I can output answers in Chinese does not mean I understand the language. Similarly, DB does not understand chess, and Kasparov was really playing a team of programmers, not a machine. Thus Kurzweil is mistaken if he believes that DB was thinking.

According to Searle, Kurzweil confuses a computer’s seeming to be conscious with its actually being conscious, something we should worry about if we are proposing to download ourselves into it! Just as a computer simulation of digestion cannot eat pizza, so too a computer simulation of consciousness is not conscious. Computers manipulate symbols or simulate brains through neural nets, but this is not the same as duplicating what the brain is doing. To duplicate what the brain does, an artificial system would have to act like the brain. Thus Kurzweil confuses simulation with duplication.

Another confusion is between observer-independent (OI) features of the world, and observer-dependent (OD) features of the world. The former include features of the world studied by, for example, physics and chemistry; while the latter are things like money, property, governments and all things that exist only because there are conscious observers of them. (Paper has objective physical properties, but paper is money only because persons relate to it that way.)

Searle says that he is more intelligent than his dog and his computer in some absolute, OI sense because he can do things his dog and computer cannot. It is only in the OD sense that you could say that computers and calculators are more intelligent than we are. You can use “intelligence” in the OD sense provided you remember that it does not mean a computer is more intelligent in the OI sense. The same goes for computation. Machines compute analogously to the way we do, but they don’t compute intrinsically at all—they know nothing of human computation.

The basic problem with Kurzweil’s book, according to Searle, is its assumption that increased computational power leads to consciousness. But he says that increased computational power of machines gives us no reason to believe machines are duplicating consciousness. The only way to build conscious machines would be to duplicate the way brains work and we don’t know how they work. In sum, behaving like one is conscious is not the same as actually being conscious.

Summary – Computers cannot be conscious.

______________________________________________________________________

[i] John Searle, “I Married A Computer,” review of The Age of Spiritual Machines, by Ray Kurzweil, New York Review of Books, April 8, 1999.

Ray Kurzweil’s Basic Ideas

   

I and many other scientists now believe that in around twenty years we will have the means to reprogram our bodies’ stone-age software so we can halt, then reverse, aging. Then nanotechnology will let us live forever. ~ Ray Kurzweil

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 4, 2016.)

Ray Kurzweil (1948 – ) is an author, inventor, futurist, and currently Director of Engineering at Google. He is involved in fields such as optical character recognition, text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments; he is the author of several books on health, artificial intelligence, transhumanism, the technological singularity, and futurism; and he may be the most prominent spokesman in the world today for advocating the use of technology to transform humanity.

In his book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Kurzweil argues that in the next one hundred years machines will surpass human intelligence. Computers already surpass humans in playing chess, diagnosing certain medical conditions, buying and selling stocks, guiding missiles, and solving complex mathematical problems. However, unlike human intelligence, machine intelligence cannot describe objects on a table, write a term paper, tie shoes, distinguish a dog from a cat, or appreciate humor. One reason for this is that computers are simpler than the human brain, about a million times simpler. However, this difference will go away as computers continue to double in speed every twelve months, achieving the memory capacity and computing speed of the human brain around 2020.

Still, this won’t allow computers to match the flexibility of human intelligence, because the software of intelligence is as important as the hardware. One way to mirror the brain’s software is by reverse engineering—scanning a human brain and copying its neural circuitry into a neural computer of sufficient capacity. If computers reach a human level of intelligence through such technologies, they will then go beyond it. They already remember and process information better than we do, recalling trillions of facts perfectly while we have a tough time with a few phone numbers. The combination of human-level intelligence with greater speed, accuracy, and memory will push computers beyond human intelligence. A main reason for this is that our neurons are slow compared with electronic circuits, and most of their complexity supports life processes, not computation and information analysis. Thus, while many of us think of evolution as a billion-year drama that leads inexorably to human intelligence, the creation of greater-than-human intelligence will quickly dispel that notion.

Kurzweil supports his case with a number of observations about cosmic evolution and human history. Consider that for most of the history of the universe, cosmologically significant events took eons of time—the interval between significant events was quite long for most of cosmic evolution. But as the universe aged the interval between significant events grew shorter, and cosmically significant events now happen at increasingly shorter intervals. We can see this in the pace of cosmic evolution: ten billion years until the earth’s formation; a few billion more for life to evolve, hundreds of millions of years till the emergence of primates, millions of years till the emergence of humanoids, and the emergence of Homo sapiens a mere 200 thousand years ago. In short, transformation is speeding up; the interval between salient events is shrinking.

Now technology is driving this process. Technology—fashioning and using ever more sophisticated tools—is simply another means of evolution, one which expedites change considerably. Consider that Homo sapiens sapiens appeared only 90,000 years ago and became the sole surviving hominids a mere 30,000 years ago. Still, it took tens of thousands of years to figure out how to sharpen both ends of stones to make them effective tools! Needless to say, the pace of technological change has accelerated remarkably since then. The 19th century, for example, saw technology advance at a dramatic rate compared to the 18th, and unbelievably fast compared to the 12th. In the 20th century, major shifts in technology began to happen in decades or, in some cases, in a few years. A little more than a hundred years ago there was no flight or radio; a mere fifty years ago there were no personal computers, let alone cell phones or the internet. Today your phone and computer seem obsolete in a matter of months.

Technology has enabled our species to dominate the earth, exercise some control over our environment, and survive. Perhaps the most important of these technological innovations has been computation, the ability of machines to remember, compute, and solve problems. So far computers have been governed by Moore’s law: every two years or so the surface area of a transistor is reduced by fifty percent, putting twice as many transistors on an integrated circuit. The implication is that every two years you get twice the computing power for the same amount of money. This trend should continue for another fifteen years or so after which it will break down when transistor insulators will be but a few atoms wide. (At that point quantum computing may move the process forward in fantastic ways.) To really understand what will happen in the 21st century and beyond, we need to look at the exponential growth of the technology that will bring about vast changes in the near future.
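The compounding described above is easy to sketch. A toy calculation, using only the figures from the paragraph (a doubling of computing power per dollar every two years and a roughly fifteen-year horizon for the trend):

```python
# Compounding implied by Moore's law as summarized above:
# computing power per dollar doubles roughly every two years.

DOUBLING_PERIOD_YEARS = 2

def relative_power(years: float) -> float:
    """Computing power per dollar after `years`, relative to today."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Over the ~15 more years the text expects the trend to hold:
print(relative_power(15))  # roughly 181x today's power for the same money
```

This is the sense in which exponential trends make the near future hard to extrapolate by linear intuition: seven or eight doublings already yield a couple of orders of magnitude.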

Crucial to Kurzweil’s analysis is what he calls “the law of time and chaos.” He asks why some processes begin fast and then slow down—salient events in cosmic evolution or in the biological development of an organism—and why others start slowly and then speed up—the evolution of life forms or technology. The law of time and chaos explains this relationship. If there is a lot of chaos or disorder in a system, the time between salient events is great; as the chaos decreases and the order increases, the time between salient events gets smaller. The “law of accelerating returns” describes the latter phenomenon and is essential to Kurzweil’s argument. (You might say that his entire philosophy is a meditation on accelerating returns or exponential growth.) He argues that though the universe as a whole increases in disorder or entropy, evolution leads to increasing pockets of order (information for the purpose of survival) and complexity. Technology evolution is evolution by means other than biology, and it constantly speeds up as it builds upon itself.

We might reconstruct his basic argument as follows: a) evolution builds on itself; thus b) in an evolutionary process order increases exponentially; thus c) the returns accelerate. This law of accelerating returns drives cultural and technological evolution forward, with the returns building on themselves to create still higher returns. Thus the entire process changes and grows exponentially, meaning that the near future will be radically different from the present.

… evolution has found a way around the computational limitations of neural circuitry. Cleverly, it has created organisms that in turn invented a computational technology a million times faster than carbon-based neurons … Ultimately, the computing conducted on extremely slow mammalian neural circuits will be ported to a far more versatile and speedier electronic (and photonic) equivalent.[i] This will eventually lead to reverse engineering the human brain by scanning it, mapping it, and eventually downloading our minds into computers. This means that your mind (software) would no longer be dependent on your body (hardware). Moreover, your evolving mind file will not be stuck with the circuitry of the brain, making it capable of being transferred from one medium to another, just as files are transferred from one computer to another. Then “our immortality will be a matter of being sufficiently careful to make frequent backups. If we’re careless about this, we’ll have to load an old backup copy and be doomed to repeat our recent past.”[ii]

We could download our personal evolving mind files into our original bodies, upgraded bodies, nanoengineered bodies, or virtual bodies. As we are currently further along with body transformation than with brain transformation—titanium devices, artificial skin, heart valves, pacemakers—we might want to first completely rebuild our bodies using genetic therapies. But this will only go so far because of the limitations of DNA-based cells that depend on protein synthesis. No matter how well we enhance our bodies, they would still just be second-rate robots.

Instead, Kurzweil suggests we use nanotechnology to rebuild the world atom by atom. The holy grail of nanotechnology would be intelligent self-replicating nanomachines capable of manipulating things at the nanolevel. (The great physicist Richard Feynman originally explained the possibility of nanotechnology in the 1950s. Today, important theorists like Eric Drexler and Ralph Merkle have shown the feasibility of self-replicating nanobots. Nanotechnology programs are now common in major universities.) The possibilities for nanotechnology to transform the world are extraordinary. It could build inexpensive solar cells to replace fossil fuels, or be launched into our bloodstream to improve the immune system, destroy pathogens, eradicate cancer cells, and reconstruct bodily organs and systems. It even has the potential to reverse engineer human neurons or any cell in the human body. Will people use this technology?

There is a clear incentive to go down this path. Given a choice, people will prefer to keep their bones from crumbling, their skin supple, and their life systems strong and vital. Improving our lives through neural implants on the mental level, and nanotech-enhanced bodies on the physical level, will be popular and compelling. It is another one of those slippery slopes—there is no obvious place to stop this progression until the human race has largely replaced the brains and bodies that evolution first provided.[iii]

Kurzweil also argues that “the law of accelerating returns” applies to the entire universe. He conjectures that life may exist elsewhere in the universe and proceed through various thresholds: the evolution of life forms; of intelligence; of technology; of computation; and finally the merging of a species with its technology—all driven by accelerating returns. Of course, there are many things that can go wrong—nuclear war, climate change, asteroids, bacteria, self-replicating nanobots, and software viruses. Still, he remains optimistic.

Kurzweil ends his book by arguing that intelligence is not impotent against the mighty forces of the universe. Intelligence thwarts gravity and manipulates other physical phenomena despite its density being vanishingly small in a vast cosmos. If intelligence increases exponentially with time, then it will become a worthy competitor for big universal forces. He concludes: “The laws of physics are not repealed by intelligence, but they effectively evaporate in its presence… the fate of the Universe is a decision yet to be made, one which we will intelligently consider when the time is right.”[iv]

_____________________________________________________________________

[i] Ray Kurzweil, The Age of Spiritual Machines (New York: Penguin, 1999), 101-102
[ii] Kurzweil, The Age of Spiritual Machines, 129.
[iii] Kurzweil, The Age of Spiritual Machines, 141.
[iv] Kurzweil, The Age of Spiritual Machines, 260.
