Many scientists believe that we will soon be able to preserve our consciousness indefinitely. There are a number of scenarios by which this might be accomplished, but so-called mind uploading is one of the most prominent. Mind uploading refers to a hypothetical process of copying the contents of a consciousness from a brain to a computational device. This could be done all at once, by copying and transferring those contents into a computer, or piecemeal, with parts of the brain gradually replaced by hardware. Either way, consciousness would no longer run on a biological brain.
I am in no position to judge the feasibility of mind uploading; experts have both praised and pilloried its viability. Nor can I judge what it would be like to live in a virtual reality, given that I don’t even know what it’s like to be a dog or another person. And I don’t know whether I would have subjective experiences inside a computer; in fact, we don’t know how the brain gives rise to subjective experiences in the first place. So I certainly don’t know what it would be like to exist as a simulated mind inside a computer or a robotic body. What I do know is that the Oxford philosopher and futurist Nick Bostrom has argued that there is a good chance we are living in a simulation now. And if he’s right, then you are having subjective experiences inside a computer simulation as you read this.
But does it make sense to think a mind program could run on something other than a brain? Isn’t subjective consciousness rooted in the biological brain? Yes, for the moment our mental software runs on the brain’s hardware. But there is no necessary reason that this must be the case. If I had told you a hundred years ago that integrated silicon circuits would come to play chess better than grandmasters, model future climate change, recognize faces and voices, and solve famous mathematical problems, you would have been astonished. Today you might reply, “But computers still can’t feel emotions or taste a strawberry.” And you are right, they can’t, for now. But what about a thousand years from now? What about ten thousand, or a million? Do you really think that in a million years the best minds will run on carbon-based brains?
If you still find it astounding that minds could run on silicon chips, consider how remarkable it is that our minds run on meat! Imagine beings from another planet with cybernetic brains discovering that human brains are made of meat, and that we are conscious and communicate by means of those meat brains. They would be amazed. They would find this as implausible as many of us find the idea that minds could run on silicon.
The key to understanding how mental software can run on non-biological hardware is to think of mental states not in terms of their physical implementation but in terms of their functions. Consider, for example, that one of the functions of the pancreas is to produce insulin, which regulates the level of sugar in the blood. It is easy to see that something else could perform this function, say a mechanical or silicon pancreas. Or consider an hourglass and an atomic clock: the function of both is to keep time, yet they do so quite differently.
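The same point can be put in software terms: what matters is the interface, not the implementation behind it. Here is a minimal sketch in Python (hypothetical class and method names, purely illustrative) of two timekeepers that realize one function on very different substrates:

```python
from typing import Protocol

class TimeKeeper(Protocol):
    """Anything that performs the timekeeping function counts as a clock."""
    def elapsed_seconds(self) -> float: ...

class Hourglass:
    """Reads time off a physical process: falling sand."""
    def __init__(self, grains_fallen: int, grains_per_second: float = 100.0):
        self.grains_fallen = grains_fallen
        self.grains_per_second = grains_per_second

    def elapsed_seconds(self) -> float:
        return self.grains_fallen / self.grains_per_second

class AtomicClock:
    """Same function, radically different substrate: cesium-133 transitions."""
    def __init__(self, oscillations: int):
        self.oscillations = oscillations

    def elapsed_seconds(self) -> float:
        # The SI second is defined as 9,192,631,770 cesium-133 oscillations.
        return self.oscillations / 9_192_631_770

def report(clock: TimeKeeper) -> None:
    # Code written against the function is indifferent to the hardware.
    print(f"{clock.elapsed_seconds():.2f} seconds elapsed")

report(Hourglass(grains_fallen=4_500))           # 45.00 seconds elapsed
report(AtomicClock(oscillations=9_192_631_770))  # 1.00 seconds elapsed
```

The functionalist wager is that mental states are like `elapsed_seconds`: defined by what they do, not by what they are made of.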
Analogously, if mental states are identified by their functional roles, then they too could be realized on other substrates, as long as the system performs the appropriate functions. In fact, once you have jettisoned the idea that your mind is a ghostly soul or a mysterious, impenetrable, non-physical substance, it is relatively easy to see that your mind program could run on something besides a brain. It is certainly easy enough to imagine self-conscious computers or intelligent aliens whose minds run on something other than biological brains. Now there’s no way for us to know what it would be like to exist without a brain and body, but there’s no convincing reason to think one couldn’t have subjective experiences without physicality. Perhaps our experiences would be even richer without a brain and body.
We have so far ignored important philosophical questions, such as whether the consciousness transferred is you or just a copy of you. But I doubt that such existential worries will stop people from using technology to preserve their consciousness when oblivion is the alternative. We are changing every moment, and few of us worry that we are only a copy of the person we were ten years ago. We wake up every day as little more than a copy of what we were yesterday, and few fret about that.
Perhaps an even more pressing concern is what one does inside a simulated reality for an indefinitely long time. This is a question recently raised by the Princeton neuroscientist Michael Graziano, who argues that the issue is not whether we will be able to upload our brains into a computer (he says we will) but what we will do afterward.
I suppose that some may get bored with eons of time and prefer annihilation. Some would get bored with the heaven they say they desire. Some are bored now. So who wants to extend their consciousness so that they can love better and know more? Who wants to live long enough to have experiences that surpass our current ones in unimaginable ways? The answer is … many of us do. Many of us aren’t bored so easily. And if we do get bored, we can always delete the program.
My comment concerns a reductive physicalist theory of mind, which is the view that all mental states and properties of the mind will eventually be explained by scientific accounts of physiological processes and states (Wikipedia, “Philosophy of mind”). Basically, my argument is that, on this view of the mind, uploading a mind into a computer is completely impractical due to the accumulation of errors.
In order to replicate the functioning of a “specific” human mind within a computer, one needs to replicate the functioning of all parts of that specific brain within the computer. [In fact, the whole human body would need to be represented, because the mind is a product of sensations from all parts of the body coalescing within the brain. But, for the sake of argument, let’s consider replicating only the brain.] To represent a specific human brain in a computer, each neuron would need a digital or analog representation, instantiated in hardware, software, or a combination of the two. Unless this representation is an exact biological copy (a clone), it will have some inherent “error” associated with it. So let’s do a sort of “error analysis” (admittedly non-rigorous).
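To make this concrete, here is a minimal sketch (hypothetical field names and values, not a serious neuron model) of what a per-neuron digital representation might look like. Note that every field is a finite-precision approximation of a continuous biological quantity, so some representation error is built in from the start:

```python
from dataclasses import dataclass, field

@dataclass
class NeuronState:
    """Hypothetical digital stand-in for one of ~86 billion neurons."""
    membrane_potential_mv: float = -70.0  # resting potential, stored as a 64-bit float
    threshold_mv: float = -55.0           # firing threshold, likewise approximate
    synapse_weights: list[float] = field(default_factory=list)  # one weight per connection

    def fires(self) -> bool:
        # A binary abstraction of a graded electrochemical event;
        # this is exactly the kind of simplification that introduces error.
        return self.membrane_potential_mv >= self.threshold_mv
```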
Suppose that the initial conditions of the mind being uploaded are implanted in the computer with no errors (which is highly unlikely in its own right). When the computer executes its simulation, it starts with that initial condition and then “marches in time”. The action potential duration for a single firing of a neuron is on the order of one millisecond, which implies that the computer time step could be no larger than that (and would probably need to be much smaller, or else additional computational errors would be induced). So the computer would be recalculating the state of the brain at least 1,000 times per second as it marches in time (and probably more like 10,000 times per second).
Since the computer representation of the brain is not perfect, errors will accumulate. For example, suppose that the computer representation of one neuron were only 90% accurate. After that neuron “fired”, its interaction with connected neurons would carry roughly a 10% error. Now consider that the human brain has roughly 86 billion neurons, each with multiple connections to other neurons. The computer does not know which of those 86 billion neurons are needed at each time step, so all would need to be included in each calculation. One can see that 10% errors in the functioning of individual neurons, compounded at every millisecond time step, would quickly accumulate to produce a completely erroneous representation of the functioning of the brain shortly after the computer started its simulation. The resulting “mind” created in that computer would probably bear no similarity to the original human mind (or, probably, to any “human” mind). It would probably be “fuzzy” and unable to function.
Would 99% accuracy in the representation of a neuron be any better? Not really. 99.9%? Still no good. Because the errors compound at every one of the 1,000 to 10,000 brain-state updates per second, even a tiny per-step error is amplified exponentially, and 86 billion neurons is a large number. In order for accumulated errors not to overwhelm the simulation of the brain in the computer, the accuracy in representing each neuron would need to be extremely high, and the amount of information stored for each of the 86 billion neurons would be huge, leading to an impractical data storage and retrieval problem. The only practical “computer” would be a biological clone, which is not the topic here.
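To see how fast this compounding bites, here is a rough sanity check (assuming a 1 ms time step as argued above, and the simplification that per-step fidelity multiplies across steps):

```python
# How per-step fidelity compounds over one simulated second, assuming
# one brain-state update per millisecond and multiplicative
# accumulation of error (a simplification, not a neural model).
steps_per_second = 1_000

for per_step_accuracy in (0.90, 0.99, 0.999, 0.999999):
    fidelity_after_one_second = per_step_accuracy ** steps_per_second
    print(f"{per_step_accuracy:<9} -> {fidelity_after_one_second:.3e}")

# 0.9      -> 1.748e-46   (pure noise almost immediately)
# 0.99     -> 4.317e-05
# 0.999    -> 3.677e-01   (two-thirds of the signal gone in one second)
# 0.999999 -> 9.990e-01   ("six nines" per step survives about one second)
```

Under this toy model, even 99.9% per-step accuracy leaves only about 37% fidelity after a single simulated second, which is the point above in numerical form.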
Consequently, if one believes in a reductive physicalist theory of the mind, then uploading the specific mind of an individual human into a computer is, for all intents and purposes, impossible.
Mr. Rogers:
Error isn’t introduced only by artificial formats. Duplication and performance errors occur all the time in biological systems … yet here we are. Why don’t “we” vanish when things change?
Where does the sense of continuity come from, despite change?
Given that androids, chess-playing robots, and the like have been themes in fiction since the late 19th century, I rather think that some people from 100 years ago, far from being astonished by the progress in artificial intelligence made so far, would be quite disappointed.
Also, AI researchers of Marvin Minsky’s generation massively underestimated how difficult many of the goals of AI (such as computer vision) would be to achieve. We’re about fifty years behind schedule on a lot of things.
Call me when computers can have orgasms. Then I’ll consider uploading to one.
That’s a joke, but it’s a pointed one, and it underscores the problem I see in trying to convert analog chemical processes into binary digital ones. Where would pain and pleasure come from? Would uploaded consciousnesses just be Vulcan logic machines? It sure seems like it.