(This article was reprinted in the online magazine of the “Institute for Ethics and Emerging Technologies,” December 7, 2014)
Many scientists believe that we will soon be able to preserve our consciousness indefinitely. There are a number of scenarios by which this might be accomplished, but so-called mind uploading is one of the most prominent. Mind uploading refers to a hypothetical process of copying the contents of a consciousness from a brain to a computational device. This could be done by copying and transferring these contents into a computer all at once, or by piecemeal replacement, with parts of the brain gradually replaced by hardware. Either way, consciousness would no longer be running on a biological brain.
I am in no position to judge the feasibility of mind uploading; experts have both praised and pilloried its viability. Nor can I judge what it would be like to live in a virtual reality, given that I don’t even know what it’s like to be a dog or another person. And I don’t know whether I would have subjective experiences inside a computer; in fact, we don’t know how the brain gives rise to subjective experiences at all. So I certainly don’t know what it would be like to exist as a simulated mind inside a computer or a robotic body. What I do know is that the Oxford philosopher and futurist Nick Bostrom has argued that there is a good chance that we live in a simulation now. And if he’s right, then you are having subjective experiences inside a computer simulation as you read this.
But does it make sense to think a mind program could run on something other than a brain? Isn’t subjective consciousness rooted in the biological brain? Yes, for the moment our mental software runs on the brain’s hardware. But there is no necessary reason that this has to be the case. If I had told you a hundred years ago that integrated silicon circuits would come to play chess better than grandmasters, model future climate change, recognize faces and voices, and solve famous mathematical problems, you would have been astonished. Today you might reply, “but computers still can’t feel emotions or taste a strawberry.” And you are right, they can’t—for now. But what about a thousand years from now? What about ten thousand or a million years from now? Do you really think that in a million years the best minds will run on carbon-based brains?
If you still find it astounding that minds could run on silicon chips, consider how absolutely remarkable it is that our minds run on meat! Imagine beings from another planet with cybernetic brains discovering that human brains are made of meat. That we are conscious and communicate by means of our meat brains. They would be amazed. They would find this as implausible as many of us do the idea that minds could run on silicon.
The key to understanding how mental software can run on non-biological hardware is to think of mental states not in terms of physical implementation but in terms of functions. Consider, for example, that one of the functions of the pancreas is to produce insulin, which regulates the level of sugar in the blood. It is easy to see that something else could perform this function, say a mechanical or silicon pancreas. Or consider an hourglass and an atomic clock. The function of both is to keep time, yet they do so quite differently.
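Programmers will recognize this idea: it is the distinction between an interface and its implementations. The sketch below is only a loose analogy (the class names are hypothetical, and nothing here models a mind), but it shows how one functional role—keeping time—can be realized by two physically very different mechanisms.

```python
from abc import ABC, abstractmethod

class Timekeeper(ABC):
    """A functional role, defined independently of any physical implementation."""

    @abstractmethod
    def elapsed_seconds(self) -> float:
        """Report how much time has passed."""

class Hourglass(Timekeeper):
    """One realization: counting grains of fallen sand."""
    GRAINS_PER_SECOND = 100.0  # calibration of this particular hourglass

    def __init__(self, grains_fallen: int = 0):
        self.grains_fallen = grains_fallen

    def elapsed_seconds(self) -> float:
        return self.grains_fallen / self.GRAINS_PER_SECOND

class AtomicClock(Timekeeper):
    """A very different realization: counting atomic oscillations."""
    CESIUM_HZ = 9_192_631_770  # cesium-133 transitions per second (SI definition)

    def __init__(self, oscillations: int = 0):
        self.oscillations = oscillations

    def elapsed_seconds(self) -> float:
        return self.oscillations / self.CESIUM_HZ

# Both satisfy the same functional role despite entirely different "substrates".
clocks = [Hourglass(grains_fallen=500),
          AtomicClock(oscillations=9_192_631_770 * 5)]
for clock in clocks:
    print(type(clock).__name__, clock.elapsed_seconds())  # both report 5.0
```

Sand and cesium atoms have nothing physically in common; what makes each a clock is the role it plays. The functionalist claim is that mental states are like that.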
Analogously, if mental states are identified by their functional role, then they too could be realized on other substrates, as long as the system performs the appropriate functions. In fact, once you have jettisoned the idea that your mind is a ghostly soul or a mysterious, impenetrable, non-physical substance, it is relatively easy to see that your mind program could run on something besides a brain. It is certainly easy enough to imagine self-conscious computers or intelligent aliens whose minds run on something other than biological brains. Of course, there’s no way for us to know what it would be like to exist without a brain and body, but there’s no convincing reason to think one couldn’t have subjective experiences without physicality. Perhaps our experiences would be even richer without a brain and body.
We have so far ignored important philosophical questions, such as whether the consciousness transferred would be you or just a copy of you. But I doubt that such existential worries will stop people from using technology to preserve their consciousness when oblivion is the alternative. After all, we are changing every moment, and few worry that we are only a copy of ourselves from ten years ago; we wake up every day as little more than a copy of what we were yesterday, and few fret about that.
Perhaps an even more pressing concern is what one does inside a simulated reality for an indefinitely long time. This is the question recently raised by the prominent Princeton neuroscientist Michael Graziano. He argues that the question is not whether we will be able to upload our brains into a computer—he says we will—but what will become of us when we do. What will we do with all that time?
I suppose that some may get bored with eternity and prefer annihilation. Some would get bored with the heaven they say they desire. Some are bored now. So who wants to extend their consciousness so that they can love better and know more? Who wants to live long enough to have experiences that surpass our current ones in unimaginable ways? The answer is … many of us do. Many of us aren’t bored so easily. And if we get bored we can always delete the program.