What Computers Will Never Do

The Age of Spiritual Machines: When Computers Exceed Human Intelligence

Here is a reply from a computer scientist to my recent post about Ray Kurzweil’s book. My brief reply is at the bottom.

There is a limit to computer intelligence arising from its database. Human beings require at least 18 years of experience in order to learn the minimal requirements of an adult Homo sapiens. Moreover, they continue to learn throughout their lives, so that by the time they are our age, they’re just as brilliant as you and I are.

It is impossible to code a life experience into a computer database. To develop that life experience, a computer would have to live a human life. Moreover, it would have to do so with the emotional structures of a human being …

A computer in the future could probably store a life’s worth of such data, but how could it interpret it? The human brain regularly cleans out the meaningless crap of our daily lives … That’s what sleep is for — not resting our muscles but cleaning the garbage out of our minds. But how is a computer to know what to keep and what to throw away?

I think that Mr. Kurzweil is overly optimistic regarding the potential of computer technology. His direct comparison of computers to brains is erroneous. An automobile chassis with an engine is not the same thing as a pair of legs. A camera is not at all the same thing as the human eye. And a computer is not at all the same thing as a brain … [but] you’d never use a computer to decide whether to fall in love.

Technology will NEVER replace our biological faculties for certain tasks because those tasks will often be too closely tied to our entire biological processes to be taken over by technology. The most extreme example of this is provided by sexual interaction. I think we can all agree that the thought of making love to a robot is simply absurd …

Sure, there’s plenty of room for further advances … Yet few of these things will be anywhere near as revolutionary as the desktop computer and the smartphone were in their early years. Fewer people will rush out to buy the latest techie toy …

Nevertheless, I shall never have a deep conversation with any computer. Your philosophical musings on this blog will never be replaced by a computer’s thoughts. Computers will become smarter, but they’ll never be wise.

Brief Reflections

There is a lot to say about all this, but here are a few thoughts. I’m not comfortable with saying machines will never be able to do this or that. A well-designed robot may not be a perfect human replica, but if it does what humans do, it is similar enough for me to consider it conscious and worthy of moral and legal protection. In fact, good robots will probably be superior to us; think of Mr. Data from Star Trek.

Kurzweil has an entire section on robot sex, but let’s just say that it is easy enough to imagine having better sex if we or our partners were sexually upgraded. As for deep conversation, I’d prefer to converse with an AI rather than with most human beings. And I believe that minds can run on substrates besides carbon-based brains.

The University of Oxford philosopher Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”.[1] I don’t know whether we will create such intelligences or whether they will emerge on their own, but I think the survival of life on earth depends on intellectual enhancement. And, with oceans of time for future innovation, almost anything is possible, including the emergence of superintelligence.

One thing I do know. If we have intelligent descendants, if they survive, and if science and technology continue to advance, the future will be unimaginably different from the past.

Computer Scientist’s Response

“I’m not comfortable with saying machines will never be able to do this or that.”

Well, yes, I’m just begging to be made a fool of with my comment. Perhaps I should constrain my statement a bit. How’s this version:

“Until computers can simulate the biochemical ties between brain and body, they’ll never be able to simulate humans.”

My thinking on this was powerfully influenced by Descartes’ Error: Emotion, Reason, and the Human Brain by Antonio R. Damasio. It presents the neurophysiological argument that the brain is inseparable from the body. There’s no such thing as “the mind-body problem” because they are a single system. Hence, replicating human cognition with silicon is rather like trying to build an airplane as if it were a bird, with flapping wings.

An airplane can go much faster and further than any bird, and it can carry a much heavier load, but it can’t land on a moving branch, take off in a fraction of a second, or show any of the maneuverability of a bird. In the same fashion, a computer can do a lot of things that people can’t do, but pursuing replication of human cognition is, In My Vainglorious Opinion, a fool’s errand.


14 thoughts on “What Computers Will Never Do”

  1. We humans are intelligent but can never use our sense of smell in the way a dog can. Developing this sense of smell has no purpose for humans; we can find food at the grocery store.

    Computers will never do… because artificial intelligence will have no need to emulate humans. An intelligence many times greater than ours won’t be restrained by the organic composition and needs of a living thing. Computers will ‘do their own thing’ and humans will be irrelevant to them.

  2. The problem I see with many of these speculations about machines becoming human or human-like is the failure to distinguish between consciousness and intelligence. By consciousness I mean mindful awareness, the ability to have experience. I think it was the Vietnamese Buddhist monk, Thich Nhat Hanh, who said,

    “To be alive is a miracle, but to be alive and know it, that is the greatest miracle.”

    It is the knowing that we are alive, the ability to reflect, or shall we say, philosophize, that is missing from current and future (non-biologically based) machines.

    If by intelligence we simply mean the ability to solve problems, then we can agree that machines will surpass, or have already surpassed, human intelligence. In 1997, IBM’s Deep Blue computer beat the world’s best human chess player. Nevertheless, Deep Blue is not capable of experiencing anything as a sentient being. There is no consciousness there, no mindful awareness. Someday it may be possible to construct machines sophisticated enough to mimic human behavior so well as to fool another human – the Turing test. Philosophy has a name for such manifestations. They are called “zombies” – living entities without souls (or without minds, if you don’t like the word “soul”). A two-year-old might not be able to beat Garry Kasparov in chess, but he or she can do what Deep Blue will never do – experience joy, disappointment, and wonder.

    There is a famous thought experiment called the Chinese Room that is designed to help us make the distinction between thinking in the sense of mindful understanding on the one hand, and thinking in the sense of mechanically executing algorithms on the other (a toy sketch of that kind of rule-following appears after the comments below). If you are not familiar with it, it is worth a look. Also of interest might be John Searle’s Minds, Brains and Science. He talks about the Chinese Room experiment in Chapter 2, “Can Computers Think?”. I recommend adding his book to the list.

  3. Hi Steve

    As always, your replies are so insightful and welcome. However, as a rare philosopher who taught in a world-class CS department at UT-Austin, I can assure you that the majority of AI specialists disagree with Searle. They would say he doesn’t understand how computers work, or how they will eventually work. Still, they may be wrong, and the issue is a deep one that Searle, Dennett, and other major philosophical figures have weighed in on. As a functionalist, I see no reason why computers in principle can’t become conscious.

  4. Steve.

    I respectfully disagree with the Buddhist monk. Knowing we are alive is not a miracle. It’s the necessary outcome of a particular configuration of matter and energy. According to the Drake equation (reproduced after the comments below), there are trillions of intelligences in our Universe. The Universe is not human-centric, with our awareness having some special role.

    There has never been a miracle. Nothing has ever been observed that is external to the observable Universe. There’s lots of ‘weird’ stuff but no miracles… no gods, angels, elves or spirits. Everything about our brain’s evolution is rational. The answer to understanding consciousness is in science… not monks and religion.

  5. Tom
    It seems you find the use of the word “miracle” very upsetting. Let me paraphrase the quote in a way you might find more acceptable.

    “How we came to be is a mystery. How we can actually be conscious of our being – that is the greatest mystery.”

    I am not a religious person. I am a scientist and engineer by training. I believe the world has structure and operates according to natural laws. There is nothing that is not natural. There is nothing that is not real. Things happen for a reason, and those reasons cannot be dismissed as simply due to magic or to random chance.

    By the same token, it behooves us, we rational thinkers, to acknowledge mystery where mystery exists. To claim we understand the mind – the very real and undeniable experience of conscious awareness – is to betray a lack of understanding of the very essence of the human condition – being alive and knowing it.

  6. I completely agree with ‘the computer scientist’s view’, which I quote again below:

    ‘I think that Mr. Kurzweil is overly optimistic regarding the potential of computer technology. His direct comparison of computers to brains is erroneous. An automobile chassis with an engine is not the same thing as a pair of legs. A camera is not at all the same thing as the human eye. And a computer is not at all the same thing as a brain … [but] you’d never use a computer to decide whether to fall in love.

    Technology will NEVER replace our biological faculties for certain tasks because those tasks will often be too closely tied to our entire biological processes to be taken over by technology. ‘

    John, with respect, your materialistic religion of machine worship means that you consistently seem to fail to appreciate the above points.

    You seem to be repeating the Richard Dawkins materialist superstition, where you elevate the machine and falsely assume that it can ever be like a living human being.
    This is materialistic superstition as scientifically erroneous as the most superstitious of sectarian religions.

  7. John, with respect, again your answer to Steve Donaldson’s excellent points, for example that:

    ‘A two year old might not be able to beat Garry Kasparov in chess, but he or she can do what Deep Blue will never do – experience joy, disappointment, and wonder.’

    seems to indicate that you are unable to experientially grasp the difference between mental speculation, which a computer can do, and intuitive sacred insights, which humans can have but machines never will.

    To appreciate this you would need to do what neither you nor R. Dawkins seem able to do, perhaps because neither of you is able to experience faith: namely, grasp the difference between a machine and a human organism.

  8. The question should not be whether computers can simulate us! It should be whether they can control us! For if they can control us they will in the end also be able to use our bodies to simulate us!

  9. The question should not be whether computers can emulate us! It should be whether they can control us! For if they can control us they will in the end also be able to use our bodies to emulate us! And of course if they can use our bodies they will also use our tissue to create neuro-biological supercomputers. Until they are able to make even that better artificially. At which point they won’t need us anymore, other than maybe as pets or toys to play with…
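
A toy sketch may make the Chinese Room mentioned in the comments above more concrete. The little Python program below maps Chinese questions to canned Chinese answers purely by lookup; the tiny rule book and phrases are invented for this example and merely stand in for Searle’s vastly larger book of rules.

```python
# A toy "Chinese Room": symbols in, symbols out, by rule lookup alone.
# Nothing in this program understands Chinese; it only matches shapes.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    """Return whatever string the rule book pairs with the input symbols."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # A fluent-looking reply, with zero comprehension.
```

The room produces sensible-looking replies while nothing inside it understands a word, which is exactly the gap between mechanically executing algorithms and mindful understanding that the thought experiment is meant to expose.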

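For reference, the Drake equation cited in the comments above has the standard form

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{\ell} \cdot f_{i} \cdot f_{c} \cdot L
```

where N is the number of detectable civilizations in our galaxy, R_* the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially life-supporting planets per star with planets, f_ℓ, f_i, and f_c the fractions of those on which life, intelligence, and detectable technology actually arise, and L the length of time such a civilization remains detectable. Every factor past the first is highly uncertain, so the equation frames the question rather than settling how many intelligences are out there.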