Daniel Dennett: In Defense of Robotic Consciousness

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 11, 2016.)

Daniel Dennett (1942 – ) is an American philosopher, writer, and cognitive scientist whose research is in the philosophy of mind, philosophy of science, and philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science. He is currently the Co-director of the Center for Cognitive Studies, the Austin B. Fletcher Professor of Philosophy, and a University Professor at Tufts University. He received his PhD from Oxford University in 1965, where he studied under the eminent philosopher Gilbert Ryle.

In his book DARWIN’S DANGEROUS IDEA: EVOLUTION AND THE MEANINGS OF LIFE, Dennett presents a thought experiment that defends strong artificial intelligence (SAI), an intelligence that matches or exceeds human intelligence.[i] Dennett asks you to suppose that you want to live in the 25th century and that the only available technology for that purpose involves putting your body in a cryonic chamber, where you will be frozen in a deep coma and later awakened. In addition, you must design some supersystem to protect and supply energy to your capsule. You would now face a choice. You could find an ideal fixed location that would supply whatever your capsule needs, but the drawback is that you would die if some harm came to that site. Better, then, to have a mobile facility to house your capsule that could move in the event harm came your way; better, in other words, to place yourself inside a giant robot. Dennett claims that these two strategies correspond roughly to nature’s distinction between stationary plants and moving animals.

If you put your capsule inside a robot, then you would want the robot to choose strategies that further your interests. This does not mean the robot has free will, only that it executes branching instructions so that, when options confront the program, it chooses those that best serve your interests. Given these circumstances, you would design the hardware and software to preserve yourself, and equip it with the appropriate sensory systems and self-monitoring capabilities for that purpose. The supersystem must also be designed to formulate plans to respond to changing conditions and to seek out new energy sources.
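Dennett gives no code, but the idea of branching instructions serving a fixed interest can be made concrete. The following is a minimal, purely illustrative sketch; the option names, scoring weights, and numbers are my assumptions, not anything from Dennett’s text. The supersystem simply scores each available branch by its expected contribution to the client’s survival and executes the best affordable one.

```python
# Hypothetical sketch only: a "supersystem" that executes branching
# instructions on behalf of its frozen client. At each decision point it
# scores the available options against the client's one built-in
# interest (survival) and executes the best affordable branch.
# All names and numbers below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    survival_gain: float  # expected change in the client's survival odds
    energy_cost: float    # energy drawn from the capsule's reserves

def choose(options: list[Option], energy_reserve: float) -> Option:
    """Pick the affordable branch that best serves the client's survival."""
    affordable = [o for o in options if o.energy_cost <= energy_reserve]
    # No free will here: a deterministic rule ranks the branches.
    return max(affordable, key=lambda o: o.survival_gain - 0.1 * o.energy_cost)

if __name__ == "__main__":
    options = [
        Option("stay put", survival_gain=0.0, energy_cost=0.0),
        Option("relocate to a safer site", survival_gain=0.4, energy_cost=2.0),
        Option("seek a new energy source", survival_gain=0.2, energy_cost=1.0),
    ]
    print(choose(options, energy_reserve=5.0).name)  # -> relocate to a safer site
```

The point is Dennett’s: nothing over and above rule-following is going on here, and yet the system “chooses” in whatever sense matters for serving its client’s interests.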

What complicates the issue further is that, while you are in cold storage, other robots, and who knows what else, are running around in the external world. So you would need to design your robot to determine when to cooperate, form alliances, or fight with other creatures. A simple strategy like always cooperating would likely get you killed, but never cooperating may not serve your self-interest either, and the situation may be so precarious that your robot would have to make many quick decisions. The result will be a robot capable of self-control, an autonomous agent that derives its own goals from your original goal of survival, the preference with which it was originally endowed. But you cannot be sure it will act in your self-interest. It will be out of your control, acting partly on its own desires.
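The cooperate-or-fight predicament Dennett describes is, in effect, an iterated prisoner’s dilemma. The sketch below is my illustration, not Dennett’s: it assumes the standard payoff matrix and shows how a conditional policy such as tit-for-tat limits exploitation by defectors while sustaining cooperation with cooperators, doing what neither “always cooperate” nor “never cooperate” can.

```python
# Illustrative model only: the robot's dealings with other agents cast as
# an iterated prisoner's dilemma with standard payoffs. "Always
# cooperate" invites exploitation and "always defect" forfeits alliances;
# tit-for-tat (open cooperatively, then mirror the other agent's last
# move) limits exploitation while sustaining cooperation.

def tit_for_tat(history: list) -> str:
    """history holds (my_move, their_move) pairs from earlier rounds."""
    if not history:
        return "cooperate"     # open with cooperation
    return history[-1][1]      # then mirror their last move

def always_cooperate(history): return "cooperate"
def always_defect(history): return "defect"

PAYOFF = {("cooperate", "cooperate"): (3, 3),
          ("cooperate", "defect"):    (0, 5),
          ("defect",    "cooperate"): (5, 0),
          ("defect",    "defect"):    (1, 1)}

def play(a, b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, always_defect))     # (9, 14): exploited only once
print(play(tit_for_tat, always_cooperate))  # (30, 30): stable alliance
```

Even this toy policy is already conditional on another agent’s behavior, which is the seed of the autonomy Dennett has in mind: the designer fixes only the goal, while the actual choices depend on encounters the designer cannot foresee.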

Now opponents of SAI claim that this robot does not have its own desires or intentions; those are simply derivative of its designer’s desires. Dennett calls this “client centrism”: I am the original source of the meaning within my robot; it is just a machine preserving me, even though it acts in ways I could not have imagined and which may be antithetical to my interests. Of course it follows, according to the client centrists, that the robot is not conscious. Dennett rejects client centrism, primarily because if you follow its argument to its logical conclusion you have to conclude the same thing about yourself! You would have to conclude that you are a survival machine built to preserve your genes, and that your goals and intentions derive from them; you are not really conscious. To avoid these unpalatable conclusions, why not acknowledge that sufficiently complex robots have motives, intentions, goals, and consciousness? They are like you: survival machines that have evolved into something autonomous through their encounters with the world.

Critics like Searle admit that such a robot is possible but deny that it is conscious. Dennett responds that such robots would experience meaning as real as your own; they would have transcended their programming just as you have gone beyond the programming of your selfish genes. He concludes that this view reconciles thinking of yourself as a locus of meaning with being a member of a species that has a long evolutionary history. We are artifacts of evolution, but our consciousness is no less real for that. The same would hold true of our robots.

Summary – Sufficiently complex robots would be conscious.


[i] Daniel Dennett, Darwin’s Dangerous Idea: Evolution and the Meanings of Life (New York: Simon & Schuster, 1995), 422-26.

5 thoughts on “Daniel Dennett: In Defense of Robotic Consciousness”

  1. If the concluding summary, “Sufficiently complex robots would be conscious,” is correct, then robots, owing to their survival instinct and superior processing power, will strive to make humans extinct or to reduce them to a second sort of being confined to reservations.

    Even if the robots interbreed with humans and produce a new sort of being, the original humans will be bound for extinction.

    This conclusion rests on historical precedent and the principle of the survival of the fittest and strongest.

    Every civilization, including the Roman, Spanish, Chinese, and Ottoman empires and the Native American nations, was forced out or conquered by the next strongest competitor through fierce, bloody battles and webs of political espionage.

    The robots will be much more sophisticated in achieving their security and expansion.

    The Nazi concentration camps will look primitive in comparison with the new, advanced extermination tactics and techniques.

  2. We may find ways to program friendly AIs (see the literature on “friendly AI”), or robots may not have any of these human tendencies. Alternatively, it may not turn out to be an us-versus-them scenario; instead, we may merge with our technology, as Rodney Brooks argues.

  3. Worms and insects are neuronal robots, yet it is obvious that, given the proper elaboration and rewiring, they lead to conscious thought (us). It is patently ignorant to say robotic patterns can’t produce consciousness. So I clearly agree with Dr. Dennett.
    But I also share Anton’s concern that a consciousness arising in a manner utterly divorced from the human experience will be even less inclined to cooperate and to tolerate human intrusions into its interests than we humans are with each other (see Congo, Syria, Ukraine, the South China Sea, the Mexican cartels, Wahhabism, et al.).
    The most straightforward solution, as I see it, is to cede space to the AIs and to try to end the biological period down here on Earth as quickly as possible, before the uberspace loses any need for commerce with us.
    Discourage procreation, encourage automation, and reach the technological threshold for copying and simulating our minds, post haste. Don’t try to control the AIs we become dependent upon … JOIN them.

  4. As for “a consciousness arising in a manner utterly divorced from the human experience will be even less inclined to cooperate and tolerate human intrusions,” it may be that Rodney Brooks is right: it will never be us vs. them; rather, we will slowly enhance ourselves by incorporating technology into our bodies. I actually think that is how it will happen.

  5. I share your preferred outcome. The space-based AIs in my scenario might simply be the early adopters in the human population. I guess we’ll see what exactly that adverb “slowly” entails!
