Daniel Dennett (1942– ) is an American philosopher, writer, and cognitive scientist whose research is in the philosophy of mind, philosophy of science, and philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science. He is currently the Co-director of the Center for Cognitive Studies, the Austin B. Fletcher Professor of Philosophy, and a University Professor at Tufts University. He received his Ph.D. from Oxford University in 1965, where he studied under the eminent philosopher Gilbert Ryle.
In his book Darwin's Dangerous Idea: Evolution and the Meanings of Life, Dennett presents a thought experiment that defends the possibility of strong artificial intelligence (SAI), an intelligence that matches or exceeds human intelligence.[i] Dennett asks you to suppose that you want to live in the 25th century and that the only available technology for that purpose involves putting your body in a cryonic chamber, where you will be frozen in a deep coma and later awakened. In addition, you must design some supersystem to protect and supply energy to your capsule. You would now face a choice. You could find an ideal fixed location that will supply whatever your capsule needs, but the drawback is that you would die if some harm came to that site. Better, then, to have a mobile facility to house your capsule that could move in the event harm came your way: better to place yourself inside a giant robot. Dennett claims that these two strategies correspond roughly to nature's distinction between stationary plants and moving animals.
If you put your capsule inside a robot, then you would want the robot to choose strategies that further your interests. This does not mean the robot has free will, but that it executes branching instructions so that, when options confront the program, it chooses those that best serve your interests. Given these circumstances, you would design the hardware and software to preserve yourself, and equip the robot with the appropriate sensory systems and self-monitoring capabilities for that purpose. The supersystem must also be designed to formulate plans to respond to changing conditions and seek out new energy sources.
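As a rough illustration of what such "branching instructions" in the service of your interests might look like, here is a minimal Python sketch. All of the names and numbers are hypothetical, not anything from Dennett: the program reads its sensors, scores each available branch against the owner's survival, and takes the branch that scores best. Nothing about this implies free will; it is just conditional selection.

```python
# Hypothetical sketch of a supersystem control loop: sense, enumerate options, branch.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    survival_score: float   # estimated benefit to the frozen owner's survival
    energy_cost: float      # fraction of stored energy this branch would consume

def read_sensors():
    """Stand-in for the robot's sensory and self-monitoring systems."""
    return {"battery": 0.4, "threat_level": 0.7}

def enumerate_options(state):
    """The branches currently open to the program (illustrative values only)."""
    return [
        Option("stay_put",           1.0 - state["threat_level"], 0.1),
        Option("flee_threat",        state["threat_level"],       0.5),
        Option("seek_energy_source", 1.0 - state["battery"],      0.3),
    ]

def choose_branch(options, state):
    """Pick the affordable option that best serves the owner's survival."""
    affordable = [o for o in options if o.energy_cost <= state["battery"]]
    return max(affordable or options, key=lambda o: o.survival_score - o.energy_cost)

state = read_sensors()
print("Executing branch:", choose_branch(enumerate_options(state), state).name)
```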
What complicates the issue further is that, while you are in cold storage, other robots and who knows what else are running around in the external world. So you would need to design your robot to determine when to cooperate, form alliances, or fight with other creatures. A simple strategy like always cooperating would likely get you killed, but never cooperating may not serve your self-interest either, and the situation may be so precarious that your robot would have to make many quick decisions. The result will be a robot capable of self-control, an autonomous agent that derives its own goals from your original goal of survival, the preference with which it was originally endowed. But you cannot be sure it will act in your self-interest. It will be out of your control, acting partly on its own desires.
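The cooperate-or-fight decisions described here resemble strategies from the iterated prisoner's dilemma. A minimal, purely illustrative sketch of one middle-ground policy (neither always cooperating nor never cooperating), with hypothetical agent names:

```python
# Illustrative reciprocating strategy (tit-for-tat style): cooperate with strangers,
# mirror the most recent move of agents the robot has dealt with before.

def decide(history, other_id):
    """history maps another agent's id to the list of its past moves toward us."""
    past = history.get(other_id, [])
    if not past:
        return "cooperate"    # give unknown agents the benefit of the doubt
    return past[-1]           # echo their last move: cooperation or defection

history = {"robot_7": ["cooperate", "defect"]}
print(decide(history, "robot_7"))    # -> defect (it defected last time)
print(decide(history, "robot_99"))   # -> cooperate (never met before)
```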
Now, opponents of SAI claim that this robot does not have its own desires or intentions; those are simply derivative of its designer's desires. Dennett calls this "client centrism": I am the original source of the meaning within my robot; it is just a machine preserving me, even though it acts in ways that I could not have imagined and which may be antithetical to my interests. Of course, it follows, according to the client centrists, that the robot is not conscious. Dennett rejects this centrism, primarily because if you follow the argument to its logical conclusion you have to conclude the same thing about yourself! You would have to conclude that you are a survival machine built to preserve your genes, and that your goals and intentions derive from them. You are not really conscious. To avoid these unpalatable conclusions, why not acknowledge that sufficiently complex robots have motives, intentions, goals, and consciousness? They are like you, owing their existence to being survival machines that have evolved into something autonomous through their encounters with the world.
Critics like the philosopher John Searle admit that such a robot is possible, but deny that it is conscious. Dennett responds that such robots would experience meanings as real as your meanings; they would have transcended their programming just as you have gone beyond the programming of your selfish genes. He concludes that this view reconciles thinking of yourself as a locus of meaning, while at the same time being a member of a species with a long evolutionary history. We are artifacts of evolution, but our consciousness is no less real because of that. The same would hold true of our robots.
Summary – Sufficiently complex robots would be conscious
________________________________________________________________
[i] Daniel Dennett, Darwin's Dangerous Idea: Evolution and the Meanings of Life (New York: Simon & Schuster, 1995), 422-26.
If the concluding summary, that "sufficiently complex robots would be conscious," is correct, then the robots, given their survival instinct and superior processing power, will strive to make humans extinct or reduce them to second-class beings living in reservations.
Even if the robots were to breed with humans and produce a new kind of being, the original humans would be bound for extinction.
The above conclusion is based on historical precedent and the principle of the survival of the fittest and strongest.
Every civilization, including the Roman, Spanish, Chinese, and Ottoman empires and the Native American nations, was forced out or conquered by the next strongest competitor through fierce, bloody battles and webs of political espionage.
The robots will be far more sophisticated in securing and expanding themselves.
The Nazi concentration camps will look primitive in comparison with the new, advanced extermination tactics and techniques.
We may find ways to program friendly AIs (see the web for more on this), or robots may not have any of these human tendencies. Alternatively, it may not turn out to be an us-versus-them scenario at all; instead, we may merge with our technology, as Rodney Brooks argues.
Worms and insects are neuronal robots, yet it is obvious that, given the proper elaboration and rewiring, they lead to conscious thought (us). It is patently ignorant to say robotic patterns can't produce consciousness. So I clearly agree with Dr. Dennett.
But I also share Anton's concern that a consciousness arising in a manner utterly divorced from the human experience will be even less inclined to cooperate with and tolerate human intrusions into its interests than we humans are with each other (see Congo, Syria, Ukraine, the South China Sea, the Mexican cartels, Wahhabism, et al.).
The most straightforward solution, as I see it, is to cede space to the AIs, and try to end the biological period as quickly as possible down here on Earth, before the uberspace loses any need for commerce with us.
Discourage procreation, encourage automation, reach the technological threshold for copying and simulating our minds, post haste. Don’t try to control the AIs we become dependent upon … JOIN them.
As for "a consciousness arising in a manner utterly divorced from the human experience will be even less inclined to cooperate and tolerate human intrusions," it may be that Rodney Brooks is right: it will never be us vs. them; rather, we will slowly enhance ourselves by incorporating technology into our bodies. I actually think that is how it will happen.
I share your preferred outcome. The space-based AIs in my scenario might simply be the early adopters in the human population. I guess we’ll see what exactly that adverb “slowly” entails!
(I just received my copy of "Darwin's Dangerous Idea". It's appropriate that the delivery process probably started with a robot picking the book out of a bin and placing it on a conveyor belt, which delivered it to the packaging robot. Alas, it was not delivered to my door by drone, but by USPS!)
The logical conclusion to the thought experiment described in the post is that once the container robot's consciousness exceeds some threshold during the passage of time from when it was first designed, it will realize that the cryogenic capsule and the frozen life form it has been carrying around and protecting have no value to the broader community of robots and humans (except, perhaps, if that frozen human is Dennett himself). It will see more meaningful uses for its energy expenditure. Then, in a truly conscious act, it will have the capsule surgically removed and discarded, go through a period of robot psychoanalysis (reprogramming), and get on with its robotic life. Tough luck for the frozen human.
The thought experiment, in isolation, was not entirely convincing to me that someday robots would have consciousness. However, other arguments I have seen by Dennett have me convinced, such as this statement: “The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves. … It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion machine does” (from his 1994 paper “Consciousness in Human and Robot Minds”).
The other thing that I'm forced to remember is that "consciousness" is not a step function, starting at zero and then jumping to the high-functioning level of a human. Rather, there is a gradation in the levels of consciousness distributed among the organisms of the biological world, with humans currently at the peak. As artificial intelligence (AI) is developed and evolves, there will be a similar gradation of robotic consciousness. Dennett sees no theoretical objection to humans developing a very highly conscious robot (with human-like cognitive function) other than that the required engineering complexity would probably make the development prohibitively costly. I agree with that. The very highly conscious robot will probably have to evolve on its own. Maybe it won't take eons, as it did with humans, but it would probably take a very long time, especially since sturdily built robots would not require replacement as fast as biological life forms do.
I envision the initial type of AI/robot designed by man as being somewhat similar to a severely autistic human with “Savant syndrome”, who is exceptional at some particular task but has poor cognitive functioning otherwise. These initial robots would be little more than very smart and capable “tools” having a very crude form of consciousness. But it would be advantageous to incorporate other cognitive functionality into these tools: storage and recall of experiences, self-preservation, self-repair, self-augmentation, anticipating and solving problems related to their skill, learning new methods of accomplishing their skill, the ability to cooperate with other robots, etc. Once those abilities are incorporated, the robot’s cognition over time will improve. The degree to which the robot’s cognitive capabilities lead to autonomous and purposeful actions seems to me to be the degree of consciousness achieved by the robot.
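One way to picture the "smart tool" described above is as a core skill wrapped in add-on cognitive modules corresponding to the items in that list (memory, self-preservation, cooperation, and so on). The following Python sketch is hypothetical and only meant to make the modular idea concrete:

```python
# Hypothetical sketch: a narrow "smart tool" whose extra cognitive abilities
# are bolted on as pluggable modules, roughly matching the list above.

class SmartTool:
    def __init__(self, skill):
        self.skill = skill        # the one task it is exceptional at
        self.memory = []          # storage and recall of experiences
        self.modules = []         # self-preservation, self-repair, cooperation, ...

    def add_module(self, module):
        self.modules.append(module)

    def act(self, situation):
        self.memory.append(situation)            # remember what was encountered
        for module in self.modules:              # each module may adjust the plan
            situation = module(situation, self.memory)
        return f"apply {self.skill} to: {situation}"

# Example module: a crude self-preservation check that overrides the task at hand.
def self_preservation(situation, memory):
    return "cooling down first" if situation == "overheating" else situation

welder = SmartTool("welding")
welder.add_module(self_preservation)
print(welder.act("seam on panel 3"))   # -> apply welding to: seam on panel 3
print(welder.act("overheating"))       # -> apply welding to: cooling down first
```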
“Smart-tool” AI/robots will probably proliferate throughout society until they become ubiquitous. Individually they represent no existential threat to humanity, which requires a really superintelligent, malevolent AI. Since I don’t see humans developing such superintelligence (due to practicality and cost as mentioned earlier), any existential threat will come much, much later, after robots themselves go through their own evolutionary process, during which their consciousness and cognitive functioning will increase gradually. This gives humans the time to evolve in a symbiotic relationship with our robots. And the only way to do that may be exactly as you described in your comment: “it will never be us vs. them but we will slowly enhance ourselves by incorporating technology into our bodies”.
(I’m looking forward to reading Dennett’s book in its entirety, which won’t be some casual afternoon scan!)