Summary of Charles T. Rubin's "Artificial Intelligence and Human Nature"

Charles T. Rubin is a professor of political science at Duquesne University. His 2003 article, “Artificial Intelligence and Human Nature,” is a systematic attack on the thinking of Ray Kurzweil and Hans Moravec, thinkers we have discussed in recent posts.[i]

Rubin finds nearly everything about the futurism of Kurzweil and Moravec problematic. It involves metaphysical speculation about evolution, complexity, and the universe; technical speculation about what may be possible; and philosophical speculation about the nature of consciousness, personal identity, and the mind-body problem. Yet Rubin avoids attacking the futurists, whom he calls “extinctionists,” on the issue of what is possible, focusing instead on their claim that a future robotic-type state is necessary or desirable.

Rubin finds the argument that our extinction is an evolutionary necessity thin. Why should we expedite our own extinction? Why not destroy the machines instead? And the argument for the desirability of this vision raises another question: what is so desirable about a post-human life? The answer to this question, for Kurzweil, Moravec, and the transhumanists, is the power over human limitations that would ensue. The rationale that underlies this desire is the belief that we are but an evolutionary accident to be improved upon, transformed, and remade.

But this leads to another question: will we preserve ourselves after uploading into our technology? Rubin objects that there is a disjunction between us and the robots we want to become. Robots will bear little resemblance to us, especially after we have shed the bodies so crucial to our identities, making the preservation of a self all the more tenuous. Given this discontinuity, how can we know that we would want to be in this new world, or whether it would be better, any more than one of our primate ancestors could have imagined what a good human life would be like? Those primates would be as uncomfortable in our world as we might be in the post-human world. We really have no reason to think we can understand what a post-human life would be like, and it is not out of the question that the situation will be nightmarish.

Yet Rubin acknowledges that technology will evolve, driven by military, medical, commercial, and intellectual incentives; hence it is unrealistic to try to halt technological development. The key to stopping, or at least slowing, the trend is to educate individuals about the unique characteristics of being human, which surpass machine life in so many ways. Love, courage, charity, and a host of other human virtues may themselves be inseparable from our finitude. Evolution may hasten our extinction, but even if it does, there is no need to pursue the process, because there is no reason to think the post-human world will be better than our present one. If we pursue such Promethean visions, we may end up worse off than before.

Summary – We should reject transhumanist ideals and accept our finitude.

__________________________________________________________________

[i] Charles T. Rubin, "Artificial Intelligence and Human Nature," The New Atlantis, no. 1 (Spring 2003).


Summary of Hans Moravec's Robot: Mere Machine to Transcendent Mind

Hans Moravec (1948 – ) is a faculty member at the Robotics Institute of Carnegie Mellon University and the chief scientist at Seegrid Corporation. He received his PhD in computer science from Stanford in 1980, and is known for his work on robotics and artificial intelligence, as well as for his many publications and predictions concerning the impact of technology and transhumanism.

Moravec set forth his futuristic ideas most clearly in his 1998 book Robot: Mere Machine to Transcendent Mind. He notes that by almost any measure society is changing faster than ever before, primarily because the products of technology keep speeding up the process. The radical future that awaits us can be understood by thinking of technology as soon reaching an escape velocity. In the same way that rubbing sticks together in the proper manner will produce ignition, or powering a rocket correctly will allow it to escape the earth’s gravity, our machines will soon escape their previous boundaries. At that time the old rules will no longer apply; robots will have achieved their own escape velocity.

For many of us this is hard to imagine because we are like riders in an elevator who forget how high we are until we get an occasional glimpse of the ground—as when we meet cultures frozen in time. Then we see how different the world we live in today is compared to the one we adapted to biologically. For all of human history culture was secondary to biology, but about five thousand years ago things changed, as cultural evolution became the most important means of human evolution. It is the technology created by culture that is exponentially speeding up the process of change. Today we are reaching the escape velocity from our biology.

Not that building intelligent machines will be easy—Moravec constantly reminds us how difficult robotics is. He outlines the history of cybernetics, from its beginnings with Alan Turing and John von Neumann, to the first working artificial intelligence programs which proved many mathematical theorems. He admits that most of these programs were not very good and proved theorems no better or faster than a college freshman. So reaching escape velocity will require hard work.

One of the most difficult issues in robotics and artificial intelligence is the disparity between programs that calculate and reason and programs that interact with the world. Robots still don't perform as well behaviorally as infants or non-human animals, yet they play chess superbly. So the order of difficulty for machines, from easier to harder, is: calculating, reasoning, perceiving, and acting. For humans the order is exactly the reverse. The explanation probably lies in the fact that perceiving and acting were beneficial for survival in a way that calculation and abstract reasoning were not. Machines are way behind in many areas yet catching up, and Moravec predicts that in less than fifty years inexpensive computers will exceed the processing power of a human brain. Can we then program them to intuit and perceive like humans? Moravec thinks there is reason to answer in the affirmative, and much of his book cites the evolution of robotics as evidence for this claim.

He also supports his case with a clever analogy to topography. The human landscape of consciousness has high mountains like hand-eye coordination, locomotion, and social interaction; foothills like theorem proving and chess playing; and lowlands like arithmetic and memorization. Computers and robots are analogous to a flood that has drowned the lowlands, has just reached the foothills, and will eventually submerge the peaks.

Robots will advance through generational change as technology advances: from lizard-like robots, to mouse-like, primate-like, and human-like ones. Eventually they will be smart enough to design their own successors, without help from us! So a few generations of robots will mimic the four-hundred-million-year evolution marked by the brain stem, cerebellum, mid-brain, and neo-cortex. Will our machines be conscious? Moravec says yes. Just as the distinction between the terrestrial and the celestial was once sacred, so today is the animate/inanimate distinction. Of course, if the animating principle is a supernatural soul, then the distinction remains, but our current knowledge suggests that complex organization provides animation. This means that our technology is doing what it took evolution billions of years to do—animating dead matter.

Moravec argues that robots will slowly come to have a conscious, internal life as they advance. Fear, shame, and joy may be emotions valuable to robots, helping them retreat from danger, reduce the probability of bad decisions, or reinforce good ones. He even thinks there would be good reasons for robots to care about their owners or get angry, but he surmises that generally they will be nicer than humans, since robots don't have to be selfish to guarantee their survival. He recognizes that many reject the view that dead matter can give rise to consciousness. The philosopher Hubert Dreyfus has argued that computers cannot experience subjective consciousness; his colleague John Searle says, as we have already seen, that computers will never think; and the mathematician Roger Penrose argues that consciousness is achieved through certain quantum phenomena in the brain, something unavailable to robots. But Moravec points to the accumulating evidence from neuroscience to disagree. Mind is something that runs on a physical substrate, and we will eventually accept sufficiently complex robots as conscious.

Moravec sees these developments as the natural consequence of humans using one of their two channels of heredity: not the slower biological channel utilizing DNA, but the faster cultural channel utilizing books, language, databases, and machines. For most of human history there was more information in our genes than in our culture, but now libraries alone hold thousands of times more information than our genes do. "Given fully intelligent robots, culture becomes completely independent of biology. Intelligent machines, which will grow from us, learn our skills, and initially share our goals and values, will be the children of our minds."[i]

To get a better understanding of the coming age of robots, consider our history as it relates to technology. A hundred thousand years ago our ancestors were supported by what Moravec calls a fully automated nature. With agriculture we increased production but added work and, until recently, the production of food was the chief occupation of humankind. Farmers lost their jobs to machines and moved into manufacturing, but more advanced machines displaced those workers out of factories and into offices—where machines have put them out of work again. Soon machines will do all the work. Tractors and combines amplify farmers; computer workstations amplify engineers; layers of management and clerical help slowly disappear; and the scribe, priest, seer, and chief are no longer repositories of wisdom—printing and mass communication ended that. Automation and robots will gradually replace labor as never before; just consider how much physical and mental labor has already been replaced by machines. In the short run this will cause panic and a scramble to earn a living in new ways. In the medium run it will provide the opportunity for a more leisurely lifestyle. In the long run, "it marks the end of the dominance of biological humans and the beginning of the age of robots."[ii]

Moravec is optimistic that robotic labor will make life more pleasant for humanity, but inevitably evolution will lead beyond humans to a world of “ex-humans” or “exes.” These post-biological beings will populate a galaxy which is as benign for them as it is hostile for biological beings. “We marvel at the Earth’s biodiversity … but the diversity and range of the post-biological world will be astronomically greater. Imagination balks at the challenge of guessing what it could be like.”[iii] Still, he is willing to hazard a guess: “…Exes trapped in neutron stars may become the most powerful minds in the galaxy … But, in the fast-evolving world of superminds, nothing lasts forever …. Exes, [will] become obsolete.”[iv]

In that far future, Moravec speculates that exes will "be transformed into intelligence-boosting computing elements … physical activity will gradually transform itself into a web of increasingly pure thought, where every smallest interaction represents a meaningful computation."[v] Exes may learn to arrange space-time and energy into forms of computation, with the result that "the inhabited portions of the universe will be rapidly transformed into a cyberspace, where overt physical activity is imperceptible, but the world inside the computation is astronomically rich."[vi] Beings won't be defined by physical location but will be patterns of information in cyberspace. Minds, pure software, will interact with other minds. The wave of physical migration into space will have long given way to "a bubble of Mind expanding at near lightspeed."[vii] Eventually, the expanding bubble of cyberspace will recreate all it encounters, "memorizing the old universe as it consumes it."[viii]

For the moment our small minds cannot give meaning to the universe, but a future universal mind might be able to do so, once that cosmic mind is infinitely subjective, self-conscious, and powerful. At that point our descendants will be capable of traversing in and through other possible worlds. Unfortunately, those of us alive today are governed by the laws of the universe, at least until we die, when our ties to physical reality will be cut. It is possible we will then be reconstituted in the minds of our superintelligent successors or in simulated realities. But for the moment this is still fantasy; all we have for now is Shakespeare's lament:

To die, to sleep;
To sleep: perchance to dream: ay, there's the rub;
For in that sleep of death what dreams may come
When we have shuffled off this mortal coil …

Summary – Our robotic descendants will be our mind children, and they will live in realities now unimaginable to us. For now, though, we die.

______________________________________________________________________

[i] Hans Moravec, Robot: Mere Machine to Transcendent Mind (New York: Oxford University Press, 2000), 126.
[ii] Moravec, Robot: Mere Machine to Transcendent Mind, 131.
[iii] Moravec, Robot: Mere Machine to Transcendent Mind, 145.
[iv] Moravec, Robot: Mere Machine to Transcendent Mind, 162.
[v] Moravec, Robot: Mere Machine to Transcendent Mind, 164.
[vi] Moravec, Robot: Mere Machine to Transcendent Mind, 164.
[vii] Moravec, Robot: Mere Machine to Transcendent Mind, 165.
[viii] Moravec, Robot: Mere Machine to Transcendent Mind, 167.


How Science Can Make Us Immortal

If death is inevitable, then all we can do is die and hope for the best. But perhaps we don’t have to die. Many respectable scientists now believe that humans can overcome death and achieve immortality through the use of future technologies. But how will we do this?

The first way we might achieve physical immortality is by conquering our biological limitations—we age, become diseased, and suffer trauma. Aging research, while woefully underfunded, has yielded positive results. Average life expectancies have tripled since ancient times, increased by more than fifty percent in the industrial world in the last hundred years, and most scientists think we will continue to extend our life-spans. We know that we can further increase life-span by restricting calories, and we increasingly understand the role that telomeres play in the aging process. We also know that certain jellyfish and bacteria are essentially immortal, and the bristlecone pine may be as well. There is no thermodynamic necessity for senescence—aging is presumed to be a byproduct of evolution—although why mortality should be selected for remains a mystery. There are reputable scientists, most notably the Cambridge researcher Aubrey de Grey, who believe that with sufficient investment we could conquer aging altogether in the next few decades.

If we do unlock the secrets of aging, we will simultaneously defeat many other diseases as well, since so many of them are symptoms of aging. Many researchers now consider aging itself to be a disease. There are a number of strategies that could render disease mostly inconsequential. Nanotechnology may give us nanobot cell-repair machines and robotic blood cells; biotechnology may supply replacement tissues and organs; genetics may offer genetic medicine and engineering; and full-fledged genetic engineering could make us impervious to disease.

Trauma is a more intractable problem from the biological perspective, although it too could be defeated through some combination of cloning, regenerative medicine, and genetic engineering. We can even imagine that your physicality could be recreated from a bit of your DNA, after which other technologies could fast-forward your regenerated body to the age of your traumatic death, when a backup file with all your experiences and memories would be implanted in your brain. Even the dead may be resuscitated if they have undergone the process of cryonics—preserving organisms at very low temperatures in glass-like states. Ideally, the clinically dead would be brought back to life when future technology was sufficiently advanced. This may now be science fiction, but if nanotechnology fulfills its promise there is a reasonably good chance that cryonics will succeed.

In addition to biological strategies for eliminating death, there are a number of technological scenarios for immortality which utilize advanced brain scanning techniques, artificial intelligence, and robotics. The most prominent scenarios have been advanced by the renowned futurist Ray Kurzweil and the roboticist Hans Moravec. Both have argued that the exponential growth of computing power in combination with advances in other technologies will make it possible to upload the contents of one’s consciousness into a virtual reality. This could be accomplished by cybernetics, whereby hardware would be gradually installed in the brain until the entire brain was running on that hardware, or via scanning the brain and simulating or transferring its contents to a computer with sufficient artificial intelligence. Either way we would no longer be living in a physical world.

In fact, we may already be living in a computer simulation. The Oxford philosopher and futurist Nick Bostrom has argued that advanced civilizations may have created computer simulations containing individuals with artificial intelligence and that, if they have, we might unknowingly be in such a simulation. Bostrom concludes that at least one of the following must be the case: civilizations never develop the technology to run such simulations; they develop the technology but choose not to use it; or we almost certainly live in a simulation.
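For readers who want the skeleton of the argument: Bostrom's 2003 paper, "Are You Living in a Computer Simulation?", reduces it to a single fraction, the proportion of all human-type observers who are simulated. The sketch below follows his notation approximately (a paraphrase of the paper, not an exact quotation).

```latex
% Fraction of all observers with human-type experiences who are simulated,
% roughly in the notation of Bostrom's 2003 paper:
%   f_p    : fraction of human-level civilizations reaching a posthuman stage
%   \bar N : average number of ancestor-simulations such a civilization runs
%   \bar H : average number of individuals who live before that stage
\[
  f_{\mathrm{sim}}
    = \frac{f_p \,\bar{N}\,\bar{H}}{f_p \,\bar{N}\,\bar{H} + \bar{H}}
    = \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}.
\]
% Unless f_p is near zero (the first option) or \bar N is near zero (the
% second), f_sim is near one (the third): hence the trilemma.
```

If posthuman civilizations are at all common and each runs even a modest number of simulations, the product in the numerator is huge and simulated observers vastly outnumber unsimulated ones.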

If one doesn't like the idea of being immortal in a virtual reality—or the idea that one may already be in one now—one could upload one's brain to a genetically engineered body for the feel of flesh, or to a robotic body for the feel of silicon or whatever materials comprised the robot. MIT's Rodney Brooks envisions the merger of human flesh and machines, whereby humans slowly incorporate technology into their bodies, thus becoming more machine-like and indestructible. So a cyborg future may await us.

The rationale underlying most of these speculative scenarios comes from adopting an evolutionary perspective. Once one embraces that perspective, it is not difficult to imagine that our descendants will resemble us about as much as we do the amino acids from which we sprang. Our knowledge is growing exponentially and, given eons of time for future innovation, it is easy to envisage that humans will defeat death and evolve in unimaginable ways. For the skeptics, remember that our evolution is no longer driven by the painstakingly slow process of Darwinian evolution—where bodies exchange information through genes—but by cultural evolution—where brains exchange information through memes. The most prominent feature of cultural evolution is the exponentially increasing pace of technological evolution—an evolution that may soon culminate in a technological singularity.

The technological singularity, an idea first proposed by the mathematician Vernor Vinge, refers to the hypothetical future emergence of greater-than-human intelligence. Since the capabilities of such intelligences are difficult for our minds to comprehend, the singularity is seen as an event horizon beyond which the future becomes nearly impossible to understand or predict. Nevertheless, we may surmise that this intelligence explosion will lead to increasingly powerful minds for which the problem of death will be solvable. Science may well vanquish death—quite possibly in the lifetime of some of my readers.

But why conquer death? Why is death bad? It is bad because it ends something which at its best is beautiful; bad because it puts an end to all our projects; bad because all the knowledge and wisdom of a person is lost at death; bad because of the harm it does to the living; bad because it causes people to be unconcerned about the future beyond their short lifespan; bad because it renders fully meaningful lives impossible; and bad because we know that if we had the choice, and if our lives were going well, we would choose to live on. That death is generally bad—especially for the physically, morally, and intellectually vigorous—is nearly self-evident.

Yes, there are indeed fates worse than death, and in some circumstances death may be welcomed. Nevertheless, for most of us most of the time, death is one of the worst fates that can befall us. That is why we think suicide, murder, and starvation are tragic. That is why we cry at the funerals of those we love.

Our lives are not our own if they can be taken from us without our consent. We are not truly free unless death is optional.


Daniel Dennett: In Defense of Robotic Consciousness

Daniel Dennett (1942 – ) is an American philosopher, writer and cognitive scientist whose research is in the philosophy of mind, philosophy of science and philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science. He is currently the Co-director of the Center for Cognitive Studies, the Austin B. Fletcher Professor of Philosophy, and a University Professor at Tufts University. He received his PhD from Oxford University in 1965 where he studied under the eminent philosopher Gilbert Ryle.

In his book Darwin's Dangerous Idea: Evolution and the Meanings of Life, Dennett presents a thought experiment that defends strong artificial intelligence (SAI)—intelligence that matches or exceeds human intelligence.[i] Dennett asks you to suppose that you want to live in the 25th century and that the only available technology for that purpose involves putting your body in a cryonic chamber, where you will be frozen in a deep coma and later awakened. In addition, you must design some supersystem to protect and supply energy to your capsule. You would now face a choice. You could find an ideal fixed location that will supply whatever your capsule needs, but the drawback is that you would die if some harm came to that site. Better, then, to have a mobile facility to house your capsule that could move in the event harm came your way—better to place yourself inside a giant robot. Dennett claims that these two strategies correspond roughly to nature's distinction between stationary plants and moving animals.

If you put your capsule inside a robot, then you would want the robot to choose strategies that further your interests. This does not mean the robot has free will; rather, it executes branching instructions so that, when options confront the program, it chooses those that best serve your interests. Given these circumstances, you would design the hardware and software to preserve yourself, and equip the robot with the appropriate sensory systems and self-monitoring capabilities for that purpose. The supersystem must also be designed to formulate plans in response to changing conditions and to seek out new energy sources.
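To make "branching instructions" concrete, here is a minimal Python sketch of the option-scoring Dennett describes: no free will, just a program that rates each branch against its owner's survival interest and takes the best one. The scenario, option names, and numbers are invented for illustration; they are not from Dennett's book.

```python
# Toy sketch of Dennett's "branching instructions": the robot simply
# scores each available option against its owner's survival interest
# and picks the best. All names and numbers here are invented.

def survival_value(option, state):
    """Estimate how well an option serves the capsule's survival."""
    return (option["energy_gain"] * state["energy_need"]
            - option["risk"] * state["threat_level"])

def choose(options, state):
    """At a branch point, take the option that best serves the owner."""
    return max(options, key=lambda opt: survival_value(opt, state))

state = {"energy_need": 0.8, "threat_level": 0.3}
options = [
    {"name": "recharge at power source", "energy_gain": 0.9, "risk": 0.4},
    {"name": "stay hidden",              "energy_gain": 0.0, "risk": 0.1},
    {"name": "fight a rival robot",      "energy_gain": 1.0, "risk": 0.9},
]
print(choose(options, state)["name"])  # -> recharge at power source
```

Dennett's point is that nothing more mysterious than this kind of conditional execution, layered and elaborated over many subsystems, is needed for the robot to behave as an agent with interests.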

What complicates the issue further is that, while you are in cold storage, other robots and who knows what else are running around in the external world. So you would need to design your robot to determine when to cooperate, form alliances, or fight with other creatures. A simple strategy like always cooperating would likely get you killed, but never cooperating may not serve your self-interest either, and the situation may be so precarious that your robot would have to make many quick decisions. The result will be a robot capable of self-control, an autonomous agent that derives its own goals from your original goal of survival, the preference with which it was originally endowed. But you cannot be sure it will act in your self-interest. It will be out of your control, acting partly on its own desires.

Now opponents of SAI claim that this robot does not have its own desires or intentions; those are simply derivative of its designer's desires. Dennett calls this view "client centrism": I am the original source of the meaning within my robot; it is just a machine preserving me, even though it acts in ways I could not have imagined, ways that may even be antithetical to my interests. It follows, according to the client centrists, that the robot is not conscious. Dennett rejects client centrism, primarily because if you follow the argument to its logical conclusion you have to conclude the same thing about yourself! You would have to conclude that you are a survival machine built to preserve your genes, that your goals and intentions derive from them, and that you are therefore not really conscious. To avoid these unpalatable conclusions, why not acknowledge that sufficiently complex robots have motives, intentions, goals, and consciousness? They are like you: survival machines that have evolved into something autonomous through their encounter with the world.

Critics like Searle admit that such a robot is possible, but deny that it is conscious. Dennett responds that such robots would experience meaning as real as your meaning; they would have transcended their programming just as you have gone beyond the programming of your selfish genes. He concludes that this view reconciles thinking of yourself as a locus of meaning, while at the same time being a member of a species with a long evolutionary history. We are artifacts of evolution, but our consciousness is no less real because of that. The same would hold true of our robots.

Summary – Sufficiently complex robots would be conscious.

________________________________________________________________

[i] Daniel Dennett, Darwin's Dangerous Idea: Evolution and the Meanings of Life (New York: Simon & Schuster, 1995), 422-26.


John Searle’s Critique of Ray Kurzweil

John Searle (1932 – ) is currently the Slusser Professor of Philosophy at the University of California, Berkeley. He received his PhD from Oxford University. He is a prolific author and one of the most important living philosophers.

According to Searle, Kurzweil's The Age of Spiritual Machines is an extensive reflection on the implications of Moore's law.[i] The essence of Kurzweil's argument is that smarter-than-human computers will arrive, that we will download ourselves into this smart hardware, and that this will guarantee our immortality. Searle attacks this fantasy by focusing on the chess-playing computer Deep Blue (DB), which defeated world chess champion Garry Kasparov in 1997.
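Moore's law is the observation that transistor counts, and with them rough computing power, double on a fairly fixed cadence. The toy calculation below assumes the commonly cited two-year doubling period (my assumption for illustration, not a figure from Searle's review) and shows why such growth fuels Kurzweil's projections.

```python
# Toy illustration of Moore's-law growth: steady doubling compounds fast.
# The two-year doubling period is assumed for illustration only.

def growth_factor(years, doubling_period=2.0):
    """Multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 40):
    print(f"after {years} years: ~{growth_factor(years):,.0f}x")
# after 10 years: ~32x
# after 20 years: ~1,024x
# after 40 years: ~1,048,576x
```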

Kurzweil thinks DB is a good example of the way computers have begun to exceed human intelligence. But DB's brute-force method of searching through possible moves differs dramatically from how human brains play chess. To clarify, Searle offers his famous Chinese Room argument. If I sit in a room following a program that lets me answer questions posed in Chinese, even though I do not understand Chinese, the fact that I can output correct answers in Chinese does not mean I understand the language. Similarly, DB does not understand chess, and Kasparov was really playing a team of programmers, not a machine. Thus Kurzweil is mistaken if he believes that DB was thinking.
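To make the contrast concrete, here is a minimal sketch of brute-force game-tree search, the general technique Searle has in mind. It is not Deep Blue's actual program (which used specialized hardware and far more sophisticated search and evaluation); it just shows the bare minimax idea: enumerate the moves, recurse, return the best score, with no understanding anywhere in the loop.

```python
# Minimal minimax sketch: the brute-force idea behind chess machines.
# This illustrates the general technique, not Deep Blue's actual program.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustively search `depth` plies ahead and return the best score."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)  # just a number; no understanding involved
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in ms]
    return max(scores) if maximizing else min(scores)

# Toy game: players alternately add 1 or 2 to a running total; the
# maximizer wants it high, the minimizer low, and the "evaluation" is
# simply the total. Nothing in the search knows what the game means.
print(minimax(0, 4, True,
              lambda s: [1, 2] if s < 10 else [],
              lambda s, m: s + m,
              lambda s: s))  # -> 6
```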

According to Searle, Kurzweil confuses a computer's seeming to be conscious with its actually being conscious, something we should worry about if we are proposing to download ourselves into one! Just as a computer simulation of digestion cannot eat pizza, so a computer simulation of consciousness is not conscious. Computers manipulate symbols or simulate brains through neural nets, but this is not the same as duplicating what the brain does. To duplicate what the brain does, an artificial system would have to act as the brain acts. Thus Kurzweil confuses simulation with duplication.

Another confusion is between observer-independent (OI) features of the world and observer-dependent (OD) features of the world. The former are features of the world studied by, for example, physics and chemistry; the latter are things like money, property, and governments, which exist only because there are conscious observers of them. (Paper has objective physical properties, but paper is money only because persons relate to it that way.)

Searle says that he is more intelligent than his dog and his computer in some absolute, OI sense because he can do things his dog and computer cannot. It is only in the OD sense that you could say that computers and calculators are more intelligent than we are. You can use intelligence in the OD sense provided you remember that this does not mean a computer is more intelligent in the OI sense. The same goes for computation. Machines compute analogously to the way we do, but they don't compute intrinsically at all—they know nothing of human computation.

The basic problem with Kurzweil's book is its assumption that increased computational power leads to consciousness. Searle says that the increased computational power of machines gives us no reason to believe they are duplicating consciousness. The only way to build conscious machines would be to duplicate the way brains work, and we don't know how they work. In sum, behaving as if one is conscious is not the same as actually being conscious.

Summary – Computers cannot be conscious.

______________________________________________________________________

[i] John Searle, “I Married A Computer,” review of The Age of Spiritual Machines, by Ray Kurzweil, New York Review of Books, April 8, 1999.
