Why Do People Fear Immortality?

Anyone who reads this blog knows that I think death should be optional. Yet I always encounter resistance when introducing this idea to others. Why is that? There are many reasons. For some, the idea that we should choose whether to live or die contradicts religious beliefs or seems impossible. For others, death seems natural or is what gives life meaning. And fiction influences still others by often portraying immortality as bad. Typically, the claim is that immortality is bad because:

1) You will be bored.
2) You will be unable to die.
3) You will hurt others to attain it.
4) You will lose your humanity.
5) You will turn into a monster.
6) You will destroy the environment.

My guess is that negative views of the future are more exciting, and sell more books and movie tickets, than descriptions of utopias. But think of it this way. About ten generations ago the average life expectancy in most of the world was about thirty years. If someone told you then that they could triple that lifespan, would you voice the above concerns? I doubt it. Maybe people will be bad or bored or destructive because they live longer. But maybe not. Perhaps with age come new interests, kindness, and wisdom. Yes, there are bored, horrific people in the world, but that is not connected with how long they live. Some people are just horrible.

Now suppose we tripled the lifespan again, so that an average healthy lifespan became about 250 years. What would change? I can’t say for sure, but I see no reason to think any of the bad futures would ensue. In fact the knowledge that our lifetimes were relatively long might force us to cooperate better with others and better preserve the environment. If we are going to be alive when the ecosystem is ruined, we might be more likely to care for it.

Of course if we had the option to live forever that would be different. That would create different problems some of which I’ve tried to answer previously. But as long as we could opt out of immortality if we wanted, even objection 2 is overcome. So let’s continue to increase our lifespans and see what happens.


Torture and the Ticking Time Bomb

 (This article was reprinted in the online magazine of the Institute for Ethics and Emerging Technologies, December 16, 2014.)

Why do people torture others? Why do they march others into gas chambers? Because some are psychopaths or sadists or power hungry. Depravity is in their DNA.

Some are not inherently depraved but believe the situation demands torture. If others are evil and we are good, then we should kill and torture them with impunity. Such ideas result from the demonization of others, from a simplistic worldview in which good battles evil. If others torture, they are war criminals; if we torture, our motives are pure. But the world is more nuanced than this. There is good and evil within us all.

The apologists for torture say they are protecting you. They may believe this but that doesn’t make it true. It may be in their interest to wage war, construct secret torture facilities or incarcerate millions in their home country, but it is probably not in yours. You or your children might be doing the fighting or the torturing, you might suffer the reprisals from the policies of the rich and powerful. Dick Cheney will get another deferment.

Moreover, the torture advocates can easily turn you into instruments of their perversion, unlocking the perversion within you, as the Stanford Prison Experiment shows. If the best government jobs program hires mercenaries, then some sign up. But be warned. Those who were caught and photographed at Abu Ghraib were sentenced to prison—scapegoats for those who authorized the policies. Donald Rumsfeld received a book contract.

So do you really feel safer knowing that your corporate-owned government wages continual warfare and tortures around the world? That they incarcerate millions of their own citizens in high-tech dungeons? That thousands languish in solitary confinement for years, some since they were children? That police often kill without repercussions? You may suffer no blowback. Perhaps your nationality, race or socio-economic class will shield you. But the depravity sown may also be reaped. If you are not among the rich and the powerful, the judge will not be lenient. If you are captured in a foreign land, being an American is not a plus.

Now I can construct thought experiments to justify torture or almost anything else. Should I imprison, torture or kill one to save a hundred? A utilitarian calculation says yes, one is less than a hundred. Torture’s defenders invoke such stories. They especially like the ticking time bomb scenario. It goes like this.

There is a ticking time bomb ready to blow up an American city. (If you’ve been to many inner cities in America, you’ll find little left for a bomb to destroy.) The bomb will soon detonate and the man who planted it is in custody. Surely we shouldn’t be squeamish about torturing him to save thousands of lives. Supreme Court justice Antonin Scalia, a horrific human being, recently defended this argument. Scalia is a Catholic in good standing. So was the Grand Inquisitor.

The ticking time bomb story reminds me of Wittgenstein’s insight that we can be bewitched by a picture, seduced by simplistic examples that misrepresent the world. Think about the problems with this hypothetical story. You may kill this man before getting any relevant information, he may know nothing of the plan, there may be no such plan, or he may lie to stop the torture. In such cases your torturing was for naught; it did nothing but corrupt you. The image cheats because it assumes there is a ticking bomb and that you have the man who planted it or knows of it. In real life it never works that way.

In real life it works like this. There might be a bomb or an attack planned, and you may or may not have people in custody who know something relevant. Now how long and how severely should you torture these people? If they don’t talk, is that a sign that they don’t know anything or that you should up the torture? If you have twenty prisoners and are sure that one of them knows something important but you don’t know which one, do you torture all twenty? Should you torture suspects’ children to see if that induces them to give you the information you want? Remember, you don’t know if that will work until you torture their children. (The CIA used this tactic.) How many children do you torture before you stop? In such cases was your torture justified? Was it moral? Or did it engender hatred? Was it counter-productive?

If you are worried about enemies foreign and domestic, why not just torture everyone who is a potential threat—college professors, torture opponents, ACLU members, Buddhist monks, grandmothers, and bloggers who don’t like torture? Perhaps the enemies are among us, as we thought they were during the McCarthy era. Maybe your colleague in torture is a spy. Should you torture him? Should he torture you?

The picture of the ticking time bomb bewitches because it’s a fabrication. In the real world the choice isn’t one person’s pain versus the suffering of thousands; it is the moral affront of torture and its repercussions versus the possibility of finding something useful. Remember too that the story portrays the decision as a one-time emergency choice, while in the real world decisions are made in the context of procedures and policies. That’s why the following questions need to be asked too. Should we have professional torturers who, like medieval executioners, are schooled in their practices? Maybe a torture major in college? Trade conventions showing the latest high-tech torture devices? These are not idle questions; they need to be addressed if we are to proceed.

So I ask. Do you really want to set a precedent of using barbaric practices that appeal to our worst instincts? Do we want to bring forth from human nature that which lies just below the surface of civility? Do you really want to create a torture culture and the people who inhabit it? Do you really want torturers walking among us? I think not.

More on Writing Well

A recent post mentioned the two books that most influenced my writing. But there is another, William Zinsser’s Writing To Learn. I pulled it down from my bookshelf recently and found it marked up throughout. On the inside cover I found the inscription “Read, April 2000.” Somehow I had forgotten about it. Below are a few highlighted passages from the book.


“Writing is thinking on paper. Anyone who thinks clearly should be able to write clearly … ”

“Writing is learned by imitation. I learned to write mainly by reading writers who were doing the kind of writing I wanted to do and by trying to figure out how they did it.”

“… the essence of writing is rewriting.”

“Writing is how we think our way into a subject and make it our own.”

“Putting an idea into written words is like defrosting a windshield: The idea, so vague out there in the murk, slowly begins to gather into a sensible shape.”

“… I draw on two sources of energy that I commend to anyone trying to survive in this vulnerable craft: confidence and ego.”

“… I learned at an early age what has been an important principle to me ever since—that what we want to do we will do well.”

“What finally impels them [writers] is not the work they achieve, but the work of achieving it.”


Why do I care about writing well? Because language is our most advanced form of communication. With it ideas move between minds. This movement opens our minds, and changing minds change the world.

I’ll do my best to take the passion and enthusiasm within and give it artistic form. I cannot say success will follow, but I will try my best. – JGM

Review of Michio Kaku’s Visions: How Science Will Revolutionize the 21st Century

Summary of Michio Kaku’s Visions: How Science Will Revolutionize the 21st Century (1997)

“There are three great themes in science in the 20th century—the atom, the computer, and the gene.” – Harold Varmus, NIH Director

Three centuries ago Newton said that he was like a boy playing on the seashore while a “great ocean of truth lay all undiscovered before me.” Life in Newton’s time was, as Hobbes said, “nasty, brutish, and short.” But Newton unleashed a revolution that he could never have imagined. Within a few generations “the basic laws of matter, life, and computation were … solved.” [3-4]

The forward march continues. “In the past decade more scientific knowledge has been created than in all of human history.” We no longer need to be bystanders in the dance of nature. We are ready to move “from being passive observers of Nature to being active choreographers of Nature.” We are moving from the Age of Discovery to the Age of Mastery. Regarding predictions about the future, Kaku suggests we listen to those who create it. And there is an emerging consensus about the future. [6]

The 3 Pillars of Science – Matter, Life, and Mind

THE QUANTUM REVOLUTION – The quantum revolution spawned the other two revolutions. Until 1925 no one understood the world of the atom, but now we have an almost complete description of matter. The basic postulates are: 1) energy is not continuous but occurs in discrete bundles called “quanta”; 2) sub-atomic particles have both wave and particle characteristics, obeying Schrödinger’s wave equation, which determines the probability that certain events will occur. With the Standard Model we can predict the properties of things from quarks to supernovas. We now understand matter, and we may be able to manipulate it almost at will in this century.

THE COMPUTER REVOLUTION – Computers were crude until the transistor was developed in 1948. Today there are tens of millions of transistors in an area the size of a fingernail. As microchips become ubiquitous, life will change dramatically. We used to marvel at intelligence; in the future we may create and manipulate it.

THE BIOMOLECULAR REVOLUTION – There is a genetic code written on the molecules within the cells—DNA. The techniques of molecular biology allow us to read the code of life like a book. With the owner’s manual for human beings, science and medicine will be irrevocably altered. Instead of watching life, we will be able to manipulate it almost at will.

FROM PASSIVE TO ACTIVE – We are moving from the stage of unraveling nature to the stage of mastering it. We are like aliens from outer space who land and view a chess game. By careful observation we can, over a long time, unravel the rules—but that doesn’t make us grand masters. We have just learned the rules of matter, life, and mind, and now we need to become masters. We are moving from being amateurs to being grand masters.

FROM REDUCTION TO SYNERGY – Quantum technology gave birth to the computer revolution via transistors and lasers; it gave birth to the biomolecular revolution via x-ray crystallography and the theory of chemical bonding. While reductionism and specialization paid great dividends for these disciplines, intractable problems in each have forced them back together, calling for synergy of the three. Computers decipher genes, but DNA research will make possible new computer architecture using organic molecules. Kaku calls this “cross-fertilization,” and it will keep the pace of scientific advance accelerating.

THE WEALTH OF NATIONS – Wealth traditionally belonged to those who had natural resources or lots of capital. But brainpower, innovation, imagination, invention and new technologies will be the key to wealth in the future, and the key technologies that will serve as its engines are those of the three revolutions.

TIME FRAMES FOR THE FUTURE – Now till 2020 – “scientists foresee an explosion in scientific activity such as the world has never seen before.” We will grow organs, cure cancer, etc.

2020-2050 – biotech and physics mature—everything from aging to nanotech, interstellar travel, and nuclear fusion.
2050-2100 – create new organisms, first space colonies.
Beyond 2100 – extend life by growing new organs and bodies, manipulating genes, or by merging with computers. [13]

TOWARD A PLANETARY CIVILIZATION – Where is all this leading? One way to answer this question is to scan the heavens for advanced civilizations. Applying laws of thermodynamics and energy, astrophysicists have classified hypothetical civilizations based on ways they utilize energy—labeled Type I, II, and III civilizations.

Type I – mastery of terrestrial energy: the ability to modify weather, mine the oceans, and extract energy from the planet’s core. Harnessing the energy of the entire planet necessitates planetary cooperation.
Type II – mastery of stellar energy: using the sun to drive their machines and beginning to explore other stars. (The United Federation of Planets in Star Trek is an emerging Type II civilization.)
Type III – mastery of interstellar energy, since they have exhausted their star’s energy.

Energy is available on the planet, its star and its galaxy. Based on a growth rate of about 3%, we can estimate when we might make the transition from one civilization to another.

We expect to become a Type I civilization in a century or two;
A Type II civilization in about 800 years;
And a Type III in about 10,000 years or more.

Right now we are a Type 0 civilization. We use dead plants to power our machines, but by the 22nd century Kaku predicts we will be getting close to a Type I civilization and taking our first steps into space.
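These transition times are just compound growth. Here is a quick Python sketch (my own illustration, assuming Kardashev’s types are spaced roughly ten orders of magnitude apart in energy use, with Type 0 sitting about three orders of magnitude below Type I):

```python
from math import log

def years_to_transition(energy_ratio, growth_rate=0.03):
    """Years for energy use to grow by `energy_ratio` at a compound annual rate."""
    return log(energy_ratio) / log(1 + growth_rate)

# Assumed ratios -- illustrative figures only:
print(round(years_to_transition(1e3)))   # Type 0 -> I: 234 years, "a century or two"
print(round(years_to_transition(1e10)))  # Type I -> II: 779 years, roughly 800
```

The 3% growth rate reproduces Kaku’s estimates almost exactly, which suggests this is the arithmetic behind them.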

THE INVISIBLE COMPUTER – “Long-term the PC and workstation will wither because computing access will be everywhere; in the walls, on wrists, and in ‘scrap computers’ (like scrap paper) lying about to be grabbed as needed.” –   Mark Weiser, XEROX PARC

By the way, if you think this quote is futuristic, investigate Xerox PARC’s (Palo Alto Research Center) great record of prediction. As microchips become smaller, cheaper, and more powerful, the general consensus is that they “will quietly disappear by the thousands into the very fabric of our lives.” They will be in the walls, furniture, appliances, home, car, and in our jewelry. The computer will be more liberating and less demanding than it is today when it enters our environment rather than having us enter its environment. These devices will communicate with each other and tap into the internet, gradually becoming intelligent and anticipating our wishes. By comparison the personal computer is just a computing appliance. A consensus is growing among computer experts: “Computers, instead of becoming the rapacious monsters featured in science fiction movies, will become so small and ubiquitous that they will be invisible, everywhere and nowhere, so powerful that they will disappear from view.”

THE DISAPPEARING PC – Weiser believes that the trend toward invisibility is built into the human psyche. When we master technologies and they become ubiquitous, we cease to be aware of them. Consider electric motors, which were once huge and bulky, demanding entire factories. Now electricity is everywhere and motors are small—more than twenty surround you in a typical car, moving the windows, mirrors, radio dial, etc. Or consider writing. Once an art for scribes who wrote on clay tablets, writing was changed by the invention of paper. Still, paper was precious and used only by royalty; most persons went their entire lives without ever seeing it. Today paper is ubiquitous and most of it has writing on it. Weiser thinks we’ll go to the store to pick up six-packs of computers the way we pick up batteries today. If the usual trend of about fifteen years from the conception of an idea to its entering the market holds, then ubiquitous computing should begin to take hold around 2003. It may take until about 2010 before it really catches the public’s fancy, but by 2020 it should dominate our lives.

THREE PHASES OF COMPUTING – The history of computers is generally thought to be divided roughly into three stages. The first phase was dominated by the huge mainframes. Computers were so expensive that one computer was shared by hundreds of scientists, and humans approached computers like ancient Greeks approached oracles. The second phase began in the early 1970s when computing power was exploding and the size of chips imploding. At Xerox the dream of one person per computer began to take shape; shortly thereafter the first PC was built. But complicated commands and manuals made PCs not very appealing—computers weren’t user-friendly. Thus they created a machine with pictures that you could just point to.

THE THIRD PHASE AND BEYOND – The third phase is ubiquitous computing, where computers are connected and the ratio is now hundreds of computers for each individual. This phase is expected to begin its decline in 2020 as silicon is replaced by new computer architecture. Some experts believe this will lead to the 4th phase, the introduction of AI into computers—especially speech recognition, reasoning, and maybe common sense. But the 5th phase is the self-aware, conscious phase.

MOORE’S LAW – Since 1950 there has been an increase in computer power by a factor of about TEN BILLION! Moore’s law explains this growth: computer power doubles every 18 months. This is a fantastic increase, greater than the transition from chemical explosives to hydrogen bombs. In the past 80 years computing power has increased by a factor of ONE TRILLION!! Thus we can see how the 3rd phase of computing will be quickly upon us, especially when it is driven by economics and physics. The price of microprocessors (MPs) plunges, driving us into the 3rd phase. MPs will be as cheap and plentiful as paper. When chips are so cheap, the incentive will be to put them everywhere. (Right now musical greeting cards with chips have more computer power than computers did in 1950.) In the same way that almost everything has writing on it, everything will have penny processors. In addition to all of this economic incentive pushing us to the 3rd phase, we must understand the power of quantum theory.
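The ten-billion figure follows directly from the doubling rule. A back-of-the-envelope check in Python (the 47-year span from 1950 to the book’s 1997 publication is my choice of endpoints):

```python
def moores_multiplier(years, doubling_months=18):
    """Growth factor if capability doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# 1950 to 1997, when Visions was written: about 47 years of doublings.
print(f"{moores_multiplier(47):.1e}")
```

That yields about 2.7 billion—within a factor of a few of Kaku’s ten billion, so his number corresponds to a doubling time just a bit shorter than 18 months.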

WHAT DRIVES MOORE’S LAW? – The secret behind Moore’s law is the transistor—a valve that controls the flow of electricity—whose dynamics are governed by quantum theory. The original transistors were about the size of a dime and connected by wires. The success of MPs is driven by the reduction in the size of transistors. While we can put 7 million transistors on a chip the size of a postage stamp, this reduction cannot continue forever—because of the limit set by the wavelength of the light beams used to etch the chips. New technology will be needed to continue this reduction.

SENSORS AND THE INVISIBLE COMPUTER – Paul Saffo, director of the Institute for the Future, calls the 3rd phase “electronic ecology” (EE). If the ecology of a forest is the collection of animals and plants that interact dynamically, then analogously we can speak of creatures in the EE. The EE changes whenever a technological advance is made. In the 1980s it was the microchip; in the 1990s it was the Internet, driven by the power of cheap MPs and lasers.

Saffo thinks the 3rd phase will be driven “by cheap sensors coupled to microprocessors and lasers … we will be surrounded by tiny MPs sensing our presence, anticipating our wishes, even reading our emotions. And these MPs will be connected to the Internet.” In this electronic forest our moods will be sensed the way toilets sense our presence. The computers of the future will sense the world around them using sound and the electromagnetic spectrum. Sensors will pick up our voice commands, hidden video cameras will locate our presence and recognize our faces, and smart cars will use radar to detect the presence of other cars.

THE SMART OFFICE AND HOME OF THE FUTURE – The smart office will include TABS, tiny clip-on badges with the power of a PC, allowing doors to open, lights to go on, communication with other employees, and connection to the Internet. PADS, about the size of a piece of paper, will be fully operational PCs—the beginning of smart paper. BOARDS, about the size of TV screens and hung on walls, will be used for teleconferencing, as bulletin boards, interactive TV, etc. And the home will detect bad weather and warn family members, the bathroom will diagnose illness, etc.

THE MIT MEDIA LAB – The director of the Things That Think project, Neil Gershenfeld, imagines a time when inanimate objects will all think. Gershenfeld has discovered that the space around our bodies is filled by an invisible electric field generated by electrons which accumulate on our skin like static electricity, and when we move, this “aura” moves with us. Since we now have sensors that detect this field, the locations of our hands, arms, and legs can be detected. This gives us a powerful new way to interact with computers, better than using a mouse, and it means virtual reality is getting closer. Gershenfeld wants to animate empty space, and he is particularly interested in animating our shoes, from which one watt of energy could easily be drawn. And if we put an electrode in our shoes we could transfer data from shoe to hand—the body is salty and conducts electricity—and when we shake hands we could exchange computer files. This leads us to the Things That Think Lab’s motto:

In the past, shoes could stink.
In the present, shoes can blink.
In the future, shoes will think.

THE INTELLIGENT PLANET – The 3rd phase of computing is creating “a vibrant electronic membrane girding the earth’s surface … [the net] like a dirt road waiting to be paved over into an information superhighway, is rapidly wiring up the computers of the world.”

And when we enter the 4th phase, when AI programs are added to the net, we will communicate with the net as if it were intelligent. We will talk to the net through our wall screens, screens that may have personalities and be confidants, aides, and secretaries simultaneously. As in Disney movies, the teapots and coffee cups will talk to each other and to us.

WHY NO POLICE? – Yet the net today is chaotic—no directories to speak of, no rules, etc. There are many theories of why the net took shape in this haphazard way—most notably the secrecy surrounding the Cold War—but the net has taken off.

HOW THE NET AND OTHER TECHNOLOGIES CAME ABOUT – In 1977 important members of the Carter administration were considering how to protect the President and themselves in the event of all-out nuclear war. It became apparent that the whole proposed plan was a fiasco, leading the Pentagon’s researchers to propose several technologies to compensate, among them teleconferencing, virtual reality (flight simulators), GPS, and email. Scientists, who would have to rebuild the country quickly after an all-out nuclear war, needed something fast—the ARPANET—which became the net.

THE MOTHER OF ALL NETS – In 1844 Morse sent the first telegraph message; in 1969 UCLA and Stanford connected their computers. Ten years later there were only two dozen sites, and by 1981 only 200. It wasn’t until 1990 that critical mass was reached, the net reached the public and began to take off, and the WWW was created in Geneva in 1991. Now the Net grows 20% per QUARTER. This exceeds the growth rate of computers, and we have 10 million servers and 40 million users. (This was written in 1997.) Most experts think the net will be as big as the phone system by 2005 or before, and, with the merger with TV possible soon, 99% of all US homes may be linked to the net in the next few years. Finally, consider this: In 1996 there were 70 million pages of info on the Net; but by 2020 the net should access “the sum total of the human experience on this planet, the collective knowledge and wisdom of the past 5,000 years of recorded history.”
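Twenty percent per quarter is a startling rate once compounded. A one-line check in Python (illustrative):

```python
def compound(rate_per_quarter, quarters):
    """Growth factor after `quarters` quarters at a fixed quarterly rate."""
    return (1 + rate_per_quarter) ** quarters

print(round(compound(0.20, 4), 2))  # 2.07: 20% per quarter more than doubles the net each year
```

In other words, at that pace the net doubles annually—no wonder it outgrew everything else.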

THE HISTORICAL SIGNIFICANCE OF THE NET – The net can be compared to Gutenberg’s printing press of the 1450s. For the first time books could reach a mass audience. Before Gutenberg there were about 30,000 books in all of Europe! By 1500 there were more than 9 million. [Roughly the size of a good-sized university library.] Of course there have been technologies that failed to reach critical mass—picture phones, CB radios—but the net is unlikely to become extinct.

TO 2020: HOW THE NET WILL SHAPE OUR LIVES – The net will allow us to work from home, bring specialized hobbyists from around the world together, enjoy the cyber marketplace, etc. Online bookstores, brokerage firms, banks, and travel agencies will light up the net.

THE MERGER OF TV AND THE NET – In 1996 the FCC and the TV giants agreed to go digital, which doubles the resolution. In short, TVs of the future will be connected to the Net, making TV interactive. But TVs may well be replaced shortly thereafter by … WALL SCREENS – TV screens flat enough to hang like pictures or small enough to fit in your watch.

SPEECH RECOGNITION – Machines can already recognize human speech, but they don’t understand what they are hearing unless one speaks pretty slowly. However most of the basic difficulties should be solved in the next few years. Still, hearing is not understanding and it would take very good AI for real comprehension. This problem may have to wait until the 4th phase of computing, between 2020 and 2050 when we have good AI.

FROM THE PRESENT TO 2020: INTELLIGENT AGENTS – In the meantime, we are working on intelligent agents (IAs)—programs that can make primitive decisions and act as filters, distinguishing between junk and valuable material. IAs may be particularly good at gathering the information we want, saving us the time of searching for it.

2020-2050: GAMES AND EXPERT SYSTEMS – After IA comes HEURISTICS, the branch of AI that tries to codify logic and intelligence with a series of rules. Heuristics would allow us to speak to computerized doctors, lawyers, etc., who would answer technical questions. Expert systems are heuristic programs that contain the knowledge of a number of human experts in order to dissect problems. Consider going to the doctor, where you receive a series of if…then questions that lead to a diagnosis. This task can be done by an expert system. It is easy to see that the comprehensive and methodical nature of an expert system could be better than a human physician.
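The if…then questioning Kaku describes is what AI researchers call forward chaining. A toy sketch in Python; the rules and symptoms are invented for illustration, not taken from any real expert system:

```python
# Each rule: (set of required facts, conclusion to add). Hypothetical medical rules.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}, RULES))
```

Chaining the rules lets one answer trigger the next question, which is exactly the “comprehensive and methodical” quality claimed for expert systems.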

COMMON SENSE (CS) IS NOT SO COMMON – The problem with today’s computers is that they are glorified adding machines. They are marvelous at mathematical logic, but very poor with physics and biology. They have trouble with the concept of time, for example. S and J are twins and S is 20 years old, so how old is J? This is a tough problem for a computer. Or consider this conversation. Human: Ducks fly; Chuck is a duck. Computer: Chuck can fly. Human: Chuck is dead. Computer: Chuck is dead and can fly. That dead things don’t fly is not obvious from the laws of logic.
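The Chuck example illustrates what AI researchers call non-monotonic reasoning: a new fact should be able to retract an old conclusion, which ordinary logic never allows. A crude default-with-exceptions sketch in Python (the representation is my own, purely illustrative):

```python
def can_fly(animal, facts):
    """Default rule: ducks fly -- unless a defeating fact (death) is known."""
    is_duck = ("duck", animal) in facts
    is_dead = ("dead", animal) in facts
    return is_duck and not is_dead

print(can_fly("chuck", {("duck", "chuck")}))                     # the default applies
print(can_fly("chuck", {("duck", "chuck"), ("dead", "chuck")}))  # the new fact defeats it
```

The exception has to be written in by hand—which is precisely why encoding common sense rule by rule is so laborious.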

THE ENCYCLOPEDIA OF COMMON SENSE – Some have advocated creating an EOCS containing all the rules of CS. If CS programs are loaded into computers, intelligent conversation becomes more possible. For example, computers need to know things like: nothing can be in two places at the same time; when humans die they aren’t born again; dying is generally undesirable; animals don’t like pain; time advances at the same rate for everyone; when it rains people get wet; etc. As of 1997, a project to give computers CS had accumulated 10 million assertions. But the task is extraordinarily difficult. For example, it took three months for a computer programmed with CS to understand “Napoleon died on St. Helena. Wellington was saddened.” In short, AI, in whatever form it takes, has a long way to go.

A WEEK IN THE LIFE IN 2020 – A face on the wall says wake up. As you walk to the kitchen the appliances sense your presence: the coffee starts brewing and bread is toasted while your favorite Bach concerto plays softly. Molly has printed out, by scanning the web, a version of the paper focused on what you are especially interested in, and as you leave the kitchen it reminds you that you need milk. Before you leave you tell the robot to vacuum. You drive to work in your hybrid smart car, whizzing by a toll booth that scans it. At work you insert your wallet card into the computer to pay your bills, have a few video conferences, and head home. At home you connect with your virtual doctor, who tells you that he will zap out a few cancer cells with smart molecules so you don’t get cancer in ten years.

BOTTOM UP OR TOP DOWN? – MIT’s famed AI lab is “a high-tech version of Santa’s workshop.” Kaku begins by focusing on research that is interested not in creating machines that play chess but in INSECTOIDS and BUGBOTS, small insect-like creations with the ability to learn by bumping into things, crawling around, etc. The idea is that while insects can’t play chess, they get along quite well in your home.

This biology-based approach is termed the bottom-up school. The inspiration for this school is evolution, which has produced complexity from simple structures. In short, the idea is that “learning is everything; logic and programming are nothing.” [This seems to overstate the case even in terms of evolution. We learn from our environment, but our programming—the cognitive structures in place at birth—is clearly essential.] Still, AI may be immensely enriched by interplay with the insights of the biomolecular and quantum revolutions. Kaku mentions how many physicists have moved from superstring theory and quantum gravity to brain research as an example of the interplay between the three big revolutions.

On the other side of the debate is the top-down school. The digital computer provides their model of thinking machines: “They assumed that thinking … would emerge fully developed from a computer.” Their strategy is to program the rules of logic and intelligence directly into the machine, along with subroutines for speech, vision, etc., producing an intelligent robot. Of course this is based on the idea that intelligence can be simulated by a Turing machine. Kaku argues that the problem here is that the top-down school underestimated “the enormity of writing down the complete road map of human intelligence.” The two camps are often at odds: one argues that bottom-up robots may get from here to there but won’t know what to do when they arrive, while the other replies that top-down computers play chess but don’t know how to take a walk. Most feel that some combination of the two approaches will work best.

PREPROGRAMMED ROBOTS – Since it may be twenty years or more until the creations at the MIT lab enter the marketplace, what we will see immediately are “increasingly sophisticated industrial robots.” From 2020 to 2050 we should enter the 4th phase of computing, “when intelligent automatons begin to walk the earth …” Beyond 2050 we will enter the 5th phase of “robots of consciousness and self-awareness.”

To better understand all of this, consider the difference between industrial or remote-controlled robots—just preprogrammed windup toys—and the more sophisticated versions to come. Kaku describes the evolution of these technologies.

2000-10 – robots develop into reliable helpers in factories, hospitals, and homes (Volks-robots).
2010-20 – these robots are replaced by machines that learn from their mistakes.

2020-2050 ROBOTICS AND THE BRAIN – One of the big problems in robotics is the problem of pattern recognition. Robots can see but don’t understand what they see. The reason we have so much trouble duplicating pattern recognition is that our understanding of our brains is primitive.

We know that our brain is layered, which reflects its evolutionary development. Nature preserves its older forms, creating a museum of our evolutionary history. The first layer of the brain is the “neural chassis,” controlling basic functions like respiration, heartbeat, and blood circulation; it consists of the brain stem, spinal cord, and midbrain. The second layer is the R-complex, controlling aggression, territoriality, and social hierarchies, the so-called “reptilian brain.” Surrounding this is the limbic system, found in mammals, which controls emotions, social behavior, smell, and memory; this was necessary because mammals live in complex social groups. Last is the neocortex, which controls reason, language, spatial perception, and other higher functions. Humans have wrinkles on the cerebral cortex, increasing its surface area.

Today’s robots possess only the first layer of brain, so there is a long way to go. But experts such as Miguel Virasoro, one of the most famous physicists in the world, believe that microchips will eventually approach the computing power of human brains. Right now the Cray-3 processes at 100 million bits per second, about the speed of a rat’s brain. Estimates are that the human brain calculates 1000 times faster than this, but if Moore’s law continues to hold, supercomputers should match humans around 2020 and desktops by 2040. Virasoro objects to this whole top down approach, arguing that the brain is not a Turing machine; it’s not even a computer. Thus faster computers will not duplicate human brains.

Virasoro also argues as follows: The brain has about 200 billion neurons which fire 10 million billion times per second. Nerve impulses travel very slowly—300 feet per second—but the complexity of the brain’s neural connections compensates. Since each neuron is connected to 10,000 other neurons, the brain is a parallel processor which carries out trillions of operations per second and yet is powered by the energy of a lightbulb—that’s efficiency. Computers calculate at nearly the speed of light but perform one calculation at a time. The brain calculates slowly but performs trillions of computations per second. And while a brain can have a part of itself damaged and still function, a Turing machine can be destroyed by the loss of a single transistor. Since the brain is a complex neural net, the bottom up approach is the only one that will work.
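These figures can be sanity-checked with simple arithmetic. A minimal sketch, using only the numbers quoted above; the per-connection rate it derives is my illustration, not a figure from the book.

```python
# Rough throughput arithmetic behind the brain-vs-computer comparison,
# using only the figures quoted above.
neurons = 200e9          # ~200 billion neurons
fan_out = 10_000         # each neuron connects to ~10,000 others
connections = neurons * fan_out
print(f"{connections:.0e}")      # 2e+15 connections

firing_events = 1e16     # "10 million billion times per second"
rate_per_connection = firing_events / connections
print(rate_per_connection)       # 5.0 events per connection per second
```

The point of the sketch: even at a leisurely five events per connection per second, two quadrillion connections yield the quoted 10^16 events per second, which is why slow signals plus massive parallelism can rival fast serial machines.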

TALKING ROBOTS – NETalk is a neural network that has learned to speak English almost from scratch. Rather than using the top down approach and stuffing a program with dictionaries, phonics rules, exceptions to grammar rules, etc., a simple neural net was created that learned from its mistakes. While the difference between real and model neurons is immense, the fact that a simple neural net can speak suggests “that perhaps human abilities can be simulated by electronics.”

ROBOTICS MEETS QUANTUM PHYSICS – There has been a migration from quantum physics to brain research. Physics is different from biology: the former looks for simple, elegant solutions while the latter is messy, inelegant, and full of dead ends. The former is based on universal laws; the latter has only evolution as its universal law. Physicists wonder if there are fundamental principles behind AI, as there are in physics, which led to questions like “Can a neuron in the brain be treated like an atom in a lattice?”

Thus, while the top down school held that mind was a complicated program inserted into a computer, the bottom up school suggested that mind arose “from the quantum theory of mindless atoms, without any programs whatsoever!” The founding father of the neural net field, John Hopfield, summarized it as follows: Individual atoms in a solid can exist in a few discrete states—spin up or down—and neurons similarly either fire or don’t fire. In a quantum solid a universal principle states that atoms are arranged so that the energy is minimized. Might neural net circuits minimize their energy? If the answer is yes, the unifying principle behind neural nets is: “all the neurons in the brain would fire in such a way as to minimize the energy of the net.” Hopfield also found that neural nets behave a lot like brains. For example, even after the removal of neurons, the neural nets behaved pretty much the same; they seemed to have memories. The strangest finding was that neural nets began to dream.
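Hopfield’s energy-minimization idea is concrete enough to demonstrate in a few lines. The sketch below is a minimal Hopfield-style network in plain Python (my illustration, not code from the book): a pattern is stored with a Hebbian rule, a corrupted copy is presented, and deterministic update sweeps drive the state downhill in energy until the stored memory is recovered.

```python
def train(patterns):
    # Hebbian rule: strengthen w[i][j] when units i and j agree across patterns.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def energy(w, x):
    # Hopfield energy: E = -1/2 * sum_ij w[i][j] * x[i] * x[j]
    n = len(x)
    return -0.5 * sum(w[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def recall(w, x, sweeps=5):
    # Deterministic update sweeps; each flip can only lower (or keep) the energy.
    x = list(x)
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * x[j] for j in range(n))
            x[i] = 1 if h >= 0 else -1
    return x

stored = [1, -1, 1, -1, 1, -1, 1, -1]   # the "memory"
w = train([stored])
noisy = list(stored)
noisy[0] = -noisy[0]                     # corrupt one unit

restored = recall(w, noisy)
print(restored == stored)                        # True: the net recalls the memory
print(energy(w, restored) < energy(w, noisy))    # True: recall lowered the energy
```

This is the sense in which “all the neurons … fire in such a way as to minimize the energy of the net”: recall is just relaxation into the nearest energy minimum, and extra, unintended minima of the same energy function are what Hopfield called “spurious memories.”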

WHAT ARE DREAMS? – Hopfield believes “dreams are fluctuating energy states in a quantum mechanical system.” And just as we need to dream, especially after exhausting experiences, if neural nets are filled with too many memories they malfunction from overload and begin to recall previously learned memories. Moreover, ripples begin to form that don’t correspond to memories at all, but rather to fragments of memories pieced together. These “spurious memories” correspond to dreams. A slight disturbance to this system allowed it to again settle down to a state of deep energy minimization—sleep? After several episodes of dreaming and sleeping, the system “awakens,” i.e., stops malfunctioning. Kaku believes the top down and bottom up schools will merge in forty years or so, creating truly intelligent robots.

CAN ROBOTS FEEL? – Kaku thinks we’ll find it easier to interact with robots if they have some emotions and common sense. It may seem counterintuitive that clumps of metal can feel, but it isn’t impossible to do. He thinks providing a robot with the capacity to care for its master, to want to make him/her happy, will increase the commercial success of robots. And this is a kind of love. Jealousy, anger, laughter, and fear are similarly worthwhile. But does this mean that robots are self-aware?

BEYOND 2050: ROBOT CONSCIOUSNESS – By 2050 AI systems should have a modest range of emotions, and the internet will be a magic mirror, accessing the entire database of human knowledge and talking and joking with us. But are these AI systems conscious? The question is tough, inasmuch as we don’t know what consciousness is, but “many of the scientists who have dedicated their lives to building machines that think feel it’s only a matter of time before some form of consciousness is captured in the laboratory.” Of course neural nets have already produced thinking machines—you and me. This isn’t all that surprising since consciousness is an “emergent” property, something that “happens naturally when a system becomes complex enough.” There are also theorists like Daniel Dennett, Herbert Simon, and Marvin Minsky who believe they have explained consciousness in some way. But all of them seem to share the view that consciousness arises out of the complex interactions of unconscious systems.

PET scans seem to bear these thinkers out. Consciousness seems to be spread out over the brain, like a dance between the various parts, and thus there is only the illusion that there is a center for consciousness. [A theme in Buddhism 2500 years ago.] Others believe that the brain generates lots of thoughts simultaneously and that consciousness is just the thoughts that “win out.” At the other extreme are those who think robots will never be conscious, thinkers like Penrose, Searle, and McGinn. Kaku replies to the mysterians as follows:

“The problem with these criticisms is that trying to prove that machines can never be conscious is like trying to prove the nonexistence of unicorns.” Even if you could show that there are no unicorns in Texas or in the solar system, it is always possible to find one somewhere. “Therefore, to say that thinking machines can never be built has, to me, no scientific content.” For now the question is undecidable, and if and when we build them, then we can decide. Kaku thinks that consciousness exists in degrees and will be created that way.

DEGREES OF CONSCIOUSNESS – The lowest levels monitor the body and environment. Since computers perform self-diagnostics and print error messages, they probably fall into this category. Plants are probably a little higher up, since they must react to changes in the environment. Machines with vision are probably on this scale. The next level is “the ability to carry out well-defined goals.” The future Mars probe is an example. Higher still is the entire animal kingdom: goals are fixed and plans are implemented to carry them out. Still, most behavior is probably hard-wired. And this level is probably the dominant one for humans too; most of our time is spent thinking about survival and reproduction. Of course there are thousands of levels here. The highest level of consciousness is “the ability to set one’s goals, whatever they may be.” If robots can do this, they are conscious. But what if our goals and our creations’ goals conflict?


What happens after 2020, the end of the microchip, when quantum computers become a reality? One possibility not investigated thus far is bionics. Can we interface computers directly with the brain? First, we need to show that neurons can grow on silicon and then connect them with the neurons of living beings, such as human neurons. Finally, we would have to decode the neurons that make up our spinal cord. In 1995, scientists at the Max Planck Institute did just this with a leech neuron and a silicon chip; they welded hardware to wetware. The way has been paved “to developing silicon chips that can control the firing of neurons at will, which in turn could control muscle movement.” The neurons of baby rats have also been grown on silicon surfaces. At Harvard Medical School they have already begun to build a bionic eye, which should be able to restore vision to the blind and help ten million Americans. We should soon be able to make eyes better than the ape eyes we have, eyes that could see in the ultraviolet and infrared. Of course we could do similar things to the arms, legs, etc., allowing for superhuman feats, but then we would need a superhuman skeleton as well.

This merging of mind and machine will take, according to Ralph Merkle at Xerox PARC, a human-genome-project-scale effort to map the brain neuron by neuron, at a cost of $340 billion. The technology to begin will probably have to wait another decade. As for the distant future, Kaku believes that all three revolutions will merge. Quantum technology will provide transistors smaller than neurons. The computer revolution will give us neural nets as powerful as brains. And the biomolecular revolution will give us the power to replace our neural nets with synthetic ones, “thereby giving us a form of immortality.”

Since evolution favors organisms that are able to survive, a human/mechanical blend may be the best way to survive. And if we map every neuron in the brain, can we then give our brains immortal bodies? Kaku thinks we will gradually transfer our consciousness to robotic bodies, that this is the next step in evolution, as does Marvin Minsky, who thinks of this as “unnatural selection,” a process of deliberately replacing humans.

Minsky says robots will inherit the earth and they will be our children. “We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called evolution. Our job is to see that all this work shall not end up meaningless.”

Review of Hans Moravec’s Robot: Mere Machine to Transcendent Mind (1999)


“… change sculpted our universe and our society …” By almost any measure society is changing faster than ever, “a statement true for at least half a millennium, and mostly true since the agricultural revolution and the invention of writing over five thousand years ago.” This accelerated pace continues because the products of technology further speed up the process; humans struggle to keep pace: “the lessons of a technical education are often obsolete before the education is complete.”

If you rub wood it will get warm but will soon cool down—a rule for our ancestors—unless you rub it so hard it ignites—escape velocity (EV). Similarly, our machines will achieve escape velocity and the old rules will no longer apply. In Moravec’s (M) words: “the wood is already smoldering.”

M says we are like riders in an elevator who forget how high we are until we get an occasional glimpse of the ground—as when we meet cultures frozen in time. Then we see how different our world is compared to the one we adapted to biologically. For all of human history culture was secondary to biology, but about 5000 years ago things changed, as cultural evolution in the form of memes became far and away the most important means of evolution for humans. Today we are reaching the EV from our biology. But the world we will produce will be “unlike the villages, fixed and nomadic, in which human behavior evolved, …”

There is a mismatch between “our stone-age biology and our information-age…” lives. This is apparent when you consider how long it takes to become specialized in the difficult and esoteric work we do. Still, despite our misgivings about contemporary life, few of us would be prepared to be stone-age forest dwellers. Besides, M argues that we have substitutes for our tribal group in competitive sports, outdoor vacations, and BBQs. Certainly some groups reject all this, yet the ubiquitous nature of industrial society and the benefits it offers—medicine, food, clothing, etc.—seem to be preferred by most. The disenchanted “are outvoted by the demands of billions for food, housing, and civilized comforts.”

As machine productivity rises, humans are left with less physical labor and more leisure time—which M says we can use to satisfy our hunter-gatherer instincts if we want. Furthermore, a real green revolution will be possible when we are sufficiently wealthy and technology is sophisticated enough to move, for example, production to outer space. M has done a detailed analysis showing that pollution increases as wealth increases, since personal wants outweigh communal concerns—but only to a point. When wealth reaches a certain level, communal concerns become affordable and people pay for them: “wealth increases the options available to individuals.”


M begins by reminding us of how difficult robotics is. He reiterates the point that he has made several times: humans find glancing at a chessboard and seeing the pieces easy, while it takes much thought to make a good move. Machines have the opposite problem.

Cybernetics is “the science of control and communication in the animal and the machine.” Scientists have made artificial nervous systems and insect robots under this banner. The field began in the 1940s, but by the 1960s challenging problems like building reading machines were stumping the field. The development of computers suggested a different approach to thinking machines. Alan Turing, whose computer cracked the German code in WWII, later speculated about the development of intelligent machines. John von Neumann picked up where Turing left off, and by the 1950s the idea of artificial intellects was in the air. The term “AI” was coined in 1956, and “Logic Theorist,” the first working AI program, proved many of the theorems in Russell & Whitehead’s Principia Mathematica. But most of these programs were not very good and proved theorems no better or faster than a college freshman. Moreover, there was no common sense in these programs.

The first attempts at AI in the 1970s added arms and eyes to the robots, but they picked things up less well than a six-month-old. M notes that this disparity between programs that calculate and reason versus programs that interact with the world “remains to this day.” Robots still don’t perform as well behaviorally as infants or non-human animals, but they play chess superbly. So the order of difficulty for machines, from easier to harder, is calculating, reasoning, perceiving, and acting. For humans the order is exactly the reverse. The explanation most likely lies in the fact that perceiving and acting were beneficial for survival in a way that calculation and abstract reasoning were not. Machines are way behind in many areas and yet catching up: “In less than fifty years, inexpensive computers will match and exceed—in raw info processing power… the human brain.” But can we program them to intuit and perceive like humans?

COCKROACH RACE – Cybernetics tries to copy the nervous system by imitating its physical structure. After slowing down in the 60s, the discipline was reinvigorated by neural net technology when “computers became powerful enough to simulate interesting assemblies of neurons.” But copying a brain is difficult because our present instruments for examining it are relatively primitive. M believes that both top down and bottom up methods will be used to create robots that interact with the world, thus “recapitulating the evolution of biological minds …” M goes into detail “about the slow buildup of their [robots’] arrival.”


Since computers have far to go to match humans, M estimates future trends on the basis of analogy, inference, and extrapolation. Computer vision can now follow roads but needs to improve tenfold [to 1,000 MIPS (million instructions per second)] for reasonable 3D spatial awareness, and another tenfold to find 3D objects in clutter reasonably fast. There are lots of other technologies that seek to replace brains and eyes, like handwriting and speech recognition.

The key question is how much more computer power is needed to reach human performance. M suggests we relate nerve volume to computation. For example, the neural assemblies of the retina can be compared to what has happened so far with robot vision. The human retina has about 100 million neurons and processes about 10 images per second. Based on his extrapolation, M estimates that it will take 100 million MIPS of computing power to match human sight functionality.
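The extrapolation can be sketched as one line of arithmetic. The constants below are assumptions for illustration (numbers of roughly this size appear in Moravec’s own calculation); only the 100-million-MIPS conclusion is from the text.

```python
# Back-of-the-envelope version of the retina extrapolation.
# Assumed, for illustration: matching the retina's image processing
# takes ~1,000 MIPS, and the brain is ~100,000 times the retina's size.
retina_mips = 1_000
brain_to_retina_ratio = 100_000
brain_mips = retina_mips * brain_to_retina_ratio
print(f"{brain_mips:,} MIPS")   # 100,000,000 MIPS, the figure the text arrives at
```

The logic is simple scaling: cost out a small, well-understood neural assembly in MIPS, then multiply by how much bigger the whole brain is.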

Today’s most powerful supercomputers can do a few million MIPS; that is, they are within a factor of 100 of having the power to mimic a human mind. Of course such computers would have to cost about $1,000 for them to make economic sense. M extrapolates that “computers suitable for humanlike robots will appear in the 2020s.”
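The “factor of 100” claim translates directly into a timeline. A small sketch, assuming power doubles yearly (the rate the review cites for post-1990 machine research):

```python
import math

gap = 100             # supercomputers are within a factor of 100 (from the text)
doubling_years = 1.0  # assumed: one doubling per year
years_to_close = math.log2(gap) * doubling_years
print(round(years_to_close, 1))  # 6.6 years for the fastest machines to close the gap
```

Top machines close the raw gap in under a decade; waiting for that power to drop to roughly $1,000 adds another decade or two, which is consistent with humanlike robots arriving in the 2020s rather than the 2000s.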

M is unconcerned over claims that the exponential growth of computing power will subside. He suggests a number of possibilities/ideas/technologies that will overcome difficulties, including single-electron transistors, quantum dots, quantum interference logic, molecular computers, and of course quantum computers. M believes that humanlike robots will arrive without these more exotic techniques. (M will detail in a later chapter how the evolution of robot minds will parallel the evolution of human minds but be 10 million times as fast, with robots achieving “humanlike intelligence in about forty years.”)

M counters the critics by arguing that the next fifty years will see change happening more quickly than the last fifty. Why? First, the growth and competitiveness of the computer industry itself. Second, machine research has only since about 1990 had the funding necessary to double power every year. The results: machine-read text, speech recognition, robots driving cars, crawling on Mars, and composing music. From the inside robots will be machines; from the outside they will appear intelligent.

M draws an analogy to topography. The human landscape of consciousness has high mountains like hand-eye coordination, locomotion, and social interaction; foothills like theorem proving and chess playing; and lowlands like arithmetic and memorization. Computers are analogous to a flood which has drowned the lowlands, has just reached the foothills, and will eventually submerge the peaks.

Turing anticipated the development of machine minds a half-century ago. He thought machines would pass his test by about 2000, and though they do for certain restricted-topic tests, it will be a few more decades until they can truly be said to have passed the Turing test. In his famous paper “Computing Machinery and Intelligence,” Turing responded to nine classes of objections to the claim that machines can be made to think. They were:

1) Theological – thinking comes from souls, machines don’t have souls, machines can’t think.
2) The “Heads-in-the-Sand” Objection – thinking machines aren’t possible because the consequences would be terrible.
3) Mathematical – mechanical reasoning has limits that human reasoning doesn’t.
4) The Argument from Consciousness – machines have no inner experience to give meaning to what they do.
5) Arguments from Various Disabilities – machines can’t be kind, moral, joyous, etc.
6) Lady Lovelace’s Objection – computers can only do what they’re programmed to do.
7) The Argument from Continuity in the Nervous System – nerves respond to tiny signal differences; computers work in fixed-size steps.
8) The Argument from Informality of Behavior – it isn’t possible to specify what a computer should do in every possible situation a human might be in.
9) The Argument from ESP – humans sense things that deterministic computers can’t.

Turing (T) replies:

1) T was an atheist who rejected religious explanations. But for the sake of argument, T asked if a God couldn’t put a soul in a machine if she wanted to. [He assumes the answer is yes.] Furthermore, the soul is a name for subjective consciousness. The mechanistic idea is that consciousness arises from patterns of brain activity; no dualism necessary. This suggests that mechanisms could be made to produce consciousness. M tries not to settle the metaphysical issues but suggests that when robots interact with us and appear intelligent we will recognize them as such.

2) T thought arguments against AI came from a fear of being replaced. T suggested that the way around this problem would be to download our brains into robotic bodies.  M argues that we do fear other “tribes” encroaching on our territory but robots will resemble us, acquire our values, and share our goals: “antisocial robot software would sell poorly…” We should look upon those who will inherit the world from us as our children.

3) The math objection revolves around Gödel’s theorem: axiomatic mathematical systems contain true statements that cannot be deduced. Analogously, T showed that universal Turing machines confront the same self-referential paradoxes. Since machines can be stumped by Gödel questions that humans can answer, machines have limits that humans don’t. T thought this painted machines as rigid deterministic mechanisms, and that this characterization was true only for the short term.

4) T pointed out that we don’t know if other persons have subjective experiences; we just assume they do because they act conscious, and we’ll reach similar conclusions regarding intelligent machines.

5) T considered this a generalization from the experience of intelligent machines of the day. Fifty years later things look a bit different. And fifty years hence?

6) This is obviously false—programs do lots of unexpected things often producing solutions that would have taken humans lifetimes.

7) Computers count rather than measure, and some feel that this discontinuity has less potential than the continuity of nervous systems. T believed this was human hubris.

8) Sophisticated machines will react sometimes by rules and sometimes unexpectedly.

9) Do we need to say anything?


There are only a few thousand robots—some over ten years old—and they aren’t very advanced. M predicts that the earliest general-use robots may be the robot vacuum cleaner, followed by robots that dust, pick up clutter, mow lawns, etc. If successful, this should bring about a spiral effect of more research and better robots. Robots are to physical work what computers are to paperwork, and, since there is more of the former than the latter, M predicts that robots will eventually be much more numerous than computers.

First gen robots – 2010 – 3000 MIPS (lizard scale) – Distinguishing feature – general purpose perception, manipulation, and mobility. They will do light mechanical work, food preparation, household tasks, and car tune-ups.

Second gen robots – 2020 – 100,000 MIPS (mouse scale) – accommodation learning. They will adjust to an action’s past effectiveness, that is, use genetic algorithms. M thinks they will find jobs and become the largest industry on earth.

Third gen robots – 2030 – 3,000,000 MIPS (monkey scale) – world modeling. They will learn much faster than 2nd gen robots. Most importantly, a third-gen robot will be able “to simulate its world in real time.” They will create simple programs of their own and do cool stuff.

Fourth gen robots – 2040 – 100,000,000 MIPS (human scale) – reasoning. The bottom up method slowly transfers perceptual and motor faculties, while AI transfers reasoning to robots. They will understand language: “hey, the water is still running in the bathtub, get your little mechanism up there.” And of course, they will be smart enough to design their own successors—without us!
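The “accommodation learning” of the second generation, adjusting to an action’s past effectiveness via genetic algorithms, can be illustrated with a toy. Everything below (the bit-string “behaviors,” the fitness function, the parameters) is an invented stand-in, not Moravec’s design: behaviors that scored well in the past are kept and recombined, and effectiveness climbs without anyone programming the solution.

```python
import random

def fitness(behavior):
    # Toy score standing in for "past effectiveness": count the 1-bits.
    return sum(behavior)

def evolve(pop_size=20, genes=16, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # keep the most effective half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genes)
            child = a[:cut] + b[cut:]       # crossover of two good behaviors
            i = rng.randrange(genes)
            child[i] ^= 1                   # random mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near the maximum of 16
```

The design choice worth noting is elitism: the best behavior is never discarded, so measured effectiveness can only go up, which is the sense in which such a robot “adjusts to an action’s past effectiveness.”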

So four generations of robots will mimic the 400-million-year evolution marked by the brain stem, cerebellum, midbrain, and neocortex. But will these things be conscious? Have emotions? M knows it upsets many to say yes. Just as the distinction between the terrestrial and the celestial was once sacred, so the animate/inanimate distinction is thought to be sacred. Of course if the animating principle is a supernatural soul, then the distinction remains. But our current knowledge suggests that complex organization provides animation. We are, in effect, “inspiriting the dead matter around us.”

Naturally robots will manifest a conscious/internal life as they advance. Fear, shame, and joy may be emotions valuable to robots so as to retreat from danger, reduce the probability of a future bad decision, or reinforce good decisions. M thinks there would be good reasons for robots to have platonic love for their owners and, since robots don’t have to be selfish to guarantee their survival, they will be nicer than most humans. Anger is more complicated, but M thinks it might be necessary to have robots capable of anger.

M notes that many reject the view that dead matter can give rise to consciousness. Philosopher Hubert Dreyfus argues that computers can’t “capture the ineffable intuitive subconscious,” while his colleague John Searle says that computers “may simulate thought, but will never actually think meaningfully.” Roger Penrose argues that consciousness is achieved “through gravitational collapse of the quantum wave function in individual neurons.” But M points to the accumulating evidence from neuroscience to disagree.

As robots become increasingly proficient, they could do almost all our work and support us from activity in the solar system: “leaving behind a nature preserve subsidized from space.” M sees this as a natural development, with humans using one of their two channels of heredity: not the slower biological DNA, but culture—books, language, and machines. For most of human history there was more info in our genes than in our culture, but now libraries alone hold thousands of times more info than genes.

Given fully intelligent robots, culture becomes completely independent of biology. Intelligent machines, which will grow from us, learn our skills, and initially share our goals and values, will be “the children of our minds.”


100,000 years ago, our ancestors were supported by fully automated nature. With agriculture, we increased production but added work. Until recently almost everyone was a farmer, and most worked producing food. Farmers lost their jobs to machines and moved to manufacturing; more advanced machines then moved displaced farmers out of factories and into offices, where machines have put them out of work again. Soon machines will do all the work. As tractors and combines amplify farmers, computer workstations amplify engineers, resulting in productivity previously undreamt of. In the office, layers of management and clerical help slowly disappear. The scribe, priest, seer, and chief are no longer the repository of the sage’s wisdom—printing and mass communication ended that. There are no telephone operators to speak of, and most queries are handled by voice recognition. Text-reading machines sort mail, and phones and cash registers have been replaced by voice mail and ATMs. “Advancing automation and a coming army of robots will displace labor as never before.” In the short run this causes panic and a scramble to earn a living in new ways. In the medium run it provides the opportunity for a more leisurely lifestyle. In the long run, “it marks the end of the dominance of biological humans and the beginning of the age of robots.”

So when robots become sufficiently advanced, there will be no work. M thinks this is good—less stress, urban strife, tribalism, war, etc. Humans will be able to live wherever they want—probably not cities—as robotic workers care for all our needs. The robots will need to be constructed to enjoy serving us, on the model of the social insects, for example. And it will be prosperity that eliminates most instances of aggression. We will then become Exes (ex-humans, or post-biological beings) and explore outer space.


M maintains that the future will actually exceed the imaginings of Verne, Franklin, Da Vinci, etc. Exes will compete and eventually “be transformed into intelligence-boosting computing elements. … Physical activity will gradually transform itself into a web of increasingly pure thought, where every smallest interaction represents a meaningful computation.” M thinks high-energy physics has only scratched the surface and we “may learn to tailor spacetime…” Exes will arrange spacetime and energy “into forms best for computation,” with the result that “the inhabited portions of the universe will be rapidly transformed into a cyberspace, where overt physical activity is imperceptible, but the world inside the computation is astronomically rich.” Beings won’t be defined by physical location but will be patterns of info in cyberspace. Minds, pure software, will interact with other minds. The wave of physical migration into space will have long given way to “a bubble of Mind expanding at near lightspeed.” [165]

As for the “state of Mind,” M sees the boundaries of personal identity as breaking down but still remaining. Ineffective thoughts will still be weeded out by a Darwinian evolution. Exes will have more future, since they will “cram more events” into physical time. And cyberspace will be “much bigger and longer lasting than the raw spacetime it displaces.” After some difficult calculation, M determines that the 10^45 bits of a single human body “could contain the efficiently encoded biospheres of a thousand galaxies—or a quadrillion individuals each with a quadrillion times the capacity of a human mind.” And the “expanding bubble of cyberspace” will recreate all it encounters, “memorizing the old universe as it consumes it.” [167]

M wonders whether some individual mind might escape from its small role in a godlike mind and become… independent. To consider this, M turns to a discussion of telepresence and virtual reality. Imagine you are in a good simulated reality, your brain attached to a simulator that is eventually replaced altogether by artificial hardware: “… our essences will become patterns that can migrate the information networks at will.” Surprisingly, M thinks we might still want a body, or at least the illusion of one, for a while. But he speculates that with the slow substitution of artificial hardware we will finally be liberated from any sense of our original brain/body: “… the bodiless mind that results … would hardly be human. It will have become an AI.” [172-73]

M next considers theorists who think time travel is possible: the Kerr-Newman solutions of the 1960s; Tipler’s work in the 1970s; and the Wheeler-Feynman model of the 1940s, which sends signals to the past. M goes into detail about time-loop logic. The warping effects of time loops “may be magic paddles to navigate the alternative worlds in powerful ships.”


M is a “physical fundamentalist,” and yet Descartes beckons: simulated worlds with simulated persons seem possible. “A possible world is as real … as conscious observers, especially inside the world, think it is!” But what is consciousness? The prescientific idea of the soul has been successful socially, but not scientifically. M thinks that consciousness is a byproduct of “a brain evolved for social living.” We told stories of physical and psychological events, including the teller’s state of mind. Thus consciousness is “the continuous story we tell ourselves … about what we did and why we did it.” It is often inaccurate, and it is subjective. Objectively it “is just a pattern of electrochemical events…”

M suggests that consciousness is ubiquitous, even in those things that appear to lack it. Furthermore, he believes a future universal mind might be able to give our lives meaning. Cosmic mind, he believes, will be subjectively infinite and self-conscious.

Quantum mechanics is not common sense. A particular problem is how to reconcile the commonsense view that things are in particular positions with a scientific cosmology asserting that the universe behaves as a wave function that has not collapsed. In other words, why does the universe appear the way it does, when it isn’t really any way? The many-worlds interpretation of quantum mechanics may resolve this question: the world is what it appears to be only because we see it that way. But our mind children will be able to “transcend our narrow notions of what is.”

Since our existence is largely self-produced, our descendants will exist in different realms. They may be able to travel in and through other possible worlds. Consciousness may be able to exist in many possible worlds, and our existence in this one may be a consequence of its being made up of the simplest rules that could have produced consciousness. As long as we are alive we are governed by the laws of this universe. But after death things will change, and in the future they will too, when our mind children may learn how to move through other worlds. For now, there is only Shakespeare’s lament:

To die, to sleep;
To sleep: perchance to dream: ay, there’s the rub;
For in that sleep of death what dreams may come,
When we have shuffled off this mortal coil …