Visions: How Science Will Revolutionize the 21st Century
“There are three great themes in science in the 20th century—the atom, the computer, and the gene.” – Harold Varmus, NIH Director
Three centuries ago Newton said that he was like a boy playing on the seashore while a “great ocean of truth lay all undiscovered before me.” Life in Newton’s time was, as Hobbes said, “nasty, brutish, and short.” But Newton unleashed a revolution that he could never have imagined. Within a few generations “the basic laws of matter, life, and computation were … solved.” [3-4]
The forward march continues. “In the past decade more scientific knowledge has been created than in all of human history.” We no longer need to be bystanders in the dance of nature. We are ready to move “from being passive observers of Nature to being active choreographers of Nature.” We are moving from the Age of Discovery to the Age of Mastery. Regarding predictions about the future, Kaku suggests we listen to those who create it. And there is an emerging consensus about the future. 
The 3 Pillars of Science – Matter, Life, and Mind
THE QUANTUM REVOLUTION – The quantum revolution spawned the other two revolutions. Until 1925 no one understood the world of the atom, but now we have an almost complete description of matter. The basic postulates are: 1) energy is not continuous but occurs in discrete bundles called “quanta;” 2) subatomic particles have both wave and particle characteristics and obey Schrödinger’s wave equation, which determines the probability that certain events will occur. With the Standard Model we can predict the properties of things from quarks to supernovas. We now understand matter, and we may be able to manipulate it almost at will in this century.
THE COMPUTER REVOLUTION – Computers were crude until the transistor was developed in 1948. Today tens of millions of transistors fit into an area the size of a fingernail. As microchips become ubiquitous, life will change dramatically. We used to marvel at intelligence; in the future we may create and manipulate it.
THE BIOMOLECULAR REVOLUTION – There is a genetic code written on the molecules within the cells—DNA. The techniques of molecular biology allow us to read the code of life like a book. With the owner’s manual for human beings science and medicine will be irrevocably altered. Instead of watching life we will be able to manipulate it almost at will.
FROM PASSIVE TO ACTIVE – We are moving from the unraveling of nature stage to the mastering of nature stage. We are like aliens from outer space who land and view a chess game. It takes a long time to unravel the rules but by careful observation one learns. But this doesn’t mean you are a grand master. We have just learned the rules of matter, life, and mind and now we need to become masters. We are moving from being amateurs to grand masters.
FROM REDUCTION TO SYNERGY – Quantum technology gave birth to the computer revolution via transistors and lasers; it gave birth to the biomolecular revolution via x-ray crystallography and the theory of chemical bonding. While reductionism and specialization paid great dividends for these disciplines, intractable problems in each have forced them back together, calling for synergy of the three. Computers decipher genes, but DNA research will make possible new computer architecture using organic molecules. Kaku calls this “cross-fertilization,” and it will keep the pace of scientific advance accelerating.
THE WEALTH OF NATIONS – Wealth traditionally belonged to those who had natural resources or lots of capital. But brainpower, innovation, imagination, invention, and new technologies will be the key to wealth in the future. The key technologies that will serve as engines of wealth:
TIME FRAMES FOR THE FUTURE – Now till 2020 – “scientists foresee an explosion in scientific activity such as the world has never seen before.” We will grow organs, cure cancer, etc.
2020-2050 – biotech and physics deliver on everything from aging to nanotech, interstellar travel, and nuclear fusion.
2050-2100 – create new organisms, first space colonies.
Beyond 2100 – extend life by growing new organs and bodies, manipulating genes, or by merging with computers.
TOWARD A PLANETARY CIVILIZATION – Where is all this leading? One way to answer this question is to scan the heavens for advanced civilizations. Applying laws of thermodynamics and energy, astrophysicists have classified hypothetical civilizations based on ways they utilize energy—labeled Type I, II, and III civilizations.
Type I – mastery of terrestrial energy, ability to modify weather, mine oceans, extract energy from planet’s core. Harnessing the energy of the entire planet necessitates planetary cooperation.
Type II – mastery of stellar energy, use of the sun to drive their machines, begin to explore other stars. (The United Federation of Planets of Star Trek is an emerging Type II civilization.)
Type III – mastery of interstellar energy, since they have exhausted their star’s energy.
Energy is available from a planet, its star, and its galaxy. Based on a growth rate of about 3%, we can estimate when we might make the transition from one civilization type to the next.
We expect to become a Type I civilization in a century or two;
A Type II civilization in about 800 years;
And a Type III in about 10,000 or more.
Right now we are a Type 0 civilization. We run our machines on dead plants (fossil fuels), but by the 22nd century Kaku predicts we will be getting close to a Type I civilization and taking our first steps into space.
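The arithmetic behind these estimates is simple compound growth: at a fixed annual rate, the time to multiply energy use by a factor F is ln(F)/ln(1 + rate). A small sketch, where the factor of roughly ten billion between a planet’s energy budget and a star’s output is an illustrative assumption rather than a figure from the text:

```python
import math

def years_to_grow(factor, rate=0.03):
    """Years for energy use to grow by `factor` at a fixed annual `rate`."""
    return math.log(factor) / math.log(1 + rate)

# A star outputs on the order of ten billion times the energy a planet
# intercepts, so the Type I -> Type II jump at 3% annual growth takes
# roughly eight centuries, matching the estimate above:
print(round(years_to_grow(1e10)))  # ~779 years
```

At the same 3% rate, even doubling energy use takes about 23 years, which is why the transitions are measured in centuries rather than decades.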
THE INVISIBLE COMPUTER – “Long-term the PC and workstation will wither because computing access will be everywhere; in the walls, on wrists, and in ‘scrap computers’ (like scrap paper) lying about to be grabbed as needed.” – Mark Weiser, XEROX PARC
By the way, if you think this quote is futuristic, investigate Xerox PARC’s (Palo Alto Research Center) great record of prediction. As microchips become smaller, cheaper, and more powerful, the general consensus is that they “will quietly disappear by the thousands into the very fabric of our lives.” They will be in the walls, furniture, appliances, home, car, and in our jewelry. The computer will be more liberating and less demanding than it is today when it enters our environment rather than having us enter its environment. These devices will communicate with each other and tap into the internet, gradually becoming intelligent and anticipating our wishes. By comparison the personal computer is just a computing appliance. A consensus is growing among computer experts: “Computers, instead of becoming the rapacious monsters featured in science fiction movies, will become so small and ubiquitous that they will be invisible, everywhere and nowhere, so powerful that they will disappear from view.”
THE DISAPPEARING PC – Weiser believes that the trend toward invisibility is built into the human psyche. When we master technologies and they become ubiquitous, we cease to be aware of them. Consider electric motors that were once huge and bulky, demanding entire factories. Now electricity is everywhere and motors are small—more than twenty surround you in a typical car, moving the windows, mirrors, radio dial, etc. Or consider writing. Once an art for scribes who wrote on clay tablets, writing was changed by the invention of paper. Still, paper was precious and used only by royalty; most persons went their entire lives without ever seeing paper. Today paper is ubiquitous and most of it has writing on it. Weiser thinks we’ll go to the store to pick up six-packs of computers like we buy batteries today. If the usual lag of about fifteen years from the conception of an idea to its entering the market holds, then ubiquitous computing should begin to take hold around 2003. It may take until about 2010 before it really catches the public’s fancy, but by 2020 it should dominate our lives.
THREE PHASES OF COMPUTING – The history of computers is generally thought to be divided roughly into three stages. The first phase was dominated by the huge mainframes. Computers were so expensive that one computer was shared by hundreds of scientists, and humans approached computers like ancient Greeks approached oracles. The second phase began in the early 1970s, when computing power was exploding and the size of chips imploding. At Xerox the dream of one person per computer began to take shape; shortly thereafter the first PC was built. But complicated commands and manuals made PCs unappealing—computers weren’t user-friendly. Thus they created a machine with pictures that you could just point to.
THE THIRD PHASE AND BEYOND – The third phase is ubiquitous computing, where computers are connected and the ratio is now hundreds of computers for each individual. This phase is expected to begin its decline in 2020 as silicon is replaced by new computer architecture. Some experts believe this will lead to the 4th phase, the introduction of AI into computers, especially speech recognition, reasoning, and maybe common sense. But the 5th phase is the self-aware, conscious phase.
MOORE’S LAW – Since 1950 there has been an increase in computer power by a factor of about TEN BILLION! Moore’s law describes this growth: computer power doubles every 18 months. This is a fantastic increase, greater than the transition from chemical explosives to hydrogen bombs. In the past 80 years computing power has increased by a factor of ONE TRILLION!! Thus we can see how the 3rd phase of computing will be quickly upon us, especially when it is driven by economics and physics. The price of microprocessors (MPs) plunges, driving us into the 3rd phase. MPs will be as cheap and plentiful as paper. When chips are so cheap, the incentive will be to put them everywhere. (Right now musical greeting cards with chips have more computer power than computers did in 1950.) In the same way that almost everything has writing on it, everything will have penny processors. In addition to all this economic incentive pushing us to the 3rd phase, we must understand the power of quantum theory.
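The two figures in this passage are mutually consistent: at one doubling every 18 months, a ten-billion-fold gain takes about fifty years, which matches “since 1950” for a book written in the late 1990s. A quick check:

```python
import math

DOUBLING_YEARS = 1.5  # Moore's law: computer power doubles every 18 months

def growth_factor(years):
    """Computer-power multiplier after `years` of Moore's-law doubling."""
    return 2 ** (years / DOUBLING_YEARS)

def years_for_factor(factor):
    """Years of steady doubling needed to reach a given multiplier."""
    return DOUBLING_YEARS * math.log2(factor)

print(f"{growth_factor(50):.1e}")     # ~1.1e10: ten billion in fifty years
print(round(years_for_factor(1e10)))  # ~50 years for a ten-billion-fold gain
```

The same function shows why exponential claims compound so quickly: a further factor of a trillion needs only about sixty years of doubling.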
WHAT DRIVES MOORE’S LAW? – The secret behind Moore’s law is the transistor—a valve that controls the flow of electricity—whose dynamics are governed by quantum theory. The original transistors were about the size of a dime and connected by wires. The microprocessor’s success is driven by the reduction in the size of transistors. While we can put 7 million transistors on a chip the size of a postage stamp, this reduction cannot continue forever—because of the limit set by the wavelength of the light beams used to etch circuits. New technology will be needed to continue this reduction.
SENSORS AND THE INVISIBLE COMPUTER – Paul Saffo, director of the Institute for the Future, calls the 3rd phase “electronic ecology” (EE). If the ecology of a forest is the collection of animals and plants that interact dynamically, then analogously we can speak of creatures in the EE. The EE changes when a technological advance is made. In the 1980s it was the microchip; in the 1990s it was the Internet, driven by powerful microprocessors and cheap lasers.
Saffo thinks the 3rd phase will be driven “by cheap sensors coupled to microprocessors and lasers … we will be surrounded by tiny microprocessors sensing our presence, anticipating our wishes, even reading our emotions. And these microprocessors will be connected to the Internet.” In this electronic forest our moods will be sensed the way toilets sense our presence. The computers of the future will sense the world around them using sound and the electromagnetic spectrum. Sensors will pick up our voice commands, hidden video cameras will locate our presence and recognize our faces, and smart cars will use radar to detect the presence of other cars.
THE SMART OFFICE AND HOME OF THE FUTURE – The smart office will include TABS, tiny clip-on badges with the power of a PC, allowing doors to open, lights to go on, communication with other employees, and connection to the Internet. PADS, about the size of a piece of paper, will be fully operational PCs, the beginning of smart paper. BOARDS, about the size of TV screens and hung on walls, will be used for teleconferencing, bulletin boards, interactive TV, etc. And the home will detect bad weather and warn family members, the bathroom will diagnose illness, etc.
THE MIT MEDIA LAB – The director of the Things That Think project, Neil Gershenfeld, imagines a time when inanimate objects will all think. Gershenfeld has discovered that the space around our bodies is filled by an invisible electric field generated by electrons which accumulate on our skin like static electricity, and when we move, this “aura” moves with us. Since we now have sensors that detect this field, the location of our hands, arms, and legs can be detected. What this means is that we have a powerful new way to interact with computers, one better than using a mouse. This means virtual reality is getting closer. Gershenfeld wants to animate empty space, and he is particularly interested in animating our shoes, from which one watt of energy could easily be drawn. And if we put an electrode in our shoes we could transfer data from our shoe to our hand—the body is salty and conducts electricity—and when we shake hands we could exchange computer files. This leads us to the Things That Think Lab’s motto:
In the past, shoes could stink.
In the present, shoes can blink.
In the future, shoes will think.
THE INTELLIGENT PLANET – The 3rd phase of computing is creating “a vibrant electronic membrane girding the earth’s surface … [the net] like a dirt road waiting to be paved over into an information superhighway, is rapidly wiring up the computers of the world.”
And when we enter the 4th phase, when AI programs are added to the net, we will communicate with the net as if it were intelligent. We will talk to the net through our wall screens, screens that may have a personality and be a confidant, aide, and secretary simultaneously. Like in Disney movies, the teapots and coffee cups will talk to each other and to us.
WHY NO POLICE? – Yet the net today is chaotic—no directories to speak of, no rules, etc. There are many theories of why the net took shape in this haphazard way—most notably the secrecy surrounding the Cold War—but the net has taken off.
HOW THE NET AND OTHER TECHNOLOGIES CAME ABOUT – In 1977 important members of the Carter administration were considering how to protect the President and themselves in the event of all-out nuclear war. It became apparent that the whole proposed plan was a fiasco, leading the Pentagon’s researchers to propose several technologies to compensate, among them teleconferencing, virtual reality (flight simulators), GPS, and email. Scientists, who would have to rebuild the country quickly after an all-out nuclear war, needed a fast communications network—the ARPANET—which became the net.
THE MOTHER OF ALL NETS – In 1844 Morse sent the first telegraph message; in 1969 UCLA and Stanford connected their computers. Ten years later there were only two dozen sites, and by 1981 only 200. It wasn’t until 1990 that critical mass was reached and the net reached the public and began to take off; the WWW was created in Geneva in 1991. Now the Net grows 20% per QUARTER. This exceeds the growth rate of computers, and we have 10 million servers and 40 million users. (This was written in 1997.) Most experts think the net will be as big as the phone system by 2005 or before, and, with a merger with TV possible soon, 99% of all US homes may be linked to the net in the next few years. Finally, consider this: In 1996 there were 70 million pages of info on the Net; but by 2020 the net should access “the sum total of the human experience on this planet, the collective knowledge and wisdom of the past 5,000 years of recorded history.”
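The quoted 20%-per-quarter rate is easy to put in perspective: compounded, it doubles the net in under a year. A sketch, using the 1997 figure of 10 million servers as the starting point (the five-year projection is purely illustrative, not a figure from the text):

```python
import math

QUARTERLY_RATE = 0.20  # the 20%-per-quarter growth rate cited for 1997

def doubling_time_quarters(rate=QUARTERLY_RATE):
    """Quarters needed for the net to double at the given quarterly rate."""
    return math.log(2) / math.log(1 + rate)

def hosts_after(years, start=10_000_000, rate=QUARTERLY_RATE):
    """Project the server count forward by compounding quarterly growth."""
    return start * (1 + rate) ** (4 * years)

print(f"{doubling_time_quarters():.1f} quarters")  # ~3.8: doubling in under a year
print(f"{hosts_after(5):.1e}")                     # ~3.8e8 servers five years out
```

Sustained, that rate would multiply the net nearly forty-fold in five years, which is why experts expected it to rival the phone system so quickly.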
THE HISTORICAL SIGNIFICANCE OF THE NET – The net can be compared to Gutenberg’s printing press of the 1450s. For the first time books could reach a mass audience. Before Gutenberg there were about 30,000 books in all of Europe! By 1500 there were more than 9 million. [Roughly the size of a good-sized university library.] Of course there have been technologies that failed to reach critical mass—picture phones, CB radios—but the net is unlikely to become extinct.
TO 2020: HOW THE NET WILL SHAPE OUR LIVES – The net will allow us to work from home, bring specialized hobbyists from around the world together, enjoy the cyber marketplace, etc. On line bookstores, brokerage firms, banking, and travel agencies will light up the net.
THE MERGER OF TV AND THE NET – In 1996 the FCC and TV giants agreed to go digital, which doubles the resolution. In short, TVs of the future will be connected to the Net, making TV interactive. But TVs may well be replaced shortly thereafter by … WALL SCREENS – TV screens flat enough to hang like pictures or small enough to fit in your watch.
SPEECH RECOGNITION – Machines can already recognize human speech, but they don’t understand what they are hearing unless one speaks pretty slowly. However most of the basic difficulties should be solved in the next few years. Still, hearing is not understanding and it would take very good AI for real comprehension. This problem may have to wait until the 4th phase of computing, between 2020 and 2050 when we have good AI.
FROM THE PRESENT TO 2020: INTELLIGENT AGENTS – In the meantime, we are working on intelligent agents—programs that can make primitive decisions and act as filters, distinguishing between junk and valuable material. IA may be particularly good at gathering information we want and saving us the time of searching for it.
2020-2050: GAMES AND EXPERT SYSTEMS – After IA comes HEURISTICS, the branch of AI that tries to codify logic and intelligence with a series of rules. Heuristics would allow us to speak to computerized doctors, lawyers, etc., who would answer technical questions. Expert systems are heuristic programs that contain the knowledge of a number of human experts to dissect problems. Consider going to the doctor, where you receive a series of if…then questions that lead to a diagnosis. This task can be done by an expert system. It is easy to see how the comprehensive and methodical nature of an expert system could make it better at such diagnosis than a human physician.
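The if…then chain described here can be sketched as a toy rule base; real expert systems are far larger, and the symptoms and conclusions below are invented purely for illustration:

```python
# A toy expert system: hypothetical if...then rules matched against reported
# symptoms. Each rule pairs a set of required conditions with a conclusion.
RULES = [
    ({"fever", "stiff neck"}, "see a physician immediately"),
    ({"fever", "cough"}, "flu"),
    ({"sneezing", "itchy eyes"}, "allergy"),
]

def diagnose(symptoms):
    """Return the conclusion of the first rule whose conditions all hold."""
    for conditions, conclusion in RULES:
        if conditions <= symptoms:  # every required symptom is present
            return conclusion
    return "no rule matched"

print(diagnose({"fever", "cough", "headache"}))  # flu
```

Ordering the rules from most to least urgent is one simple way such a system stays "methodical": the gravest matching rule always fires first.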
COMMON SENSE (CS) IS NOT SO COMMON – The problem with today’s computers is that they are glorified adding machines. They are marvelous at mathematical logic, but very poor with physics and biology. They have trouble with the concept of time, for example: S and J are twins and S is 20 years old, so how old is J? This is a tough problem for a computer. Or consider this conversation: Human: Ducks fly. Chuck is a duck. Computer: Chuck can fly. Human: Chuck is dead. Computer: Chuck is dead and can fly. That dead things don’t fly is not obvious from the laws of logic.
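The duck exchange can be sketched as a default rule with exceptions, exactly the machinery pure deduction lacks: "ducks fly" holds by default unless a common-sense exception overrides it. The fact tables here are hypothetical:

```python
# Default reasoning in miniature: a rule applies unless an exception blocks it.
# Pure logic keeps the conclusion "Chuck can fly"; the exception retracts it.
facts = {"duck": {"chuck"}, "dead": {"chuck"}}

def can_fly(name):
    """Default rule: ducks fly -- unless a known exception applies."""
    if name not in facts["duck"]:
        return False
    if name in facts["dead"]:  # the common-sense exception logic alone lacks
        return False
    return True

print(can_fly("chuck"))  # False: the exception blocks the default
```

Each exception must be written in by hand, which hints at why a full common-sense knowledge base runs to millions of assertions.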
THE ENCYCLOPEDIA OF COMMON SENSE – Some have advocated creating an EOCS containing all the rules of CS. If CS programs are loaded into computers, intelligent conversation becomes more possible. For example, computers need to know things like: nothing can be in two places at the same time; when humans die they aren’t born again; dying is generally undesirable; animals don’t like pain; time advances at the same rate for everyone; when it rains people get wet; etc. As of 1997, a project to give computers CS had accumulated 10 million assertions. But the task is extraordinarily difficult. For example, it took three months for a computer programmed with CS to understand “Napoleon died on St. Helena. Wellington was saddened.” In short, AI, in whatever form it takes, has a long way to go.
A WEEK IN THE LIFE IN 2020 – A face on the wall says wake up. As you walk to the kitchen the appliances sense your presence: the coffee starts brewing and bread is toasted while your favorite Bach concerto plays softly. Molly has printed out a version of the paper that you are especially interested in by scanning the web, and as you leave the kitchen it reminds you that you need milk. Before you leave you tell the robot to vacuum. You drive to work in your hybrid smart car, whizzing by a toll booth that scans it. At work you insert your wallet card into the computer to pay your bills, have a few video conferences, and head home. Back home, you connect with your virtual doctor, who tells you he will zap out a few cancer cells with smart molecules so you don’t get cancer in ten years.
BOTTOM UP OR TOP DOWN? – MIT’s famed AI lab is “a high-tech version of Santa’s workshop.” Kaku begins by focusing on research that is not interested in creating creatures who play chess but in INSECTOIDS and BUGBOTS, small insect-like creations with the ability to learn by bumping into things, crawling around, etc. The idea is that while insects can’t play chess, they get along quite well in your home.
This biology-based approach is termed the bottom up school. The inspiration for this school is evolution, which has produced complexity from simple structures. In short, the idea is that “learning is everything; logic and programming are nothing.” [This seems to overstate the case even in terms of evolution. We learn from our environment, but our programming—cognitive structures in place at birth—is clearly essential.] Still, AI may be immensely enriched by interplay with the insights of the biomolecular and quantum revolutions. Kaku mentions how many physicists have moved from superstring theory and quantum gravity to brain research as an example of the interplay between the three big revolutions.
On the other side of the debate is the top down school. The digital computer provides their model of thinking machines: “They assumed that thinking … would emerge fully developed from a computer.” Their strategy is to program the rules of logic and intelligence directly into the machine, along with subroutines for speech, vision, etc., yielding an intelligent robot. Of course this is based on the idea that intelligence can be simulated by a Turing machine. Kaku argues that the problem here is that the top down school underestimated “the enormity of writing down the complete road map of human intelligence.” The two camps are often at odds, the one arguing that bottom up robots may get from here to there but won’t know what to do when they get there, while the other replies that top down computers play chess but don’t know how to take a walk. Most feel that some combination of the two approaches will work best.
PREPROGRAMMED ROBOTS – Since it may be twenty years or more until the creations at the MIT lab enter the marketplace, what we will see immediately are “increasingly sophisticated industrial robots.” From 2020 to 2050 we should enter the 4th phase of computing, “when intelligent automatons begin to walk the earth …” Beyond 2050 we will enter the 5th phase of “robots of consciousness and self-awareness.”
To better understand all of this, consider the difference between industrial or remote-controlled robots—just preprogrammed windup toys—and the more sophisticated versions to come. Kaku describes the evolution of these technologies.
2000-2010 – develop into reliable helpers in factories, hospitals, and homes (Volks-robots).
2010-2020 – these robots replaced by machines that learn from their mistakes.
2020-2050 ROBOTICS AND THE BRAIN – One of the big problems in robotics is the problem of pattern recognition. Robots can see but don’t understand what they see. The reason we have so much trouble duplicating pattern recognition is that our understanding of our brains is primitive.
We know that our brain is layered, which reflects its evolutionary development. Nature preserves its older forms, creating a museum of our evolutionary history. The first layer of the brain is the “neural chassis” controlling basic functions like respiration, heartbeat, and blood circulation. It consists of the brain stem, spinal cord, and midbrain. The second layer is the R-complex, controlling aggression, territoriality, and social hierarchies—the so-called “reptilian brain.” Surrounding this is the limbic system, which is found in mammals. It controls emotions, social behavior, smell, and memory; this was necessary because mammals live in complex social groups. Last is the neocortex, which controls reason, language, spatial perception, and other higher functions. Humans have wrinkles on the cerebral cortex, increasing its surface area.
Today’s robots possess only the first layer of brain, so there is a long way to go. But experts such as Miguel Virasoro, one of the most famous physicists in the world, believe that microchips will eventually approach the computing power of human brains. Right now the Cray-3 processes at 100 million bits per second, about the speed of a rat’s brain. Estimates are that the human brain calculates 1,000 times faster than this, but if Moore’s law continues to hold, supercomputers should match humans around 2020 and desktops by 2040. Virasoro objects to this whole top down approach, arguing that the brain is not a Turing machine—it’s not even a computer. Thus, faster computers will not duplicate human brains.
Virasoro also argues as follows: The brain has about 200 billion neurons which fire 10 million billion times per second. The nerve impulses travel very slowly—300 feet per second—but the complexity of the brain’s neural connections compensates. Since each neuron is connected to 10,000 other neurons, the brain is a parallel processor which carries out trillions of operations per second and yet is powered by the energy of a lightbulb—that’s efficiency. Computers calculate at nearly the speed of light but perform one calculation at a time. The brain calculates slowly but performs trillions of computations per second. And while a brain can have a part of itself damaged and still function, a Turing machine can be destroyed by the loss of a single transistor. Since the brain is a complex neural net, the bottom up approach is the only one that will work.
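The chapter’s own numbers hang together: slow, low-rate components multiplied across quadrillions of connections yield an enormous aggregate rate. A quick check using only the figures quoted above:

```python
NEURONS = 2e11          # ~200 billion neurons (figure from the text)
SYNAPSES_PER = 1e4      # each neuron wired to ~10,000 others
FIRINGS_PER_SEC = 1e16  # "10 million billion" firings per second

synapses = NEURONS * SYNAPSES_PER
firing_rate = FIRINGS_PER_SEC / synapses  # implied events per connection per second

print(f"{synapses:.0e} connections")   # 2e+15: massive parallelism
print(f"{firing_rate:g} Hz each")      # a few hertz each: slow parts, fast whole
```

Each connection need only fire a few times per second for the whole to sustain the quoted rate, which is the parallelism argument in miniature.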
TALKING ROBOTS – NETtalk is a neural network that has learned to speak English almost from scratch. Rather than using the top down approach and stuffing a program with dictionaries, phonics rules, exceptions to grammar rules, etc., a simple neural net was created that learned from its mistakes. While the difference between real and model neurons is immense, the fact that a simple neural net can learn to speak suggests “that perhaps human abilities can be simulated by electronics.”
ROBOTICS MEETS QUANTUM PHYSICS – There has been a migration from quantum physics to brain research. Physics is different from biology: the former looks for simple, elegant solutions while the latter is messy, inelegant, and full of dead ends. The former is based on universal laws; the latter has only evolution as its universal law. Physicists wonder if there are any fundamental principles behind AI, as there are in physics, leading to questions like “Can a neuron in the brain be treated like an atom in a lattice?”
Thus, while the top down school held that mind was a complicated program inserted into a computer, the bottom up school suggested that mind arose “from the quantum theory of mindless atoms, without any programs whatsoever!” The founding father of the neural net field, John Hopfield, summarized it as follows: Individual atoms in a solid can exist in a few discrete states—spin up or down—and neurons similarly either fire or don’t fire. In a quantum solid a universal principle states that atoms arrange themselves so that the energy is minimized. Might neural net circuits minimize their energy? If the answer is yes, the unifying principle behind neural nets is: “all the neurons in the brain would fire in such a way as to minimize the energy of the net.” Hopfield also found that neural nets behave a lot like brains. For example, even after the removal of neurons, the neural nets behaved pretty much the same; they seemed to have memories. The strangest finding was that neural nets began to dream.
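Hopfield’s energy-minimization idea can be sketched as a tiny associative memory. The minimal net below is a standard textbook construction, not code from the text: it stores one ±1 pattern via Hebbian weights, then recovers it from a corrupted input, and its energy drops as it settles—exactly the “fire so as to minimize the energy of the net” principle.

```python
# A minimal Hopfield net with +/-1 neurons and Hebbian weights.
def train(patterns):
    """Build the weight matrix by summing outer products of the patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def energy(w, s):
    """Hopfield energy: lower means closer to a stored memory."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def recall(w, s, sweeps=5):
    """Repeatedly flip each neuron toward lower energy until stable."""
    s = list(s)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, 1, -1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, -1, -1, 1, -1]               # one neuron flipped
print(recall(w, noisy) == stored)            # True: the memory is restored
print(energy(w, stored) < energy(w, noisy))  # True: settling lowered the energy
```

Overloading such a net with too many patterns produces the blended “spurious memories” discussed next.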
WHAT ARE DREAMS? – Hopfield believes “dreams are fluctuating energy states in a quantum mechanical system.” And just as we need to dream, especially after exhausting experiences, if neural nets are filled with too many memories they malfunction from overload and begin to recall previously learned memories. Moreover, ripples begin to form that don’t correspond to memories at all, but rather to fragments of memories pieced together. These “spurious memories” correspond to dreams. A slight disturbance to this system allowed it to again settle down to a state of deep energy minimization—sleep? After several episodes of dreaming and sleeping, the system “awakens,” i.e., stops malfunctioning. Kaku believes the top down and bottom up schools will merge in forty years or so, creating truly intelligent robots.
CAN ROBOTS FEEL? – Kaku thinks we’ll find it easier to interact with robots if they have some emotions and common sense. It may seem counter-intuitive that clumps of metal can feel, but it isn’t impossible to achieve. He thinks providing a robot with the capacity to care for its master, to want to make him or her happy, will increase the commercial success of robots. And this is a kind of love. Jealousy, anger, laughter, and fear are similarly worthwhile. But does this mean that robots are self-aware?
BEYOND 2050: ROBOT CONSCIOUSNESS – By 2050 AI systems should have a modest range of emotions, and the internet will be a magic mirror, accessing the entire database of human knowledge and talking and joking with us. But are these AI systems conscious? The answer to this question is tough, inasmuch as we don’t know what consciousness is, but “many of the scientists who have dedicated their lives to building machines that think feel it’s only a matter of time before some form of consciousness is captured in the laboratory.” Of course neural nets have already produced thinking machines—me and you. This isn’t all that surprising since consciousness is an “emergent” property, something that “happens naturally when a system becomes complex enough.” There are also theorists like Daniel Dennett, Herbert Simon, and Marvin Minsky who believe they have explained consciousness in some way. But all of them seem to share the view that consciousness arises out of the complex interactions of unconscious systems.
PET scans seem to bear these thinkers out. Consciousness seems to be spread out over the brain, like a dance between the various parts, and thus there is only the illusion that there is a center for consciousness. [A theme in Buddhism 2,500 years ago.] Others believe that the brain generates lots of thoughts simultaneously and that consciousness is just the thoughts that “win out.” At the other extreme are those who think robots will never be conscious, thinkers like Penrose, Searle, and McGinn. Kaku replies to these mysterians as follows:
“The problem with these criticisms is that trying to prove that machines can never be conscious is like trying to prove the nonexistence of unicorns.” Even if you could show that there are no unicorns in Texas or in the solar system, it is always possible to find one somewhere. “Therefore, to say that thinking machines can never be built has, to me, no scientific content.” For now the question is undecidable, and if and when we build them, then we can decide. Kaku thinks that consciousness exists in degrees and will be created that way.
DEGREES OF CONSCIOUSNESS – The lowest levels monitor your body and environment. Since computers perform self-diagnostics and print error messages, they probably fall into this category. Plants are probably a little higher up, since they must react to changes in the environment; machines with vision are probably on this scale. The next level is “the ability to carry out well-defined goals.” The future Mars probe is an example. Higher still is the entire animal kingdom: goals are fixed and plans are implemented to carry them out, though most behavior is probably hard-wired. This level is probably the dominant one for humans too—most of our time is spent thinking about survival and reproduction. Of course there are thousands of levels here. The highest level of consciousness is “the ability to set one’s own goals, whatever they may be.” If robots can do this, they are conscious. But what if our goals and our creations’ goals conflict?
BEYOND SILICON: CYBORGS AND THE ULTIMATE COMPUTER –
What happens after 2020, the end of the microchip era, when quantum computers become a reality? One possibility not investigated thus far is bionics. Can we interface computers directly with the brain? First, we need to show that neurons can grow on silicon; then we must connect them with the neurons of living beings, such as humans. Finally, we would have to decode the neurons that make up our spinal cord. In 1995 scientists at the Max Planck Institute did just this with a leech neuron and a silicon chip, welding hardware to wetware. The way has been paved “to developing silicon chips that can control the firing of neurons at will, which in turn could control muscle movement.” The neurons of baby rats have also been grown on silicon surfaces. At Harvard Medical School they have already begun to build a bionic eye, which should be able to restore vision to the blind, helping some ten million Americans. We should eventually be able to make eyes better than the ape eyes we have, eyes that could see in the ultraviolet and infrared. Of course we could do similar things to the arms, legs, etc., allowing for superhuman feats, but then we would need a superhuman skeleton as well.
This merging of mind and machine will take, according to Ralph Merkle at Xerox PARC, a human-genome-project-scale effort to map the brain neuron by neuron, at a cost of $340 billion. The technology to begin will probably have to wait another decade. As for the distant future, Kaku believes that all three revolutions will merge. Quantum technology will provide transistors smaller than neurons. The computer revolution will give us neural nets as powerful as brains. And the biomolecular revolution will give us the power to replace our neural nets with synthetic ones, “thereby giving us a form of immortality.”
Since evolution favors organisms that are able to survive, a human/mechanical blend may be the best way to survive. And if we map every neuron in the brain, can we then give our brains immortal bodies? Kaku thinks we will gradually transfer our consciousness to robotic bodies and that this is the next step in evolution, as does Marvin Minsky, who calls this “unnatural selection,” a process of deliberately replacing humans.
Minsky says robots will inherit the earth and they will be our children. “We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called evolution. Our job is to see that all this work shall not end up meaningless.”