

Summary of Michio Kaku’s Visions: How Science Will Revolutionize the 21st Century

(Chapter 1)

“There are three great themes in science in the 20th century—the atom, the computer, and the gene.” – Harold Varmus, NIH Director

Three centuries ago Newton said that he was like a boy playing on the seashore while a “great ocean of truth lay all undiscovered before me.” Life in Newton’s time was, as Hobbes said, “nasty, brutish, and short.” But Newton unleashed a revolution that he could never have imagined. [3-4] Within a few generations “the basic laws of matter, life, and computation were … solved.”

The rush continues. “In the past decade more scientific knowledge has been created than in all of human history.” We no longer need to be bystanders in the dance of Nature. We are ready to move “from being passive observers of Nature to being active choreographers of Nature.” We are moving from the Age of Discovery to the Age of Mastery. Regarding predictions of the future, Kaku suggests we listen to those who create it. [When you want to build an airplane you consult the Wright brothers, not philosophers or ordinary dudes.] And there is an emerging consensus about the future. [6]

The 3 Pillars of Science – Matter, Life, and Mind


The quantum revolution spawned the other 2 revolutions. Until 1925 no one understood the world of the atom, but now we have an almost complete description of matter. The basic postulates are: 1) energy is not continuous but occurs in discrete bundles called “quanta;” 2) sub-atomic particles have both wave and particle characteristics and obey Schrödinger’s wave equation, which determines the probability that certain events will occur. With the Standard Model we can predict the properties of things from quarks to supernovas. We now understand matter, and we may be able to manipulate it almost at will in this century.


Computers were crude until the transistor was developed in 1948, followed a decade later by the laser, both quantum mechanical devices. Today tens of millions of transistors fit into an area the size of a fingernail. As microchips become ubiquitous, life will change dramatically. We used to marvel at intelligence; in the future we may create and manipulate it.


There is a genetic code written on the molecules within the cells—DNA. The techniques of molecular biology allow us to read the code of life like a book. With the owner’s manual for human beings, science and medicine will be irrevocably altered. Instead of watching life we will be able to manipulate it almost at will.


In one sense Horgan is right; science has ended. [9] But we are moving from the unraveling stage to the mastering stage. We are like aliens from outer space who land and view a chess game. It takes a long time to unravel the rules but by careful observation one learns. But this doesn’t mean you are a grand master. We have just learned the rules of matter, life, and mind and now we need to become masters. We are moving from being amateurs to grand masters.


Quantum theory gave birth to the computer revolution via transistors and lasers; it gave birth to the biomolecular revolution via x-ray crystallography and the theory of chemical bonding. While reductionism and specialization paid great dividends for these disciplines, intractable problems in each have forced them back together, calling for a synergy of the 3. Computers decipher genes, but DNA research will make possible new computer architectures using organic molecules. K calls this “cross-fertilization,” and it will keep the pace of scientific advance accelerating.


Wealth traditionally was with those who had natural resources or lots of capital. But brainpower, innovation, imagination, invention and new technologies will be the key to wealth in the future. The key technologies that serve as engines of wealth: [13]


Now till 2020 – “scientists foresee an explosion in scientific activity such as the world has never seen before.” We will grow organs, cure cancer, etc.

2020-2050 – biotech: everything including aging; physics: nanotech, interstellar travel, nuclear fusion.

2050-2100 – create new organisms, first space colonies.

Beyond 2100 – extend life by growing new organs and bodies, manipulating genes, or by merging with computers.


Where is all this leading? One way to answer this question is to scan the heavens for advanced civilizations. Applying laws of thermodynamics and energy, astrophysicists have classified hypothetical civilizations based on ways they utilize energy—labeled Type I, II, and III civilizations.

Type I – mastery of terrestrial energy, ability to modify weather, mine oceans, extract energy from planet’s core. Harnessing the energy of the entire planet necessitates planetary cooperation.

Type II – mastery of stellar energy, use of the sun to drive their machines. They have begun to explore other stars. (The United Federation of Planets from Star Trek is an emerging Type II civilization.)

Type III – mastery of interstellar energy, since they have exhausted their star’s energy.

Energy is available on the planet, its star and its galaxy. Based on a growth rate of about 3%, we can estimate when we might make the transition from one civilization to another.

We expect to become a Type I civilization in a century or 2,
a Type II civilization in about 800 years (Star Trek is off by a few hundred),
and a Type III in about 10,000 or more.

Right now we are a Type 0 civilization. We use dead plants to run our machines, but by the 22nd century Kaku predicts we will be getting close to a Type I civilization and taking our first steps into space.
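Kaku's transition estimates follow from simple compound-growth arithmetic. A minimal sketch in Python; the energy budgets (about 10^13 watts for our Type 0 civilization today, 10^16 W for a Type I planet, 10^26 W for a Type II star) are rough order-of-magnitude assumptions for illustration, not figures from the book:

```python
import math

def years_to_reach(target_watts, current_watts=1e13, growth=0.03):
    """Years of compound growth needed to raise energy use
    from current_watts to target_watts at the given annual rate."""
    return math.log(target_watts / current_watts) / math.log(1 + growth)

# Illustrative (assumed) energy budgets:
TYPE_I = 1e16   # roughly all the sunlight striking a planet
TYPE_II = 1e26  # roughly the total output of a star

print(f"Type I:  ~{years_to_reach(TYPE_I):.0f} years")
print(f"Type II: ~{years_to_reach(TYPE_II):.0f} years")
```

At 3% annual growth this gives roughly 230 years to Type I and about a thousand years to Type II, in the same ballpark as the book's "a century or 2" and "about 800 years."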

THE INVISIBLE COMPUTER – (notes from Michio Kaku’s Visions: How Science Will Revolutionize the 21st Century)

“Long-term the PC and workstation will wither because computing access will be everywhere; in the walls, on wrists, and in ‘scrap computers’ (like scrap paper) lying about to be grabbed as needed.” –   Mark Weiser, XEROX PARC

By the way, if you think this quote is futuristic, investigate Xerox PARC’s (Palo Alto Research Center) great record of prediction. (Weiser was the former head of its Computer Science Laboratory.) As microchips become more powerful, smaller, and cheaper, the general consensus is that they “will quietly disappear by the thousands into the very fabric of our lives.” They will be in the walls, furniture, appliances, home, car, and in our jewelry. The computer will be more liberating and less demanding than it is today when it enters our environment rather than having us enter its. These devices will communicate with each other and tap into the Internet, gradually becoming intelligent and anticipating our wishes; by comparison, the PC is just a computing appliance. A consensus is growing among computer experts: “Computers, instead of becoming the rapacious monsters featured in science fiction movies, will become so small and ubiquitous that they will be invisible, everywhere and nowhere, so powerful that they will disappear from view.”


Weiser believes that the trend toward invisibility is built into the human psyche. When people learn something well, they cease to be aware of it. Consider electric motors, once huge and bulky, demanding entire factories. Now electricity is everywhere and motors are small and ubiquitous—more than 20 surround you in a typical car, moving the windows, mirrors, radio dial, etc. Or consider writing. Once an art for scribes who wrote on clay tablets, writing was changed with the invention of paper. Still, paper was precious and used only by royalty. Most persons went their entire lives and never saw paper. Today paper is ubiquitous and most of it has writing on it. Weiser thinks we’ll go to the store to pick up six-packs of computers (no, not beer) like we pick up batteries today. If the trend of about 15 years from the conception of an idea to its entering the market holds (the PC was built at Xerox in 1972 but caught the public’s fancy about 15 years later), then ubiquitous computing should begin to take hold around 2003. It may take until about 2010 until it really catches the public’s fancy, but by 2020 it should dominate our lives. (You’ll be about mid-forties. I’ll be, like lots of baby boomers, a ubiquitous senior citizen.)


The history of computers is generally thought to be divided roughly into 3 stages. The first phase was dominated by the huge mainframes. Computers were so expensive that one computer was shared by hundreds of scientists, and humans approached computers like ancient Greeks approached oracles. The second phase began in the early 70s when computing power was exploding and the size of chips imploding. At Xerox, the dream of one person per computer began to take shape; shortly thereafter the first PC was built. But complicated commands and manuals made PCs not very appealing—i.e., computers weren’t user-friendly. And thus they created a machine with pictures that you could just point to. (Of course Apple pirated this idea from Xerox and Microsoft pirated it from Apple. And during this transition the giants IBM, Wang, and Digital were changed forever. The dinosaur computers didn’t last. PS. There was no Dell, Compaq, etc.)


The third phase is ubiquitous computing, where computers are connected and the ratio is now hundreds of computers for each individual. This phase is expected to begin its decline in 2020 as silicon is replaced by new computer architecture. Some experts believe this will lead to the 4th phase, the introduction of AI into computers. (Here AI means speech recognition, reasoning, and maybe common sense—still a long way from conscious beings.) But the 5th phase is the self-aware, conscious phase. [Note how the evolution of culture is so obvious and the evolutionary model so applicable. This results, in my opinion, because cultural evolution goes so fast it is undeniable.]


Since 1950 there has been an increase in computer power by a factor of about TEN BILLION! Moore’s law explains this growth: computer power doubles every 18 months. This is a fantastic increase, greater than the transition from chemical explosives to hydrogen bombs. In the past 80 years computing power has increased by a factor of ONE TRILLION!! Thus, we can see how the 3rd phase of computing will be quickly upon us, especially when it is driven by economics and physics. The price of microprocessors plunges, driving us into the 3rd phase. (A microchip that costs a dime now will cost 2 cents ten years from now.) Microprocessors will be as cheap and plentiful as paper. When chips are so cheap, the incentive will be to put them everywhere. (Right now musical greeting cards with chips have more computing power than computers in 1950.) In the same way that almost everything has writing on it, everything will have penny processors. In addition to all of this economic incentive pushing us to the 3rd phase, we must understand the power of quantum theory.
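The "ten billion since 1950" figure is consistent with Moore's law arithmetic: a doubling every 18 months compounds to roughly 10^10 over 50 years. A quick sketch (the 18-month doubling period comes from the text; the round 50-year span is my assumption):

```python
def moores_law_factor(years, doubling_months=18):
    """Growth factor after `years` of doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

# 1950 to ~2000: about 50 years of doubling every 18 months
print(f"{moores_law_factor(50):.2e}")  # on the order of 1e10, i.e. TEN BILLION
```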


The secret behind Moore’s law is of course the transistor—a valve that controls the flow of electricity—whose dynamics are governed by quantum theory. The original transistors were about the size of a dime and connected by wires. Needless to say, Moore’s law’s success is driven by the reduction in size of transistors. While we can put 7 million transistors on a chip the size of a postage stamp, this reduction cannot continue forever—because of the limit of the wavelength of a light beam. New tech will be needed to continue this reduction.


Paul Saffo, director of the Institute for the Future, calls the 3rd phase “electronic ecology.” If the ecology of a forest, for example, is the collection of animals and plants that interact dynamically, then analogously we can speak of creatures in the EE. The EE changes when a tech advance is made. In the 80s it was the microchip; in the 90s the Internet was driven by the power of microprocessors and cheap lasers. [about 5 years ago I’m at the faculty meeting of the small college I taught at. A chemistry teacher, in fact the same one who the lawyer didn’t want on the jury, told the faculty that her kids were doing things on the internet and we needed to have email and make the college more tech savvy. I swear, and this is 5 years ago, ½ the faculty didn’t know what she was talking about.]

Saffo thinks the 3rd phase will be driven “by cheap sensors coupled to microprocessors and lasers … we will be surrounded by tiny microprocessors sensing our presence, anticipating our wishes, even reading our emotions. And these microprocessors will be connected to the Internet.” In this electronic forest our moods will be sensed the way toilets sense our presence. But note that a meteorite could hit me right now and my PC would still be waiting for me to continue …

Writing these class notes! Dumb thing! But the computers of the future will sense the world around them using sound and the electromagnetic spectrum. Sensors will pick up our voice commands [no more carpal tunnel syndrome?], hidden video cameras will locate our presence and recognize our faces, smart cars will use radar to detect the presence of other cars, etc.


The smart office will include TABS, tiny clip-on badges with the power of a PC, allowing for doors to open, lights to go on, communication with other employees, connection to the Internet, etc. [StarFleet com badge.] PADS, about the size of a piece of paper, will be fully operational PCs, the beginning of smart paper. BOARDS, about the size of TV screens and hung on walls, will be used for teleconferencing, as bulletin boards, interactive TV, etc. And the home will detect bad weather and warn family members, the bathroom will diagnose illness, etc. [the only thing about this office is why have an office, why not just use all of this from home?]


The director of the Things That Think project, Neil Gershenfeld, imagines a time when inanimate objects will all think. G has discovered that the space around our bodies is filled by an invisible electric field generated by electrons which accumulate on our skin like static electricity, and when we move this “aura” moves with us. Since we now have sensors that detect this field, the location of our hands, arms, and legs can be detected. What this means is that we have a powerful new way to interact with computers that would be better than using a mouse, and a way to make virtual reality much better. Essentially, G wants to animate empty space. G is particularly interested in animating our shoes, where 1 watt of energy could easily be drawn. And if we put an electrode in our shoes we could transfer data from our shoe to, say, our hand—the body is salty and conducts electricity—and when we shake hands we could exchange computer files. This leads us to the Things That Think Lab’s motto:

In the past, shoes could stink.
In the present, shoes can blink.
In the future, shoes will think.

THE INTELLIGENT PLANET (Chap 3 of Kaku’s Visions: …)

The 3rd phase of computing is creating “a vibrant electronic membrane girding the earth’s surface … [the net] like a dirt road waiting to be paved over into an information superhighway, is rapidly wiring up the computers of the world.”

And when we enter the 4th phase, when AI programs are added to the Net, we will communicate with the Net AS IF it were intelligent. We will talk to the Net through our wall screen or tie, accessing the entire info of the planet. And this screen may have a personality, be a confidant, aide, and secretary simultaneously. Like in Disney movies, the teapots and coffee cups will talk to each other and to us.

WHY NO POLICE? – Yet the Net today is chaotic—no directories to speak of, no rules, etc. There are many theories of why the Net took shape in this haphazard way—most notably the secrecy surrounding the Cold War—but the Net has taken off.

HOW THE NET AND OTHER TECHS CAME ABOUT – In 1977 important members of the Carter administration were considering how to protect the President and themselves in the event of all-out nuclear war. To make the story short, it became apparent that the whole proposed plan was a fiasco, causing the Pentagon’s researchers to propose several techs to compensate, among which were teleconferencing, virtual reality (flight simulators), GPS, & e-mail. Scientists, who would have to re-build the country fast after all-out nuclear war, needed something fast without rules—the ARPANET—which became the Net.

THE MOTHER OF ALL NETS – In 1844 Morse sent the 1st telegraph message; in 1969 UCLA and Stanford connected their computers. 10 years later there were only 2 dozen sites, and by 1981 only 200. It wasn’t until 1990 that critical mass was reached, the Net reached the public and began to take off, and the WWW was created in Geneva in 91. Now the Net grows 20% per QUARTER. This exceeds the growth rate of computers, and we have 10 mil servers and 40 mil users. [whoops, the book is 1997—the figure is now at least 160 mil users—EVOLUTION.] Most experts think the Net will be as big as the phone system by 2005 or before, and, with the merger of TV possible soon, 99% of all US homes may be linked to the Net in the next few years. Finally, consider this: In 1996 there were 70 million pages of info on the Net; but by 2020 the Net should access “the sum total of the human experience on this planet, the collective knowledge and wisdom of the past 5,000 years of recorded history.”

THE HISTORICAL SIGNIFICANCE OF THE NET – The Net can be compared to Gutenberg’s printing press of the 1450s. For the first time books could reach a mass audience. Before Gutenberg there were about 30,000 books in all of Europe! By 1500 there were more than 9 million. [Roughly the size of UT’s collection.] Of course there have been techs that failed to reach critical mass—picture phones, CB radios—but the Net’s subject matter, all of human knowledge easily available, suggests that it will not become extinct.

TO 2020: HOW THE NET WILL SHAPE OUR LIVES – The Net will allow us to work from home, bring specialized hobbyists from around the world together, enjoy the cyber marketplace, etc. Online bookstores, brokerage firms, banks, and travel agencies will light up the Net. [this was written in 97 and all of this has come true, and the rest of this section, written a few years ago as futuristic, is already “old hat.”]

BOTTLENECKS ON THE NET – Still, we all know about bandwidth problems, interface issues, and the needed creation of personalized agents and filters. [Yes, in this book the problems of FAST 28K modems are discussed—EVOLUTION.] Alternatives to copper wires are of course satellites, cables [wow, where did I get my RR?], and fiber optics. As for interface bottlenecks—screens and voice inputs—well, we need digital TV to create the Magic Mirror.

THE MERGER OF TV AND THE NET – Of course YALL know that in 96 the FCC and TV giants agreed to go digital, which doubles the resolution. In short, TVs of the future will be connected to the Net, making TV interactive. But TVs may well be replaced shortly thereafter by …

WALL SCREENS – TV screens flat enough to hang like pictures or small enough to fit in your watch.

SPEECH RECOGNITION – Machines can already recognize human speech—provided one speaks pretty slowly—but they don’t understand what they are hearing. Most of the basic recognition difficulties should be solved in the next few years. Still, hearing is not understanding, and it would take very good AI for real comprehension. This problem may have to wait until the 4th phase of computing, between 2020 and 2050, when we have good AI.

FROM THE PRESENT TO 2020: INTELLIGENT AGENTS – In the meantime, we are working on intelligent agents—programs that can make primitive decisions and act as filters, distinguishing between junk and valuable material. IA may be particularly good at gathering info we want and saving us the time of searching for it—a good research asst!

2020-2050: GAMES AND EXPERT SYSTEMS – After IA comes HEURISTICS, the branch of AI that tries to codify logic and intelligence with a series of rules. H would allow us to speak to computerized doctors, lawyers, etc. who would answer tech questions. [Chess-playing computers are good examples of H.] ES are H programs that contain the knowledge of a number of human experts and dissect problems like we do. Consider going to the doctor, where you receive a series of if…then questions that lead to a diagnosis. This task can be done by an ES. It is easy to see that the comprehensive and methodical nature of an ES could be better than a human physician. However, ES have traditionally lacked the common sense of a child.
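The if…then questioning an expert system performs can be made concrete with a toy rule engine. A minimal sketch; the rules and symptom names here are invented for illustration and are nothing like a real medical system:

```python
# Each rule pairs a set of required findings with a conclusion.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "stiff neck"}, "see a doctor urgently"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
]

def diagnose(findings):
    """Return every conclusion whose conditions are all present."""
    present = set(findings)
    return [conclusion for conditions, conclusion in RULES
            if conditions <= present]

print(diagnose(["fever", "cough"]))  # ['possible flu']
```

Its strength is exactly the methodical exhaustiveness described above; its weakness is that when no rule matches it simply says nothing, which is where the missing common sense shows.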

COMMON SENSE IS NOT SO COMMON – The problem with today’s computers is that they are glorified adding machines. They are marvelous at mathematical logic, but very poor with physics and biology. They have trouble with the concept of time, for example: S and J are twins and S is 20 years old, so how old is J? This is a tough problem for a computer. Or consider this conversation: Human: Ducks fly. Chuck is a duck. Computer: Chuck can fly. Human: Chuck is dead. Computer: Chuck is dead and can fly.

That dead things don’t fly is not obvious from the laws of logic.
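The duck exchange is the classic illustration of non-monotonic reasoning: "ducks fly" is a default that later facts should be able to retract, which plain monotonic logic cannot do. A tiny, hypothetical sketch of the difference:

```python
def can_fly_naive(is_duck, is_dead):
    # Plain logic: ducks fly, period. New facts never retract a conclusion.
    return is_duck

def can_fly_default(is_duck, is_dead):
    # Default reasoning: ducks fly UNLESS an exception applies.
    return is_duck and not is_dead

print(can_fly_naive(True, True))    # True: Chuck is dead and can fly
print(can_fly_default(True, True))  # False: common sense restored
```

The hard part, of course, is that a real system would need thousands of such exceptions, one for every bit of common sense we take for granted.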

THE ENCYCLOPEDIA OF COMMON SENSE – Some have advocated creating an EOCS containing all the rules of CS. If CS programs are loaded into computers, intelligent conversation is much more possible. For example, computers need to know:

Nothing can be in 2 places at the same time
When humans die they aren’t born again
Dying is undesirable
Animals don’t like pain
Time advances at the same rate for everyone
When it rains people get wet – ETC.

As of 97, a project to do this had accumulated 10 million assertions. But the task is extraordinarily difficult. [again, wouldn’t it be simpler to take what we have, in this case brains with common sense, and build some of the computer ability into us thru genetic engineering?] For example, it took 3 months for a computer programmed with CS to understand “Napoleon died on St. Helena. Wellington was saddened.” In short, AI, in whatever form it takes, has a long way to go.

A WEEK IN THE LIFE IN 2020 – A face on the wall says, “Wake up, dear.” As you walk to the kitchen the appliances sense your presence: the coffee starts brewing and bread is toasted while your favorite Bach concerto plays softly. Molly has printed out a version of the paper tailored to what you are especially interested in by scanning the web, and as you leave the kitchen it reminds you that you need milk and that you’re out of computers. Before you leave you tell the robot to vacuum. You drive to work in your hybrid smart car, whizzing by a toll booth that scans it. At work you insert your wallet card into the computer to pay your bills, have a few video conferences, and head home. You get home and connect with your virtual dr, who tells you that he will zap out a few cancer cells with smart molecules so you don’t get cancer in 10 years. You head off to your party where Molly tells you who everyone is from a transmitter in your glasses. You drink too much and Molly won’t let you drive your car. [I guess it isn’t quite smart enough yet.] etc. etc.

MIT’s famed AI lab is “a high-tech version of Santa’s workshop.” K begins by focusing on research that is NOT interested in creating creatures who play chess but in INSECTOIDS and BUGBOTS, small insect-like creations with the ability to learn by bumping into things, crawling around, etc. The idea is that while insects can’t play chess, they get along quite well in your home.

This biology-based approach is termed the BOTTOM UP school. The inspiration for this school is evolution, which has produced complexity from simple structures. In short, the idea is that “learning is everything; logic and programming are nothing.” [This seems to overstate the case even in terms of evolution. We learn from our environment, but our programming—cognitive structures in place at birth—is clearly essential.] Still, AI may be immensely enriched by interplay with the insights of the biomolecular and quantum revolutions. K mentions how many physicists have moved from superstring theory and quantum gravity to brain research as an example of the interplay between the 3 big revolutions.

On the other side of the debate is the TOP DOWN school. The digital computer provides their model of thinking machines: “They assumed that thinking … would emerge fully developed from a computer.” Their strategy is to put (program) the rules of logic and intelligence directly into the machine, along with subroutines for speech, vision, etc., and you’d have an intelligent robot. Of course this is based on the idea that intelligence can be simulated by a Turing machine. K argues that the problem here is that the TD school underestimated “the enormity of writing down the complete road map of human intelligence.” The 2 camps are often at odds, one arguing that the BU robots may get from here to there but won’t know what to do when they get there, the other replying that the TD computers play chess but don’t know how to take a walk. Most feel that some combination of the 2 approaches will lead us onward.

PREPROGRAMMED ROBOTS – Since it may be 20 years or more until the creations at the MIT lab enter the marketplace, what we will see immediately are “increasingly sophisticated industrial robots.” From 2020 to 2050 we should enter the 4th phase of computing, “when intelligent automatons begin to walk the earth …” Beyond 2050 we will enter the 5th phase, of “robots of consciousness and self-awareness.”

To better understand all of this, consider the difference between industrial or remote-controlled robots—just preprogrammed windup toys—and the more sophisticated versions to come. K, quoting M, describes the evolution of these techs.

2000-10 – develop into reliable helpers in factories, hospitals, and homes. (Volks-robots)

2010-20 – these robots replaced by machines that learn from their mistakes.

2020-2050 ROBOTICS AND THE BRAIN – One of the big problems in robotics is PATTERN RECOGNITION. Robots can see, but don’t understand what they see. Part of the reason we have so much trouble duplicating pattern recognition is that our understanding of our brains is primitive.

We do know that our brain is layered, which reflects its evolutionary development. Nature preserves its older forms, creating a museum of our evolutionary history. The first layer of the brain is the “neural chassis,” controlling basic functions like respiration, heartbeat, and blood circulation. It consists of the brain stem, spinal cord, and midbrain. The second layer is the R-complex, controlling aggression, territoriality, and social hierarchies—the so-called “reptilian brain.” Surrounding this is the limbic system, which is found in mammals. It controls emotions, social behavior, smell, and memory. This was necessary because mammals live in complex social groups. Last is the neocortex, which controls reason, language, spatial perception, and other higher functions. Humans have wrinkles on the cerebral cortex, increasing its surface area.

Today’s robots possess only the first layer of brain, so there is a long way to go. But experts such as Miguel Virasoro, one of the most famous physicists in the world, believe that microchips will eventually approach the computing power of human brains. Right now the Cray-3 processes at 100 million bits per second, about the speed of a rat’s brain. Estimates are that the human brain calculates 1,000 times faster than this, but, if Moore’s law continues to hold, supercomputers should match humans around 2020 and desktops by 2040. But Virasoro objects to this whole TD approach, since he argues that the brain is NOT a Turing machine; it’s not even a computer. Thus, faster computers will not duplicate human brains.

V argues for his thesis as follows: The brain has about 200 billion neurons which fire 10 million billion times per second. The nerve impulses travel very slowly—300 feet per second—but the complexity of the brain’s neural connections compensates. Since each neuron is connected to 10,000 other neurons, the brain is a PARALLEL PROCESSOR which carries out trillions of operations per second and yet is powered by the energy of a lightbulb—that’s efficiency. Computers calculate at nearly the speed of light but perform ONE calculation at a time. The brain calculates slowly but performs trillions of computations per second. And while a brain can have a part of itself damaged and still function, a Turing machine can be destroyed by the loss of a single transistor. Since the brain is a complex neural net, the BU approach is the only one that will work.
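These figures hang together arithmetically: 200 billion neurons times 10,000 connections gives 2 x 10^15 synapses, and at an average firing rate of a few hertz that yields the quoted "10 million billion" (10^16) events per second. A back-of-envelope check; the 5 Hz average rate is my assumption, chosen only to show the text's numbers are mutually consistent:

```python
neurons = 200e9          # 200 billion neurons (the text's figure)
connections = 10_000     # synaptic connections per neuron (the text's figure)
firing_hz = 5            # assumed average firing rate per connection

synapses = neurons * connections       # 2e15 total connections
ops_per_second = synapses * firing_hz  # 1e16: "10 million billion"
print(f"{ops_per_second:.0e} operations per second")
```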

TALKING ROBOTS – NETtalk is a neural network that has learned to speak English almost from scratch. Rather than using the TD approach and stuffing a program with dictionaries, phonics rules, exceptions to grammar rules, etc., a simple neural net was created that learned from its mistakes. While the difference between real and model neurons is immense, the fact that a simple neural net can speak suggests “that perhaps human abilities can be simulated by electronics.”

ROBOTICS MEETS QUANTUM PHYSICS – There has been a migration from quantum physics to brain research. Physics is different than biology: the former looks for simple, elegant solutions while the latter is messy, inelegant, and full of dead ends. The former is based on universal laws; the latter has only the law of evolution as its universal law. Physicists wonder if there are any fundamental principles behind AI, like there are in physics, which led to questions like “Can a neuron in the brain be treated like an atom in a lattice?” [I think he’s looking for a unifying principle in the brain like QP provides the organizing principles in solid-state physics.]

Thus, while the TD school held that mind was a complicated program inserted into a computer, the BU school suggested that mind arose “from the quantum theory of mindless atoms, without any programs whatsoever!” The founding father of the neural net field, John Hopfield, summarized it as follows: Individual atoms in a solid can exist in a few discrete states—spin up or down—and neurons similarly either fire or don’t fire. In a quantum solid a universal principle states that atoms are arranged so that the energy is minimized. Might neural net circuits minimize their energy? If the answer is yes, the unifying principle behind neural nets is: “all the neurons in the brain would fire in such a way as to minimize the energy of the net.” Hopfield also found that neural nets behave a lot like brains. For example, even after the removal of neurons, the NN behaved pretty much the same; it seemed to have memories. The strangest finding was that NNs began to dream.
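The energy-minimization idea is easy to demonstrate with the classical Hopfield network that bears his name: store a pattern in Hebbian weights, corrupt a few bits, and let asynchronous updates roll the state downhill in energy until the memory is recalled. A minimal sketch (my own illustration, not code from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian weights: sum of outer products, zero diagonal."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W

def energy(W, s):
    """Hopfield energy of state s; each update can only lower it."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=100):
    """Asynchronous updates: flip one randomly chosen unit at a time."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

pattern = np.array([1, 1, 1, 1, -1, -1, -1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1          # corrupt two bits of the stored memory
noisy[5] *= -1
restored = recall(W, noisy)
```

Each single-unit update never increases the energy, so the net settles into a minimum, and the stored pattern sits at the bottom of one of those energy valleys; the "spurious memories" are extra, unintended minima that such nets also develop.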

WHAT ARE DREAMS? – JH believes “dreams are fluctuating energy states in a quantum mechanical system.” And just as we need to dream, especially after exhausting experiences, if NNs are filled with too many memories they malfunction from overload and begin to recall previously learned memories. Moreover, ripples begin to form that don’t correspond to memories at all, but rather to fragments of memories pieced together. These “spurious memories” correspond to dreams. A slight disturbance to this system allows it to again settle down to a state of deep energy minimization—sleep? After several episodes of dreaming and sleeping, the system “awakens,” i.e., stops malfunctioning. Like M, K believes the TD and BU schools will merge in 40 years or so, creating truly intelligent robots.

CAN ROBOTS FEEL? – K thinks we’ll find it easier to interact with Rs if they have some emotions and common sense. This may seem counter-intuitive—that clumps of metal can feel—but it isn’t impossible to do. M thinks providing a R with the capacity to care for its master, to want to make him/her happy, will increase the commercial success of Rs. And this is a kind of love. Jealousy, anger, laughter, and fear are similarly worthwhile. [And Rs should have more control over their emotions than we do, since ours are hard-wired deep into the limbic system.] But does this mean that Rs are self-aware?

BEYOND 2050: ROBOT CONSCIOUSNESS – By 2050 AI systems should have a modest range of emotions, and the internet will be a magic mirror, accessing the entire database of human knowledge and talking and joking with us. But are these AI systems conscious? [Hmm? I won’t be around in 2050—or if so I’ll be an old codger—so how does one act in such a way as to have a lasting impact? How can we make our lives significant? Maybe by creating the future, in your case, or by writing about the future in my case.] Anyway, the answer to this question is tough, inasmuch as we don’t know what consciousness is, but “many of the scientists who have dedicated their lives to building machines that think feel it’s only a matter of time before some form of consciousness is captured in the laboratory.” Of course NNs have already produced thinking machines—me and you. And this isn’t all that surprising, since consciousness is an “emergent” property, something that “happens naturally when a system becomes complex enough.” There are also theorists like Daniel Dennett, Herbert Simon, and Marvin Minsky who believe they have explained consciousness in some way. But all of them seem to share the view that consciousness arises out of the complex interactions of unconscious systems.

PET scans seem to bear these thinkers out. Con seems to be spread out over the brain, like a dance between its various parts, and thus there is only the illusion that there is a center of con. [A theme in Buddhism 2500 years ago.] Others believe that the brain generates lots of thoughts simultaneously and that con is just the thoughts that “win out.” At the other extreme are those who think robots will never be conscious. [We have already discussed many of these thinkers: Penrose, Searle, and McGinn.] K replies to the mysterians in a unique way.

“The problem with these criticisms is that trying to prove that machines can never be conscious is like trying to prove the nonexistence of unicorns.” Even if you could show that there are no unicorns in Texas or in the solar system, it is always possible that one exists somewhere. “Therefore, to say that thinking machines can never be built has, to me, no scientific content.” [Lots of issues here, especially the problem of induction.] But for now the question is undecidable; if and when we build them, then we can decide. [If we do it, we will know we can; but we can never know that we cannot.] K thinks con most likely exists in degrees, and will be created that way.

DEGREES OF CONSCIOUSNESS – The lowest level: monitoring your body and environment. Since Cs perform self-diagnostics and print error messages, they probably fall into this category. Plants are probably a little higher up, since they must react to changes in the environment. Machines with vision are probably on this scale too. The next level is “the ability to carry out well-defined goals.” The future Mars probe is an example. Higher still is the entire animal kingdom: goals are fixed and plans are implemented to carry them out, though most behavior is probably hard-wired. This level is probably the dominant one for humans too; most of our time is spent thinking about survival and reproduction. Of course there are thousands of levels here. The highest level of con is “the ability to set one’s goals, whatever they may be.” If Rs can do this, they are con. But what if our goals and our creations’ goals conflict?
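K's ladder of consciousness can be sketched as an ordered scale. A toy encoding (the level names are my paraphrase of the text; the numbers only record the ordering):

```python
from enum import IntEnum

class Consciousness(IntEnum):
    """K's degrees of consciousness, lowest to highest."""
    MONITOR = 1    # sense body/environment (self-diagnosing computers)
    REACT = 2      # respond to change (plants, machines with vision)
    GOALS = 3      # carry out well-defined goals (a Mars probe)
    PLANS = 4      # fixed goals, flexible plans (the animal kingdom)
    SET_GOALS = 5  # choose one's own goals (humans; conscious robots?)

# IntEnum members compare by level, so the ranking is directly testable.
print(Consciousness.SET_GOALS > Consciousness.MONITOR)  # True
```

The point of the encoding is just that "con exists in degrees": there is no single yes/no test, only positions on a scale with thousands of gradations between the named levels.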

Jim Hendler, Program Manager at the Defense Advanced Research Projects Agency (DARPA) and Professor of Computer Science at the University of Maryland, joined CNN.com to chat about artificial intelligence as part of our @2000 chat series with leading authors, historians, and experts contemplating life at the turn of the century.

Dr. Hendler was the recipient of a 1995 Fulbright Foundation Fellowship, is a member of the US Air Force Science Advisory Board, and is a Fellow of the American Association for Artificial Intelligence. He has authored over 100 technical papers in artificial intelligence, robotics, intelligent agents and high performance computing. Dr. Hendler joined the chat from Maryland on December 16, 1999. The following is an edited transcript.

Do you believe that machines will ever become “aware”? In any sense of the word?

Jim Hendler: This is a very deep question, and I have an answer which may seem odd. I don’t think it matters. Let me put it this way, is your cat aware? Yes, in some senses, no in others. So maybe the question is will we ever think that our machines are aware. I think the answer to this is yes, we’ll see more and better capabilities that we tend to attribute as awareness. That said, I don’t think machines will ever have “human awareness” in the philosophical sense of the term, but they’ll be awfully close some day.


Most of K’s chapter takes us from 2020, the end of the microchip era, through quantum computers. One possibility not investigated thus far is BIONICS. Can we interface directly with the brain? First, we need to show that neurons can grow on silicon and then connect them with neurons in living beings, including human neurons. Finally, we would have to decode the neurons that make up our spinal cord. In 1995 at the Max Planck Institute (where my best friend from grad school works) scientists did just this with a leech neuron and a silicon chip. They welded hardware to wetware. The way has been paved “to developing silicon chips that can control the firing of neurons at will, which in turn could control muscle movement.” The neurons of baby rats have also been grown on silicon surfaces. At Harvard Med School they have already begun to build a bionic eye, which should be able to restore vision to the blind and could help 10 million Americans. We should soon be able to make eyes that are better than the ape eyes we have, eyes that can see in the ultraviolet and infrared. Of course we could do similar things to the arms, legs, etc., allowing for superhuman feats, but then we would need a superhuman skeleton as well.

[In short, McGwire hits a baseball farther than Ruth did, and Woods hits a golf ball farther than Hogan did. We run faster and jump higher than we did just a few generations ago. Do you really think this will stop? PS – I was informed yesterday that some of you might be offended by our references to biological evolution for religious reasons. I didn’t mean to offend; it just didn’t occur to me. But even if you don’t like biological evolution, cultural evolution is all around you, and it goes so fast you can see it in a lifetime—or in 2 years if you go back to get your PC upgraded.]

Anyway, this merging of mind and machine will take, according to Ralph Merkle at Xerox PARC, a human-genome-project-scale effort TO MAP THE BRAIN NEURON BY NEURON, at a cost of $340 billion. The tech to begin will probably have to wait another decade. As for the distant future, K believes that all 3 revolutions will merge. QT will provide quantum transistors smaller than neurons. The computer revolution will give us NN as powerful as brains. And the biomolecular revolution will give us the power to replace our NN with synthetic ones, “thereby giving us a form of immortality.”

And since evolution favors organisms that are able to survive, a human/mechanical blend may be the best way to survive. And if we map every neuron in the brain, can we then give our brains immortal bodies? M thinks we will gradually transfer our cons to robotic bodies. How? [quote 116] K thinks of this as the next step in evolution, as does Minsky, who calls it “unnatural selection,” a process of deliberately replacing humans. [Fascinating. Darwin compared natural selection with artificial selection, something any country Englishman was familiar with. Now instead of selecting animals, we select not humans but CYBERBEINGS.] Of course the idea of being steel and plastic may not appeal, but a number of scientists reply that there is so much to know and learn that they could use the time well. [Naturally I agree. And what of boredom? People would get bored of boredom and decide to do something more interesting.]

Minsky says robots will inherit the earth and they will be our children. “We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called evolution. Our job is to see that all this work shall not end up meaningless.” [and if our best evidence is correct that universal death is the ultimate fate of the universe, then we might as well see that it isn’t all meaningless by trying to figure a way out.]








