
Summary of Jaron Lanier’s “Who Owns the Future?”


Jaron Lanier’s recent book, Who Owns the Future? discusses the role that technology plays in both eliminating jobs and increasing income inequality. Early in that book, Lanier quotes from Aristotle’s Politics: “If every instrument could accomplish its own work … if … the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves.”

In other words, Aristotle saw that the human condition largely depends on what machines can and cannot do, and we can imagine that machines will do much more of our work in the future. How then would Aristotle respond to today’s technology? Would he advocate for a new economic system that met the basic needs of everyone, including those who no longer needed to work; or would he try to eliminate those who didn’t own the machines that run society? 

Surely this question has a modern ring. If, as Lanier suggests, only those close to the computers that run society have good incomes, then what happens to the rest of us? What happens to the steel mill and auto factory workers, to the butchers and bank tellers, and, increasingly, to the accountants, professors, lawyers, engineers, and physicians when artificial intelligence improves? (Lanier discusses how this will come about in his book.)

Lanier worries that automata, especially AI and robotics, create a situation where we don’t have to pay others. Why pay for maid service if you have a robotic maid, or for software engineers if computers are self-programming? Aristotle used music to illustrate the point. He said that it was terrible to enslave people to make music (playing instruments in his time was undesirable and labor intensive) but we need music so someone must be enslaved. If we had machines to make music or could get by without it, that would be better. Music was an interesting choice because now so many want to play it for a living, although almost no one makes money for their music through internet publicity. People may be followed online for their music or their blog, but they rarely get paid for it.

So what do we do? Should we eliminate or ignore the apparently unnecessary people? Should we retire to the country or the gated community where our apparent safety is ensured by a global military empire and its paid mercenaries? Where the first victims of society sleep on street corners, populate our prisons, endure unemployment, or involuntarily join our voluntary armies? (Remember, technology will eventually replace the accountants, attorneys, professors, and software engineers too!) Or should we recognize how we benefit from each other, from our diverse temperaments and talents, and from the safety and sustenance we can enjoy together?

So a question we now face is: what happens to the extra people—which will soon be almost all of us—when technology does all the work or the remaining work is unpaid? Are the rest of us killed or must we slowly starve? Surprisingly Lanier thinks these questions are misplaced. After all, human intelligence and human data drive the machines. So the issue is how to think about the work that machines can’t do.

I think that Lanier is on to something. We can think of the non-automated work as anything from essential to frivolous to harmful. If we think of it as frivolous, then so too are the people who produce it. If we don’t care about human expression in art, literature, music, theatre, sport, or philosophy, then why care about the people who produce it?

But even if machines write better music or poetry or blogs than human beings, we can still value human generated effort. Even if machines did all of society’s work we can still share the wealth with people who want to think and write and play music. Perhaps people just enjoy these activities. No human being plays chess as well as the best supercomputers, but people still enjoy playing chess; I don’t write as well as Carl Sagan did, but I still enjoy it.

I’ll go further. Suppose someone wants to sit on the beach, surf, ski, golf, smoke marijuana, or watch TV. What do I care? Maybe a society of contented people doing what they wanted would be better than one driven by the Protestant work ethic. A society of stoned, TV watching, skiers, golfers, and surfers would probably be a happier one than the one we live in now. (In fact, the happiest countries are those with strong social safety nets, the ones with generous vacation and leave policies.) And people in countries with strong social safety nets still write music and books, do science, volunteer, and visit their grandchildren. They aren’t drug addicts!

This is what I envision. A society where machines do all the work that humans don’t want to do and humans would express themselves however they like, without harming others. A society much more like Denmark and Norway, and much less like Alabama and Mississippi. Yes, I believe that all persons are entitled to the minimal amount it takes to live a decent human life. All of us would benefit from such an arrangement, as we all have much to contribute. I’ll leave you with some inspiring words from Eliezer Yudkowsky:

There is no evil I have to accept because ‘there’s nothing I can do about it’. There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye. I’m working to save everybody, heal the planet, solve all the problems of the world.

Review of Phil Torres’ “Morality, Foresight & Human Flourishing”

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, November 6, 2017.)

Phil Torres has just published an important new book: Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. Torres is the founding Director of the Project for Future Human Flourishing, which aims to both understand and mitigate existential threats to humanity. Martin Rees, the United Kingdom’s Astronomer Royal, writes the book’s foreword, where he states that the book “draws attention to issues our civilization’s entire fate may depend on.” (13) We would do well to take this statement seriously—our lives may depend on it.

The book is a comprehensive survey of existential risks such as asteroid impacts, climate change, molecular nanotechnology, and machine superintelligence. It argues that avoiding an existential catastrophe should be among our highest priorities, and it offers strategies for doing so. But are we especially likely to go extinct today? Is today a particularly perilous time? While Steven Pinker, in his book The Better Angels of Our Nature, argues that we live in the most peaceful time in human history, Torres replies, “we might also live in the most dangerous period of human history ever. The fact is that our species is haunted by a growing swarm of risks that could either trip us into the eternal grave of extinction or irreversibly catapult us back into the Stone Age.” (21) I think Torres has it right.

While we have lived in the shadow of nuclear annihilation for more than 70 years, the number of existential risk scenarios is increasing. How great a threat do we face? About 20% of the experts surveyed by the Future of Humanity Institute believe we will go extinct by the end of this century. Rees is even more pessimistic, arguing that we have only a 50% chance of surviving the century. And the doomsday clock reflects such warnings; it currently rests at two-and-a-half minutes to midnight. Compare all this to your chance of dying in an airplane crash or being killed by terrorists—the chance of either is exceedingly small.

Torres uses the Oxford philosopher Nick Bostrom’s definition of existential risk:

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development. (27)

Thus we can differentiate between total annihilation and existential risks that prevent us from achieving post-humanity. The latter type of risk includes: permanent technological stagnation; flawed technological realization; and technological maturity and subsequent ruination. Bostrom also distinguishes risks in terms of scope—from personal to trans-generational—and intensity—from imperceptible to terminal. Existential risks are both trans-generational and terminal.

As Torres notes, these risks are singular events that happen only once. Thus strategies to deal with them must be anticipatory, not reactionary, and this makes individual and governmental action to deal with such risks unlikely. Furthermore, the reduction of risks is a global public good, precisely the kind of good the market is poor at providing. So while future generations would pay astronomical sums to us to increase their chance of living happily in the future, we wouldn’t necessarily benefit from our efforts to save the future.

But why should we care about existential risks? Consider that while a pandemic killing 100 million would be a tragedy, as would the death of any subsequent 100 million people, the death of the last 100 million people on earth would be exponentially worse. Civilization is only a few thousand years old, and we may have an unimaginably long and bright future ahead of us, perhaps as post-humans. If so, total annihilation would be unimaginably tragic, ending a civilization perhaps destined to conquer both the stars and itself. Thus, the expected value of the future is astronomically high, a concept that Torres calls “the astronomical value thesis.” Torres conveys this point with a striking image.

the present moment …. is a narrow foundation upon which an extremely tall skyscraper rests. The entire future of humanity resides in this skyscraper, towering above us, stretching far beyond the clouds. If this foundation were to fail, the whole building would come crashing to the ground. Since this would be astronomically bad according to the above thesis, it behooves us to do everything possible to ensure that the foundation remains intact. The future depends crucially on the decisions we make today … and this is a moral burden that everyone should feel pressing down on their shoulders. (42)

As to why we should value future persons, Torres argues that considerations of one’s place in time have as little to do with moral worth as considerations of space—moral worth does not depend on what country you live in. Furthermore, discounting future lives is counter-intuitive from a moral point of view. Is a life now really worth the lives of a billion or a trillion future ones? It seems not. Clearly, living persons have no special claim to moral worth, and thus they should do what they can to reduce the possibility of catastrophe.

Next Torres addresses how cognitive biases distort thinking about the future—most people only think a few years in advance. Moreover, throughout history, humans have thought their generation was the last one. Even today, more than 40% of US Christians think that Jesus will probably or definitely return in their lifetimes, and many more Muslims believe the Mahdi will do so too. And, since these apocalyptic scenarios have not yet occurred, one might be skeptical of scientific worries about global catastrophic risks. The difference is that reason and evidence ground scientific concerns about an apocalypse, as opposed to being based in religious faith. We should heed the former and ignore the latter. However, Torres is aware that we live in an anti-intellectual age, especially in America, so reasonable concerns often go unheeded, and superstition rules the day.

Torres also hopes that understanding the etiology of existential risk will help us minimize the chance of catastrophe. To better understand causal risks, Torres distinguishes:

natural risks—super volcanoes, pandemics, asteroids, etc.
anthropogenic risks—nuclear weapons, nanotechnology, artificial intelligence, etc.
unintended risks—climate change, environmental damage, physics experiments, etc.
other risks—geoengineering, bad governance, unforeseen risks, etc., and
context risks—some combination of any of the above.

Next Torres proposes strategies for mitigating catastrophic threats. He divides these strategies as follows: 1) agent-oriented; 2) tool-oriented; and 3) other options. Agent-oriented strategies refer mostly to cognitive and moral enhancement of individuals, but also include reducing environmental triggers, creating friendly AI, and improving social conditions. Tool-oriented strategies focus on reducing the destructive power of our existing tools, altogether relinquishing future technologies that pose existential risks, or developing defensive technologies to deal with potential risks. Other strategies include space colonization, tracking near-earth objects, stratospheric geoengineering, and creating subterranean, aquatic, or extraterrestrial bunkers.

His discussion of cognitive and moral enhancements is particularly illuminating. Cognitive enhancements, especially radical ones like nootropics, machine-brain interfaces, genetic engineering and embryo selection, seem promising. Smart beings would be less likely to do stupid things, like destroy themselves, and the cognitively enhanced might discover threats from phenomena that unenhanced beings could never discern. The caveat is that smarter individuals are better at completing their nefarious plans, and cognitive enhancements would expedite the development of new technologies, perhaps making our situation more perilous.

Similar concerns surround the issue of biological moral enhancements. Why not augment the moral dispositions of empathy, caring, and justice through genetic engineering, neural implants, or nootropics? One problem is that the unenhanced may prove to be a great threat to the morally enhanced, so the system may only be safe if everyone is enhanced. Another problem is that the morally enhanced may become even more fervent in their pursuit of justice, at the expense of those who have a different view of what is just. In fact, concerns about justice often motivate immoral acts. So we can’t be sure that moral bioenhancements are the answer either.

My own view is that we will not survive without radical cognitive and moral enhancement. Reptilian brains and twenty-first-century technology are a toxic brew, and there is nothing sacrosanct about remaining modified monkeys. We should transform ourselves as soon as possible; otherwise, we will almost certainly be annihilated. This, I believe, is our only hope. Yes, this is risky, but there is no risk-free way to proceed.

Torres concludes by considering multiple a priori arguments which purportedly demonstrate that we considerably underestimate the possibility of our annihilation. I find these arguments compelling. Still, Torres doesn’t want to give in to pessimism. Instead, he recommends an active optimism which recognizes risks and tries to eliminate them. So while we may be intellectually pessimistic about the future, we can still work to save the world. As Torres concludes: “The invisible hand of time inexorably pushes us forward, but the direction in which we move is not entirely outside of our control.” (223)


This is a work of extraordinary depth and breadth, and it is carefully and conscientiously crafted. Its arguments are philosophically sophisticated, and often emotionally moving as well. Torres’ concern with preserving a future for our descendants is transparent and sincere, and readers come away from the work convinced that the problems of existential risk are of utmost significance. After all, existence is the prerequisite for … everything.

Yet reading the work fills me with sadness and despair too. For a possible, unimaginably glorious future seems to depend on the most reckless, narcissistic, uninformed, and vile among us. The future seems to rest primarily in the hands of those ignorant of both the delicate foundations of civilization that separate us from a warlike state of nature and the fragility of an ecosystem and biosphere that shield us from the cold, dark, emptiness of space. But, as Torres counsels, we must not give in to pessimism, and our optimism must not be passive. Instead, our desire to save the world must inspire action.

For in the end what keeps us going is the hope that the future might be better than the past. That, if anything, is what gives our lives meaning. If we do not serve as links in a golden chain leading onward and upward toward higher states of being and consciousness, then what is the point of our little lives? But to be successful in this quest, we must both survive and flourish, which is what Torres urges us to do. Let us hope we listen.

Review of Michael Bess’ “Our Grandchildren Redesigned: Life in the Bioengineered Society of the Near Future”

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, March 23, 2016.)

Vanderbilt University’s Michael Bess has written an extraordinarily thoughtful new book: Our Grandchildren Redesigned: Life In The BioEngineered Society Of The Near Future. The first part of the book introduces the reader to the technologies that will enhance the physical, emotional, and intellectual abilities of our children and grandchildren: pharmaceuticals, bioelectronics, genetics, nanotechnology, robotics, artificial intelligence, synthetic biology, and virtual reality.

In the second part of the book, Bess sets out the pros and cons of enhancement. The arguments against bioenhancement are that doing so: 1) plays god or interferes with nature; 2) destroys the qualities that make us human; 3) subverts dignity by commodifying human traits; 4) displays hubris and robs life of its meaning; and 5) rejects the limitations that define humanity. In these multiple ways enhancement leads to disaster. The arguments for bioenhancement are that doing so: 1) continues the long historical process of controlling ourselves and our world; 2) expresses our natural desires for new capabilities and richer experiences; 3) rejects the legacy of blind evolution and advocates directing the evolutionary process; 4) will reduce suffering and other constraints on our being; and 5) pursues our potential to be more than we are now, which is what gives life meaning.

Bess argues that the differences between the pro and anti-enhancement camps reflect the tension between conservative and romantic reactions to the Enlightenment. Thinkers like Voltaire, Diderot, Locke, and Kant emphasized progress and perfectibility combined with an optimism about human social and moral evolution. Progress could continue indefinitely, as humans used reason to unlock their inner potential. But conservatives like Edmund Burke saw human nature as limited and more fixed. Instead of progressive social evolution, they saw recurring patterns of greed and violence. (The motive for conservatism that Bess omits, in my view, is religious opposition to future technologies.)

Bess suggests a via media between these two visions. Change, innovation and novelty characterize human nature as does the desire for continuity, preservation, and order. Wisdom combines both: “hope …  tempered by humility … an attitude of openness to the future, chastened by the sobering lessons of past experience. The resulting moral maxim would be: embrace innovation, but proceed critically, incrementally, and cautiously in adopting it; explore new possibilities, but remain acutely cognizant of the historical track record as you go.”  Bess refers to his view as “chastened optimism.” (78)

This leads to various forms of enhancement considered on a case-by-case basis. But what moral framework should we use to make these assessments? Since human beings differ regarding their moral beliefs, Bess argues that the best we can do is combine the ancient concept of human flourishing with today’s positive psychology and the “capabilities approach” in economic theory. Together these two fields have reached a consensus about the personal traits and social conditions that contribute to human flourishing, and Bess believes that this provides a framework for assessing enhancement technologies. The key factors in human flourishing from the individual perspective are: security; dignity; autonomy; personal fulfillment; authenticity; and pursuit of practical wisdom. From a societal perspective, the key factors are: fairness; interpersonal connectedness; civic engagement; and transcendence. This framework helps us answer questions about whether a particular enhancement will or will not contribute to human flourishing.

Other questions will also arise. Who gets enhanced? Will enhancements create a new caste system? What of those who reject enhancement? Bess thinks it is unlikely that first world democracies would tolerate a biological class system, and that violence may accompany the desire for universal access to enhancement technologies. As for those who reject these technologies, it is unclear whether the non-modified will be able to live peaceably beside the modified. But when large numbers of individuals choose to adopt bioenhancement, there will be tremendous pressure on the non-modified humans to augment their own capabilities, or they will be at a distinct disadvantage. And, given enough time, the modified and non-modified will be different species.

The third part of the book explores the more ethereal effects enhancements will have on individual humans. Questions will arise like: Do pharmaceuticals enhance our experiences by disconnecting us from reality? Do enhancements mechanize the self by eliminating the messy and unpredictable aspects of human experience? And, if the answer to such questions is yes, then are enhancements worth the price?

Similar questions arise regarding moral enhancement. For example, suppose we can give people a “morality pill” to increase the likelihood that they will make ethical choices. Such a pill wouldn’t have to completely override free will; rather it could increase the proclivity toward altruism. Bess says that we should reject this pill because intention is a large part of what makes an act moral, and the pill interferes with intentions. He believes that free will is worth the price of whatever negative outcomes follow from it. I think that this is a very large price to pay for an idea, free will, that may be illusory anyway. Still, Bess maintains that moral enhancement, to the extent it undermines free will, removes moral meaning from the world. Personally, I wouldn’t care about discarding the idea of moral meaning if a better world results. No doubt I am revealing my utilitarian preferences.

Other problems relating to human identity include: the possible monitoring and sharing of our intimate thoughts; the development of better virtual reality; and the extension of human lifespans. In addition, enhancement technologies will bring about unforeseen consequences. What will be the future of sex, food, privacy, the arts, and war? No doubt the future will be weird in ways that are, at present, inconceivable. But Bess thinks we should be scared. “If you think your iPhone is a transformative device, just wait til they turn on your brain-machine interface.” (174)

The last section of the book explores the ethical questions raised by the pursuit of human enhancement. How far should we go with enhancements? Which modifications should we embrace and which should we reject? What is generally better, modest or radical enhancements? What sorts of creatures do we want to become, and what sorts do we want to avoid becoming? Will we even have a say in determining such matters?

Bess doubts that we can “just say no” to these technologies, for even if we did, some would pursue them in a black market or in countries more receptive to such technologies. Thus complete relinquishment of enhancement technologies is a non-starter. So the real question is whether we want to pursue enhancements at a low level, increasing today’s capabilities; at a mid level, capabilities beyond today’s levels but still recognized as human; or at a high level, capabilities that would classify us as transhuman or posthuman.

It is the transhumanist vision that Bess especially fears. He argues that you cannot have a radically expanded cognitive architecture without transforming your identity. Such a consciousness would no longer be anything like the consciousness it used to be. Thus, to transform ourselves in this manner would be to terminate ourselves and become a new kind of sentient being. But we should not do this, Bess says, because of the potential for posthumans to harm others. “Until we know a great deal more than we do today about what such entities would be like … it would be the height of folly and irresponsibility to proceed with the project of creating them … The potential rewards are too uncertain, and the risks are far too great.” Furthermore, the societal consequences of some of us becoming posthuman might tear the fabric of civilization apart.

Here I think Bess’ arguments are less convincing. The transhumanist admits that the human species, as it is, must die in order for something better to replace it. But, the transhumanist would say, this is worth the risk because without radical transformation the species will almost certainly die out. Given the many extinction scenarios that accompany our journey into the future, the prospects for our continued existence seem meager. In that case, even huge gambles are justified. And, if we turn our back on enhancements, we will almost certainly go extinct. The rewards of enhancements may be uncertain, but the risks of pursuing them are no greater than if we do nothing or only do a little.

Bess admits that the temptation to pursue radical enhancements will be great, but he counsels restraint. He hopes that as we adapt to low-level changes, we can gradually relax the constraints on mid-level and high-level ones.  He admits that enforcing these moratoriums would be difficult, and international cooperation would be hard to achieve, but arms control provides a model of how this might be accomplished. Still, Bess says, trying to control technologies that may spell our doom is worth the effort.

Bess’ book is one of the most thoughtful meditations on the future that I have read. Moreover, the book is carefully and conscientiously crafted and meticulously argued. He is also impartial, giving a fair hearing to contradictory arguments, and wrestling fairly with the ideas as he encounters them. In the end, I would situate Bess’ views a bit toward the conservative side of the argument. While he is optimistic that we can muddle our way through the coming storm, which demands a large dose of optimism indeed, I sense more fear than excitement in his words. I think he overestimates how good life is now, and underestimates how good it could be.

Bess concludes that in the future: “the most potent deed of all will still take the form of a smile, a silent nod of empathy, a hand gently laid on someone’s arm. The merest act of kindness will still remain the Ultimate Enhancement.” This is touching, and it reminds us that remaking the world demands more than just engineering. But let us hope that Bess doesn’t mean this literally. Let us hope that in the future we can do more for human suffering than smile, nod, and touch. Let us hope that someday there will be more than just kindness to ameliorate the reality of our descendants.

Review of Paul & Cox’s “Beyond Humanity: Cyberevolution and Future Minds”

Gregory Scott Paul (1954 – ) is a freelance researcher, author, and illustrator who works in paleontology, sociology, and theology. Earl D. Cox is the founder of Metus Systems Group and an independent researcher. Their book, Beyond Humanity: Cyberevolution and Future Minds, is an assault on the mindset of those who oppose their view of scientific progress.

Paul and Cox argue that the universe, as well as all life and mind within it, have evolved over time from the bottom up. However, genes now have little to do with our evolution—science and technology move the accelerating rate of evolution. In the course of that evolution a general pattern emerges—more change in less time. While it took nature a long time to produce a bio-brain, technology will produce a cyber-brain much faster.

Despite its promises, people are ambivalent about science and technology (SciTech). They believe it will improve their lives, yet it has contributed to the death of millions. Its success has, in some sense, backfired. To be completely accepted, SciTech must solve the problems of suffering and death, which inevitably leads to questions about human nature. When taking a good look at human nature, the authors conclude that there is good news—we have brains that produce self-aware, conscious thought, which is itself connected with wonderful auditory and visual systems. However, our bodies need sleep, demand exercise, lust for fatty foods, and have limited mobility and strength.

The bad news continues if we consider the limited memories and storage capacity of our brain. We upload information slowly; often cannot control our underdeveloped emotions; are easily conditioned by all sorts of irrationalities as children; have difficulty unlearning old falsehoods as adults; don’t know how our brains work; often cannot change unwanted behavioral patterns; and brain chemicals control our moods—suggesting that we are much less free than we admit. Moreover, when individual minds join they are particularly destructive, often killing each other at astonishing rates. We are also vulnerable to: brainwashing, pain, sun, insects, viruses, trauma, broken bones, disease, infection, organ failure, paralysis, miniscule DNA glitches, cancer, depression, and psychosis. We degrade and suffer pain as we age, and we die without a backup system since evolution perpetuates our DNA not our minds. On the whole, this is not a pretty picture.

Disease and aging can be thought of as a war which matches our brains and computers versus the RNA and DNA computers of microbes and diseased cells. What is the best way to win this war? Regeneration from our DNA would only regenerate the body—the mind would still have died—so it is not a wholly promising approach. The way around this limitation is to have a nanocomputer within your brain that receives downloads from your conscious mind. If the mind storage unit receives continuous downloads you can always be brought back after death—you would be immortal. But why stop there? Why not just make an indestructible cyber-body and cyber-brain? Why not become immortal cyber-beings?

This all leads to questions about us becoming gods. The authors argue that the existence of gods is a science and engineering project—we can create minds as powerful as those of our imaginary gods with sufficient technology. Of course supernaturalism opposes this project, but SciTech will win the struggle, just as it has historically dismantled other supernatural superstitions one by one. Science will defeat supernaturalism by explaining it, by providing in reality what religions supply only in the imagination. When science conquers death and suffering, religion will die; religion’s fundamental reason for being—comforting our fear of death—will become irrelevant. As for the custodians of religion, the theologians, the authors issue a stern warning:

Theologians are like a group of Homo erectus huddling around a fire, arguing over who should mate with whom, and which clan should live in the green valley, while paying no mind to the mind-boggling implications of the first Homo sapiens … Theologians of the world … the affairs you devote so much attention to are in danger of having as much meaning as the sacrifices offered to Athena … science and technology may be about to deliver … minds [that] will no longer be weak and vulnerable to suffering, and they will never die out. The gods will soon be dead, but they will be replaced with real minds that will assume the power of gods, gods that may take over the universe and even make new universes. It will be the final and greatest triumph of science and technology over superstition.[i] 

Summary – We should proceed beyond humanity, overcoming the religious impulses which are the last vestige of superstition.


[i] Gregory Paul and Earl Cox, Beyond Humanity: CyberEvolution and Future Minds (Rockland, MA: Charles River Media, 1996), 415.

Summary of Michio Kaku’s Visions: How Science Will Revolutionize the 21st Century

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 23, 2016.)

Michio Kaku (1947 – ) is the Henry Semat Professor of Theoretical Physics at the City College of New York of the City University of New York. He is a co-founder of string field theory and a popularizer of science. He earned his Ph.D. in physics from the University of California-Berkeley in 1972.

In his book, Visions: How Science Will Revolutionize the 21st Century, Kaku sets out an overall picture of what is happening in science today that will revolutionize our future.[i] He begins by noting the three great themes of 20th-century science—the atom, the computer, and the gene. The revolutions associated with these themes ultimately aim at a complete understanding of matter, mind, and life. Progress toward these goals has been stunning—in just the past few years more scientific knowledge has been created than in all previous human history. We no longer need to be passive observers of nature; we can be its active directors, moving from discovering nature’s laws to being the masters of those laws.

The quantum revolution spawned the other two revolutions. Until 1925 no one understood the world of the atom; now we have an almost complete description of matter. The basic postulates of that understanding are: 1) energy is not continuous but comes in discrete bundles called “quanta”; 2) subatomic particles have both wave and particle characteristics; and 3) these wave-particles obey Schrödinger’s wave equation, which determines the probability that certain events will occur. With the standard model, we can predict the properties of things from quarks to supernovas. We now understand matter, and we may be able to manipulate it almost at will in this century.
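The wave equation mentioned in the third postulate is, in its standard time-dependent form (spelled out here for reference, not taken from the book):

```latex
i\hbar\,\frac{\partial \Psi(\mathbf{r},t)}{\partial t}
  = \left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r},t) \right] \Psi(\mathbf{r},t)
```

Here the squared magnitude of the wave function, |Ψ(r, t)|², gives the probability density of finding the particle at position r at time t, which is the sense in which the equation “determines the probability that certain events will occur.”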

The computer revolution began in the 1940s. At that time computers were crude, but the subsequent development of the transistor and the integrated circuit started an exponential growth in computing power. Today there are tens of millions of transistors in an area the size of a fingernail. As microchips become ubiquitous, life will change dramatically. We used to marvel at intelligence; in the future, we may create and control it.

The bio-molecular revolution began with the unraveling of the double helix in the 1950s. We found that our genetic code is written on a molecule within our cells—DNA. The techniques of molecular biology allow us to read the code of life like a book. With this owner’s manual for human beings, science and medicine will be irrevocably altered. Instead of watching life, we will be able to direct it almost at will.

Hence we are moving from the unraveling stage to the mastery stage in our understanding of nature. We are like aliens from outer space who land and observe a chess game: it takes them a long time to unravel the rules, and merely knowing the rules does not make one a grandmaster. We are in a similar position—we have learned the rules of matter, life, and mind, but we are not yet their masters. Soon we will be.

What really moves these revolutions is their interconnectivity, the way they propel each other. Quantum theory gave birth to the computer revolution via transistors and lasers; it gave birth to the bio-molecular revolution via X-ray crystallography and the theory of chemical bonding. While reductionism and specialization paid great dividends for these disciplines, intractable problems in each have forced them back together, calling for a synergy of the three. Now computers decipher genes, while DNA research makes possible new computer architectures using organic molecules. Kaku calls this cross-fertilization—advances in one science boost the others along—and it keeps the pace of scientific advance accelerating.

In the next decade, Kaku expects to see an explosion in scientific activity that will include growing organs and curing cancer. By the middle of the 21st century, he expects to see progress in slowing aging, as well as huge advances in nanotechnology, interstellar travel, and nuclear fusion. By the end of the century, we will create new organisms and colonize space. Beyond that, we will see the visions of Kurzweil and Moravec come to pass—we will extend life by growing new organs and bodies, by manipulating genes, or by merging with computers.

Where is all this leading? One way to answer is to look at the labels astrophysicists attach to hypothetical civilizations based on how they utilize energy—Type I, II, and III civilizations. Type I civilizations control terrestrial energy: they modify the weather, mine the oceans, and extract energy from their planet’s core. Type II civilizations have mastered stellar energy: they use their sun to drive machines and explore other stars. Type III civilizations manage interstellar energy, having exhausted the energy of their own star. In short, energy is available at the scale of a planet, its star, and its galaxy, and a civilization’s type corresponds to its power over those resources.

Based on a growth rate of about 3% a year in our ability to control resources, Kaku estimates that we might become a Type I civilization in a century or two, a Type II civilization in about 800 years, and a Type III civilization in about ten thousand years. At the moment, however, we are a Type 0 civilization, which powers itself with the remains of dead plants and animals (and changes the climate dramatically in the process). By the end of the 22nd century, Kaku predicts, we will be close to becoming a Type I civilization and will take our first steps into space. Agreeing with Kurzweil and Moravec, Kaku believes this will lead to a form of immortality when our technology replaces our brains, preserving them in robotic bodies or virtual realities. Evolution will have replaced us, just as we replaced all those who died in the evolutionary struggle so that we could live. Our job is to push evolution forward.
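Kaku’s timetable is simple compound-growth arithmetic. As a rough illustration, a minimal sketch assuming the conventional Kardashev-scale power levels of about 10^16, 10^26, and 10^36 watts for Types I–III, plus roughly 10^13 watts for humanity today (none of these figures appear in the summary above), shows what a fixed 3% annual growth in energy control implies:

```python
import math

def years_to_grow(current_w, target_w, rate=0.03):
    """Years for energy use to climb from current_w to target_w at a fixed annual rate."""
    return math.log(target_w / current_w) / math.log(1 + rate)

# Round, illustrative power levels in watts; assumptions, not figures from the book.
levels = [
    ("Type 0 (today)", 1e13),
    ("Type I",         1e16),
    ("Type II",        1e26),
    ("Type III",       1e36),
]

# Print the time needed to climb from each stage to the next.
for (name_a, power_a), (name_b, power_b) in zip(levels, levels[1:]):
    print(f"{name_a} -> {name_b}: ~{years_to_grow(power_a, power_b):.0f} years")
```

At a constant 3%, the climb from Type I to Type II comes out near Kaku’s 800-year figure, and today to Type I takes a couple of centuries, matching his estimate. His much longer ten-thousand-year figure for Type III presumably reflects slower assumed growth once further expansion depends on interstellar travel, not a constant 3% rate.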

Summary – Knowledge of the atom, the gene, and the computer will lead to a mastery of matter, life, and mind.


[i] Michio Kaku, Visions: How Science Will Revolutionize the 21st Century (New York: Anchor, 1998).