Category Archives: Book Reviews-Futurism

Review of Michael Bess’, Our Grandchildren Redesigned: Life in the Bioengineered Society of the Near Future


(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, March 23, 2016.)

Vanderbilt University’s Michael Bess has written an extraordinarily thoughtful new book: Our Grandchildren Redesigned: Life in the Bioengineered Society of the Near Future. The first part of the book introduces the reader to the technologies that will enhance the physical, emotional, and intellectual abilities of our children and grandchildren: pharmaceuticals, bioelectronics, genetics, nanotechnology, robotics, artificial intelligence, synthetic biology, and virtual reality.

In the second part of the book Bess sets out the pros and cons of enhancement. The arguments against bioenhancement are that doing so: 1) plays god or interferes with nature; 2) destroys the qualities that make us human; 3) subverts dignity by commodifying human traits; 4) displays hubris and robs life of its meaning; and 5) rejects the limitations that define humanity. In these multiple ways enhancement leads to disaster. The arguments for bioenhancement are that doing so: 1) continues the long historical process of controlling ourselves and our world; 2) expresses our natural desires for new capabilities and richer experiences; 3) rejects the legacy of blind evolution and advocates directing the evolutionary process; 4) will reduce suffering and other constraints on our being; and 5) pursues our potential to be more than we are now, which is what gives life meaning.

Bess argues that the differences between the pro- and anti-enhancement camps reflect the tension between the Enlightenment and the conservative and romantic reactions against it. Thinkers like Voltaire, Diderot, Locke, and Kant emphasized progress and perfectibility, combined with an optimism about human social and moral evolution. Progress could continue indefinitely, as humans used reason to unlock their inner potential. But conservatives like Edmund Burke saw human nature as limited and more fixed. Instead of progressive social evolution, they saw recurring patterns of greed and violence. (The motive for conservatism that Bess omits, in my view, is religious opposition to future technologies.)

Bess suggests a via media between these two visions. Change, innovation and novelty characterize human nature as does the desire for continuity, preservation and order. Wisdom combines both: “hope …  tempered by humility … an attitude of openness to the future, chastened by the sobering lessons of past experience. The resulting moral maxim would be: embrace innovation, but proceed critically, incrementally, and cautiously in adopting it; explore new possibilities, but remain acutely cognizant of the historical track record as you go.”  Bess refers to his view as “chastened optimism.” (78)

This leads to various forms of enhancement considered on a case-by-case basis. But what moral framework should we use to make these assessments? Since human beings differ regarding their moral beliefs, Bess argues that the best we can do is combine the ancient concept of human flourishing with today’s positive psychology and the “capabilities approach” in economic theory. Together these two fields have reached a consensus about the personal traits and social conditions that contribute to human flourishing, and Bess believes that this provides a framework for assessing enhancement technologies. The key factors in human flourishing from the individual perspective are: security; dignity; autonomy; personal fulfillment; authenticity; and pursuit of practical wisdom. From a societal perspective the key factors are: fairness; interpersonal connectedness; civic engagement; and transcendence. This framework helps us answer questions about whether a particular enhancement will or will not contribute to human flourishing.

Other questions will also arise. Who gets enhanced? Will enhancements create a new caste system? What of those who reject enhancement? Bess thinks it is unlikely that first world democracies would tolerate a biological class system, and that violence may accompany the desire for universal access to enhancement technologies. As for those who reject these technologies, it is unclear whether the non-modified will be able to live peaceably beside the modified. But when large numbers of individuals choose to adopt bioenhancement, there will be tremendous pressure on the non-modified humans to augment their own capabilities, or they will be at a distinct disadvantage. And, given enough time, the modified and non-modified will be different species.

The third part of the book explores the more ethereal effects enhancements will have on individual humans. Questions will arise like: Do pharmaceuticals that enhance our experiences disconnect us from reality? Do enhancements mechanize the self by eliminating the messy and unpredictable aspects of human experience? And, if the answer to such questions is yes, are enhancements worth the price?

Similar questions arise regarding moral enhancement. For example, suppose we can give people a “morality pill” to increase the likelihood that they will make ethical choices. Such a pill wouldn’t have to completely override free will; rather it could increase the proclivity toward altruism. Bess says that we should reject this pill because intention is a large part of what makes an act moral, and the pill interferes with intentions. He believes that free will is worth the price of whatever negative outcomes follow from it. I think that this is a very large price to pay for an idea, free will, that may be illusory anyway. Still, Bess maintains that moral enhancement, to the extent it undermines free will, removes moral meaning from the world. Personally, I wouldn’t care about discarding the idea of moral meaning if a better world results. No doubt I am revealing my utilitarian preferences.

Other problems relating to human identity include: the possible monitoring and sharing of our intimate thoughts; the development of better virtual reality; and the extension of human lifespans. In addition, enhancement technologies will bring about unforeseen consequences. What will be the future of sex, food, privacy, the arts, and war? No doubt the future will be weird in ways that are, at present, inconceivable. But Bess thinks we should be scared. “If you think your iPhone is a transformative device, just wait til they turn on your brain-machine interface.” (174)

The last section of the book explores the ethical questions raised by the pursuit of human enhancement. How far should we go with enhancements? Which modifications should we embrace, and which should we reject? Which is generally better, modest or radical enhancement? What sorts of creatures do we want to become, and what sorts do we want to avoid becoming? Will we even have a say in determining such matters?

Bess doubts that we can “just say no” to these technologies, for even if we did, some would pursue them in a black market or in countries more receptive to such technologies. Thus complete relinquishment of enhancement technologies is a non-starter. So the real question is whether we want to pursue enhancements at a low level, improving today’s capabilities; at a mid-level, gaining capabilities beyond today’s levels but still recognizably human; or at a high level, gaining capabilities that would classify us as transhuman or posthuman.

It is the transhumanist vision that Bess especially fears. He argues that you cannot have a radically expanded cognitive architecture without transforming your identity. Such a consciousness would no longer be anything like the consciousness it used to be. Thus, to transform ourselves in this manner would be to terminate ourselves and become a new kind of sentient being. But we should not do this, Bess says, because of the potential for posthumans to harm others. “Until we know a great deal more than we do today about what such entities would be like … it would be the height of folly and irresponsibility to proceed with the project of creating them … The potential rewards are too uncertain, and the risks are far too great.” Furthermore, the societal consequences of some of us becoming posthuman might tear the fabric of civilization apart.

Here I think Bess’ arguments are less convincing. The transhumanist admits that the human species, as it is, must die in order for something better to replace it. But, the transhumanist would say, this is worth the risk because without radical transformation the species will almost certainly die out. Given the many extinction scenarios that accompany our journey into the future, the prospects for our continued existence seem meager. In that case even huge gambles are justified. And, if we turn our back on enhancements, we will almost certainly go extinct. The rewards of enhancements may be uncertain, but the risks of pursuing them are no greater than if we do nothing or only do a little.

Bess admits that the temptation to pursue radical enhancements will be great, but he counsels restraint. He hopes that as we adapt to low-level changes, we can gradually relax the constraints on mid-level and high-level ones.  He admits that enforcing these moratoriums would be difficult, and international cooperation would be hard to achieve, but arms control provides a model of how this might be accomplished. Still, Bess says, trying to control technologies that may spell our doom is worth the effort.

Bess’ book is one of the most thoughtful meditations on the future that I have read. Moreover, the book is carefully and conscientiously crafted, and meticulously argued. He is also impartial, giving a fair hearing to contradictory arguments, and wrestling fairly with the ideas as he encounters them. In the end, I would situate Bess’ views a bit toward the conservative side of the argument. While he is optimistic that we can muddle our way through the coming storm, which demands a large dose of optimism indeed, I sense more fear than excitement in his words. I think he overestimates how good life is now, and underestimates how good it could be.

Bess concludes that in the future: “the most potent deed of all will still take the form of a smile, a silent nod of empathy, a hand gently laid on someone’s arm. The merest act of kindness will still remain the Ultimate Enhancement.” This is touching, and it reminds us that remaking the world demands more than just engineering. But let us hope that Bess doesn’t mean this literally. Let us hope that in the future we can do more for human suffering than smile, nod and touch. Let us hope that someday there will be more than just kindness to ameliorate the reality of our descendants.

Review of Paul & Cox’s, Beyond Humanity: Cyberevolution and Future Minds

Gregory Scott Paul (1954 – ) is a freelance researcher, author and illustrator who works in paleontology, sociology and theology. Earl D. Cox is the founder of Metus Systems Group and an independent researcher. Their book, Beyond Humanity: Cyberevolution and Future Minds, is an assault on the mindset of those who oppose their view of scientific progress.

Paul and Cox argue that the universe, as well as all life and mind within it, have evolved over time from the bottom up. However, genes now have little to do with our evolution—science and technology now drive evolution at an accelerating rate. In the course of that evolution a general pattern emerges—more change in less time. While it took nature a long time to produce a bio-brain, technology will produce a cyber-brain much faster.

Despite its promises, people are ambivalent about science and technology (SciTech). They believe it will improve their lives, yet it has contributed to the death of millions. Its success has, in some sense, backfired. To be completely accepted, SciTech must solve the problems of suffering and death, which inevitably leads to questions about human nature. When taking a good look at human nature, the authors conclude that there is good news—we have brains that produce self-aware, conscious thought, which is itself connected with wonderful auditory and visual systems. However, our bodies need sleep, demand exercise, lust for fatty foods, and have limited mobility and strength.

The bad news continues if we consider the limited memories and storage capacity of our brain. We upload information slowly; often cannot control our underdeveloped emotions; are easily conditioned by all sorts of irrationalities as children; have difficulty unlearning old falsehoods as adults; don’t know how our brains work; often cannot change unwanted behavioral patterns; and brain chemicals control our moods—suggesting that we are much less free than we admit. Moreover, when individual minds join they are particularly destructive, often killing each other at astonishing rates. We are also vulnerable to: brainwashing, pain, sun, insects, viruses, trauma, broken bones, disease, infection, organ failure, paralysis, miniscule DNA glitches, cancer, depression, and psychosis. We degrade and suffer pain as we age, and we die without a backup system since evolution perpetuates our DNA not our minds. On the whole, this is not a pretty picture.

Disease and aging can be thought of as a war which matches our brains and computers versus the RNA and DNA computers of microbes and diseased cells. What is the best way to win this war? Regeneration from our DNA would only regenerate the body—the mind would still have died—so it is not a wholly promising approach. The way around this limitation is to have a nanocomputer within your brain that receives downloads from your conscious mind. If the mind storage unit receives continuous downloads you can always be brought back after death—you would be immortal. But why stop there? Why not just make an indestructible cyber-body and cyber-brain? Why not become immortal cyber-beings?

This all leads to questions about us becoming gods. The authors argue that the existence of gods is a science and engineering project—with sufficient technology we can create minds as powerful as those of our imaginary gods. Of course supernaturalism opposes this project, but SciTech will win the struggle, just as it has historically dismantled other supernatural superstitions one by one. Science will defeat supernaturalism by explaining it, by providing in reality what religions supply only in the imagination. When science conquers death and suffering, religion will die; religion’s fundamental reason for being—comforting our fear of death—will become irrelevant. As for the custodians of religion, the theologians, the authors issue a stern warning:

Theologians are like a group of Homo erectus huddling around a fire, arguing over who should mate with whom, and which clan should live in the green valley, while paying no mind to the mind-boggling implications of the first Homo sapiens … Theologians of the world … the affairs you devote so much attention to are in danger of having as much meaning as the sacrifices offered to Athena … science and technology may be about to deliver … minds [that] will no longer be weak and vulnerable to suffering, and they will never die out. The gods will soon be dead, but they will be replaced with real minds that will assume the power of gods, gods that may take over the universe and even make new universes. It will be the final and greatest triumph of science and technology over superstition.[i] 

Summary – We should proceed beyond humanity, overcoming the religious impulses which are the last vestige of superstition.

___________________________________________________________________

[i] Gregory Paul and Earl Cox, Beyond Humanity: CyberEvolution and Future Minds (Rockland, MA.: Charles River Media, 1996), 415.

Summary of Michio Kaku’s, Visions: How Science Will Revolutionize the 21st Century

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 23, 2016.)

Michio Kaku (1947 – ) is the Henry Semat Professor of Theoretical Physics at the City College of the City University of New York. He is the co-founder of string field theory and a popularizer of science. He earned his PhD in physics from the University of California, Berkeley in 1972.

In his book, Visions: How Science Will Revolutionize the 21st Century, Kaku sets out an overall picture of what is happening in science today that will revolutionize our future.[i] He begins by noting the three great themes of 20th century science—the atom, the computer, and the gene. The revolutions associated with these themes ultimately aim at a complete understanding of matter, mind, and life. Progress toward reaching our goals has been stunning—in just the past few years more scientific knowledge has been created than in all previous human history. We no longer need to be passive observers of nature; we can be its active directors. We are moving from discoverers of nature’s laws to their masters.

The quantum revolution spawned the other two revolutions. Until 1925 no one understood the world of the atom; now we have an almost complete description of matter. The basic postulates of that understanding are: 1) energy is not continuous but occurs in discrete bundles called “quanta;” 2) sub-atomic particles have both wave and particle characteristics; and 3) these wave/particles obey Schrodinger’s wave equation which determines the probability that certain events will occur. With the standard model we can predict the properties of things from quarks to supernovas. We now understand matter and we may be able to manipulate it almost at will in this century.

The computer revolution began in the 1940s. At that time computers were crude, but the development of the laser in the following decade set off exponential growth. Today there are tens of millions of transistors in an area the size of a fingernail. As microchips become ubiquitous, life will change dramatically. We used to marvel at intelligence; in the future we may create and control it.
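The “tens of millions of transistors” figure is consistent with simple exponential doubling. As a rough illustration (the starting count, target count, and two-year doubling period are my assumed rule-of-thumb figures, not numbers from Kaku’s text):

```python
import math

# Rough sanity check of exponential growth in transistor counts
# (illustrative figures, not from Kaku's text).
start = 1                    # a single transistor, late 1940s
target = 30_000_000          # "tens of millions" on a fingernail-sized chip
doubling_period = 2          # assumed years per doubling

doublings = math.log2(target / start)   # ~25 doublings needed
years = doublings * doubling_period     # ~50 years, i.e. late 1940s to late 1990s
print(round(doublings), round(years))
```

About 25 doublings over roughly 50 years is all it takes, which is why exponential growth so quickly outruns intuition.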

The bio-molecular revolution began with the unraveling of the double helix in the 1950s. We found that our genetic code was written on the molecules within the cells—DNA. The techniques of molecular biology allow us to read the code of life like a book. With the owner’s manual for human beings, science and medicine will be irrevocably altered. Instead of watching life we will be able to direct it almost at will.  

Hence we are moving from the unraveling stage to the mastery stage in our understanding of nature. We are like aliens from outer space who land and view a chess game. It takes a long time to unravel the rules and merely knowing the rules doesn’t make one a grand master. We are like that. We have learned the rules of matter, life, and mind but are not yet their masters. Soon we will be.

What really moves these revolutions is their interconnectivity, the way they propel each other. Quantum theory gave birth to the computer revolution via transistors and lasers; it gave birth to the bio-molecular revolution via x-ray crystallography and the theory of chemical bonding. While reductionism and specialization paid great dividends for these disciplines, intractable problems in each have forced them back together, calling for synergy of the three. Now computers decipher genes, while DNA research makes possible new computer architecture using organic molecules. Kaku calls this cross-fertilization—advances in one science boost the others along—and it keeps the pace of scientific advance accelerating.

In the next decade Kaku expects to see an explosion in scientific activity that will include growing organs and curing cancer. By the middle of the 21st century he expects to see progress in slowing aging, as well as huge advances in nanotechnology, interstellar travel, and nuclear fusion. By the end of the century we will create new organisms, and colonize space. Beyond that we will see the visions of Kurzweil and Moravec come to pass—we will extend life by growing new organs and bodies, manipulating genes, or by merging with computers.

Where is all this leading? One way to answer is to look at the labels astrophysicists attach to hypothetical civilizations based on the way they utilize energy—Type I, II, and III civilizations. Type I civilizations control terrestrial energy: they modify weather, mine the oceans, and extract energy from their planet’s core. Type II civilizations have mastered stellar energy: they use their sun to drive machines and to explore other stars. Type III civilizations manage interstellar energy, having exhausted their own star’s energy. Energy is available on a planet, in its star, and in its galaxy, and the type of a civilization corresponds to its power over those resources.

Based on a growth rate of about 3% a year in our ability to control resources, Kaku estimates that we might become a Type I civilization in a century or two, a Type II civilization in about 800 years, and a Type III civilization in about ten thousand years. At the moment, however, we are a Type 0 civilization, which uses the remains of dead plants and animals to power itself. (And to change our climate dramatically.) By the end of the 22nd century Kaku predicts we will be close to becoming a Type I civilization and will take our first steps into space. Agreeing with Kurzweil and Moravec, Kaku believes this will lead to a form of immortality when our technology replaces our brains, preserving them in robotic bodies or virtual realities. Evolution will have replaced us, just as we replaced all that died in the evolutionary struggle so that we could live. Our job is to push evolution forward.
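Kaku’s timescales follow from simple compound-growth arithmetic. A quick sketch (the energy ratios between civilization types are rough Kardashev-scale figures I am assuming for illustration, not numbers from Kaku’s text):

```python
import math

def years_to_scale(factor, annual_growth=0.03):
    """Years needed for energy use to grow by `factor` at a fixed annual rate."""
    return math.log(factor) / math.log(1 + annual_growth)

# Assumed (illustrative) energy ratios:
#   Type 0 -> Type I : ~1,000x  (humanity's ~10^13 W vs. a planet's ~10^16 W)
#   Type I -> Type II: ~10^10x  (a planet's share of sunlight vs. a star's output)
print(round(years_to_scale(1e3)))    # ~234 years: "a century or two"
print(round(years_to_scale(1e10)))   # ~779 years: "about 800 years"
```

Note that the Type III estimate is far longer than raw growth arithmetic would suggest, presumably because expansion across a galaxy is limited by interstellar distances rather than by energy growth alone.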

Summary – Knowledge of the atom, the gene, and the computer will lead to a mastery of matter, life, and mind.

_______________________________________________________________________

[i] Michio Kaku, Visions: How Science Will Revolutionize the 21st Century (New York: Anchor, 1998).

Four Recent Books About The Rise of the Machines

Prototype humanoid robots at the Intelligent Robotics Laboratory in Osaka, Japan

(This article was reprinted in Humanity+ Magazine, May 5, 2015)

There has been a lot of discussion about the rise of intelligent machines in the last year. Here are four recent books on the subject, with a brief description of each.

Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat:

At the heart of filmmaker Barrat’s book is the prophecy of the British mathematician I.J. Good, a colleague of Alan Turing. Good reasoned that once machines became more intelligent than humans, the machines would design still other machines, leading to an intelligence explosion that would leave humans far behind. Is this true? What Barrat finds is that almost half of the experts in the field expected intelligent machines within 15 years, and a large majority expected them shortly thereafter. Barrat concludes that this intelligence explosion will lead almost immediately to the singularity, although we have no idea what these machines will do.

Eclipse of Man: Human Extinction and the Meaning of Progress, by Charles T Rubin:

The political philosopher Rubin’s book explores the roots of our desire to use technology to alter the human condition. This urge has aided humans greatly in the past, but Rubin believes that technologically-minded idealists regard humanity as a problem. This is a mistake, he believes, and allowing machines to make our decisions is problematic. Instead of improving us, our technology might supplant us; it would be like a hostile alien invader.

Smarter Than Us: The Rise of Machine Intelligence, by Stuart Armstrong:

Armstrong is a fellow of the Future of Humanity Institute at Oxford who has thought hard about how superintelligence could be made to be “friendly.” He argues that it would be difficult to communicate with alien beings that have computer minds. We might ask it to rid the planet of violence, and it would rid the planet of us! The point is that values are hard to explain, since they are based on, among other things, common sense and unstated assumptions. To turn those values into programming code would be extraordinarily challenging, and to avoid catastrophe, we could not make mistakes.

In Our Own Image: Will Artificial Intelligence Save or Destroy Us?, by George Zarkadakis:

Most of our ideas about what it would be like to live with superintelligences come from science fiction, says the AI researcher George Zarkadakis. There can be little doubt that science fiction stories and metaphors have influenced us. As a result, we tend to anthropomorphize in order to make sense of our technology. We imagine robots like Schwarzenegger’s Terminator; we imagine robots and superintelligences with human qualities. But intelligent machines won’t be human: they will not share our evolutionary history, and they will not have brains like ours. So who knows what their goals and values will be, or how they will regard humans? Perhaps they will have no need for us.

All these books worry that intelligent machines might destroy us, even if only inadvertently. Moreover, many AI researchers aren’t even concerned about the problem of creating friendly AIs. In fact, a large part of AI research is dedicated to developing robots for war—to developing unfriendly AI. Things can surely go wrong if many of the machines we create are designed to kill humans. All of these authors believe that we should be worried.

 

Review of Damien Broderick’s, The Spike: How Our Lives Are Being Transformed By Rapidly Advancing Technologies

(This article was reprinted in Humanity+ Magazine, April 21, 2014.)

Broderick (B) argues that the future is opaque primarily because of the impending Singularity. “I use the term ‘singularity’ in the sense of a place where a model of physical reality fails. … In mathematics, singularities arise when quantities go infinite; in cosmology, a black hole is the physical, literal expression of that relativistic effect.” Trends in computer science and other sciences will converge somewhere between 2030 and 2100 to bring about a future unknown to us. We simply can’t know what lies beyond that time, since things will begin to change so radically. The basic cause of these changes will be the creation of superhuman intelligences, after which humanity itself will morph into the ‘transhuman’ and then the ‘posthuman.’ B argues “that we are on the edge of change comparable to the rise of human life on Earth.” Soon artificial intelligence (AI) will arrive, and our knowledge will no longer be limited by ape brains and senses. Then change will happen so fast that the upward slope of change will be nearly vertical—a singularity or spike. “The Spike is a kind of black hole in the future, created by runaway change and accelerating computer power.” How might all of this play out? B considers some alternative views of the future:

[A i] No Spike, because the sky is falling – In the late 20th century people feared nuclear war; now we seem more worried about ozone holes, pollution, and killer asteroids. In the longer term, consider the sun’s and our planet’s mortality, and the dynamics that will eventually kill everything on the planet. Eventually the whole universe will cease to be. But be optimistic. Suppose we survive as a species and as individuals. That doesn’t mean there must be a Spike, since AI and nanotechnology may prove tougher to make than we think, or maybe these technologies will be suppressed or their inventors killed. So we may survive with progress halted, which leads to another option:

[A ii] No Spike, steady as she goes – This forks into a variety of alternative future histories, including:

[A ii a] Nothing much ever changes ever again – This is what most people assume unless forced to think hard. The belief that things will pretty much stay the same is comforting, but it’s also an obvious illusion. Think about it; change isn’t going to just stop. So this leads to another option:

[A ii b] Things change slowly (haven’t they always?) – No. Things change very quickly and the pace of change is increasing. Moreover human nature itself will increasingly be changed. So maybe:

[A iii] Increasing computer power will lead to human-scale AI, and then stall – Perhaps there is a technical barrier to improvement, which would also explain why natural selection has not produced superintelligence. AI research might reach human-level intelligence and then just hit that barrier. But why should technology run out of steam in this way? Another option:

[A iv] Things go to hell, and if we don’t die we’ll wish we had – Technology contributes to exploiting the planet’s resources and polluting the environment. At present only the rich nations do this but what will happen when the Third World catches up?

B now considers the more likely scenarios:

“I assert that all of these No Spike options are of low probability, unless they are brought forcibly into reality by some Luddite demagogue using our confusions and fears against our own best hopes for local and global prosperity. If I’m right, we are then pretty much on course for an inevitable Spike. We might still ask: what … is the motor that will propel technological culture up its exponential curve?” Here are some paths to the Spike:

[B i] Increasing computer power will lead to human-scale AI, and then will swiftly self-bootstrap to incomprehensible superintelligence.

This is the ‘classic’ model of the singularity, and it may be the way it happens if we can extrapolate from Moore’s Law, as do Kurzweil, Moravec, Kaku, and others. Kurzweil expects a Spike around 2099, with fusion between humans and machines, uploads more numerous than the embodied, immortality, etc. Moravec expects humanlike competence in cheap computers around 2039, and a singularity within 50 years after that. The superstring physicist Michio Kaku believes humans will achieve a Type I civilization, “with planetary governance and technology able to control weather,” very soon, and a Type II civilization with command of the entire solar system in 800 to 2500 years. Ralph Merkle, a pioneer in nanotechnology, believes we will need nanotech to get to AI. But “the imperatives of the computer hardware industry will create nanoassemblers by 2020.” After that the Spike should be imminent. The mathematician Vernor Vinge believes the Singularity could be here in the next 20 years. Eliezer Yudkowsky of the Singularity Institute thinks that “once we have a human-level AI able to understand and redesign its own architecture, there will be a swift escalation into a Spike.”

[B ii] Increasing computer power will lead to direct augmentation of human intelligence and other abilities.

Why not just use the brain we’ve already got? As we learn more about neuroscience, it should be possible to augment the brain.  B thinks that “neuroscience and computer science will combine to map the processes and algorithms of the naturally evolved brain, and try to emulate it in machines. Unless there actually is a mysterious non-replicable spiritual component, a soul, we’d then expect to see a rapid transition to self-augmenting machines …”

[B iii] Increasing computer power and advances in neuroscience will lead to rapid uploading of human minds.

If [B ii] turns out to be easier than [B i], then rapid uploading technologies should follow shortly. “Once the brain/mind can be put into a parallel circuit with a machine as complex as a human cortex … we might expect a complete, real-time emulation of the scanned brain to be run inside the machine that’s copied it. Again, unless the `soul’ fails to port over along with the information and topological structure, you’d then find your perfect twin … dwelling inside the device … perhaps your upload twin would inhabit a cyberspace reality …Once personality uploading is shown to be possible and … enjoyable, we can expect … some people to copy themselves into cyberspace.” This looks like a Spike.

[B iv] Increasing connectivity of the Internet will allow individuals or small groups to amplify the effectiveness of their conjoined intelligence.

“Routine disseminated software advances will create … ever smarter and more useful support systems for thinking, gathering data, writing new programs—and the outcome will be a … surge into AI. …” This is the “Internet will just wake up” scenario, and B thinks it unlikely.

[B v] Research and development of microelectromechanical systems (MEMS) and fullerene-based devices will lead to industrial nanoassembly, and thence to `anything boxes’.

This is the path predicted by Drexler’s Foresight Institute and NASA, as well as by conservative chemists and scientists working in MEMS.

[B vi] Research and development in genomics (the Human Genome Project, etc) will lead to new `wet’ biotechnology, lifespan extension, and ultimately to transhuman enhancements.

“Biology, not computing! is the slogan. After all, bacteria, ribosomes, viruses, cells for that matter, already operate beautifully at the micro- and even the nano-scales. … Exploring those paths will require all the help molecular biologists can get from advanced computers, virtual reality displays, and AI adjuncts. … we can reasonably expect those paths to track right into the foothills of the Spike.” We discovered the structure of DNA only 50 years ago and now have the whole genome sequenced. It won’t be long, probably within the next 50 years, until we have a complete understanding of the way genes express themselves in tissues, organs, and behavior.

[C] The Singularity happens when we go out and make it happen.

“A self-improving seed AI could run glacially slowly on a limited machine substrate. The point is, so long as it has the capacity to improve itself, at some point it will do so convulsively, bursting through any architectural bottlenecks to design its own improved hardware, maybe even build it … what determines the arrival of the Singularity is just the amount of effort invested in getting the original seed software written and debugged …”

In the end B thinks it unlikely we’ll stop progress any time soon. There may be technical obstacles but history shows humans usually find a way around impediments. The biggest obstacle may be social protests.

“We’ve seen the start of a new round of protests … aimed at genetically engineered foods and work in cloning and genomics, but not yet targeted at longevity or computing research. It will come, inevitably. We shall see strange bedfellows arrayed against the machineries of major change. The only question is how effective its impact will be…. Cultural objections to AI might emerge, as venomous as yesterday’s and today’s attacks on contraception and abortion rights, or anti-racist struggles. If opposition to the Spike, or any of its contributing factors, gets attached to one or more influential religions, that might set back or divert the current … Despite these possible impediments to the arrival of the Spike, I suggest that while it might be delayed, almost certainly it’s not going to be halted. If anything, the surging advances I see every day coming from labs around the world convince me that we already are racing up the lower slopes of its curve into the incomprehensible … We will live forever; or we will all perish most horribly; our minds will emigrate to cyberspace, and start the most ferocious overpopulation race ever seen on the planet; or our machines will transcend and take us with them, or leave us in some peaceful backwater where the meek shall inherit the Earth. Or something else, something far weirder and… unimaginable …”