The Spike: How Our Lives Are Being Transformed By Rapidly Advancing Technologies
Broderick (B) argues that the future is opaque primarily because of the impending Singularity. “I use the term ‘singularity’ in the sense of a place where a model of physical reality fails. … In mathematics, singularities arise when quantities go infinite; in cosmology, a black hole is the physical, literal expression of that relativistic effect.” Trends in computing and other sciences will converge somewhere between 2030 and 2100 to bring about a future unknown to us. We simply can’t know what lies beyond that time, since things will begin to change so radically. The most important driver of the singularity will be the creation of superhuman intelligences, after which humanity itself will morph into ‘transhuman’ … and then ‘posthuman’ forms. B argues “that we are on the edge of change comparable to the rise of human life on Earth.” The basic cause of these changes is the emergence of superintelligences. Soon artificial intelligence (AI) will arrive and our knowledge will no longer be limited by ape brains and senses. Then change will happen so fast that the upward slope of change will be nearly vertical: a singularity or spike. “The Spike is a kind of black hole in the future, created by runaway change and accelerating computer power.” How might all of this play out? B considers some alternative views of the future:
[A i] No Spike, because the sky is falling – In the late 20th century people feared nuclear war; now we seem more worried about ozone holes, pollution, and killer asteroids. In the longer term, consider the mortality of the sun and of our planet, and the dynamics that will kill everything living on it. Eventually the whole universe will cease to be. But be optimistic and suppose we survive, as a species and as individuals. That still doesn’t mean there must be a Spike: AI and nanotechnology may prove tougher to build than we think, or these technologies may be suppressed or their inventors killed. So we may survive with progress halted, which leads to the next option:
[A ii] No Spike, steady as she goes – This forks into a variety of alternative future histories, including:
[A ii a] Nothing much ever changes ever again – This is what most people assume unless forced to think hard. The belief that things will pretty much stay the same is comforting, but it’s also an obvious illusion. Think about it: change isn’t going to just stop. So this leads to another option:
[A ii b] Things change slowly (haven’t they always?) – No. Things change very quickly, and the pace of change is increasing. Moreover, human nature itself will increasingly be changed. So maybe:
[A iii] Increasing computer power will lead to human-scale AI, and then stall – Perhaps there is a technical barrier to further improvement; after all, natural selection has not yet produced superintelligence. So AI research might reach human-level intelligence and then simply hit that barrier. But why should technology run out of steam in this way? Another option:
[A iv] Things go to hell, and if we don’t die we’ll wish we had – Technology contributes to exploiting the planet’s resources and polluting the environment. At present only the rich nations do this, but what will happen when the Third World catches up?
B now considers the more likely scenarios:
“I assert that all of these No Spike options are of low probability, unless they are brought forcibly into reality by some Luddite demagogue using our confusions and fears against our own best hopes for local and global prosperity. If I’m right, we are then pretty much on course for an inevitable Spike. We might still ask: what … is the motor that will propel technological culture up its exponential curve?” Here are some paths to the Spike:
[B i] Increasing computer power will lead to human-scale AI, and then will swiftly self-bootstrap to incomprehensible superintelligence.
This is the ‘classic’ model of the singularity, and it may be the way it happens if we can extrapolate from Moore’s Law, as do Kurzweil, Moravec, Kaku … and others. Kurzweil expects a Spike around 2099, with fusion between human and machine, uploads more numerous than the embodied, immortality, etc. Moravec expects humanlike competence in cheap computers around 2039, and a singularity within 50 years after that. The superstring physicist Michio Kaku believes humans will achieve a Type I civilization, “with planetary governance and technology able to control weather,” very soon, and a Type II civilization with command of the entire solar system in 800 to 2500 years. Ralph Merkle, a pioneer in nanotechnology, believes we will need nanotech to get to AI. But “the imperatives of the computer hardware industry will create nanoassemblers by 2020.” After that the Spike should be imminent. The mathematician Vernor Vinge believes the Singularity could be here in the next 20 years. Eliezer Yudkowsky of the Singularity Institute thinks that “once we have a human-level AI able to understand and redesign its own architecture, there will be a swift escalation into a Spike.”
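The arithmetic behind such timetables is simple doubling. As a rough illustration only, the sketch below projects when cheap hardware might cross a brain-scale threshold; the baseline year and capacity, the brain-equivalent figure of 10^16 operations per second, and the 18-month doubling time are all assumptions chosen for the example, not numbers taken from Broderick or the forecasters above.

    # Back-of-the-envelope Moore's Law extrapolation (all inputs are illustrative assumptions).
    import math

    start_year = 2000        # assumed baseline year
    start_ops = 1e9          # assumed ops/sec of a cheap machine at the baseline
    brain_ops = 1e16         # assumed "brain-equivalent" ops/sec; published estimates vary widely
    doubling_years = 1.5     # assumed price-performance doubling time

    doublings = math.log2(brain_ops / start_ops)          # doublings needed to close the gap
    crossover = start_year + doublings * doubling_years   # year the threshold is crossed

    print(f"{doublings:.1f} doublings -> crossover around {crossover:.0f}")

Shifting any of these inputs by modest amounts moves the projected crossover by years or decades, which is one reason the published forecasts differ so widely.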
[B ii] Increasing computer power will lead to direct augmentation of human intelligence and other abilities.
Why not just use the brain we’ve already got? As we learn more about neuroscience, it should be possible to augment the brain. B thinks that “neuroscience and computer science will combine to map the processes and algorithms of the naturally evolved brain, and try to emulate it in machines. Unless there actually is a mysterious non-replicable spiritual component, a soul, we’d then expect to see a rapid transition to self-augmenting machines …”
[B iii] Increasing computer power and advances in neuroscience will lead to rapid uploading of human minds.
If [B ii] turns out to be easier than [B i], then rapid uploading technologies should follow shortly. “Once the brain/mind can be put into a parallel circuit with a machine as complex as a human cortex … we might expect a complete, real-time emulation of the scanned brain to be run inside the machine that’s copied it. Again, unless the ‘soul’ fails to port over along with the information and topological structure, you’d then find your perfect twin … dwelling inside the device … perhaps your upload twin would inhabit a cyberspace reality … Once personality uploading is shown to be possible and … enjoyable, we can expect … some people to copy themselves into cyberspace.” This looks like a Spike.
[B iv] Increasing connectivity of the Internet will allow individuals or small groups to amplify the effectiveness of their conjoined intelligence.
“Routine disseminated software advances will create … ever smarter and more useful support systems for thinking, gathering data, writing new programs—and the outcome will be a … surge into AI. …” This is the ‘Internet will just wake up’ scenario, and B thinks it unlikely.
[B v] Research and development of microelectromechanical systems (MEMS) and fullerene-based devices will lead to industrial nanoassembly, and thence to ‘anything boxes’.
This is the path predicted by Drexler’s Foresight Institute and NASA, as well as by conservative chemists and scientists working in MEMS.
[B vi] Research and development in genomics (the Human Genome Project, etc.) will lead to new ‘wet’ biotechnology, lifespan extension, and ultimately to transhuman enhancements.
“Biology, not computing! is the slogan. After all, bacteria, ribosomes, viruses, cells for that matter, already operate beautifully at the micro- and even the nano-scales. … Exploring those paths will require all the help molecular biologists can get from advanced computers, virtual reality displays, and AI adjuncts. … we can reasonably expect those paths to track right into the foothills of the Spike.” The structure of DNA was discovered only 50 years ago, and the whole human genome has now been sequenced. It won’t be long, probably within the next 50 years, before we have a complete understanding of the way genes express themselves in tissues, organs, and behavior.
[C] The Singularity happens when we go out and make it happen.
“A self-improving seed AI could run glacially slowly on a limited machine substrate. The point is, so long as it has the capacity to improve itself, at some point it will do so convulsively, bursting through any architectural bottlenecks to design its own improved hardware, maybe even build it … what determines the arrival of the Singularity is just the amount of effort invested in getting the original seed software written and debugged …”
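A toy model, my illustration rather than anything in Broderick or Yudkowsky, makes the point concrete: if the rate of improvement is assumed to grow with capability itself, the curve blows up in finite time no matter how slowly the seed starts; shrinking the starting rate delays the Spike rather than preventing it.

    # Toy self-improvement loop (illustrative assumptions only): capability grows at a rate
    # proportional to its own square, dC/dt = k * C**2, which blows up at a finite time.
    def time_to_spike(k, capability=1.0, dt=0.1, ceiling=1e6):
        """Crude Euler integration until capability explodes past `ceiling`."""
        t = 0.0
        while capability < ceiling:
            capability += k * capability ** 2 * dt   # improvement speeds up as capability grows
            t += dt
        return t

    # Halving k (a slower seed) roughly doubles the time to blow-up but never prevents it.
    for k in (0.01, 0.005):
        print(f"k = {k}: spike reached at t ~ {time_to_spike(k):.1f}")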
In the end, B thinks it unlikely that we will stop progress any time soon. There may be technical obstacles, but history shows that humans usually find a way around impediments. The biggest obstacle may be social protest.
“We’ve seen the start of a new round of protests … aimed at genetically engineered foods and work in cloning and genomics, but not yet targeted at longevity or computing research. It will come, inevitably. We shall see strange bedfellows arrayed against the machineries of major change. The only question is how effective its impact will be…. Cultural objections to AI might emerge, as venomous as yesterday’s and today’s attacks on contraception and abortion rights, or anti-racist struggles. If opposition to the Spike, or any of its contributing factors, gets attached to one or more influential religions, that might set back or divert the current … Despite these possible impediments to the arrival of the Spike, I suggest that while it might be delayed, almost certainly it’s not going to be halted. If anything, the surging advances I see every day coming from labs around the world convince me that we already are racing up the lower slopes of its curve into the incomprehensible … We will live forever; or we will all perish most horribly; our minds will emigrate to cyberspace, and start the most ferocious overpopulation race ever seen on the planet; or our machines will transcend and take us with them, or leave us in some peaceful backwater where the meek shall inherit the Earth. Or something else, something far weirder and… unimaginable …”