Martin Rees is one of the most distinguished theoretical astrophysicists in the world. He has held some of the most honored positions in science, among them his current title: England’s Astronomer Royal. It has been more than ten years since Rees published, and I first read, Our Final Hour: A Scientist’s Warning: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future in this Century—On Earth and Beyond. (Interestingly, the book’s title when published in England was Our Final Century: … I have heard the title was changed to make the issue more dramatic for American audiences.) I thought it was time to revisit it.
Rees begins by acknowledging technology shock: “21st century science may alter human beings themselves—not just how they live.”1 He accepts the common wisdom that the next 100 years will see changes that dwarf those of the past 1000 years, yet he is skeptical about the validity of specific predictions. He gives numerous examples of forecasts that were wrong, of forecasts that were nearly impossible to make because a particular technology seemingly came out of nowhere, and of forecasts that were never made at all—x-rays, nuclear energy, antibiotics, jet aircraft, computers, transistors, the internet, and more.
Despite these failed forecasts, Rees insists: “Over an entire century, we cannot set limits on what science can achieve, so we should leave our minds open, or at least ajar, to concepts that now seem on the wilder shores of speculative thought. Superhuman robots are widely predicted for mid-century. Even more astonishing advances could eventually stem from fundamentally new concepts in basic science that haven’t yet even been envisioned and which we as yet have no vocabulary to describe.”2 In this context Rees argues that nanotechnology will enable computing power to progress according to Moore’s law for the next few decades, by which time computers will match the processing power of the human brain.
Rees, a most sober prognosticator, accepts as reasonable speculative claims concerning the malleability of our physical and psychic selves; there is a real possibility that our descendants will be immortal post-humans. With the caveat that present trends continue unimpeded, there are good reasons to think that some people living now may live forever. He also accepts the plausibility of superintelligence: “A superintelligent machine could be the last invention humans ever make. Once machines have surpassed human intelligence, they could themselves design and assemble a new generation of even more intelligent ones. This could then repeat itself, with technology running towards a cusp, or ‘singularity,’ at which the rate of innovation runs away towards infinity.”3 Nonetheless he thinks such ideas exist on the fringes of science, bordering on science fiction, and he is not convinced that a singularity awaits our species even if science continues to advance unimpeded.
Thus I see Rees as forging a middle path. He recognizes the immense potential of scientific knowledge to transform reality, taking even the most fantastic predictions seriously, but he cautions us that some predictions are unlikely to ever come true. This brings us full circle back to the beginning of the book. Many forecasts will fail, many are impossible to make, and many things we don’t forecast will come to be. One of the main reasons for this, Rees says, is that technology doesn’t always proceed as fast as it would if there were only technical barriers to be overcome. There may be social, religious, political, ethical, or economic considerations that impede swift development of new technologies. In short, it’s hard to predict … especially the future!
Of course any prediction about the future comes with the caveat that we don’t destroy ourselves. Rees takes such extinction scenarios seriously. “Throughout most of human history, the worst disasters have been inflicted by environmental forces—floods, earthquakes, volcanoes, and hurricanes—and by pestilence. But the greatest catastrophes of the 20th century were directly induced by human agency…”4 What Rees has in mind are the nearly 200 million people killed by war, massacre, persecution, famine, and the like in the 20th century alone. (Despite such somber statistics, Steven Pinker has argued that we live more peacefully than ever before in human history. For more see his book: The Better Angels of Our Nature: Why Violence Has Declined.)
Rees lists multiple extinction scenarios: global nuclear war; nuclear mega-terror; biothreats (the use of chemical and biological weapons); laboratory errors (for example, accidentally creating a new virulent smallpox virus); nanotech “grey goo” (nanobots out of control that consume all organic matter—although this scenario has recently been downplayed by Eric Drexler and others); particle physics experiments gone awry; environmental or climate change; asteroid impacts; and super-eruptions from within the Earth that block the sun. He notes that most of the threats to our survival come from us. Rees himself has wagered $1000 on the following proposition: “That by the year 2020 an instance of bio-error or bio-terror will have killed a million people.”5
In the end Rees argues that one of two fates will befall humankind: 1) it will go extinct; or 2) it or its descendants will expand throughout space. Which will happen is unknown; after all, it is hard to predict the future. Yet Rees’s book serves as a reminder that the human future is largely up to us.
1. Our Final Hour, 9.
2. Our Final Hour, 16.
3. Our Final Hour, 19.
4. Our Final Hour, 25.
5. Our Final Hour, 74.