Phil Torres has just published an important new book: Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. Torres is the founding Director of the Project for Future Human Flourishing, which aims to both understand and mitigate existential threats to humanity. Martin Rees, the United Kingdom's Astronomer Royal, writes the book's foreword, where he states that the book "draws attention to issues our civilization's entire fate may depend on." (13) We would do well to take this statement seriously—our lives may depend on it.
The book is a comprehensive survey of existential risks such as asteroid impacts, climate change, molecular nanotechnology, and machine superintelligence. It argues that avoiding an existential catastrophe should be among our highest priorities, and it offers strategies for doing so. But are we especially likely to go extinct today? Is today a particularly perilous time? While Steven Pinker, in his book The Better Angels of Our Nature, argues that we live in the most peaceful time in human history, Torres replies, “we might also live in the most dangerous period of human history ever. The fact is that our species is haunted by a growing swarm of risks that could either trip us into the eternal grave of extinction or irreversibly catapult us back into the Stone Age.” (21) I think Torres has it right.
While we have lived in the shadow of nuclear annihilation for more than 70 years, the number of existential risk scenarios is increasing. How great a threat do we face? About 20% of the experts surveyed by the Future of Humanity Institute believe we will go extinct by the end of this century. Rees is even more pessimistic, arguing that we have only a 50% chance of surviving the century. And the Doomsday Clock reflects such warnings; it currently rests at two-and-a-half minutes to midnight. Compare all this to your chance of dying in an airplane crash or being killed by terrorists—the chance of either is exceedingly small.
Torres uses the Oxford philosopher Nick Bostrom’s definition of existential risk:
An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development. (27)
Thus we can differentiate between total annihilation and existential risks that prevent us from achieving post-humanity. The latter type of risk includes: permanent technological stagnation; flawed technological realization; and technological maturity and subsequent ruination. Bostrom also distinguishes risks in terms of scope—from personal to trans-generational—and intensity—from imperceptible to terminal. Existential risks are both trans-generational and terminal.
As Torres notes, these risks are singular events that happen only once. Strategies to deal with them must therefore be anticipatory rather than reactive, which makes individual and governmental action against such risks unlikely. Furthermore, risk reduction is a global public good, precisely the kind of good markets are poor at providing. And while future generations would presumably pay us astronomical sums to increase their chances of living happily, they cannot, so we capture little of the benefit of our efforts to safeguard their future.
But why should we care about existential risks? Consider that while a pandemic killing 100 million people would be a tragedy, as would the death of any subsequent 100 million, the death of the last 100 million people on earth would be incomparably worse. Civilization is only a few thousand years old, and we may have an unimaginably long and bright future ahead of us, perhaps as post-humans. If so, total annihilation would be unimaginably tragic, ending a civilization perhaps destined to conquer both the stars and itself. Thus the expected value of the future is astronomically high, a claim Torres calls "the astronomical value thesis." He conveys the point with a striking image:
the present moment … is a narrow foundation upon which an extremely tall skyscraper rests. The entire future of humanity resides in this skyscraper, towering above us, stretching far beyond the clouds. If this foundation were to fail, the whole building would come crashing to the ground. Since this would be astronomically bad according to the above thesis, it behooves us to do everything possible to ensure that the foundation remains intact. The future depends crucially on the decisions we make today … and this is a moral burden that everyone should feel pressing down on their shoulders. (42)
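To put rough numbers on the astronomical value thesis, consider the standard back-of-the-envelope calculation (this is Bostrom's well-known arithmetic, not a figure from Torres's text): if humanity's future holds at least $10^{16}$ lives, then reducing existential risk by a mere one millionth of one percentage point saves, in expectation, a hundred million lives:

$$
\underbrace{10^{16}}_{\text{potential future lives}} \times \underbrace{10^{-6} \times 10^{-2}}_{\text{one millionth of one percentage point}} = 10^{8} \ \text{lives in expectation.}
$$

On such numbers, even tiny reductions in the probability of catastrophe dwarf almost any other project we might undertake.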
As to why we should value future persons, Torres argues that one's place in time has as little to do with moral worth as one's place in space—moral worth does not depend on what country you live in. Furthermore, discounting future lives is counter-intuitive from a moral point of view. Is one life now really worth more than a billion or a trillion future ones? It seems not. Clearly, living persons have no special claim to moral worth, and thus they should do what they can to reduce the possibility of catastrophe.
Next Torres addresses how cognitive biases distort thinking about the future—most people think only a few years ahead. Moreover, throughout history, humans have believed their generation would be the last. Even today, more than 40% of US Christians think Jesus will probably or definitely return in their lifetimes, and many Muslims believe the same of the Mahdi. And since these apocalyptic scenarios have never materialized, one might be skeptical of scientific warnings about global catastrophic risks too. The difference is that reason and evidence ground the scientific concerns about an apocalypse, whereas religious apocalypticism rests on faith. We should heed the former and ignore the latter. However, Torres is aware that we live in an anti-intellectual age, especially in America, so reasonable concerns often go unheeded and superstition rules the day.
Torres also hopes that understanding the etiology of existential risk will help us minimize the chance of catastrophe. To better understand the causes of these risks, he distinguishes:
natural risks—super volcanoes, pandemics, asteroids, etc.
anthropogenic risks—nuclear weapons, nanotechnology, artificial intelligence, etc.
unintended risks—climate change, environmental damage, physics experiments, etc.
other risks—geoengineering, bad governance, unforeseen risks, etc., and
context risks—some combination of any of the above.
Next Torres proposes strategies for mitigating catastrophic threats, which he divides as follows: 1) agent-oriented; 2) tool-oriented; and 3) other options. Agent-oriented strategies refer mostly to the cognitive and moral enhancement of individuals, but also include reducing environmental triggers, creating friendly AI, and improving social conditions. Tool-oriented strategies focus on reducing the destructive power of our existing tools, relinquishing future technologies that pose existential risks altogether, or developing defensive technologies to counter potential risks. Other strategies include space colonization, tracking near-earth objects, stratospheric geoengineering, and creating subterranean, aquatic, or extraterrestrial bunkers.
His discussion of cognitive and moral enhancement is particularly illuminating. Cognitive enhancements, especially radical ones like nootropics, brain-machine interfaces, genetic engineering, and embryo selection, seem promising. Smarter beings would be less likely to do stupid things, like destroy themselves, and the cognitively enhanced might detect threats that unenhanced beings could never discern. The caveat is that smarter individuals are also better at completing nefarious plans, and cognitive enhancement would expedite the development of new technologies, perhaps making our situation more perilous.
Similar concerns surround biological moral enhancement. Why not augment the moral dispositions of empathy, caring, and justice through genetic engineering, neural implants, or nootropics? One problem is that the unenhanced may prove a great threat to the morally enhanced, so the system may be safe only if everyone is enhanced. Another is that the morally enhanced may become even more fervent in their pursuit of justice, at the expense of those with a different view of what is just. In fact, concerns about justice often motivate immoral acts. So we can't be sure that moral bioenhancement is the answer either.
My own view is that we will not survive without radical cognitive and moral enhancement. Reptilian brains and twenty-first-century technology are a toxic brew, and there is nothing sacrosanct about remaining modified monkeys. We should transform ourselves as soon as possible; otherwise we will almost certainly be annihilated. This, I believe, is our only hope. Yes, this is risky, but there is no risk-free way to proceed.
Torres concludes by considering several a priori arguments that purportedly demonstrate we considerably underestimate the probability of our annihilation. I find these arguments compelling. Still, Torres doesn't want to give in to pessimism. Instead, he recommends an active optimism that recognizes risks and tries to eliminate them. So while we may be intellectually pessimistic about the future, we can still work to save the world. As Torres concludes: "The invisible hand of time inexorably pushes us forward, but the direction in which we move is not entirely outside of our control." (223)
Reflections
This is a work of extraordinary depth and breadth, and it is carefully and conscientiously crafted. Its arguments are philosophically sophisticated, and often emotionally moving as well. Torres’ concern with preserving a future for our descendants is transparent and sincere, and readers come away from the work convinced that the problems of existential risk are of utmost significance. After all, existence is the prerequisite for … everything.
Yet reading the work fills me with sadness and despair too. For a possible, unimaginably glorious future seems to depend on the most reckless, narcissistic, uninformed, and vile among us. The future seems to rest primarily in the hands of those ignorant of both the delicate foundations of civilization that separate us from a warlike state of nature and the fragility of the ecosystem and biosphere that shield us from the cold, dark emptiness of space. But, as Torres counsels, we must not give in to pessimism, and our optimism must not be passive. Instead, our desire to save the world must inspire action.
For in the end what keeps us going is the hope that the future might be better than the past. That, if anything, is what gives our lives meaning. If we are not links in a golden chain leading onward and upward toward higher states of being and consciousness, then what is the point of our little lives? But to be successful in this quest, we must both survive and flourish, which is what Torres urges us to do. Let us hope we listen.
I think there's a complicating factor here that needs to be addressed: seldom does a single event trigger a globally devastating result. For example, the worst event in terms of destruction of life was the Permian extinction, 250 million years ago, which wiped out the great majority of species. Because it was so long ago, scientists have not yet been able to pin down the cause (although there are certainly plenty of hypotheses from which to choose). But one recent development is evidence that the extinction did not happen quickly — it appears to have been spread out over thousands of years.
This is due to a weakness of complex systems. An ecosystem, as we all have been told, is a delicate balance of many different factors. What is seldom mentioned is that most ecosystems are pretty robust in responding to slow change. For example, the end of the last Ice Age produced drastic increases in temperature, but there was no mass extinction; even close to the receding glacial front, the ecosystem evolved to make optimal use of the resources and climate.
But complex systems do not respond well to rapid change; if change happens too quickly, the entire system can collapse.
Moreover, complex systems in complex environments tend to evolve in such a fashion as to more tightly integrate every element of the system. In a tropical jungle, the behavior of this insect affects the growth of that plant, which in turn alters the population of this herbivore, which ultimately affects that carnivore. If the system is stable, it makes more and more interconnections.
We see this happening on a much faster scale with economies. The whole course of economic development is increased productivity through greater specialization, which is only possible through either population growth or tighter economic links. Sure enough, the world economy is globalizing, which in turn is increasing the total wealth of humanity.
This economic system is extremely complex and highly interconnected, which means that it is extremely vulnerable to perturbations. For example, suppose that God sent the Angel of Death down (as punishment for electing Mr. Trump) to decimate America. One in every ten Americans is randomly struck down. The result would be the complete collapse of American society, requiring intervention by foreigners.
How so? Simply reducing economic output by 10% can cause stupendous problems in highly specialized economies. If you happen to lose the only guy in the factory who knows how to adjust the machine that makes Widget 2746A/6x, then all of a sudden you don't have any Widget 2746A/6x. Those widgets turn out to be crucial components of the pressurization systems of airliners, so all of a sudden, production of airliners grinds to a halt. If it were just Widget 2746A/6x, manufacturers could treat it as an emergency and get production up and running again, but this kind of thing is happening throughout the economy. Every single supply chain has a few broken links, and that brings down ALL of the supply chains.
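To make the arithmetic of this cascade concrete, here's a toy simulation (my own invented numbers, nothing from any real economy): suppose each supply chain depends on 50 irreplaceable specialists, and each person is struck down independently with probability 0.1.

```python
import random

# Toy model of supply-chain fragility (hypothetical parameters):
# each chain depends on `links_per_chain` irreplaceable specialists,
# and each specialist is independently lost with probability `p_loss`.

def fraction_of_chains_broken(num_chains=10_000, links_per_chain=50, p_loss=0.10):
    """Simulate random losses; count chains with at least one broken link."""
    broken = 0
    for _ in range(num_chains):
        # A chain breaks if any one of its critical specialists is lost.
        if any(random.random() < p_loss for _ in range(links_per_chain)):
            broken += 1
    return broken / num_chains

if __name__ == "__main__":
    simulated = fraction_of_chains_broken()
    # Analytically: P(chain survives) = (1 - p)^K,
    # so P(chain breaks) = 1 - 0.9**50, which is about 0.995.
    analytic = 1 - (1 - 0.10) ** 50
    print(f"simulated fraction of broken chains: {simulated:.3f}")
    print(f"analytic fraction of broken chains:  {analytic:.3f}")
```

On these assumptions, losing a random 10% of the workforce breaks roughly 99.5% of such chains. No single loss is catastrophic; the catastrophe is that nearly every chain has a broken link somewhere.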
Even in team operations, the cost of losing one member can be catastrophic. Suppose the 23 fellows who physically check the oil refinery to ensure that it's safe lose just two of their number, and one of the two had noted a problem the day before but failed to report it, because it was the end of his shift and he figured he'd report it the next day. Now nobody else knows about that leaky valve in the secondary condensation tower. Kaboom! Big fire, people killed — and one more refinery drops offline.
In many ways, the effect is rather like radiation sickness. If you drill a human being with a zillion gamma rays, those gamma rays tear through the cells, ripping up proteins and water molecules. The H+ and OH- ions that result wander through the cell until they bump into some complex protein and change it. Oh, darn, we just lost another cell.
We have zillions of cells in our bodies, so the loss of one cell is no big deal. But what happens when the body is hit by so many gamma rays that a significant portion of its cells are killed? The body sickens and dies.
My point is that even a disaster that doesn’t seem so bad could still trigger a collapse.
But it gets worse. I am fairly certain that anthropogenic global warming — AGW — will destroy civilization. But it won’t do so by drowning us in rising seas or baking us in searing heat. The real problem will come from the differential damages and differential demands for compensation.
Some countries will be literally wiped out by AGW. The low-elevation islands all over the world will be destroyed in hurricanes. Those islands will have to be abandoned. Low-lying deltas, including some of our best agricultural lands, will be ruined by saltwater intrusion.
So Country A is getting clobbered by AGW and prepares a geoengineering scheme to fix the problem. Geoengineering involves a number of untested schemes, such as injecting tiny particles into the stratosphere, seeding the oceans with iron to encourage algae blooms, etc. These are all considered dangerous because we really have no idea what their secondary effects will be.
But for Country A, this is an existential threat, so they proceed. But their chosen geoengineering scheme has the effect of creating droughts in Country B. So now Country B wants compensation.
Add to this the fact that people in developing nations are ALREADY demanding that developed nations — especially the USA — compensate them for damages, because those developed nations are primarily responsible for the net carbon emissions over history.
Now we have demands all over the globe, with every country considering itself a victim, and everybody suffering. In a situation like this, it won’t be long before the missiles start flying.
So, was civilization destroyed by AGW or nuclear weapons?
Unfortunately, I don't think the past is a good guide to the present and future. Since we live in the Anthropocene, we can destroy things quickly. At any rate, the problem of existential risk is frightening, and it makes me feel powerless and forlorn.