Daily Archives: October 21, 2017

Review of Phil Torres’ “Morality, Foresight & Human Flourishing”

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, November 6, 2017.)

Phil Torres has just published an important new book: Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. Torres is the founding Director of the Project for Future Human Flourishing, which aims to both understand and mitigate existential threats to humanity. Martin Rees, Astronomer Royal of the United Kingdom, writes the book’s foreword, where he states that the book “draws attention to issues our civilization’s entire fate may depend on.” (13) We would do well to take this statement seriously—our lives may depend on it.

The book is a comprehensive survey of existential risks such as asteroid impacts, climate change, molecular nanotechnology, and machine superintelligence. It argues that avoiding an existential catastrophe should be among our highest priorities, and it offers strategies for doing so. But are we especially likely to go extinct today? Is today a particularly perilous time? While Steven Pinker, in his book The Better Angels of Our Nature, argues that we live in the most peaceful time in human history, Torres replies, “we might also live in the most dangerous period of human history ever. The fact is that our species is haunted by a growing swarm of risks that could either trip us into the eternal grave of extinction or irreversibly catapult us back into the Stone Age.” (21) I think Torres has it right.

While we have lived in the shadow of nuclear annihilation for more than 70 years, the number of existential risk scenarios is increasing. How great a threat do we face? About 20% of the experts surveyed by the Future of Humanity Institute believe we will go extinct by the end of this century. Rees is even more pessimistic, arguing that we have only a 50% chance of surviving the century. And the Doomsday Clock reflects such warnings; it currently rests at two-and-a-half minutes to midnight. Compare all this to your chance of dying in an airplane crash or being killed by terrorists—the chance of either is exceedingly small.

Torres uses the Oxford philosopher Nick Bostrom’s definition of existential risk:

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development. (27)

Thus we can differentiate between total annihilation and existential risks that prevent us from achieving post-humanity. The latter type of risk includes: permanent technological stagnation; flawed technological realization; and technological maturity and subsequent ruination. Bostrom also distinguishes risks in terms of scope—from personal to trans-generational—and intensity—from imperceptible to terminal. Existential risks are both trans-generational and terminal.

As Torres notes, these risks are singular events that happen only once. Thus strategies to deal with them must be anticipatory, not reactionary, and this makes individual and governmental action to deal with such risks unlikely. Furthermore, the reduction of risk is a global public good, precisely the kind of good that markets are poor at providing. So while future generations would pay astronomical sums to us to increase their chance of living happily in the future, we wouldn’t necessarily benefit from our efforts to save the future.

But why should we care about existential risks? Consider that while a pandemic killing 100 million would be a tragedy, as would the death of any subsequent 100 million people, the death of the last 100 million people on earth would be exponentially worse. Civilization is only a few thousand years old, and we may have an unimaginably long and bright future ahead of us, perhaps as post-humans. If so, total annihilation would be unimaginably tragic, ending a civilization perhaps destined to conquer both the stars and itself. Thus, the expected value of the future is astronomically high, a concept that Torres calls “the astronomical value thesis.” Torres conveys this point with a striking image.

the present moment … is a narrow foundation upon which an extremely tall skyscraper rests. The entire future of humanity resides in this skyscraper, towering above us, stretching far beyond the clouds. If this foundation were to fail, the whole building would come crashing to the ground. Since this would be astronomically bad according to the above thesis, it behooves us to do everything possible to ensure that the foundation remains intact. The future depends crucially on the decisions we make today … and this is a moral burden that everyone should feel pressing down on their shoulders. (42)

As to why we should value future persons, Torres argues that considerations of one’s place in time have as little to do with moral worth as considerations of space—moral worth does not depend on what country you live in. Furthermore, discounting future lives is counter-intuitive from a moral point of view. Is a life now really worth the lives of a billion or a trillion future ones? It seems not. Clearly, living persons have no special claim to moral worth, and thus they should do what they can to reduce the possibility of catastrophe.

Next Torres addresses how cognitive biases distort thinking about the future—most people only think a few years in advance. Moreover, throughout history, humans have thought their generation was the last one. Even today, more than 40% of US Christians think that Jesus will probably or definitely return in their lifetimes, and many Muslims believe the Mahdi will do so too. And, since these apocalyptic scenarios have not yet occurred, one might be skeptical of scientific worries about global catastrophic risks. The difference is that scientific concerns about an apocalypse are grounded in reason and evidence, not religious faith. We should heed the former and ignore the latter. However, Torres is aware that we live in an anti-intellectual age, especially in America, so reasonable concerns often go unheeded, and superstition rules the day.

Torres also hopes that understanding the etiology of existential risk will help us minimize the chance of catastrophe. To better understand causal risks, Torres distinguishes:

natural risks—super volcanoes, pandemics, asteroids, etc.
anthropogenic risks—nuclear weapons, nanotechnology, artificial intelligence, etc.
unintended risks—climate change, environmental damage, physics experiments, etc.
other risks—geoengineering, bad governance, unforeseen risks, etc., and
context risks—some combination of any of the above.

Next Torres proposes strategies for mitigating catastrophic threats. He divides these strategies as follows: 1) agent-oriented; 2) tool-oriented; and 3) other options. Agent-oriented strategies refer mostly to cognitive and moral enhancement of individuals, but also to reducing environmental triggers, creating friendly AI, and improving social conditions. Tool-oriented strategies focus on reducing the destructive power of our existing tools, altogether relinquishing future technologies that pose existential risks, or developing defensive technologies to deal with potential risks. Other strategies include space colonization, tracking near-earth objects, stratospheric geoengineering, and creating subterranean, aquatic, or extraterrestrial bunkers.

His discussion of cognitive and moral enhancements is particularly illuminating. Cognitive enhancements, especially radical ones like nootropics, machine-brain interfaces, genetic engineering and embryo selection, seem promising. Smart beings would be less likely to do stupid things, like destroy themselves, and the cognitively enhanced might discover threats from phenomena that unenhanced beings could never discern. The caveat is that smarter individuals are better at completing their nefarious plans, and cognitive enhancements would expedite the development of new technologies, perhaps making our situation more perilous.

Similar concerns surround the issue of biological moral enhancements. Why not augment the moral dispositions of empathy, caring, and justice through genetic engineering, neural implants, or nootropics? One problem is that the unenhanced may prove to be a great threat to the morally enhanced, so the system may only be safe if everyone is enhanced. Another problem is that the morally enhanced may become even more fervent in their pursuit of justice, at the expense of those who have a different view of what is just. In fact, concerns about justice often motivate immoral acts. So we can’t be sure that moral bioenhancements are the answer either.

My own view is that we will not survive without radical cognitive and moral enhancement. Reptilian brains and twenty-first-century technology are a toxic brew, and there is nothing sacrosanct about remaining modified monkeys. We should transform ourselves as soon as possible; otherwise, we will almost certainly be annihilated. This, I believe, is our only hope. Yes, this is risky, but there is no risk-free way to proceed.

Torres concludes by considering multiple a priori arguments that purportedly demonstrate we considerably underestimate the probability of our annihilation. I find these arguments compelling. Still, Torres doesn’t want to give in to pessimism. Instead, he recommends an active optimism which recognizes risks and tries to eliminate them. So while we may be intellectually pessimistic about the future, we can still work to save the world. As Torres concludes: “The invisible hand of time inexorably pushes us forward, but the direction in which we move is not entirely outside of our control.” (223)


This is a work of extraordinary depth and breadth, and it is carefully and conscientiously crafted. Its arguments are philosophically sophisticated, and often emotionally moving as well. Torres’ concern with preserving a future for our descendants is transparent and sincere, and readers come away from the work convinced that the problems of existential risk are of utmost significance. After all, existence is the prerequisite for … everything.

Yet reading the work fills me with sadness and despair too. For a possible, unimaginably glorious future seems to depend on the most reckless, narcissistic, uninformed, and vile among us. The future seems to rest primarily in the hands of those ignorant of both the delicate foundations of civilization that separate us from a warlike state of nature and the fragility of an ecosystem and biosphere that shield us from the cold, dark, emptiness of space. But, as Torres counsels, we must not give in to pessimism, and our optimism must not be passive. Instead, our desire to save the world must inspire action.

For in the end what keeps us going is the hope that the future might be better than the past. That, if anything, is what gives our lives meaning. If we are not links in a golden chain leading onward and upward toward higher states of being and consciousness, then what is the point of our little lives? But to be successful in this quest, we must both survive and flourish, which is what Torres urges us to do. Let us hope we listen.