A colleague recently introduced me to Christopher Mason's new book, The Next 500 Years: Engineering Life to Reach New Worlds. Mason begins,
The fundamental thesis of this book is that the same innate, biological capacities of ingenuity and creation that have enabled humans to build rockets to reach other planets will also be needed for designing and engineering the organisms that will sustainably inhabit those planets.
The missions to other planets, as well as ideas for planetary-scale engineering, are a necessary duty for humanity and a logical consequence of our unique cognitive and technological capabilities. … As far as we know, humans alone possess an awareness of the possibility of our entire species' extinction and of the Earth's finite life span. Thus, we are the only ones who can actively assess the risks of (and prevent) extinction, not only for ourselves but for all other organisms as well. This is unusual. Most duties in life are chosen, yet there is one that is not. "Extinction awareness"—and the need to avoid extinction—is the only duty that is activated the moment it is understood.
This gives us an awesome responsibility, power, and opportunity to become the universe’s shepherds and guardians of all life-forms—quite literally a duty to the universe—to preserve life. … This duty is not only for us, but for any species or entities who can engineer themselves to avoid the end of the universe. Even if our species does not survive, this duty is passed on to the next sentience, which will undoubtedly arise.
According to Mason, since the Earth cannot survive the death of our sun, we must journey to the stars to have any chance of fulfilling our duty to preserve life. And this implies that we will eventually have to engineer life in order to survive on other worlds. We must direct and engineer life, or we will not survive. Mason hopes that we can begin by sending genetically engineered humans to establish an outpost on Mars and perhaps launch beyond our solar system by 2500. Eventually, our descendants will have to alter the structure of the universe itself to ensure the survival of life.
Mason’s proposals follow from what he calls “deontogenic ethics,” which is based on the following assumptions: 1) only some species or entities have an awareness of extinction; 2) existence is essential for any other goal/idea to be accomplished; thus 3) to accomplish any goal or idea, sentient species need to ensure their own existence and that of all other species that enable their survival.
This further implies, according to Mason, that “any act that consciously preserves the existence of life’s molecules (currently nucleic acid-based) across time is ethical. Anything that does not is unethical. Thus, preserving the existence of life is the highest duty …”
Brief Reflections
Any reader of this blog knows that I agree we should do our best to avoid destroying ourselves or being destroyed by one of many existential risks. And I believe wholeheartedly that we must enhance our moral and intellectual nature. Leaving the planet at some point will thus be necessary, and only enhanced beings will have much chance of surviving then.
While I do share worries about the potential pitfalls of using technology to enhance and transform human beings, I have fewer worries than most. I certainly understand that AI, robotics, genetic engineering, nanotechnology, etc. may lead to some terrible outcomes, but from my point of view, we are in the position of a football team that needs to throw a "Hail Mary" pass. I have no idea if I'm right about this, but if we don't act dramatically, then something will destroy us, and soon: pandemics, asteroids, nuclear war, climate change, etc. We must evolve quickly and radically to have any chance of survival.
Still, proceeding recklessly may bring about some kind of unimaginable hell, so we should be careful, but I really think our situation is so desperate that we must take big risks. Certainly, if we do nothing we are doomed. The chances we destroy ourselves are so great—Martin Rees thinks it’s about 50/50 in the next 100 years—that I think we need to take chances. Perhaps I’m too reckless or my former career as a poker player is influencing me too much. But there is no risk-free way to proceed and our survival depends on making dramatic changes to the nature we inherited from millions of years of evolution. At any rate, surviving and flourishing aren’t possible unless we transform ourselves. About that, I am fairly certain.
Regarding deontogenic ethics, I’d argue that survival is a necessary but not a sufficient condition in my ethics. There are fates worse than death. Perhaps I’m too sensitive to human suffering or I’ve read too much Schopenhauer but I sometimes wonder if it would be better if we went extinct. (And I’ve had a wonderful life.) I don’t say this lightly, nor to parade some pessimistic romantic sensibilities, but rather to acknowledge that the true horror of human existence can make you wonder if life is worth it.
To sum up, while I certainly want sentience to survive and flourish, I wouldn't say that survival is an absolute duty since, again, there are fates worse than death, both for individuals and species. So I disagree that anything that preserves life is good and anything that doesn't is bad. That's too strong a claim. That's why I'm a proponent of voluntary active euthanasia for human beings. (Cows, pigs, chickens, fish, etc., if they had the ability, should have chosen extinction long ago. Perhaps all non-human animals would be better off dead. Perhaps human beings would be too.)
I hope our posthuman descendants survive and flourish but if a living hell awaits us, then I hope no sentient beings are a part of that. Here’s to hoping that sentience lives long and prospers. Still …
But as for certain truth, no man has known it,
Nor shall he know it, neither of the gods
Nor yet of all the things of which I speak.
For even if by chance he were to utter
The final truth, he would himself not know it:
For all is but a woven web of guesses. ~ Xenophanes
An ethic that attributes excessive value to the preservation of life is flawed. Technology has extended our lifespans, and with that extension comes a massive increase in a whole raft of formerly rare diseases, overwhelmed pension and healthcare systems, and increased social dependence. THIS is an immediate issue, more so than genetic survival, nuclear war, climate change, or the death of the sun. How do we address excessive ageing? Technology offers cell therapy to rectify physical degeneration, a higher quality and even further extension of life, and MAYBE a population decline. Reaching out to the universe to SAVE mankind is to ignore that mankind is not ready and must first learn to solve an inherent flaw in mankind's genetics that elevates "to know" (technology/the future/utopia) over "to survive" (the gene/the present/the pragmatic).
Now you're cooking with gas! I will let technically educated readers make comprehensive comments, but I will say that not only is there, as Rees says, a 50-50 existential threat (or perhaps 51-50) from ourselves, but the supervolcano under Yellowstone is also slightly ominous (no imminent eruption), though we cannot predict much decades from now. Which means certain threats might be worse than we think they are.
But even more to be highlighted is how we'd have to change pretty much the entire way we think and act. Just for starters, you mentioned cows, chickens, and fish, reminding me that eating meat is not a positive for the biosphere. And killing animals often involves great pain for them, unlike human euthanasia. Such is only one random example of what could be changed. We'd have to change so much that we'd no longer be ourselves as we know ourselves to be.
Not even Scandinavia is genuinely civilized: it is less barbaric, yet still far from being truly civilized. And consider how violent the three superpowers are: America, China, Russia. When a guy walks into a supermarket and kills ten people, it is just another mass murder to which we have become somewhat desensitized. We know there'll be more.
Thirty years ago, when the Soviet Union ended, I asked a student what would happen. He replied simply but effectively that 'stupid' people wouldn't know what was going on and would become violent.
Maybe we should kill everyone when they reach 30?
There definitely is something in a small number of us that must go where no man has gone before, to forsake the comfort and security of the known for the adventure and curiosity of the unknown, because it is there–terra incognita.
Kinkaide's objections are valid, except when he writes about mankind not being ready to reach out to the universe. Not mankind; that is too broad. But some of mankind could, some of whom would go on suicide missions. We've had some casualties already: three were killed on Earth in their spacecraft.
And not merely humans; the ultimate purpose is not to be the murderous ape we have the gall to term 'humane'. My guess is that if Mars is colonized, robots would first have to terraform the planet; augmented humans and posthumans would arrive afterwards. Robots and posthumans could also mine asteroids for rare substances.
What Kinkaide writes about biological progress (let's call it that) is true, yet such is the price of a Darwinian world: a world of great loss but some gain. It has always been that way, for better and worse. We can't be objective about this, or perhaps we'd starve ourselves to death in sacrifice for others.
That’s so true
Thanks for the comments.