Summary of “Longtermism” and “Existential Risk”

Phil Torres recently published “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’” in Current Affairs. The essay’s thesis is that so-called rationalists have created a disturbing secular religion that appears to address humanity’s deepest problems but actually pursues the social preferences of elites. Here is a brief summary of the article. (I’ve published reviews of Torres’ past books here and here.)

Torres begins by noting that Skype co-founder Jaan Tallinn minimizes the risk of climate change. Why? Maybe he doesn’t think it will affect a wealthy person like himself. Perhaps. But, Torres argues, most likely a moral worldview called “longtermism” informs Tallinn’s thinking.

Longtermism doesn’t just propose that we care about future generations. Rather, it insists that we fulfill our potential, which includes “replacing humanity with a superior ‘posthuman’ species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings” who likely will live “inside high-resolution computer simulations.” Existential risks, then, are events that would destroy this transhumanist future.

Furthermore, “longtermism” has become one of the main ideas promoted by the “Effective Altruism” (EA) movement. EA argues that altruism should be driven not by trying to feel good but by data showing what actually does good. For example, computer science research may help avoid an AI-generated disaster and ultimately bring about far more value than, say, lifting a million people out of poverty today. (The utilitarian nature of EA is obvious.)

Now, in this light, consider climate change. It may cause great harm in the next few decades and centuries, but if it is the very far future that matters, then this near-term suffering pales in comparison to the potential good in the far future. As Nick Bostrom writes of the Holocaust, the World Wars, and the Black Death: “Tragic as such events are to the people immediately affected, in the big picture of things … even the worst of these catastrophes are mere ripples on the surface of the great sea of life.”

For longtermists, this implies “that even the tiniest reductions in ‘existential risk’ are morally equivalent to saving the lives of literally billions of living, breathing, actual people.” Torres finds this morally reprehensible.

To make this concrete, imagine Greaves and MacAskill in front of two buttons. If pushed, the first would save the lives of 1 million living, breathing, actual people. The second would increase by a teeny-tiny amount the probability that 10^14 currently unborn people come into existence in the far future. Because, on their longtermist view, there is no fundamental moral difference between saving actual people and bringing new people into existence, these options are morally equivalent.

And, according to longtermists, this is why we shouldn’t worry too much about climate change as long as we survive to fulfill our potential. (I think most futurists worry about the short-term implications of climate change regardless of whether there is a runaway greenhouse scenario. I also think it is difficult to know where to put your resources: how much toward mitigating the effects of climate change, and how much toward other scientific research? It is hard to know the optimal strategy.)

Torres finds all this appalling. He sees “longtermism as an immensely dangerous ideology.” It is a secular religion that worships future value and comes complete with its own doctrine of salvation—that we will live forever as posthumans. Moreover, numerous EAs have argued “that we should care more about people in rich countries than poor countries.” This is primarily because the workers in rich countries are more innovative and productive.

So while climate change is bad, longtermists generally worry more about AI. In fact, many longtermists not only

believe that superintelligent machines pose the greatest single hazard to human survival, but they seem convinced that if humanity were to create a “friendly” superintelligence whose goals are properly “aligned” with our “human goals,” then a new Utopian age of unprecedented security and flourishing would suddenly commence.

The idea is that a friendly superintelligence could eliminate or reduce all existential risks. As Bostrom writes, “One might believe … the new civilization would [thus] have vastly improved survival prospects since it would be guided by superintelligent foresight and planning.” Other futurists share this view—advancing AI is the best way to bring about a good future.

But Torres objects to the potential “for a genocidal catastrophe in the name of realizing astronomical amounts of far-future ‘value.’” He also objects to those who believe that longtermism “is so important that they have little tolerance for dissenters.” These true believers minimize both past and present suffering in the name of dogma. And he especially objects to the “ends justify the means” ethics that underlies all this. Again, he is not saying we shouldn’t care about the future, but that he doesn’t want to

genuflect before the altar of “future value” or “our potential,” understood in techno-Utopian terms of colonizing space, becoming posthuman, subjugating the natural world, maximizing economic productivity, and creating massive computer simulations …

Avoiding the untold suffering that climate change will cause “requires immediate action from the Global North. Meanwhile, millionaires and billionaires under the influence of longtermist thinking are focused instead on superintelligent machines that they believe will magically solve the mess that, in large part, they themselves have created.”

Brief Reply

I’m not convinced by Torres’ argument. I am a transhumanist who believes that we must use future technologies to have any chance at survival and flourishing for ourselves and our descendants. I agree we shouldn’t minimize past or present suffering, and that we should do everything possible to ameliorate or eliminate it, but it seems to me that we can mitigate the effects of climate change and advance scientific research in AI simultaneously.

I would also say that we would live in a better world if most of the GDP produced in the world was invested in scientific research, including a massive worldwide effort to subsidize the education of future scientists, pay them extraordinarily well, and educate the populace in scientific matters. As I’ve said many times in this blog … we either evolve or we will die.

(Note – Torres has further explained these themes in a new essay “Against Longtermism.”)


3 thoughts on “Summary of ‘Longtermism’ and ‘Existential Risk’”

  1. Hmm, I would think you would mostly agree with Torres. I think his main point is the ability of a Longtermist to justify horrendous actions in the present for the exclusive benefit of those in the far future, which we can’t be sure will even come to pass.

    For example, a Longtermist might argue we should divert all food and resources away from Africa and toward Western Europe (uh-oh), so that they never experience shortages, thus producing more “high-level” workers, thus getting us closer to AI, space-traveling humans, etc., and whatever the end goal is.

    The pitfalls of this are immoral, I would argue, and something we should be wary of.

    I think that’s a distinct ideology from desiring that our goals and taxes go to more scientific R&D and education and fewer bombs, bankers, wealthy, etc.

  2. Asteroids might have to be mined to obtain the resources necessary for these techno-possibilities. I don’t know; we’d have to ask experts—everyone at Reason and Meaning is a scientific layman.
