How ‘Longtermism’ Is Helping the Tech Elite Justify Ruining the World
“It’s as if they want to build a car that goes fast enough to escape from its own exhaust.”
These words were the takeaway from a meeting that Douglas Rushkoff, who describes himself as a “Marxist media theorist,” had with five extremely powerful people in tech who were looking to survive the impending climate catastrophe. Rushkoff’s account of this in Survival of the Richest reveals something sinister: high-ranking elites in tech genuinely know what’s coming — but rather than stopping it, they’re planning on saving themselves from it, in the form of underground luxury bunkers and armed guards composed of former Navy SEALs.
The people who are almost directly responsible for the world’s biggest problems today — the climate crisis, the erosion of democratic institutions, and the sale of people’s attention for profit — don’t consider themselves accountable. More than that, accountability, or even fixing today’s problems, isn’t a desirable goal for them. At least not according to ‘longtermism’ — the philosophy undergirding much of tech’s trajectory and, if we’re not careful, our own destruction as a species.
It’s an idea that, given its forward-looking scope, seems ambitious and futuristic at first glance. But it’s one that believes in a future only for a few people — self-appointed representatives of humanity — at the cost of all the rest. And when billionaires begin to shack up underground or shoot off into space in a bid to colonize other planets, they’re not doing it for humanity as a whole; they’re doing it for a humanity that consists exclusively of their own ilk.
Stephen Hawking famously declared just a few years ago that “we are at the most dangerous moment in the development of humanity.” Even if we’re hardly doing anything about it, most are in some form of agreement that things are looking bleak, and we’re already seeing the effects of climate change in countries that have contributed the least to it. But if longtermism has its way, this is nothing more than a blip in humanity’s record. What makes the philosophy so dangerous is its ethical foundation, summed up by one of its early theoreticians, Nick Bostrom: “a non-existential disaster causing the breakdown of global civilisation is, from the perspective of humanity as a whole, a potentially recoverable setback.”
Bostrom’s work is heartily endorsed by tech giants with the resources and capacity not only to outrun any of the world’s current crises, but also to irreversibly influence the direction of our species as a whole. There are two key concepts in Bostrom’s argument: potential, and existential risk. Potential is what longtermists understand to be humanity’s capacity on a cosmic scale, trillions of years into the future; in this view, our potential is as vast as the universe itself. An existential risk, according to the longtermist ethic, is one that threatens to wipe out humanity and, with it, humanity’s potential. This is the most tragic outcome, and one that has to be avoided at all costs. Now, it’s possible that a few people — say, 15% of the world’s population — survive climate change. That doesn’t wipe out our potential even if it wipes out an unfathomable number of people — and so, according to longtermism, it isn’t an existential risk.
“The case for longtermism rests on the simple idea that future people matter…Just as we should care about the lives of people who are distant from us in space, we should care about people who are distant from us in time,” wrote William MacAskill, the public face of longtermism. His book was endorsed by Elon Musk, who cited MacAskill’s philosophy as a “close match” for his own. Musk also happens to be one of the biggest players in the privatized space race, and his vision to colonize Mars is one that is increasingly no longer a semi-ironic joke.
Longtermism’s roots are in a philosophy called effective altruism. “Effective altruism, which used to be a loose, Internet-enabled affiliation of the like-minded, is now a broadly influential faction, especially in Silicon Valley, and controls philanthropic resources on the order of thirty billion dollars,” notes a profile of MacAskill in The New Yorker.
There’s a web of influential figures writing the script of longtermism from various think tanks — together, they comprise an enterprise worth more than 40 billion dollars. Among them, some advocate for sex redistribution; others say that saving lives in rich countries is more important than saving lives in poor countries, as philosopher Émile P. Torres reported in Salon. Longtermism’s utopia is a future where human beings are engineered to perfection — leading to the creation of posthumans who possess only the best and most superior of traits, with no flaws at all. This is an idea rooted in eugenics, and it fuels the most civilizationally cynical ideas of who gets to be considered superior, and who qualifies as inferior enough to be flushed out of our collective gene pool. What holds all of these ideas together is the benign-sounding idea of longtermism — and it’s even creeping into the United Nations. “The foreign policy community in general and the … United Nations in particular are beginning to embrace longtermism,” noted an article in UN Dispatch.
But if it wasn’t already clear why the ideas themselves are dangerous, the people formulating them make it clear whose interests are at stake, and whose aren’t. “…contributors to fast-growing fields like the study of ‘existential risk’ or ‘global catastrophic risk’ are overwhelmingly white… Bostrom idealizes a future in which the continued evolution of ‘(post)humanity’ culminates in a form of ‘technological maturity’ that adheres to mainstream norms of white maleness: deeply disembodied, unattached to place, and dominant over, or independent from, ‘nature’,” note scholars Audra Mitchell and Aadita Chaudhury, who work in the areas of ethics, ecology, science, and technology.
Tech overlords figuring out ways to survive what they know is coming — and euphemistically refer to as an “event” — isn’t just a short-sighted way out of a mess they themselves are complicit in. It’s all part of the long game — perhaps the longest one we’ve ever envisioned.
Bostrom enjoys considerable ideological heft. As the director of Oxford’s Future of Humanity Institute (FHI), he is one of a growing group of philosophers who have their sights set on our future — on how much more we can think, accomplish, build, and discover on a previously unimaginable scale. “Anders Sandberg, a research fellow at the Future of Humanity Institute, told me that humans might be able to colonise a third of the now-visible universe, before dark energy pushes the rest out of reach. That would give us access to 100 billion galaxies, a mind-bending quantity of matter and energy to play with,” wrote Ross Andersen, who investigated the philosophies actively shaping the future of our civilization.
Tech is key to achieving this kind of potential, which is why people in tech are so heavily invested (and investing) in the idea. At the heart of the ethical deliberations is a cost-benefit analysis: how much is it acceptable to lose for the sake of ensuring the potential of future people — maybe even post-people? It’s the “greater good” dilemma that has already been used to justify devastating wars and policy decisions: “Now imagine what might be ‘justified’ if the ‘greater good’ isn’t national security but the cosmic potential of Earth-originating intelligent life over the coming trillions of years?” Torres asks.
“…the crucial fact that longtermists miss is that technology is far more likely to cause our extinction before this distant future event than to save us from it,” they add.
But the lack of transparency, the inordinate resources, and the power concentrated in the hands of the overwhelmingly cis, white, techno-optimistic men who hold the world’s future in their hands stand in the way of recognizing this crucial fact.