Category Archives: Futurism – Warnings

Survival of the Richest

Robots revolt in R.U.R., a 1920 play

Professor and media theorist Douglas Rushkoff recently penned an article that went viral,
“Survival of the Richest.” It outlines how the super wealthy are preparing for doomsday. Here is a recap followed by a brief commentary.

Rushkoff was recently invited to deliver a speech for an unusually large fee, about half his academic salary, on “the future of technology.” He expected a large audience but, upon arrival, he was ushered into a small room with a table surrounded by five wealthy men. But they weren’t interested in the future of technological innovation. Instead, they wanted to know things like where they should move to avoid the coming climate crisis, whether mind uploading will work and, most prominently, how to “maintain authority over [their] security force after the event?”

The Event. That was their euphemism for the environmental collapse, social unrest, nuclear explosion, unstoppable virus, or Mr. Robot hack that takes everything down.

This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader? The billionaires considered using special combination locks on the food supply that only they knew. Or making guards wear disciplinary collars of some kind in return for their survival. Or maybe building robots to serve as guards and workers — if that technology could be developed in time.

That’s when it hit me: At least as far as these gentlemen were concerned, this was a talk about the future of technology. Taking their cue from Elon Musk colonizing Mars, Peter Thiel reversing the aging process, or Sam Altman and Ray Kurzweil uploading their minds into supercomputers, they were preparing for a digital future that had a whole lot less to do with making the world a better place than it did with transcending the human condition altogether and insulating themselves from a very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. For them, the future of technology is really about just one thing: escape.

Rushkoff continues by expressing his disdain for transhumanism,

The more committed we are to this [transhuman] view of the world, the more we come to see human beings as the problem and technology as the solution. The very essence of what it means to be human is treated less as a feature than bug. No matter their embedded biases, technologies are declared neutral. Any bad behaviors they induce in us are just a reflection of our own corrupted core. It’s as if some innate human savagery is to blame for our troubles.

Ultimately, according to the technosolutionist orthodoxy, the human future climaxes by uploading our consciousness to a computer or, perhaps better, accepting that technology itself is our evolutionary successor. Like members of a gnostic cult, we long to enter the next transcendent phase of our development, shedding our bodies and leaving them behind, along with our sins and troubles.

The mental gymnastics required for such a profound role reversal between humans and machines all depend on the underlying assumption that humans suck. Let’s either change them or get away from them, forever.

It is such thinking that leads the tech billionaires to want to escape to Mars, or at least New Zealand. But “the result will be less a continuation of the human diaspora than a lifeboat for the elite.”

For his part, Rushkoff suggested to his small audience that the best way to survive and flourish after “the event” would be to treat other people well now. Better to act now to avoid social instability, environmental collapse, and all the rest than to figure out how to deal with them later. Their response?

They were amused by my optimism, but they didn’t really buy it. They were not interested in how to avoid a calamity; they’re convinced we are too far gone. For all their wealth and power, they don’t believe they can affect the future. They are simply accepting the darkest of all scenarios and then bringing whatever money and technology they can employ to insulate themselves — especially if they can’t get a seat on the rocket to Mars.

But for Rushkoff:

We don’t have to use technology in such antisocial, atomizing ways. We can become the individual consumers and profiles that our devices and platforms want us to be, or we can remember that the truly evolved human doesn’t go it alone.

Being human is not about individual survival or escape. It’s a team sport. Whatever future humans have, it will be together.

Reflections – I don’t doubt that many wealthy and powerful people would willingly leave the rest of us behind, or enslave or kill us all—a theme endorsed by Ted Kaczynski in The Unabomber Manifesto: Industrial Society and Its Future. But notice that these tendencies toward evil have existed independent of technology or any transhumanist philosophy—history is replete with examples of cruelty and genocide.

So the question is whether we can create a better world without radically transforming human beings. I doubt it. As I’ve said many times, our apelike brains—characterized by territoriality, aggression, dominance hierarchies, irrationality, superstition, and cognitive biases—combined with 21st-century technology are a lethal mix. And that’s why, in order to survive the many existential risks now confronting us and to have descendants who flourish, we should (probably) embrace transhumanism.

So while there are obvious risks associated with the power that science and technology afford, they are our best hope as we confront these “events.” If we don’t want our planet to circle our sun lifeless for the next few billion years, and if we believe that conscious life is really worthwhile, then we must work quickly to transform both our moral and intellectual natures. Otherwise, at most a few of us will survive.

Summary of Jaron Lanier’s “Who Owns the Future?”

Lanier blowing into a woodwind instrument with several chambers

Jaron Lanier’s recent book, Who Owns the Future?, discusses the role that technology plays in both eliminating jobs and increasing income inequality. Early in the book, Lanier quotes from Aristotle’s Politics: “If every instrument could accomplish its own work … if … the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves.”

In other words, Aristotle saw that the human condition largely depends on what machines can and cannot do, and we can imagine that machines will do much more of our work in the future. How then would Aristotle respond to today’s technology? Would he advocate for a new economic system that met the basic needs of everyone, including those who no longer needed to work; or would he try to eliminate those who didn’t own the machines that run society? 

Surely this question has a modern ring. If, as Lanier suggests, only those close to the computers that run society have good incomes, then what happens to the rest of us? What happens to the steel mill and auto factory workers, to the butchers and bank tellers, and, increasingly, to the accountants, professors, lawyers, engineers, and physicians when artificial intelligence improves? (Lanier discusses how this will come about in his book.)

Lanier worries that automata, especially AI and robotics, create a situation where we don’t have to pay others. Why pay for maid service if you have a robotic maid, or for software engineers if computers are self-programming? Aristotle used music to illustrate the point. He said that it was terrible to enslave people to make music (playing instruments in his time was undesirable and labor-intensive), but since we need music, someone must be enslaved. If we had machines to make music, or could get by without it, that would be better. Music is an interesting choice because so many people now want to play it for a living, although almost no one makes money from their music through internet publicity. People may be followed online for their music or their blog, but they rarely get paid for it.

So what do we do? Should we eliminate or ignore the apparently unnecessary people? Should we retire to the country or the gated community, where our apparent safety is ensured by a global military empire and its paid mercenaries? Where the first victims of society sleep on street corners, populate our prisons, endure unemployment, or involuntarily join our voluntary armies? (Remember, technology will eventually replace the accountants, attorneys, professors, and software engineers too!) Or should we recognize how we benefit from each other, from our diverse temperaments and talents, and from the safety and sustenance we can enjoy together?

So a question we now face is: what happens to the extra people—who will soon be almost all of us—when technology does all the work, or when the remaining work is unpaid? Will the rest of us be killed, or must we slowly starve? Surprisingly, Lanier thinks these questions are misplaced. After all, human intelligence and human data drive the machines. So the real issue is how to think about the work that machines can’t do.

I think that Lanier is on to something. We can think of non-automated work as anything from essential to frivolous to harmful. If we think of it as frivolous, then so too are the people who produce it. If we don’t care about human expression in art, literature, music, theatre, sport, or philosophy, then why care about the people who produce it?

But even if machines write better music or poetry or blogs than human beings, we can still value human-generated effort. Even if machines did all of society’s work, we could still share the wealth with people who want to think and write and play music. Perhaps people just enjoy these activities. No human being plays chess as well as the best supercomputers, but people still enjoy playing chess; I don’t write as well as Carl Sagan did, but I still enjoy it.

I’ll go further. Suppose someone wants to sit on the beach, surf, ski, golf, smoke marijuana, or watch TV. What do I care? Maybe a society of contented people doing what they want would be better than one driven by the Protestant work ethic. A society of stoned, TV-watching skiers, golfers, and surfers would probably be a happier one than the one we live in now. (In fact, the happiest countries are those with strong social safety nets and generous vacation and leave policies.) And people in countries with strong social safety nets still write music and books, do science, volunteer, and visit their grandchildren. They aren’t drug addicts!

This is what I envision: a society where machines do all the work that humans don’t want to do, and where humans express themselves however they like, without harming others. A society much more like Denmark and Norway, and much less like Alabama and Mississippi. Yes, I believe that all persons are entitled to the minimum it takes to live a decent human life. All of us would benefit from such an arrangement, as we all have much to contribute. I’ll leave you with some inspiring words from Eliezer Yudkowsky:

There is no evil I have to accept because ‘there’s nothing I can do about it’. There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye. I’m working to save everybody, heal the planet, solve all the problems of the world.

What Are Slaughterbots?


While lethal, fully autonomous weapons systems, or killer robots, aren’t yet able to select and attack targets without human control, a number of countries are developing such devices. And a number of organizations, including the Future of Life Institute, Human Rights Watch, and the International Committee for Robot Arms Control, have warned against their development.

For those interested, you can sign an open letter against weaponizing AI at: http://autonomousweapons.org/

Also, Professor Stuart Russell of the computer science department at UC-Berkeley gave a TED Talk a few months ago which explored the issue:

And the science fiction writer Daniel Suarez explored the same theme in this TED talk:

Finally, consider that the political party in the USA most associated with toughness and defense is the same one that is anti-science and anti-intellectual. It doesn’t promote our security to undermine the science and technology that is the source of US military power. If other countries develop AI, robots, and autonomous weapons first, then nuclear weapons may become obsolete. So it is counterproductive for a country that wants to dominate others, or simply defend itself, to make it almost impossible for bright foreign students to get H-1B visas. Of course, the primary enemies of the USA today are domestic ones.

Summary of Bill Joy’s “Why the Future Doesn’t Need Us”


Bill Joy (1954 – ) is an American computer scientist who co-founded Sun Microsystems in 1982 and served as chief scientist at the company until 2003. His now-famous Wired magazine essay “Why the Future Doesn’t Need Us” (2000) sets forth his deep concerns over the development of modern technologies.[i]

Joy traces his concern to a discussion he had with Ray Kurzweil at a conference in 1998. He had read an early draft of Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence and found it deeply disturbing. Subsequently, he encountered arguments by the Unabomber Ted Kaczynski. Kaczynski argued that if machines do all of society’s work, as they inevitably will, then we can: a) let the machines make all the decisions; or b) maintain human control over the machines.

If we choose “a,” then we are at the mercy of our machines. It is not that we would give them control or that they would seize it; rather, we might become so dependent on them that we would have to accept their commands. Needless to say, Joy doesn’t like this scenario. If we choose “b,” then control would be in the hands of an elite, and the masses would be unnecessary. In that case, the tiny elite might: 1) exterminate the masses; 2) reduce their birthrate so that they slowly go extinct; or 3) become their benevolent shepherds. The first two scenarios entail our extinction, but even the third option is bad. In this last scenario, the elite would see to it that all the physical and psychological needs of the masses are met, while at the same time engineering the masses to sublimate their drive for power. In this case, the masses might be happy, but they would not be free.

Joy finds these arguments both convincing and troubling. About this time Joy read Hans Moravec’s Robot: Mere Machine to Transcendent Mind where he found predictions similar to Kurzweil’s. Joy found himself especially concerned by Moravec’s claim that technological superiors always defeat technological inferiors, as well as his claim that humans will become extinct as they merge with the robots. Disturbed, Joy consulted other computer scientists who basically agreed with these predictions.

Joy’s worries focus on the transforming technologies of the 21st century—genetics, nanotechnology, and robotics (GNR). What is particularly problematic about them is their potential to self-replicate. This makes them inherently more dangerous than 20th-century technologies—nuclear, biological, and chemical weapons—which were expensive to build and required rare raw materials. By contrast, 21st-century technologies allow small groups, or even individuals, to bring about massive destruction. Joy accepts that we will soon have the computing power necessary to implement some of the scenarios envisioned by Kurzweil and Moravec, but he worries that we overestimate our design abilities. Such hubris may lead to disaster.

For example, robotics is primarily motivated by the desire for immortality—by downloading ourselves into robots. But Joy doesn’t believe that we would still be human after the download or that the robots would be our children. As for genetic engineering, it will create new crops, plants, and eventually new species, including many variations of the human species, but Joy fears that we do not know enough to conduct such experiments safely. And nanotechnology confronts the so-called “gray goo” problem—self-replicating nanobots running out of control. In short, we may be on the verge of killing ourselves! Is it not arrogant, he wonders, to design a robot replacement species when we so often make design mistakes?

Joy concludes that we ought to relinquish these technologies before it’s too late. Yes, GNR may bring happiness and immortality, but should we risk the survival of the species for such goals? Joy thinks not.

Summary – Genetics, nanotechnology, and robotics are too dangerous to pursue; we should abandon them. For a critique of these views, see my peer-reviewed piece “Critique of Bill Joy’s ‘Why the Future Doesn’t Need Us.’”

________________________________________________________

[i] Bill Joy, “Why The Future Doesn’t Need Us,” Wired Magazine, April 2000.

Summary of Jaron Lanier’s “One Half A Manifesto”

Jaron Lanier (1960 – ) is a pioneer in the field of virtual reality who left Atari in 1985 to found VPL Research, Inc., the first company to sell VR goggles and gloves. In the late 1990s Lanier worked on applications for Internet2, and in the 2000s he was a visiting scholar at Silicon Graphics and various universities. More recently he has acted as an advisor to Linden Lab on their virtual world product Second Life, and as “scholar-at-large” at Microsoft Research where he has worked on the Kinect device for Xbox 360.

Lanier’s “One Half A Manifesto” opposes what he calls “cybernetic totalism,” a view held by Kurzweil and others that proposes to transform the human condition more than any previous ideology has. The following beliefs characterize cybernetic totalism.

  1. That cybernetic patterns of information provide the ultimate and best way to understand reality.
  2. That people are no more than cybernetic patterns.
  3. That subjective experience either doesn’t exist, or is unimportant because it is some sort of peripheral effect.
  4. That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all creativity and culture.
  5. That qualitative, as well as quantitative, aspects of information systems will be accelerated by Moore’s Law. And
  6. That biology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial; achieving the supposed nature of computer software. Furthermore, all of this will happen very soon! Since computers are improving so quickly they will overwhelm all the other cybernetic processes, like people, and fundamentally change the nature of what’s going on in the familiar neighborhood of Earth at some moment when a new “criticality” is achieved—maybe in about the year 2020. To be a human after that moment will be either impossible or something very different from what we now can know.[i]

Lanier responds to each belief in detail. A summary of his responses follows:

  1. Culture cannot be reduced to memes, and people cannot be reduced to cybernetic patterns.
  2. Artificial intelligence is a belief system, not a technology.
  3. Subjective experience exists, and it separates humans from machines.
  4. Cybernetic totalists believe that Darwinian evolution provides the “algorithm for creativity,” which explains how computers will become smarter than humans. However, the fact that nature didn’t require anything “extra” to create people doesn’t mean that computers will evolve on their own.
  5. There is little reason to think that software is getting better, and no reason at all to think it will get better at a rate like hardware.

The sixth belief, the heart of cybernetic totalism, terrifies Lanier. Yes, computers might kill us, preserve us in a matrix, or be used by evil humans to harm the rest of us. It is a variation of this last scenario that most frightens Lanier, for it is easy to imagine a wealthy few becoming a near-godlike species while the rest of us remain relatively the same. And Lanier expects immortality to be very expensive unless software gets much better. For example, if you were to use biotechnology to try to make your flesh into a computer, you would need excellent, glitch-free software to achieve such a thing, and that would be extraordinarily costly.

Lanier grants that there will indeed be changes in the future, but they should be brought about by humans, not by machines. To do otherwise is to abdicate our responsibility. Cybernetic totalism, if left unchecked, may cause suffering, as so many other eschatological visions have in the past. We ought to remain humble about implementing our visions.

Summary – Cybernetic totalism is philosophically and technologically problematic.

____________________________________________________________________

[i] Jaron Lanier, “One Half A Manifesto”