Summary of Bill Joy’s “Why the Future Doesn’t Need Us”


Bill Joy (1954– ) is an American computer scientist who co-founded Sun Microsystems in 1982 and served as the company’s chief scientist until 2003. His now-famous Wired essay, “Why the Future Doesn’t Need Us” (2000), sets forth his deep concerns about the development of modern technologies.[i]

Joy traces his concern to a discussion he had with Ray Kurzweil at a conference in 1998. Taken aback by Kurzweil’s predictions, he read an early draft of The Age of Spiritual Machines: When Computers Exceed Human Intelligence and found it deeply disturbing. Subsequently he encountered arguments by the Unabomber, Ted Kaczynski. Kaczynski argued that if machines do all the work, as they inevitably will, then we can either: a) let the machines make all the decisions; or b) maintain human control over the machines.

If we choose “a” then we are at the mercy of our machines. It is not that we would give them control or that they would take control; rather, we might become so dependent on them that we would have to accept their commands. Needless to say, Joy doesn’t like this scenario. If we choose “b” then control would be in the hands of a tiny elite, and the masses would be unnecessary. In that case the elite would either: 1) exterminate the masses; 2) reduce their birthrate so that they slowly become extinct; or 3) become benevolent shepherds to the masses. The first two scenarios entail our extinction, but even the third option is no good. In this last scenario the elite would see to it that all the physical and psychological needs of the masses are met, while at the same time engineering the masses to sublimate their drive for power. In this case the masses might be happy, but they would not be free.

Joy finds these arguments convincing and deeply troubling. About this time Joy read Hans Moravec’s book, where he found more of the same kind of predictions. He was especially concerned by Moravec’s claim that technologically superior species always defeat their inferiors, as well as his contention that humans will become extinct as they merge with robots. Disturbed, Joy consulted other computer scientists, who basically agreed with these technological predictions but were themselves unconcerned. Joy was stirred to action.

Joy’s concerns focus on the transforming technologies of the 21st century: genetics, nanotechnology, and robotics (GNR). What is particularly problematic about them is their potential to self-replicate. This makes them inherently more dangerous than 20th-century weapons technologies (nuclear, biological, and chemical), which were expensive to build and required rare raw materials. By contrast, 21st-century technologies allow small groups or even individuals to bring about massive destruction. Joy accepts that we will soon achieve the computing power needed to implement some of the dreams of Kurzweil and Moravec, but he worries that we overestimate our design abilities. Such hubris may lead to disaster.

Robotics is primarily motivated by the desire for immortality, to be achieved by downloading ourselves into robots. (The terms uploading and downloading are used interchangeably.) But Joy doesn’t believe that we would still be human after the download, or that the robots would be our children. As for genetic engineering, it will create new crops and plants, and eventually new species, including many variations of the human species; but Joy fears that we do not know enough to conduct such experiments. And nanotechnology confronts the so-called “gray goo” problem: self-replicating nanobots out of control. In short, we may be on the verge of killing ourselves! Is it not arrogant, he wonders, to design a robot replacement species when we so often make design mistakes?

Joy concludes that we ought to relinquish these technologies before it’s too late. Yes, GNR may bring happiness and immortality, but should we risk the survival of the species for such goals? Joy thinks not.

Summary – Genetics, nanotechnology, and robotics are too dangerous to pursue. We should relinquish them.


[i] Bill Joy, “Why the Future Doesn’t Need Us,” Wired, April 2000.

5 thoughts on “Summary of Bill Joy’s ‘Why the Future Doesn’t Need Us’”

  1. If the West doesn’t develop these technologies, the Chinese will.
    Should we care whether the philosophical seeds of the new world order are capitalist or statist?

  2. You are correct; someone will develop these technologies. And when the genie is out of the bottle it is very hard to put it back. Better to figure out how to utilize and control them.

  3. It may be arrogant, but it’s not the end of the design line. We are just creating first-generation A.I. – machines as smart as people. Those machines will be the ones making the new robot overlords.

  4. probably won’t be us vs. them; rather we’ll incorporate new technology into our bodies and become cyborgs. JGM

  5. I think it’s going to take a lot less privacy than we want, and a lot more accountability and careful thought than we are used to, for the human race to muddle through the application of knowledge in these fields and reach a preferred and happy future. I also think we can do it.
