Bill Joy (1954– ) co-founded Sun Microsystems, served as the company's chief scientist until 2003, and played an integral role in the early development of BSD UNIX while a graduate student at Berkeley. He also wrote "Why the Future Doesn't Need Us," one of the most celebrated pieces about the negative implications of future technology, published in Wired magazine in April 2000. Although I disagree vehemently with its conclusions, it is required reading for anyone interested in the promises and perils of the future. Here is a brief summary of its main argument.
Bill Joy and Ray Kurzweil met at a conference in 1998. Joy was taken aback by Kurzweil's predictions, especially those about mind uploading. Because Kurzweil is a well-respected thinker, Joy took the predictions seriously; he read an early draft of Kurzweil's book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, and was deeply disturbed.
Joy begins his article by introducing us to the Unabomber's argument, which can be summarized as follows: if machines come to do all the work in society, then we could either a) let the machines make all the decisions, or b) maintain human control over the machines. If "a" comes to pass, then we can make no predictions, since the machines will be in control. It's not that we would hand them control or that they would seize it; rather, we might become so dependent on them that we would have to accept their commands. Needless to say, Joy doesn't like this scenario.
What about "b"? If humans maintain control, that control would rest in the hands of an elite; the masses would be unnecessary. In that case, this tiny elite might: 1) exterminate the masses; 2) reduce their birthrate so that they slowly go extinct; or 3) become their benevolent shepherds, seeing to it that all physical and psychological needs are met while engineering the masses to sublimate their drive for power and be happy instead. The masses would be happy, "but they will most certainly not be free."
Joy finds the Unabomber's argument convincing, given how many unintended consequences technology has. Around this time, Joy reads Hans Moravec's book, Robot: Mere Machine to Transcendent Mind, and finds similar thinking. He is especially troubled by Moravec's claim that technologically superior species always defeat inferior ones. Moravec says humans will become extinct as they merge with the robots and become "Ex-humans." Disturbed, Joy consults Danny Hillis and asks whether all this is really going to happen. Hillis replies that such things probably will happen, but that we will get used to them. Joy is not reassured.
All this gets Joy thinking about plagues, gray goo, and the Borg of Star Trek. Why, Joy wonders, aren't people concerned about this? He thinks we have grown accustomed to scientific advances but have not thought through what genetics, nanotechnology, and robotics (GNR) will mean in the 21st century. Most importantly, these technologies are self-replicating. Yes, they hold promise, but they are much more dangerous than the 20th-century technologies of mass destruction: nuclear, biological, and chemical weapons. Those were expensive to build and required rare raw materials, whereas 21st-century technologies will enable small groups, or even individuals, to bring about mass destruction.
Joy spends the next few pages recounting his education at the University of Michigan and Berkeley, and how his work on Unix made him rich; he is no Luddite. He is convinced that by 2030 we will have the computing power to implement some of the dreams of Kurzweil and Moravec, and that this will lead to a complete redesign of the world. But Joy worries that we overestimate our design abilities, and that with such powerful technologies this may lead to our ruin. Why then do we pursue them?
Joy argues that robotics is motivated by two main desires: 1) having machines do our work for us, and 2) achieving immortality by downloading ourselves into them. But Joy doesn't believe we would still be human after the download, or that the robots would be our "mind children" (the title of one of Moravec's books). Genetic engineering will create new crops, new plants, and new species, including many variations of the human species. But Joy fears that this could easily go astray; we might, for example, create a new plague. And nanotechnology has its "gray goo" problem: self-replicating nanobots out of control.
So it is "the power of destructive self-replication in GNR that should give us pause." We are on the verge of killing ourselves, a fate that might be common to species that reach our level of technological sophistication. Joy thinks it arrogant to try to design a robot replacement species when even our far less ambitious technologies so often fail.
Joy also discusses at length the development of nuclear weapons. Once the weapons were built, a kind of momentum took over, leading us over the ensuing decades to the brink of nuclear disaster; they still threaten us with extinction. Proposed solutions such as moving into space, nuclear defense shields, or nanotechnology shields won't work, since every new defensive system simply provokes another round of offensive capability. In fact, the side effects of defense shields may be as dangerous as the threats they were designed to protect against.
All this leads to Joy's fundamental conclusion: "the only realistic alternative I see is relinquishment." We shouldn't open Pandora's box; we shouldn't let our technology take control of us, especially since we have "no plan, no control, no brakes." We still have a chance to abandon this course, but soon it will be too late. The arms race provides a precedent for how to stop, or at least slow, the advancement of dangerous technologies: once we realized that we were all in peril, we began to sign treaties banning and reducing weapons. Verification of bans on GNR will be difficult, but Joy thinks it possible.

Yes, GNR may bring happiness and immortality, but should we risk the survival of the species for such goals? Eternity, liberty, and equality are worthwhile goals, but another utopian vision is based on fraternity and altruism. For an ethical basis for the future, Joy looks to the Dalai Lama, who advocates love, compassion, and universal responsibility; it is not material progress or the pursuit of knowledge that will ultimately make us happy.
Joy claims that he may be “morally obligated” to stop developing software. This leaves him “not angry but at least a bit melancholic. Henceforth, for me, progress will be somewhat bittersweet.”
Bill Joy's comments and beliefs seem to stem from a fundamental fear of the unknown. He is prophesying doom for humanity based on nothing more than what he sees as human ineptitude, greed, and inability to control machines.
I'm not sure whether he has simply seen too many Terminator movies or just thinks that little of humanity in general. Either way, it doesn't make for a convincing argument to stop progress in the fields of GNR. I would love to ask Mr. Joy whether he thinks space exploration should be stopped on the grounds that we might contact extraterrestrial life.
Building an argument out of fear of the unknown is shortsighted and ultimately useless. Instead of moving forward, he would have us halt in place (seemingly to his benefit, I might add). There is no downside for him, only for the rest of humanity.
Why shouldn’t we create technology to make our lives easier? Create a world based on knowledge and robotics? Remove the mundane tasks from our everyday lives?
Seems like a pretty great future to me.