Summary of Bill Joy’s celebrated piece “Why the Future Doesn’t Need Us,” from Wired Magazine, Issue 8.04, April 2000
Joy (J) and Kurzweil (K) fell into conversation at a 1998 conference, at a hotel bar. J was taken aback by K’s predictions, since K is well respected. He then read an early draft of K’s book and was deeply disturbed.
The Unabomber’s Argument – If machines do all the work, then we could either: a) let the machines make all the decisions; or b) maintain human control over the machines. If “a” comes to be, then we can make no predictions, since the machines will have control. (It’s not that we would give them control or that they would take control, but we might become so dependent on them that we would have to accept their commands.) Needless to say, J doesn’t like this machines-take-over scenario.
Now what about “b”? If humans maintain control, that control would be in the hands of an elite; the masses would be unnecessary. In that case this tiny elite might: 1) exterminate the masses; 2) reduce their birthrate so they slowly become extinct; or 3) become benevolent shepherds to the masses, seeing to it that all physical and psychological needs are met while the masses are engineered to sublimate their drive for power and be happy instead. The masses would be happy, “but they will most certainly not be free.”
Joy (J) finds the UB’s argument convincing, since there are so many unintended consequences of technology. In addition, systems are so complex that small changes can have ripple effects. About this time J reads Moravec’s robot book and finds the same kind of stuff. In M he is especially concerned by the claim that technological superiors always defeat inferiors (looks bad for us). M says humans will become extinct as they merge with the robots and become “Ex-humans.” Disturbed, J consults Danny Hillis (H) and asks if this is all really going to happen. H says it will, but we’ll get used to it. J is disturbed.
All this gets J thinking about plagues, gray goo, and the Borg of Star Trek. Why aren’t people concerned about this, J wonders? He thinks we get used to scientific advance but have not thought through what genetics, nanotechnology, and robotics (GNR) will mean in the 21st century. The big thing is that these technologies can self-replicate. Sure, these techs have promise, but they are much more dangerous than 20th-century techs (NBC: nuclear, biological, and chemical weapons), which were expensive to build, required rare raw materials, etc. But 21st-century techs will enable small groups or even individuals to bring about mass destruction.
Joy spends the next few pages recounting his education at the U of Michigan and grad school at Berkeley, how he became rich with Unix, etc. All of this is to show he is no Luddite. But the point is that J is sincerely worried that the computer, physical, and biological sciences may bring us to ruin. He says he is convinced that the computing power will be here by 2030 to implement some of the dreams of K and M. Ultimately all of this leads to a complete redesigning of the world. But J worries that we overestimate our design abilities.
Robotics is all about having machines do our work for us, and about achieving immortality by downloading ourselves into them. But J doesn’t believe we would still be human after the downloads, or that the robots would be our children. Genetic engineering will create new crops, plants, and eventually new species, including many variations on the human species. Joy has many fears about genetics, but especially how easy it would be to mess up and create some new plague. And nanotechnology has its “gray goo” problem: self-replicating nanobots out of control.
So it’s “the power of destructive self-replication in GNR that should give us pause.” He thinks we are on the verge of killing ourselves, and that this might be common to species that reach our level of power and intelligence. He thinks it arrogant of us to design a robot replacement species when we so often mess up with things not nearly as dangerous.
J next talks at length about the development of nuclear weapons. The weapons were built, and then a kind of momentum took hold, leading over the ensuing decades to a continual buildup that put us at the brink of nuclear disaster. He believes there is less than a 50% chance of making it through the next century [I’m reading between the lines]. And solutions like moving into space, nuclear defense shields, and nanotechnology shields won’t work, since every new defensive system simply brings on another round of offensive capability. And the side effects of defense shields may be as dangerous as what they were designed to protect against. Thus “the only realistic alternative I see is relinquishment.”
The next few pages tell us not to open this Pandora’s box, not to let our tech take control of us, especially since we have “no plan, no control, no brakes.” We still have a chance to stop pursuing this course, but soon it will be too late. And we do have a precedent for stopping all this: the arms race. We did begin to sign treaties and to ban and reduce weapons, because we realized that we were all at peril. Verification of bans on GNR will be difficult, but J thinks it possible. Yes, GNR may bring happiness and immortality, but should we risk the survival of the species for such goals? Eternity, liberty, and equality are worthwhile goals, but another utopian vision is based on fraternity (altruism). For an ethical basis for the future, J looks to the Dalai Lama, who advocates love, compassion, and universal responsibility. It is not material progress or the pursuit of knowledge that will ultimately make us happy.
J continues to speak passionately for his position and thinks he may be “morally obligated to stop this [software dev] work.” All of this leaves him “not angry but at least a bit melancholic. Henceforth, for me, progress will be somewhat bittersweet.”