Bill Joy (1954 – ) is an American computer scientist who co-founded Sun Microsystems in 1982 and served as the company’s chief scientist until 2003. His now-famous Wired magazine essay, “Why the future doesn’t need us” (2000), sets forth his deep concerns over the development of modern technologies.[i]
Joy traces his worries to a discussion he had with Ray Kurzweil at a conference in 1998. He had read an early draft of Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence and found it deeply disturbing. Subsequently, he encountered arguments by the Unabomber, Ted Kaczynski. Kaczynski argued that if machines do all of society’s work, as they inevitably will, then we can either: a) let the machines make all the decisions; or b) maintain human control over the machines.
If we choose “a,” then we are at the mercy of our machines. It is not that we would give them control or that they would take control; rather, we might become so dependent on them that we would have to accept their commands. Needless to say, Joy doesn’t like this scenario. If we choose “b,” then control would rest in the hands of an elite, and the masses would be unnecessary. In that case, the tiny elite would: 1) exterminate the masses; 2) reduce their birthrate so that they slowly become extinct; or 3) become benevolent shepherds to the masses. The first two scenarios entail our extinction, but even the third option is bad. In this last scenario, the elite would fulfill all the physical and psychological needs of the masses while simultaneously engineering the masses to sublimate their desire for power. In this case, the masses might be happy, but they wouldn’t be free.
Joy finds these arguments both convincing and troubling. Around this time, Joy read Hans Moravec’s book Robot: Mere Machine to Transcendent Mind, in which he found predictions similar to Kurzweil’s. Joy was especially concerned by Moravec’s claim that technological superiors always defeat technological inferiors, as well as his claim that humans will become extinct as they merge with the robots. Disturbed, Joy consulted other computer scientists who, for the most part, agreed with these predictions.
Joy’s worries focus on the transforming technologies of the 21st century—genetics, nanotechnology, and robotics (GNR). What is particularly problematic about them is their potential to self-replicate. This makes them inherently more dangerous than 20th-century technologies—nuclear, biological, and chemical weapons—which are expensive to build and require rare raw materials. By contrast, 21st-century technologies allow small groups, or even individuals, to bring about massive destruction. Joy also argues that we will soon achieve the computing power necessary to implement some of the scenarios envisioned by Kurzweil and Moravec, but he worries that we overestimate our design abilities. Such hubris may lead to disaster.
For example, robotics is motivated primarily by the desire for immortality—by downloading ourselves into robots. But Joy doesn’t believe that we will be human after the download, or that the robots would be our children. As for genetic engineering, it will create new crops, plants, and eventually new species, including many variations of the human species, but Joy fears that we don’t know enough to conduct such experiments safely. And nanotechnology confronts the so-called “gray goo” problem: self-replicating nanobots out of control. In short, we may be on the verge of killing ourselves! Is it not arrogant, he wonders, to design a robot replacement species when we so often make design mistakes?
Joy concludes that we ought to relinquish these technologies before it’s too late. Yes, GNR may bring happiness and immortality, but should we risk the survival of the species for such goals? Joy thinks not.
Summary – Genetics, nanotechnology, and robotics are too dangerous to pursue; we should abandon them. (I think Joy’s call for relinquishment is unrealistic. For more, see my peer-reviewed essay “Critique of Bill Joy’s ‘Why the Future Doesn’t Need Us.’”)
________________________________________________________
[i] Bill Joy, “Why the Future Doesn’t Need Us,” Wired, April 2000.
(Note: this essay was originally published on this blog on February 15, 2016.)
Seems like scenario “b2” comes close to what I propose as a bridge to the future … full-throttle automation of commerce, with progressive taxation of the wealthy used to fund a UBI, and a UBI made more generous for the childless, thus effectively bribing the poor to be infertile … whether the remaining human elite merge with machines or become their pets may depend on accidents of history.
I don’t share Mr. Joy’s technological pessimism; my pessimism is of a different sort. I have no fear that machines will take over the world, à la “Terminator.” My concern is that Homo sapiens is a Pleistocene hunter-gatherer, not a civilized creature, and is unable to cope with the challenges of modern societies. People are too stupid to understand issues like climate change, pandemics, or economics.
Oh, and the idea of downloading our consciousness into a machine is preposterous, because the mind is not separable from the body. The integration of mind and body is much too tight to permit a disembodied mind to retain its sanity. More to the point, designing a machine capable of replicating the capabilities of the mind is fundamentally impossible. Asking a human brain to design an artificial human brain is rather like asking a person to lift himself by his own bootstraps.
Harrumph and Diddly-Doodle! 😄
I mostly agree with your first paragraph. Regarding the second, you obviously know a lot more about hardware, software, AI, etc. than I do, but I know that many if not most computer scientists take uploading seriously. I think the basic idea is to map every neuron, synapse, dendrite, etc., and then copy that mental program onto another substrate. Again, I’m not qualified to answer the question of whether this is possible, but many computer scientists, especially AI researchers, think it is. Of course, they may be mistaken. If I had to hazard a guess, as an amateur who has done nothing more than read Kurzweil, Moravec, and a few others, I’d say that if we have oceans of time for future innovation, virtually anything is possible. And if we can do something like this, then we have another way of perhaps gaining immortality.
Mr. Crawford’s second paragraph is generally correct. To believe otherwise is no different than believing in the comforting supernatural fantasies of a religion.
You and Mr. Crawford may both be correct. But, with the caveat that I’m no AI expert, I’ll note that many experts (Ray Kurzweil, Marvin Minsky, Randal A. Koene, Nick Bostrom, Michio Kaku, and others) attest to its possibility. And even skeptics like Kenneth Miller don’t reject the idea in principle.