Critique of Bill Joy’s “Why the future doesn’t need us”


In his well-known piece, “Why the future doesn’t need us,” Bill Joy argues that 21st century technologies—genetic engineering, robotics, and nanotechnology (GNR)—will extinguish human beings as we now know them, a prospect he finds deeply disturbing. I find his arguments deeply flawed and critique each of them in turn.

(Published in ACM SIGCAS Computers and Society, Volume 32: Issue 6, June 2003.)

Joy’s unintended consequences argument cites a passage by the Unabomber Ted Kaczynski. According to Joy, the key to this argument is the notion of unintended consequences, which is “a well-known problem with the design and use of technology…” Independent of the strength of Kaczynski’s anti-technology argument—which I also find flawed—it is hard to quibble about the existence of unintended consequences.1 And it is easy to see why. The consequences of an action are in the future relative to that action and, since the future is unknown, some consequences are unknown. Furthermore, it is self-evident that an unknown future and unknown consequences are closely connected.

However, the strongest conclusion that Joy should draw from the idea of unintended consequences is that we should carefully choose between courses of action; and yet he draws the stronger conclusion that we ought to cease and desist in the research, development, and use of 21st-century technologies. But he cannot draw this stronger conclusion without contradiction if, as he thinks, many unknown, unintended consequences result from our choices. And that’s because he can’t know that abandoning future technologies will produce the intended effects.

Thus the idea of unintended consequences doesn’t help Joy’s case since it undermines the justification for any course of action. In other words, the fact of unintended consequences tells us nothing about what we ought to choose, and it certainly doesn’t give us any reason to abandon technology. Of course, Joy might reply that new, powerful technologies make unintended consequences more dangerous than in the past, but as I’ve just shown, he cannot know this. It may well be that newer technologies will lead to a safer world.

Joy’s big fish eat little fish argument quotes robotics pioneer Hans Moravec: “Biological species almost never survive encounters with superior competitors.” Analogously, Joy suggests we will be driven to extinction by our superior robotic descendants. But it isn’t obvious that robots will be superior to us and, even if they were, they may be less troublesome than our neighbors next door. In addition, his vision of the future presupposes that robots and humans will remain separate creatures, a view explicitly rejected by robotics expert Rodney Brooks and others. If Brooks is correct, humans will gradually incorporate technology into their own bodies thus eliminating the situation that Joy envisions. In sum, we don’t know that robots will be the bigger fish, that they will eat us even if they are, or that there will even be distinct fishes.

Joy’s mad scientist argument describes a molecular biologist who “constructs and disseminates a new and highly contagious plague that kills widely but selectively.” Now I have no desire to contract a plague, but Joy advances no argument that this follows from GNR; instead, he plays on our emotions by associating this apocalyptic vision with future technology. (In fact, medical science is the primary reason we have avoided plagues.) The images of a mad scientist or Frankenstein may be popular, but scientists are no madder than anyone else, and “nightmarish” describes only one possible future.

Joy’s lack of control argument focuses on the self-replicating nature of GNR. According to Joy, self-replication amplifies the danger of GNR: “A bomb is blown up only once—but one bot can become many, and quickly get out of control.” First of all, bombs replicate; they just don’t replicate by themselves. So Joy’s concern must not be with replication, but with self-replication. So what is it about robotic self-replication that frightens us? The answer is obvious. Robotic self-replication appears to be out of our control, as compared with our own or other humans’ self-replication.

Specifically, Joy fears that robots might replicate and then enslave us; but other humans can do the same thing. In fact, we may increase our survival chances by switching control to more failsafe robots designed and programmed by our minds. While Joy is correct that “uncontrolled self-replication in these newer technologies runs … a risk of substantial damage in the physical world,” so too does the “uncontrolled self-replication” of humans, their biological tendencies, their hatreds, and their ideologies. Joy’s fears are not well-founded because the lack of control over robotic self-replication is not, prima facie, more frightening than the similar lack of control we exert over other humans’ replication.

Furthermore, to what extent do we control our own reproduction? I’d say not much. Human reproduction results from a haphazard set of cultural, geographical, biological, and physiological circumstances; clearly, we exert less control over when, if, and with whom we reproduce than we suppose. And we certainly don’t choose the exact nature of what’s to be reproduced; we don’t replicate perfectly. We could change this situation through genetic engineering, but Joy opposes this technology. He would rather let control over human replication remain in the hands of chance—at least chance as determined by the current state of our technology. But if he fears the lack of control implied by robotic self-replication, why not fear that lack of control over our own replication and apply more control to change this situation? In that way, we could enhance our capabilities and reduce the chance of not being needed.

Of course, Joy would reiterate that we ought to leave things as they are now. But why? Is there something perfect or natural about the current state of our knowledge and technology? Or would things be better if we turned the technological clock back to 1950? 1800? Or 2000 B.C.? I suggest that the vivid contrast Joy draws between the control we wield over our own replication and the lack of it regarding self-replicating machines is illusory. We now have and may always have more control over the results of our conscious designs and programs than we do over ourselves or other people whose programs were written by evolution. If we want to survive and flourish then we ought to engineer ourselves with foresight and, at the same time, engineer machines consistent with these goals.

Joy’s easy access argument claims that 20th-century technologies—nuclear, biological, and chemical (NBC)—required access to rare “raw materials and highly protected information,” while 21st-century technologies “are widely within the reach of individuals or small groups.” This means that “knowledge alone will enable the use of them,” a phenomenon that Joy terms: “knowledge-enabled mass destruction (KMD).”

Now it is difficult to quibble with the claim that powerful, accessible technologies pose a threat to our survival. Joy might argue that even if we survived the 21st century without destroying ourselves, what of the 22nd or the 23rd centuries when more accessible and powerful KMD becomes possible? Of course, we could freeze technology, but it is uncertain that this would be either realistic or advisable. Most likely the trend of cultural evolution over thousands of years will continue—we will gain more control and power over reality.

Now is this more threatening than if we stood still? This is the real question that Joy should ask because there are risks no matter what we do. If we remain at our current level of technology we will survive until we self-destruct or are destroyed by universal forces, say the impact of an asteroid or the sun’s exhaustion of its energy. But if we press forward, we may be able to save ourselves. Sure, we must be mindful of the promises and the perils of future technologies, but nothing Joy says justifies his conclusion that: “we are on the cusp of the further perfection of extreme evil…” Survival is a goal, but I don’t believe that abandonment of new technologies will assure this result or even make it more likely; it just isn’t clear that limiting the access to or discovery of knowledge is, or has ever been, the solution to human woes.

Joy’s poor design abilities argument notes how often we “overestimate our design abilities,” and concludes: “shouldn’t we proceed with great caution?” But he forgets that we sometimes underestimate our design abilities; and sometimes we are too cautious. Go forward with caution, look before you leap—but don’t stand still.

I take the next argument to be his salient one. He claims that scientists dream of building conscious machines primarily because they want to achieve immortality by downloading their consciousness into them. While he accepts these as distinct possibilities, his existential argument asks whether we will still be human after we download: “It seems far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.” The strength of this argument depends on the meanings of “in any sense,” “no sense,” “humanity,” and “lost.” Let’s consider each in turn.

It is simply false that a human consciousness downloaded into a robotic body would not be human “in any sense.” If our consciousness is well-preserved in the transfer, then something of our former existence would remain, namely our psychological continuity, the part most believe to be our defining feature. And if robotic bodies were sufficiently humanlike—why we would want them to be is another question—then there would be a semblance of physical continuity as well. In fact, such an existence would be very much like human existence now if the technologies were sufficiently perfected. So we would still be human to some, if not a great, extent. However, I believe we would come to prefer an existence with less pain, suffering, and death to our current embodied state; and the farther we distance ourselves from our former lives, the happier we will be.

As to whether robots would “in no sense” be our children, the same kind of argument applies. Whatever our descendants become they will, in some sense, be our children in the same way that we are, in some sense, the children of stars. Again notice that the extent to which we would want our descendants to be like us depends upon our view of ourselves. If we think that we now experience the apex of consciousness, then we should mourn our descendants’ loss of humanity. But if we hold that more complex forms of consciousness may evolve from ours, then we will rejoice at the prospect that our descendants might experience these forms, however non-human-like they may be. But then, why would anyone want to limit the kind of consciousness their descendants experience?

As for our “humanity being lost,” this is true in the sense that human nature will evolve beyond its present state, but false in the sense that there will still be a developmental continuity from beings past and present to beings in the future. Joy wants to limit our offspring for the sake of survival, but isn’t mere survival a lowly goal? Wouldn’t many of us prefer death to the infinite boredom of standing still? Wouldn’t we like to evolve beyond humanity? It isn’t obvious that we have achieved the pinnacle of evolution, or that the small amount of space and time we fill satisfies us.

Instead, it is clear that we are deeply flawed and finite—we age, decay, lose our physical and mental faculties, and then perish. A lifetime of memories, knowledge, and wisdom, lost. Oh, that it could be better! Joy’s nostalgic longings for the past and naïve view that we can preserve the present are misguided, however well they may resonate with those who share similar longings or fear the inevitable future. Our descendants won’t desire to be us any more than we do to be our long-ago ancestors. As Tennyson proclaims: “How dull it is to pause, to make an end, To rust unburnish’d, not to shine in use!”2

Joy next turns to his other technologies make things worse argument. As for genetic engineering, I know of no reason—short of childish pleas not to play God—to impede our increasing abilities to perfect our bodies, eliminate disease, and prevent deformity. To not do so would be immoral, making us culpable for an untold amount of preventable suffering and death. And even if there are Gods who have endowed us with intelligence, it would hardly make sense that they didn’t mean for us to use it. As for nanotechnology, Joy eloquently writes of how “engines of creation” may transform into “engines of destruction,” but again it is hard to see why we or the Gods prefer that we remain ignorant about nanotechnology.

Joy also claims that there is something sinister about the fact that NBC technologies have largely military uses and were developed by governments, while GNR have commercial uses and are being developed by corporations. Unfortunately, Joy gives us no reason whatsoever to share his fear. Are the commercial products of private corporations more likely to cause destruction than the military products of governments? At first glance, the opposite seems more likely to be true, and Joy gives us no reason to reconsider.

Joy’s it’s never been this bad argument asserts: “this is the first moment in the history of our planet when any species by its voluntary actions has become a danger to itself.” But this is false. Homo sapiens have always been a danger to themselves, both by their actions, as in incessant warfare, and by their inaction, as demonstrated by their impotence when facing plague and famine. I also doubt that humans are a greater threat to themselves now than ever before. We have explored and spread ourselves to all parts of the globe, multiplied exponentially, extended our lifespans, created culture, and may soon have the power to increase our chance for survival from both celestial and terrestrial forces. This should be a cause for celebration, not despair. We no longer need be at the mercy of forces beyond our control; we may soon direct our own evolution.

Joy next quotes Carl Sagan to the effect that the survival of cultures producing technology depends on “what may and what may not be done.” Joy interprets this insight as the essence of common sense or cultural wisdom. Independent of the question of whether this is a good definition of common sense, Joy assumes that Sagan’s phrase applies to an entire century’s technologies when it is more likely that it applies to only some of them. It is hard to imagine that Sagan, a champion of science, meant for us to forego 21st-century technology altogether.

And I vehemently dispute Joy’s claim that science is arrogant in its pursuits; on the contrary, it is among the humblest of human pursuits, carefully and conscientiously trying to tease a bit of truth from reality. Its claims are always tentative and amenable to contrary evidence—much more than can be said for most creeds. And what of the charlatans, psychics, cultists, astrologers, and faith-healers? Not to mention the somewhat more respectable priests and preachers. Science humbly does not pretend to know with certainty, which is more than can be said of many who are certain in their ignorance.

And what of his claim that we have no business pursuing robotics and AI when we have “so much trouble … understanding—ourselves?” The reply to this trying to understand the mind won’t help you understand the mind argument is that self-knowledge is the ultimate goal of the pursuit of knowledge. Joy sentimentally notes that his grandmother “had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order,” but this is hopelessly naïve and belies the facts. Would he have us die poor and young, be food for beasts, defenseless against disease, living lives that were, as Hobbes so aptly put it, “nasty, brutish, and short?” The impotence and passivity implied by respecting the natural order has condemned millions to death.3

In fact, the life that Joy and most of the rest of us enjoy was built on the labors of persons who fought mightily with the natural order and the pain, poverty, and suffering that nature exudes. Where would we be without Pasteur and Fleming and Salk? As Joy points out, life may be fragile, but it was more so in a past that was nothing like the idyllic paradise that he imagines.

Joy’s analogy between the nuclear arms race and possible GNR races is also misplaced, inasmuch as the 20th-century arms race resulted as much from a unique historical situation and conflicting ideologies as from some unstoppable technological momentum. Evidence for this is to be found in the reduction of nuclear warheads by the superpowers both during and after the cold war. Yes, we need to learn from the past, but its lessons are not necessarily the ones Joy alludes to. Should we not have developed nuclear weapons? Is he sure that the world would be better today had there not been a Manhattan project?

Now it may be that we are chasing our own tails as we try to create defenses for the threats that new technologies pose. Possibly, every countermeasure is as dangerous as the technology it was meant to counter. But Joy’s conclusion is curious: “The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.” In the first place, it is unrealistic to believe that we could limit the pursuit of knowledge even if we wanted to and it were a good idea. Second, this “freeze” at current levels of technology does not expunge the danger; the danger exists now.

A basic difficulty with Joy’s article is this: he mistakenly accepts the notion that technology rules people rather than the reverse.4 But if we can control our technology, there is another solution to our dilemmas. We can use our technology to change ourselves; to make ourselves more ethical, cautious, insightful, and intelligent. Surely Joy believes that humans make choices; how else could they choose relinquishment? So why not change ourselves, relinquishing not our pursuit of knowledge, but our self-destructive tendencies?

Joy’s hysteria blinds him to the possible fruits of our knowledge and his pessimism won’t allow him to see our knowledge and its applications as key to our salvation. Instead, he appeals to the ethics of the Dalai Lama to save us, as if another religious ethics will offer an escape from the less noble angels of our nature. I know of no good evidence that the prescriptions of religious ethics have, on the whole, increased the morality of the human race. No doubt the contrary case could easily be made. Why not then use our knowledge to gain mastery over ourselves? If we do that, mastery of our technology will take care of itself. Joy’s concerns are legitimate, but his solutions are unrealistic. His planned knowledge stoppage condemns human beings to an existence that cannot improve. And if that’s the case, what is the point of life?

I say forego Joy’s pessimism; reject all barriers and limitations to our intelligence, health, and longevity. Be mindful of our past accomplishments, appreciative of all that we are, but be driven passionately and creatively forward by the hope of all that we may become. Therein lies the hope of humankind and its descendants. In the words of Walt Whitman:

This day before dawn I ascended a hill,
and look’d at the crowded heaven,
And I said to my Spirit,
When we become the enfolders of those orbs,
and the pleasure and knowledge of everything in them,
shall we be fill’d and satisfied then?
And my Spirit said:
No, we but level that lift,
to pass and continue beyond.5
~ Walt Whitman 


1. Kaczynski argues that machines will either: a) make all the decisions, thus rendering humans obsolete; or b) humans will retain control. If b, then only an elite will rule, in which case they will: 1) quickly exterminate the masses; 2) slowly exterminate the masses; or 3) take care of the masses. However, if 3, then the masses will be happy but not free and life would have no meaning. My questions for Kaczynski are these: Does he really think the only way for humans to be happy is in an agricultural paradise? Does he think an agricultural life was a paradise? A hunter-gatherer life? Are we really less free when we have loosened the chains of our evolutionary heritage, or are we freer? Kaczynski’s vision of a world where one doesn’t work and pursues one’s own interests while being very happy sounds good to me.

2. From Alfred Lord Tennyson’s Ulysses.

3. I would argue that had the rise of Christianity in the West not stopped scientific advancement for a thousand years until the Renaissance, we might be immortals already.

4. As in Thoreau’s well-known phrase which appears, not surprisingly, on the Luddite home page: “We do not ride on the railroad; it rides upon us.”

5. From Walt Whitman’s “Song of Myself” in Leaves of Grass.

