Category Archives: Science & Technology

Summary of Marshall Brain’s “Robotic Nation”

Recent discussion about the effect of technology on employment reminded me of Marshall Brain’s prescient essays of almost 20 years ago (“Robotic Nation,” “Robots in 2015,” and “Robotic Freedom”). Here is a summary of the main theses of each essay.

Robotic Nation


Tip of the Iceberg – Technology transforms employment because of
Moore’s Law – Exponential growth is leading to a
The New Employment Landscape – where the equation
Labor = Money – will no longer hold, necessitating new economic models.

Brain believes every fast food meal will be (almost) fully automated soon, and this is just the tip of the iceberg. Right now we interact with automated systems: ATMs, gas pumps, self-serve checkout, etc. These systems lower costs and prices, but “these systems will also eliminate jobs in massive numbers.” There will be massive unemployment in the coming decades as we enter the robotic revolution.

In the next 15 years most retail transactions will be automated and 5 million retail jobs lost. Next, walking, human-shaped robots will begin to appear, and by 2025 we may have AI-equipped machines that hear, move, see, and manipulate objects with roughly the ability of humans. Robots will get cheaper and become more human-shaped to facilitate their use of cars, elevators, and other objects in the human environment. By 2030 you will buy a $10,000 robot that will clean, vacuum, and mow the lawn. Robotic fast food places will open shortly thereafter, and by 2040 they will be completely robotic. By 2055 robots will replace half the American workforce, leaving millions unemployed. Restaurants, airports, construction, hospitals, truck driving, and airplane piloting are just some of the jobs and workplaces that will have mostly robotic workers. These robots will last for years and need no vacation or sick time.

While robotic vision or image processing is currently a stumbling block, Brain thinks we will make significant progress in this field in the next twenty years. This single improvement will bring catastrophic changes, analogous to the changes brought about by the Wright brothers. Brain applauds these developments. After all, who wants to clean toilets, flip burgers, and drive trucks, activities that waste human potential?

If all this sounds crazy, Brain asks you to consider a prediction of faster-than-sound aircraft in 1900, a time when there were no radios, Model Ts, or airplanes. At that time many thought heavier-than-air flight was impossible, and predictions to the contrary were often ridiculed. Thus the employment world is changing dramatically and rapidly. Why?

The basic answer is Moore’s Law—CPU power doubles every 18 to 24 months. Computers in 2020 will have the power of the NEC Earth Simulator, the fastest supercomputer at the time Brain wrote. By 2100 we may have the power of a million human brains on our desktop. Robots will take your job by 2050 through the marriage of cheap computers with the power of a human brain, a robotic chassis like Asimo, a fuel cell, and advanced software.
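Brain’s projections are just compound doubling. Here is a minimal sketch of that arithmetic; the two-year doubling period and the start year are my illustrative assumptions, not figures taken verbatim from the essay:

```python
# Compound-doubling projection behind the Moore's Law argument.
# The 2-year doubling period and baseline year are illustrative
# assumptions used only to show how the numbers compound.

def projected_multiplier(start_year, end_year, doubling_years=2.0):
    """Capability multiple accumulated between two years under steady doubling."""
    return 2 ** ((end_year - start_year) / doubling_years)

# Four years at a 2-year doubling period quadruples capability.
print(projected_multiplier(2000, 2004))  # 4.0

# From 2003 to 2050 the same rule compounds to roughly 12 million-fold growth,
# which is why seemingly distant capabilities arrive faster than intuition suggests.
print(round(projected_multiplier(2003, 2050)))
```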

The new employment landscape isn’t so different from the one of 100 years ago, but it will be vastly different once robots that see, hear, and understand language compete with humans for jobs. The 50 million jobs in fast food, delivery, retail, hotels, airports, factories, restaurants, and construction will be lost in the next fifty years. But America can’t deal with 50 million unemployed, and the economy will not create 50 million new jobs. Why?

In the current economy people trade labor for money. But without enough work, people won’t be able to earn money. What then? Brain argues that we should then provide free housing and a guaranteed income. But whatever we do, we had better start thinking about the kind of societal structures needed in a “robotic nation.”

Robots in 2015


We Will Replace the Pilots – and then
Robots in Retail – but we won’t
Create New Jobs – which implies
A Race to the Bottom – so
Where Do We Want to Go?

If you went back to 1950 you would find people doing most of the work just like they do in 2000. (Except for ATMs, robots on the auto assembly line, automated voice answering systems, etc.) But we are on the edge of a robotic nation, where half the jobs will be automated in the near future. Robots will be popular because they save money. For example, if an airline replaces expensive pilots, the money saved will give it a competitive advantage over other airlines. Initially we’ll feel sorry for the pilots, but forget about them when the savings are passed on to us. Other jobs will follow suit. What about new job creation? After all, the Model T created an automotive industry. Won’t the robotic industry do the same? No. Robots will assemble robots, and engineering and sales jobs will go to those willing to work for less.

The robotic nation will have lots of jobs—for robots! Even now our economy creates few high-paying jobs (for which there is intense competition). Instead, there will be a “race to the bottom”: a race to pay lower wages and benefits to workers and, if technologically feasible, to eliminate them altogether. Robots will make the minimum wage—which has declined in real dollars for the last forty years—irrelevant; there will be no high-paying jobs to replace the lost low-paying ones. So where do we want to go? We are on the brink of massive unemployment unknown in American history, and everyone will suffer because of it. How then do we want the robotic economy to work for the citizens of this nation?

Robotic Freedom

Overall Summary

The Concentration of Wealth – is accelerating bringing about
A Question of Freedom – why not let us be free to create
Harry Potter and the Economy – which leads us to
Stating Goals – to increase human freedom using
Capitalism Supersized – an economy that provides for all and has
The Advantages of Economic Security – which is better for
Everybody – because even high-skilled jobs are vulnerable.

We are on the leading edge of a robotic revolution that is beginning with automated checkout lanes, and the pace of this change will accelerate in our lifetimes. Furthermore, the economy will not absorb all these unemployed. So what can we do to adapt to the catastrophic changes that the robotic nation will bring?

People are crucial to the economy. But increasingly there is a concentration of wealth—the rich make more money and the workers make less. With the arrival of robots, all corporate income will go to the shareholders and executives. But this automation of labor—robots will do almost all the work 100 years from now—should allow people to be more creative. So why not design an economy where we abandon the “work or don’t eat” philosophy?

This is a question of freedom. Consider J.K. Rowling, author of the Harry Potter books. Amazingly, she wrote them while on welfare and would not have done so without public support. Think how much human potential we lose because people have to work to eat: how much music, art, science, literature, and technology has never been created. Consider that Linux and Wikipedia were created by people in their spare time. Why not create an economic model that encourages this kind of productivity, one where we don’t have so many working poor, or people sleeping in the streets? Brain argues that robots give us a chance to transform the human condition.

He also argues that we shouldn’t ban robots, because that leads to economic stagnation and lots of toilet cleaning. Instead he states these goals: raise the minimum wage; reduce the work week; and increase welfare systems to deal with unemployment. We need to completely rethink our economic goals. The primary goal of the economy should be to increase human freedom. We can do this by using robotic workers to free people to choose their own creative projects and use their free time as they see fit. We need not be slaves to the sixty-hour work week, which is “the antithesis of freedom.”

The remainder of the article offers suggestions (supersize capitalism, guarantee economic security) as to how we would fund a society in which people are free to actualize their potential to be creative without the burden of wage slavery. Now if all this seems unrealistic, consider how fanciful our world would seem to the slaves and serfs who populated much of human history. Brain says we are all vulnerable to the coming robotic nation, so we should think about a different world. Hopefully it will be one where robotic workers give us the time and the freedom we all so desperately desire.

Robotic Nation FAQ

Question 1 – Why did you write these articles? What is your goal? Answer – Robots will take over half the jobs by 2030, and this will have disastrous consequences for rich and poor alike. No one wants this. I’d like to plan ahead.

Question 2 – You are suggesting that the switchover to robots will happen quickly, over the course of just 20 to 30 years. Why do you think it will happen so fast? Answer – Consider the analogy to the automobile or computer revolutions. Once things get going, they proceed rapidly. Vision, CPU power, and memory are currently holding robots back—but this will change. Robots will work better and faster than humans by 2030-2040.

Question 3 – In the past technological innovation created more jobs, not fewer. When horse-drawn plows were replaced by the tractor, security guards by the burglar alarm, craftsmen making things by factories making them, human calculators by computers, etc., it improved productivity and increased everyone’s standard of living. Why do you think that robots will create massive unemployment and other economic problems? Answer – First, no previous technology replaced 50% of the labor pool. Second, robotics won’t create new jobs; the work created by robots will be done by robots. Third, we are creating a second intelligent species that competes with humans for jobs. As the abilities of this new species improve, it will do more of our work. Fourth, past increases in productivity meant more pay and less work, but today worker wages are stagnant, and productivity gains result in a concentration of wealth. This may work itself out in the long run, but in the short run it is devastating.

Question 4 – There is no evidence for what you are saying, no economic foundation for your proposals. Answer – Just Google “jobless recovery” for the evidence. Automation fuels production increases, but does not create new jobs.

Question 5 – What you are describing is socialism. Why are you a socialist/communist? Answer – Brain responds that he is a capitalist who has started three successful businesses and written a dozen books—he is pro-market. Socialism is the view that goods should be produced and distributed by centralized government planning. But Brain argues that by giving consumers a share of the wealth—which they won’t be able to earn with work—we will “enhance capitalism by creating a large, consistent river of consumer spending,” and at the same time provide economic security to all citizens. Communism is usually identified by the loss of freedom and choice, whereas Brain wants people to have “economic freedom for the first time in human history…”

Question 6 – Why do you believe that a $25,000 per year stipend for every citizen is the solution to the problem? Answer – With robots doing all the work, we will finally have an opportunity to do this, which is better for everyone.

Question 7 – Won’t your proposals cause inflation? Answer – Tax rebates, similar to his proposals, don’t cause inflation. Neither do taxes, social security or other programs that redistribute wealth.

Question 7a – OK, maybe it won’t cause inflation. But there is no way to give everyone $25,000 per year. The GDP is only $10 trillion. Answer – Brain argues that we should do this gradually. Remember $150 billion, about what the US spent on the Iraq war in 2003, is $500 for every man, woman, and child in the US. At the moment our government collects about $20,000 per household in taxes each year and so a stipend in that range is feasible.
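The per-capita arithmetic in Brain’s reply is easy to check. A quick sketch; the roughly 300 million US population figure is my assumption (a rough 2003-era number), while the dollar amounts come from the essay:

```python
# Sanity-check the stipend arithmetic in Brain's reply.
# US_POPULATION is an assumed round number, not a figure from the essay.

US_POPULATION = 300_000_000

def per_person(total_dollars, population=US_POPULATION):
    """Dollars per person when a total is spread across the whole population."""
    return total_dollars / population

# The $150 billion example works out to $500 per person, as Brain says.
print(per_person(150e9))  # 500.0

# A full $25,000 stipend for everyone would cost about $7.5 trillion per year,
# which is why Brain proposes phasing the stipend in gradually.
print(25_000 * US_POPULATION / 1e12)  # 7.5 (trillions of dollars)
```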

Question 7b – Is $25,000 enough? Why not more? Answer – “As the economy grows, so should the stipend.”

Question 8 – Won’t robots bring dramatically lower prices? Everyone will be able to buy more stuff at lower prices. Answer – True. But current trends show that most of the wealth will end up in the hands of a few. Also, if you have no wealth it won’t matter that prices are low. For every citizen to benefit from the robotic nation, we must distribute the wealth.

Question 9 – Won’t a $25,000 per year stipend create a nation of alcoholics? Answer – Brain notes this is a common question, since many people assume that if we aren’t forced to do hard labor we’ll just do nothing or drink all day. But he has no idea where this fear comes from (probably from philosophical, moral, and religious ideas promulgated by certain groups). He dispels the idea with examples: a) he supports his wife, who works at home; b) his in-laws are retired and live on a pension and social security; c) he has independently wealthy friends; d) he knows students supported by loans; and e) many receive free education and training. None of these people are lazy or alcoholics!

Question 9a – Yes, stay-at-home moms and retirees are not alcoholic parasites, but they are exceptions. They also are not productive members of the economy. Society will collapse if we do what you are talking about. Answer – Everyone participates in the economy by spending money. Unless there are people with money there’s no economy. The cycle of getting paid a paycheck and spending it at businesses, who in turn get their money from customers, is just that—a cycle—and it will stop if people have no money. And giving a stipend won’t stop people from trying to make more money, create, invent, or play. Some people will become alcoholics, just as they do now, but Brain thinks we’ll have fewer lazy alcoholics if we provide people with enough to live decent lives.

Question 10 – Why not let capitalism run itself? We should eliminate the minimum wage, welfare, child labor laws, the 40-hour work week, antitrust laws, etc. Answer – Because of economic coercion. This economic power is why companies pay wages of a few dollars a week in most parts of the world. Better to have a universal basic income.

Question 11 – Why didn’t you include the whole world in your proposals—why are you U.S. centric? Answer – Ideally, the global economy would adopt these proposals.

Question 12 – I love this idea. How are we going to make it happen? Answer – We should spread the word.


1. These articles in their entirety can be found here.

Social Media and Personal Connection

I had a conversation today with a friend who claimed that “social media creates a false sense of connection and drives us further apart.” First of all, I’m not sure what counts as social media. For example, some argue that blogs count as social media, while others disagree. But if social media are “computer-mediated technologies that allow the creating and sharing of information, ideas, career interests and other forms of expression via virtual communities and networks,” then blogs are social media. And I do think that I connect with others through my blog.

At any rate, I wouldn’t say that social media create a “false” sense of connection, but rather a “different” sense. In life, we know others to varying degrees. A connection with someone on Facebook or Twitter may typically be shallower than a connection between people who know each other personally, but that doesn’t mean the connection is bad or false. After all, you can have face-to-face relationships which are terrible. Maybe what we should say is that modern technology allows you, in general, to communicate with vastly more people than in the past, but that with the increased quantity probably comes a loss of quality. Still, your social media acquaintances are less likely to kill you than your friends or family!

All this got me thinking about the role of technology in mediating human connection. (Disclaimer: I know nothing about communication theory.) If I Skype or talk on the phone with someone, read a book they wrote, or watch a movie about them, I am connecting with them. So I know Bertrand Russell a little bit from reading his books, but not as well as if I had lived with him. And if I read his philosophical writings, I may know him better, in some sense, than people who knew him personally but never read his books. So if he were alive today and were my Facebook friend, I don’t think we should call this a false connection. True, it wouldn’t be a deep connection, but it would be better than no connection at all.

Now consider letter writing. There was a time not that long ago when many people had “pen pals,” yesterday’s equivalent of email friends. Email is faster than letter writing, but both allow people to connect in ways that were impossible before we had computers, or paper and letter carriers. I often feel that I actually communicate better with others through writing than in person. The written word allows me to be clearer and more precise than oral communication does, and it eliminates the apprehension that often accompanies direct human interaction.

Thinking about communication reminded me that in graduate school I was fortunate enough to work in the same building with, and read some of the writing of, Walter Ong SJ (1912–2003). Ong was an American Jesuit priest, humanist, and communication theorist, and a professor of English literature at St. Louis University for many years.

Ong’s major interest was in exploring how the transition from orality to literacy influenced culture and changed human consciousness. He argued that the invention of writing played a major role in the emergence of individualism by providing the technology to think alone and to pursue intricate studies impossible in oral cultures, which rely solely on face-to-face communication and memory. Ong specifically claimed that the technologies of writing and printing created a new individualistic character: the private author who addresses an indefinite population. Paradoxically, he thought that “there is an inverse relationship between the number of people you are addressing and how alone you have to be.”

So I was introduced long ago to the sense that while technology changes communication, it doesn’t necessarily undermine it and may, in some ways, enhance it. You can easily imagine future technologies that would allow us to communicate even better, perhaps by letting us really feel what it is like to be the other person, or probe directly into others’ minds. Obviously Twitter and Facebook are shallow forms of communication, and on the whole they may be detrimental to society and personal relationships. But I reject the idea that technology necessarily leads to a decrease in the quality of human connection. In fact, on the whole, better technology allows for better communication.

Still, I offer a disclaimer, for I am sympathetic with the sentiments Andrew Sullivan expresses in “I Used To Be A Human Being”:

Every minute I was engrossed in a virtual interaction I was not involved in a human encounter. Every second absorbed in some trivia was a second less for any form of reflection, or calm, or spirituality.

So in the end, I’m just not sure about social media, technology and personal connection. Perhaps some of my readers have more ideas.

Summary of “How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist”

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, November 11, 2016.)

I recently read an article in The Atlantic by Tristan Harris, a former Product Manager at Google who studies the ethics of how the design of technology influences people’s psychology and behavior. The piece was titled “The Binge Breaker,” and it covers similar ground to his previous piece, “How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist.”

Harris is also a leader in the “Time Well Spent” movement which favors “technology designed to enhance our humanity over additional screen time. Instead of a ‘time spent’ economy where apps and websites compete for how much time they take from people’s lives, Time Well Spent hopes to re-structure design so apps and websites compete to help us live by our values and spend time well.”

Harris’ basic thesis is that “our collective tech addiction” results more from the technology itself than “on personal failings, like weak willpower.” Our smart phones, tablets, and computers seize our brains and control us, hence Harris’ call for a “Hippocratic oath” that implores software designers not to exploit “psychological vulnerabilities.” Harris and his colleague Joe Edelman compare “the tech industry to Big Tobacco before the link between cigarettes and cancer was established: keen to give customers more of what they want, yet simultaneously inflicting collateral damage on their lives.”

[I think this analogy is extraordinarily weak. The tobacco industry made a well-documented effort to make its physically deadly products more addictive, while there is no compelling evidence of any similarly sinister plot by software companies, nor are their products deadly. Tobacco will literally kill you; your smartphone will not.]

The social scientific evidence for Harris’ insights began when he was a member of the Stanford Persuasive Technology Lab. “Run by the experimental psychologist B. J. Fogg, the lab has earned a cult-like following among entrepreneurs hoping to master Fogg’s principles of ‘behavior design’—a euphemism for what sometimes amounts to building software that nudges us toward the habits a company seeks to instill.” As a result:

Harris learned that the most-successful sites and apps hook us by tapping into deep-seated human needs … [and] He came to conceive of them as ‘hijacking techniques’—the digital version of pumping sugar, salt, and fat into junk food in order to induce bingeing … McDonald’s hooks us by appealing to our bodies’ craving for certain flavors; Facebook, Instagram, and Twitter hook us by delivering what psychologists call “variable rewards.” Messages, photos, and “likes” appear on no set schedule, so we check for them compulsively, never sure when we’ll receive that dopamine-activating prize.

[Note, though, that the fact that we may become addicted to technology, and many other things too, doesn’t mean that someone is intentionally addicting us to it. For example, you may become addicted to your gym or to jogging, but that doesn’t mean the gym or the running-shoe store has nefarious intentions.]

Harris worked on Gmail’s Inbox app and is “quick to note that while he was there, it was never an explicit goal to increase time spent on Gmail.” In fact,

His team dedicated months to fine-tuning the aesthetics of the Gmail app with the aim of building a more ‘delightful’ email experience. But to him that missed the bigger picture: Instead of trying to improve email, why not ask how email could improve our lives—or, for that matter, whether each design decision was making our lives worse?

[This is an honorable view, but it is extraordinarily idealistic. First of all, improving email does minimally improve our lives, as anyone in the past who waited weeks or months for correspondence would surely attest. If the program works, allows us to communicate with our friends, etc., then it makes our lives a bit better. Of course email doesn’t directly help us obtain beauty, truth, goodness, or world peace, if that’s your goal, but that seems to be a lot to ask of an email program! Perhaps then it is a case of lowering our expectations of what a technology company, or any business, is supposed to do. Grocery stores make our lives go better, even if grocers are mostly concerned with profit. I’m not generally a fan of Smith’s “invisible hand,” but sometimes the idea provides insight. Furthermore, if Google or any company tried to improve people’s lives without showing a profit, it would soon go out of business. The only way to ultimately improve the world is to effect change in the world in which we live, not in some idealistic one that doesn’t exist.]

Harris makes a great point when he notes that “Never before in history have the decisions of a handful of designers (mostly men, white, living in SF, aged 25–35) working at 3 companies”—Google, Apple, and Facebook—“had so much impact on how millions of people around the world spend their attention … We should feel an enormous responsibility to get this right.”

Google responded to Harris’ concerns. He met with CEO Larry Page, the company organized internal Q&A sessions, and he was given a job researching ways that Google could adopt ethical design. But he says he came up against “inertia”: “Product road maps had to be followed, and fixing tools that were obviously broken took precedence over systematically rethinking services.” Despite these problems, he justified his decision to work there with the logic that since Google controls three interfaces through which millions engage with technology—Gmail, Android, and Chrome—the company was the “first line of defense.” Getting Google to rethink those products, as he’d attempted to do, had the potential to transform our online experience.

[This is one of the most insightful things that Harris says. Again, the only way to change the world is to begin with the world you find yourself in, for you really can’t begin in any other place. I agree with what Erich Fromm taught me long ago, that we should be measured by what we are, not what we have. But, on the other hand, if we have nothing we have nothing to give.]

Harris’ hope is that:

Rather than dismantling the entire attention economy … companies will … create a healthier alternative to the current diet of tech junk food … As with organic vegetables, it’s possible that the first generation of Time Well Spent software might be available at a premium price, to make up for lost advertising dollars. “Would you pay $7 a month for a version of Facebook that was built entirely to empower you to live your life?,” Harris says. “I think a lot of people would pay for that.” Like splurging on grass-fed beef, paying for services that are available for free and disconnecting for days (even hours) at a time are luxuries that few but the reasonably well-off can afford. I asked Harris whether this risked stratifying tech consumption, such that the privileged escape the mental hijacking and everyone else remains subjected to it. “It creates a new inequality. It does,” Harris admitted. But he countered that if his movement gains steam, broader change could occur, much in the way Walmart now stocks organic produce. Even Harris admits that often when your phone flashes with a new text message it is hard to resist. It is hard to feel like you are in control of the process.

[There is much to say here. First of all, there are many places to spend time well on the internet. I’d like to think that some readers of this blog find something substantive here. I also believe that “mental hijacking” is a loaded term; it implies an intent on the part of the hijacker that may not be present. Yes, Facebook, or something much worse like the sewer of alt-right politics, might hijack our minds, but religious belief, football on TV, reading, stamp collecting, or even compulsive meditating could be construed as hijacking our minds too. In the end we may have to respect individual autonomy. A few prefer to read my summaries of the great philosophers; others prefer reading about the latest Hollywood gossip.]

Concluding Reflections – I begin with a disclaimer. I know almost nothing about software product design. But I did teach philosophical issues in computer science for many years in the computer science department at UT-Austin, and I have an abiding interest in philosophy of technology. So let me say a few things.

All technologies have benefits and costs. Air conditioning makes summer endurable, but it has the potential to release hydrofluorocarbons into the air. Splitting the atom unleashes great power, but that power can be used for good or ill. Robots put people out of work, but give people potentially more time to do what they like to do. On balance, I find email a great thing, and in general I think technology, which is applied science, has been the primary force for improving the lives of human beings. So my prejudice is to withhold critique of new technology. Nonetheless, the purpose of technology should be to improve our lives, not make us miserable. Obviously.

Finally, as for young people considering careers, if you want to make a difference in the world I can think of no better place than at any of the world’s high-tech companies. They have the wealth, power and influence to actually change the world if they see fit. Whether they do that or not is up to the people who work there. So if you want to change the world, join in the battle. But whatever you do, given the world as it is, you must take care of yourself. For if you don’t do that, you will not be able to care for anything else either. Good luck.

Critique of Bill Joy’s “Why the future doesn’t need us”

“I’m Glad the Future Doesn’t Need Us: A Critique of Joy’s Pessimistic Futurism”
(Originally published in Computers and Society, Volume 32: Issue 6, June 2003. This article was later reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 24, 2016.)


In his well-known piece, “Why the future doesn’t need us,” Bill Joy argues that 21st century technologies—genetic engineering, robotics, and nanotechnology (GNR)—will extinguish human beings as we now know them, a prospect he finds deeply disturbing. I find his arguments deeply flawed and critique each of them in turn.

Joy’s unintended consequences argument cites a passage by the Unabomber, Ted Kaczynski. According to Joy, the key to this argument is the notion of unintended consequences, which is “a well-known problem with the design and use of technology…” Independent of the strength of Kaczynski’s anti-technology argument—which I also find flawed—it is hard to quibble about the existence of unintended consequences.1 And it is easy to see why. The consequences of an action are in the future relative to that action and, since the future is unknown, some consequences are unknown. Furthermore, it is self-evident that an unknown future and unknown consequences are closely connected.

However, the strongest conclusion that Joy should draw from the idea of unintended consequences is that we should carefully choose between courses of action; and yet he draws the stronger conclusion that we ought to cease and desist in the research, development, and use of 21st century technologies. But he cannot draw this stronger conclusion without contradiction if, as he thinks, many unknown, unintended consequences result from our choices. And that’s because he can’t know that abandoning future technologies will produce the intended effects. Thus the idea of unintended consequences doesn’t help Joy’s case, since it undermines the justification for any course of action. In other words, the fact of unintended consequences tells us nothing about what we ought to choose, and it certainly doesn’t give us any reason to abandon technology. Of course Joy might reply that new, powerful technologies make unintended consequences more dangerous than in the past, but as I’ve just shown, he cannot know this. It may well be that newer technologies will lead to a safer world.

Joy’s big fish eat little fish argument quotes robotics pioneer Hans Moravec: “Biological species almost never survive encounters with superior competitors.” Analogously, Joy suggests we will be driven to extinction by our superior robotic descendants. But it isn’t obvious that robots will be superior to us and, even if they were, they may be less troublesome than our neighbors next door. In addition, his vision of the future presupposes that robots and humans will remain separate creatures, a view explicitly rejected by robotics expert Rodney Brooks and others. If Brooks is correct, humans will gradually incorporate technology into their own bodies, thus eliminating the situation that Joy envisions. In sum, we don’t know that robots will be the bigger fish, that they will eat us even if they are, or that there will even be distinct fishes.

Joy’s mad scientist argument describes a molecular biologist who “constructs and disseminates a new and highly contagious plague that kills widely but selectively.” Now I have no desire to contract a plague, but Joy advances no argument that this follows from GNR; instead, he plays on our emotions by associating this apocalyptic vision with future technology. (In fact, medical science is the primary reason we have avoided plagues.) The images of the mad scientist and Frankenstein may be popular, but scientists are no madder than anyone else, and a nightmare is only one of many possible futures.

Joy’s lack of control argument focuses upon the self-replicating nature of GNR. According to Joy, self-replication amplifies the danger of GNR: “A bomb is blown up only once—but one bot can become many, and quickly get out of control.” First of all, bombs replicate; they just don’t replicate by themselves. So Joy’s concern must not be with replication, but with self-replication. What is it about robotic self-replication that frightens us? The answer is obvious. Robotic self-replication appears to be out of our control, as compared with our own or other humans’ self-replication. Specifically, Joy fears that robots might replicate and then enslave us; but other humans can do the same thing. In fact, we may increase our survival chances by switching control to more failsafe robots designed and programmed by our minds. While Joy is correct that “uncontrolled self-replication in these newer technologies runs … a risk of substantial damage in the physical world,” so too does the “uncontrolled self-replication” of humans, their biological tendencies, their hatreds, and their ideologies. Joy’s fears are not well-founded because the lack of control over robotic self-replication is not, prima facie, more frightening than the similar lack of control we exert over other humans’ replication.

Furthermore, to what extent do we control our own reproduction? I’d say not much. Human reproduction results from a haphazard set of cultural, geographical, biological, and physiological circumstances; clearly, we exert less control over when, if, and with whom we reproduce than we suppose. And we certainly don’t choose the exact nature of what’s to be reproduced; we don’t replicate perfectly. We could change this situation through genetic engineering, but Joy opposes this technology. He would rather let control over human replication remain in the hands of chance—at least chance as determined by the current state of our technology. But if he fears the lack of control implied by robotic self-replication, why not fear the lack of control over our own replication and apply more control to change this situation? In that way, we could enhance our capabilities and reduce the chance of not being needed.

Of course Joy would reiterate that we ought to leave things as they are now. But why? Is there something perfect or natural about the current state of our knowledge and technology? Or would things be better if we turned the technological clock back to 1950? 1800? Or 2000 B.C.? I suggest that the vivid contrast Joy draws between the control we wield over our own replication and the lack of it regarding self-replicating machines is illusory. We now have, and may always have, more control over the results of our conscious designs and programs than we do over ourselves or other people whose programs were written by evolution. If we want to survive and flourish, then we ought to engineer ourselves with foresight and, at the same time, engineer machines consistent with these goals.

Joy’s easy access argument claims that 20th century technologies—nuclear, biological, and chemical (NBC)—required access to rare “raw materials and highly protected information,” while 21st century technologies “are widely within the reach of individuals or small groups.” This means that “knowledge alone will enable the use of them,” a phenomenon that Joy terms “knowledge-enabled mass destruction” (KMD).

Now it is difficult to quibble with the claim that powerful, accessible technologies pose a threat to our survival. Joy might argue that even if we survived the 21st century without destroying ourselves, what of the 22nd or the 23rd centuries when more accessible and powerful KMD becomes possible? Of course we could freeze technology, but it is uncertain that this would be either realistic or advisable. Most likely the trend of cultural evolution over thousands of years will continue—we will gain more control and power over reality.

Now is this more threatening than if we stood still? This is the real question Joy should ask, because there are risks no matter what we do. If we remain at our current level of technology, we will survive until we self-destruct or are destroyed by universal forces, say the impact of an asteroid or the sun’s exhaustion of its energy. But if we press forward, we may be able to save ourselves. Sure, we must be mindful of the promises and the perils of future technologies, but nothing Joy says justifies his conclusion that “we are on the cusp of the further perfection of extreme evil…” Survival is a goal, but I don’t believe that abandoning new technologies will assure this result or even make it more likely; it just isn’t clear that limiting the access to or discovery of knowledge is, or has ever been, the solution to human woes.

Joy’s poor design abilities argument notes how often we “overestimate our design abilities,” and concludes: “shouldn’t we proceed with great caution?” But he forgets that we sometimes underestimate our design abilities, and sometimes we are too cautious. Go forward with caution; look before you leap—but don’t stand still.

I take the next argument to be his salient one. He claims that scientists dream of building conscious machines primarily because they want to achieve immortality by downloading their consciousness into them. While he accepts this as a distinct possibility, his existential argument asks whether we will still be human after we download: “It seems far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.” The strength of this argument depends on the meaning of “in any sense,” “no sense,” “humanity,” and “lost.” Let’s consider each in turn.

It is simply false that a human consciousness downloaded into a robotic body would not be human “in any sense.” If our consciousness is well-preserved in the transfer, then something of our former existence would remain, namely our psychological continuity, the part most believe to be our defining feature. And if robotic bodies were sufficiently humanlike—why we would want them to be is another question—then there would be a semblance of physical continuity as well. In fact, such an existence would be very much like human existence now if the technologies were sufficiently perfected. So we would still be human to some, if not a great, extent. However, I believe we would come to prefer an existence with less pain, suffering, and death to our current embodied state; and the farther we distanced ourselves from our former lives, the happier we would be.

As to whether robots would “in no sense” be our children, the same kind of argument applies. Whatever our descendants become they will, in some sense, be our children in the same way that we are, in some sense, the children of stars. Again notice that the extent to which we would want our descendants to be like us depends upon our view of ourselves. If we think that we now experience the apex of consciousness, then we should mourn our descendants’ loss of humanity. But if we hold that more complex forms of consciousness may evolve from ours, then we will rejoice at the prospect that our descendants might experience these forms, however non-human-like they may be. But then, why would anyone want to limit the kind of consciousness their descendants experience?

As for our “humanity being lost,” this is true in the sense that human nature will evolve beyond its present state, but false in the sense that there will still be a developmental continuity from beings past and present to beings in the future. Joy wants to limit our offspring for the sake of survival, but isn’t mere survival a lowly goal? Wouldn’t many of us prefer death to the infinite boredom of standing still? Wouldn’t we like to evolve beyond humanity? It isn’t obvious that we have achieved the pinnacle of evolution, or that the small amount of space and time we fill satisfies us. Instead it is clear that we are deeply flawed and finite—we age, decay, lose our physical and mental faculties, and then perish. A lifetime of memories, knowledge, and wisdom, lost. Oh, that it could be better! Joy’s nostalgic longing for the past and naïve view that we can preserve the present are misguided, however well they may resonate with those who share similar longings or fear the inevitable future. Our descendants won’t desire to be us any more than we desire to be our long-ago ancestors. As Tennyson proclaims: “How dull it is to pause, to make an end, / To rust unburnish’d, not to shine in use!”2

Joy next turns to his other technologies make things worse argument. As for genetic engineering, I know of no reason—short of childish pleas not to play God—to impede our increasing abilities to perfect our bodies, eliminate disease, and prevent deformity. To not do so would be immoral, making us culpable for an untold amount of preventable suffering and death. And even if there are Gods who have endowed us with intelligence, it would hardly make sense that they didn’t mean for us to use it. As for nanotechnology, Joy eloquently writes of how “engines of creation” may transform into “engines of destruction,” but again it is hard to see why we or the Gods would prefer that we remain ignorant about nanotechnology.

Joy also claims that there is something sinister about the fact that NBC technologies have largely military uses and were developed by governments, while GNR have commercial uses and are being developed by corporations. Unfortunately, Joy gives us no reason whatsoever to share his fear. Are the commercial products of private corporations more likely to cause destruction than the military products of governments? At first glance, the opposite seems more likely to be true, and Joy gives us no reason to reconsider.

Joy’s it’s never been this bad argument asserts: “this is the first moment in the history of our planet when any species by its voluntary actions has become a danger to itself.” But this is false. Homo sapiens have always been a danger to themselves, both by their actions, as in incessant warfare, and by their inaction, as demonstrated by their impotence when facing plague and famine. I also doubt that humans are a greater threat to themselves now than ever before. We have explored and spread ourselves to all parts of the globe, multiplied exponentially, extended our life spans, created culture, and may soon have the power to increase our chances of survival against both celestial and terrestrial forces. This should be a cause for celebration, not despair. We no longer need be at the mercy of forces beyond our control; we may soon direct our own evolution.

Joy next quotes Carl Sagan to the effect that the survival of cultures producing technology depends on “what may and what may not be done.” Joy interprets this insight as the essence of common sense or cultural wisdom. Independent of the question of whether this is a good definition of common sense, Joy assumes that Sagan’s phrase applies to an entire century’s technologies, when it is more likely that it applies to only some of them. It is hard to imagine that Sagan, a champion of science, meant for us to forgo 21st century technology altogether.

And I vehemently dispute Joy’s claim that science is arrogant in its pursuits; on the contrary, it is among the humblest of human pursuits, carefully and conscientiously trying to tease a bit of truth from reality. Its claims are always tentative and amenable to contrary evidence—much more than can be said for most creeds. And what of the charlatans, psychics, cultists, astrologers, and faith-healers? Not to mention the somewhat more respectable priests and preachers. Science humbly declines to pretend to certainty, which is more than can be said of many of its detractors.

And what of his claim that we have no business pursuing robotics and AI when we have “so much trouble …understanding—ourselves?” My reply to this argument is that self-knowledge is the ultimate goal of the pursuit of knowledge. Joy sentimentally notes that his grandmother “had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order,” but this is hopelessly naïve and belied by the facts. Would he have us die poor and young, be food for beasts, defenseless against disease, living lives that were, as Hobbes so aptly put it, “nasty, brutish, and short?” The impotence and passivity implied by respecting the natural order has condemned millions to death.3 In fact, the life that Joy and most of the rest of us enjoy was built on the labors of persons who fought mightily with the natural order and the pain, poverty, and suffering that nature exudes. Where would we be without Pasteur and Fleming and Salk? As Joy points out, life may be fragile, but it was more so in a past that was nothing like the idyllic paradise he imagines.

Joy’s analogy between the nuclear arms race and possible GNR races is also misplaced, inasmuch as the 20th century arms race resulted as much from a unique historical situation and conflicting ideologies as from some unstoppable technological momentum. Evidence for this is found in the reduction of nuclear warheads by the superpowers both during and after the cold war. Yes, we need to learn from the past, but its lessons are not necessarily the ones Joy alludes to. Should we not have developed nuclear weapons? Is he sure that the world would be better today had there been no Manhattan Project?

Now it may be that we are chasing our own tails as we try to create defenses against the threats that new technologies pose. Possibly, every countermeasure is as dangerous as the technology it was meant to counter. But Joy’s conclusion is curious: “The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.” In the first place, it is unrealistic to believe that we could limit the pursuit of knowledge even if we wanted to and it were a good idea. Second, this “freeze” at current levels of technology does not expunge the danger; the danger exists now.

A basic difficulty with Joy’s article is this: he mistakenly accepts the notion that technology rules people rather than the reverse.4 But if we can control our technology, there is another solution to our dilemmas. We can use our technology to change ourselves; to make ourselves more ethical, cautious, insightful, and intelligent. Surely Joy believes that humans make choices; how else could they choose relinquishment? So why not change ourselves, relinquishing not our pursuit of knowledge, but our self-destructive tendencies?

Joy’s hysteria blinds him to the possible fruits of our knowledge, and his pessimism won’t allow him to see that knowledge and its applications are key to our salvation. Instead, he appeals to the ethics of the Dalai Lama to save us, as if yet another religious ethics will offer escape from the less noble angels of our nature. I know of no good evidence that the prescriptions of religious ethics have, on the whole, increased the morality of the human race. No doubt the contrary case could easily be made. Why not then use our knowledge to gain mastery over ourselves? If we do that, mastery of our technology will take care of itself. Joy’s concerns are legitimate, but his solutions are unrealistic. His planned knowledge stoppage condemns human beings to an existence that cannot improve. And if that’s the case, what is the point of life?

I say forgo Joy’s pessimism; reject all barriers and limitations to our intelligence, health, and longevity. Be mindful of our past accomplishments, appreciative of all that we are, but be driven passionately and creatively forward by the hope of all that we may become. Therein lies the hope of humankind and its descendants. In the words of Walt Whitman:

This day before dawn I ascended a hill,
and look’d at the crowded heaven,
And I said to my Spirit,
When we become the enfolders of those orbs,
and the pleasure and knowledge of everything in them,
shall we be fill’d and satisfied then?
And my Spirit said:
No, we but level that lift,
to pass and continue beyond.5
~ Walt Whitman 


1. Kaczynski argues that machines will either: a) make all the decisions, thus rendering humans obsolete; or b) remain under human control. If b, then only an elite will rule, in which case they will: 1) quickly exterminate the masses; 2) slowly exterminate the masses; or 3) take care of the masses. However, if 3, then the masses will be happy but not free, and life would have no meaning. My questions for Kaczynski are these: Does he really think the only way for humans to be happy is in an agricultural paradise? Does he think an agricultural life was a paradise? A hunter-gatherer life? Are we really less free when we have loosened the chains of our evolutionary heritage, or are we more free? Kaczynski’s vision of a world where one doesn’t work and pursues one’s own interests while being very happy sounds good to me.

2. From Alfred Lord Tennyson’s “Ulysses.”

3. I would argue that had the rise of Christianity in the West not stopped scientific advancement for a thousand years until the Renaissance, we might be immortals already.

4. As in Thoreau’s well-known phrase which appears, not surprisingly, on the Luddite home page: “We do not ride on the railroad; it rides upon us.”

5. From Walt Whitman’s “Song of Myself” in Leaves of Grass.

Summary of Bill Joy’s “Why the future doesn’t need us”


Bill Joy (1954 – ) is an American computer scientist who co-founded Sun Microsystems in 1982 and served as chief scientist at the company until 2003. His now famous Wired magazine essay, “Why the future doesn’t need us” (2000), sets forth his deep concerns over the development of modern technologies.[i]

Joy traces his concern to a discussion he had with Ray Kurzweil at a conference in 1998. Taken aback by Kurzweil’s predictions, he read an early draft of The Age of Spiritual Machines: When Computers Exceed Human Intelligence, and found it deeply disturbing. Subsequently he encountered arguments by the Unabomber Ted Kaczynski. Kaczynski argued that if machines do all the work, as they inevitably will, then we can: a) let the machines make all the decisions; or b) maintain human control over the machines.

If we choose “a” then we are at the mercy of our machines. It is not that we would give them control or that they would take control; rather, we might become so dependent on them that we would have to accept their commands. Needless to say, Joy doesn’t like this scenario. If we choose “b” then control would be in the hands of an elite, and the masses would be unnecessary. In that case the tiny elite would: 1) exterminate the masses; 2) reduce their birthrate so they slowly become extinct; or 3) become benevolent shepherds to the masses. The first two scenarios entail our extinction, but even the third option is no good. In this last scenario the elite would see to it that all the physical and psychological needs of the masses are met, while at the same time engineering the masses to sublimate their drive for power. In this case the masses might be happy, but they would not be free.

Joy finds these arguments convincing and deeply troubling. About this time Joy read Moravec’s book where he found more of the same kind of predictions. He found himself especially concerned by Moravec’s claim that technological superiors always defeat the inferiors, as well as his contention that humans will become extinct as they merge with the robots. Disturbed, Joy consulted other computer scientists who basically agreed with these technological predictions but were themselves unconcerned. Joy was stirred to action.

Joy’s concern focuses on the transforming technologies of the 21st century—genetics, nanotechnology, and robotics (GNR). What is particularly problematic about them is their potential to self-replicate. This makes them inherently more dangerous than 20th century technologies—nuclear, biological, and chemical weapons—which were expensive to build and required rare raw materials. By contrast, 21st century technologies allow small groups or individuals to bring about massive destruction. Joy accepts that we will soon achieve the computing power needed to implement some of the dreams of Kurzweil and Moravec, worrying nevertheless that we overestimate our design abilities. Such hubris may lead to disaster.

Robotics is primarily motivated by the desire to be immortal—by downloading ourselves into robots. (Joy uses the terms uploading and downloading interchangeably.) But Joy doesn’t believe that we will be human after the download or that the robots would be our children. As for genetic engineering, it will create new crops, plants, and eventually new species, including many variations of the human species, but Joy fears that we do not know enough to conduct such experiments safely. And nanotechnology confronts the so-called “gray goo” problem—self-replicating nanobots out of control. In short, we may be on the verge of killing ourselves! Is it not arrogant, he wonders, to design a robot replacement species when we so often make design mistakes?

Joy concludes that we ought to relinquish these technologies before it’s too late. Yes, GNR may bring happiness and immortality, but should we risk the survival of the species for such goals? Joy thinks not.

Summary – Genetics, nanotechnology, and robotics are too dangerous to pursue. We should relinquish them.


[i] Bill Joy, “Why The Future Doesn’t Need Us,” Wired Magazine, April 2000.