Category Archives: Science & Technology

Social Media and Personal Connection

I had a conversation today with a friend who claimed that “social media creates a false sense of connection and drives us further apart.” First of all, I’m not sure what counts as social media. For example, some argue that blogs count as social media, while others disagree. But if social media are “computer-mediated technologies that allow the creating and sharing of information, ideas, career interests and other forms of expression via virtual communities and networks,” then blogs are social media. And I do think that I connect with others through my blog.

At any rate, I wouldn’t say that social media create a “false” sense of connection, but rather a “different” sense. In life, we know others to varying degrees. A connection with someone on Facebook or Twitter may typically be shallower than a connection between people who know each other personally, but that doesn’t mean the connection is bad or false. After all, you can have face-to-face relationships that are terrible. Maybe what we should say is that modern technology allows you, in general, to communicate with vastly more people than in the past, but that with the increased quantity probably comes a loss of quality. Still, your social media acquaintances are less likely to kill you than your friends or family are!

What all this got me thinking about was the role of technology in mediating human connectivity. (Disclaimer: I know nothing about communication theory.) If I Skype or talk on the phone with someone, read a book they wrote or watch a movie about them, I am connecting with them. So I know Bertrand Russell a little bit from reading his books, but not as well as if I had lived with him. And if I read his philosophical writings, I may know him better, in some sense, than people who knew him personally but never read his books. So if he were alive today and were my Facebook friend, I don’t think we should call this a false connection. True, it wouldn’t be a deep connection, but it would be better than no connection at all.

Now consider letter writing. There was a time not that long ago when many people had “pen pals,” yesterday’s equivalent of email friends. Email is faster than letter writing, but both allow people to connect in ways that were impossible before we had computers, or paper and letter carriers. I often feel that I actually communicate better with others through writing than in person. Using the written word allows me to be clearer and more precise than I am in oral communication, and it eliminates the apprehension that often accompanies direct human interactions.

Thinking about communications reminded me that in graduate school I was fortunate enough to work in the same building with, and read some of the writing of, Walter Ong, SJ (1912–2003). Ong was an American Jesuit priest, humanist, and communication theorist, and a professor of English literature at St. Louis University for many years.

Ong’s major interest was in exploring how the transition from orality to literacy influenced culture and changed human consciousness. He argued that the invention of writing played a major role in the emergence of individualism by providing the technology to think alone and to pursue intricate studies impossible in oral cultures that rely solely on face-to-face communication and memory. Ong claimed, specifically, that the technologies of writing and printing created a new individualistic character: the private author who addresses an indefinite population. Paradoxically, he thought that “there is an inverse relationship between the number of people you are addressing and how alone you have to be.”

So I was introduced long ago to the sense that while technology changes communication, it doesn’t necessarily undermine it and may, in some ways, enhance it. You can easily imagine future technologies that would allow us to communicate even better, perhaps by letting us really feel what it is like to be the other, or probe directly into others’ minds. Obviously Twitter and Facebook are shallow forms of communication, and on the whole they may be detrimental to society and personal relationships. But I reject the idea that technology necessarily leads to a decrease in the quality of human connectivity. In fact, on the whole, better technology allows for better communication.

Still, I offer a disclaimer, for I am sympathetic with the sentiments Andrew Sullivan expresses in “I Used To Be A Human Being”:

Every minute I was engrossed in a virtual interaction I was not involved in a human encounter. Every second absorbed in some trivia was a second less for any form of reflection, or calm, or spirituality.

So in the end, I’m just not sure about social media, technology and personal connection. Perhaps some of my readers have more ideas.

Summary of “How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist”

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, November 11, 2016.)

I recently read an article in The Atlantic by Tristan Harris, a former Product Manager at Google who studies the ethics of how the design of technology influences people’s psychology and behavior. The piece, titled “The Binge Breaker,” covers similar ground to his previous piece, “How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist.”

Harris is also a leader in the “Time Well Spent” movement which favors “technology designed to enhance our humanity over additional screen time. Instead of a ‘time spent’ economy where apps and websites compete for how much time they take from people’s lives, Time Well Spent hopes to re-structure design so apps and websites compete to help us live by our values and spend time well.”

Harris’ basic thesis is that “our collective tech addiction” results more from the technology itself than “on personal failings, like weak willpower.” Our smart phones, tablets, and computers seize our brains and control us, hence Harris’ call for a “Hippocratic oath” that implores software designers not to exploit “psychological vulnerabilities.” Harris and his colleague Joe Edelman compare “the tech industry to Big Tobacco before the link between cigarettes and cancer was established: keen to give customers more of what they want, yet simultaneously inflicting collateral damage on their lives.”

[I think this analogy is extraordinarily weak. The tobacco industry made a well-documented effort to make its physically deadly products more addictive, while there is no compelling evidence of any similarly sinister plot among software companies, nor are their products deadly. Tobacco will literally kill you; your smart phone will not.]

The social scientific evidence for Harris’ insights began when he was a member of the Stanford Persuasive Technology Lab. “Run by the experimental psychologist B. J. Fogg, the lab has earned a cult-like following among entrepreneurs hoping to master Fogg’s principles of ‘behavior design’—a euphemism for what sometimes amounts to building software that nudges us toward the habits a company seeks to instill.” As a result:

Harris learned that the most-successful sites and apps hook us by tapping into deep-seated human needs … [and] He came to conceive of them as ‘hijacking techniques’—the digital version of pumping sugar, salt, and fat into junk food in order to induce bingeing … McDonald’s hooks us by appealing to our bodies’ craving for certain flavors; Facebook, Instagram, and Twitter hook us by delivering what psychologists call “variable rewards.” Messages, photos, and “likes” appear on no set schedule, so we check for them compulsively, never sure when we’ll receive that dopamine-activating prize.

[Note, though, that the fact that we may become addicted to technology, and to many other things too, doesn’t mean that someone is intentionally addicting us to it. For example, you may become addicted to your gym or to jogging, but that doesn’t mean that the gym or the running shoe store has nefarious intentions.]

Harris worked on Gmail’s Inbox app and is “quick to note that while he was there, it was never an explicit goal to increase time spent on Gmail.” In fact,

His team dedicated months to fine-tuning the aesthetics of the Gmail app with the aim of building a more ‘delightful’ email experience. But to him that missed the bigger picture: Instead of trying to improve email, why not ask how email could improve our lives—or, for that matter, whether each design decision was making our lives worse?

[This is an honorable view, but it is extraordinarily idealistic. First of all, improving email does minimally improve our lives, as anyone in the past who waited weeks or months for correspondence would surely attest. If the program works, allows us to communicate with our friends, etc., then it makes our lives a bit better. Of course email doesn’t directly help us obtain beauty, truth, goodness or world peace, if that’s your goal, but that seems to be a lot to ask of an email program! Perhaps then it is a case of lowering our expectations of what a technology company, or any business, is supposed to do. Grocery stores make our lives go better, even if grocers are mostly concerned with profit. I’m not generally a fan of Smith’s “invisible hand,” but sometimes the idea provides insight. Furthermore, if Google or any company tried to improve people’s lives without showing a profit, it would soon go out of business. The only way to ultimately improve the world is to effect change in the world in which we live, not in some idealistic one that doesn’t exist.]

Harris makes a great point when he notes that “Never before in history have the decisions of a handful of designers (mostly men, white, living in SF, aged 25–35) working at 3 companies”—Google, Apple, and Facebook—“had so much impact on how millions of people around the world spend their attention … We should feel an enormous responsibility to get this right.”

Google responded to Harris’ concerns. He met with CEO Larry Page, the company organized internal Q&A sessions, and he was given a job researching ways that Google could adopt ethical design. But he says he came up against “inertia”: “Product road maps had to be followed, and fixing tools that were obviously broken took precedence over systematically rethinking services.” Despite these problems, he justified his decision to work there with the logic that since Google controls three interfaces through which millions engage with technology—Gmail, Android, and Chrome—the company was the “first line of defense. Getting Google to rethink those products, as he’d attempted to do, had the potential to transform our online experience.”

[This is one of the most insightful things that Harris says. Again, the only way to change the world is to begin with the world you find yourself in, for you really can’t begin in any other place. I agree with what Erich Fromm taught me long ago, that we should be measured by what we are, not what we have. But, on the other hand, if we have nothing we have nothing to give.]

Harris’ hope is that:

Rather than dismantling the entire attention economy … companies will … create a healthier alternative to the current diet of tech junk food … As with organic vegetables, it’s possible that the first generation of Time Well Spent software might be available at a premium price, to make up for lost advertising dollars. “Would you pay $7 a month for a version of Facebook that was built entirely to empower you to live your life?,” Harris says. “I think a lot of people would pay for that.” Like splurging on grass-fed beef, paying for services that are available for free and disconnecting for days (even hours) at a time are luxuries that few but the reasonably well-off can afford. I asked Harris whether this risked stratifying tech consumption, such that the privileged escape the mental hijacking and everyone else remains subjected to it. “It creates a new inequality. It does,” Harris admitted. But he countered that if his movement gains steam, broader change could occur, much in the way Walmart now stocks organic produce. Even Harris admits that when your phone flashes with a new text message it is often hard to resist, and hard to feel that you are in control of the process.

[There is much to say here. First of all, there are many places to spend time well on the internet. I’d like to think that some readers of this blog find something substantive here. I also believe that “mental hijacking” is a loaded term. It implies an intent on the part of the hijacker that may not be present. Yes, Facebook, or something much worse like the sewer of alt-right politics, might hijack our minds, but religious belief, football on TV, reading, stamp collecting, or even compulsive meditating could be construed as hijacking our minds. In the end we may have to respect individual autonomy. A few prefer to read my summaries of the great philosophers; others prefer reading about the latest Hollywood gossip.]

Concluding Reflections – I begin with a disclaimer. I know almost nothing about software product design. But I did teach philosophical issues in computer science for many years in the computer science department at UT-Austin, and I have an abiding interest in philosophy of technology. So let me say a few things.

All technologies have benefits and costs. Air conditioning makes summer endurable, but it has the potential to release hydrofluorocarbons into the air. Splitting the atom unleashes great power, but that power can be used for good or ill. Robots put people out of work, but give people potentially more time to do what they like to do. On balance, I find email a great thing, and in general I think technology, which is applied science, has been the primary force for improving the lives of human beings. So my prejudice is to withhold critique of new technology. Nonetheless, the purpose of technology should be to improve our lives, not make us miserable. Obviously.

Finally, as for young people considering careers, if you want to make a difference in the world I can think of no better place than at any of the world’s high-tech companies. They have the wealth, power and influence to actually change the world if they see fit. Whether they do that or not is up to the people who work there. So if you want to change the world, join in the battle. But whatever you do, given the world as it is, you must take care of yourself. For if you don’t do that, you will not be able to care for anything else either. Good luck.

Critique of Bill Joy’s “Why the future doesn’t need us”

“I’m Glad the Future Doesn’t Need Us: A Critique of Joy’s Pessimistic Futurism”
(Originally published in Computers and Society, Volume 32: Issue 6, June 2003. This article was later reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 24, 2016.)


In his well-known piece, “Why the future doesn’t need us,” Bill Joy argues that 21st century technologies—genetic engineering, robotics, and nanotechnology (GNR)—will extinguish human beings as we now know them, a prospect he finds deeply disturbing. I find his arguments deeply flawed and critique each of them in turn.

Joy’s unintended consequences argument cites a passage by the Unabomber Ted Kaczynski. According to Joy, the key to this argument is the notion of unintended consequences, which is “a well-known problem with the design and use of technology…” Independent of the strength of Kaczynski’s anti-technology argument—which I also find flawed—it is hard to quibble about the existence of unintended consequences.1 And it is easy to see why. The consequences of an action are in the future relative to that action and, since the future is unknown, some consequences are unknown. An unknown future and unknown consequences are thus inseparable.

However, the strongest conclusion that Joy should draw from the idea of unintended consequences is that we should carefully choose between courses of action; and yet he draws the stronger conclusion that we ought to cease and desist from the research, development, and use of 21st century technologies. But he cannot draw this stronger conclusion without contradiction if, as he thinks, many unknown, unintended consequences result from our choices. And that’s because he can’t know that abandoning future technologies will produce the intended effects. Thus the idea of unintended consequences doesn’t help Joy’s case, since it undermines the justification for any course of action. In other words, the fact of unintended consequences tells us nothing about what we ought to choose, and it certainly doesn’t give us any reason to abandon technology. Of course Joy might reply that new, powerful technologies make unintended consequences more dangerous than in the past, but as I’ve just shown, he cannot know this. It may well be that newer technologies will lead to a safer world.

Joy’s big fish eat little fish argument quotes robotics pioneer Hans Moravec: “Biological species almost never survive encounters with superior competitors.” Analogously, Joy suggests we will be driven to extinction by our superior robotic descendants. But it isn’t obvious that robots will be superior to us and, even if they were, they may be less troublesome than our neighbors next door. In addition, his vision of the future presupposes that robots and humans will remain separate creatures, a view explicitly rejected by robotics expert Rodney Brooks and others. If Brooks is correct, humans will gradually incorporate technology into their own bodies, thus eliminating the situation that Joy envisions. In sum, we don’t know that robots will be the bigger fish, that they will eat us even if they are, or that there will even be distinct fishes.

Joy’s mad scientist argument describes a molecular biologist who “constructs and disseminates a new and highly contagious plague that kills widely but selectively.” Now I have no desire to contract a plague, but Joy advances no argument that this follows from GNR; instead, he plays on our emotions by associating this apocalyptic vision with future technology. (In fact, medical science is the primary reason we have avoided plagues.) The images of mad scientist or Frankenstein may be popular, but scientists are no madder than anyone else and nightmarish describes only one possible future.

Joy’s lack of control argument focuses upon the self-replicating nature of GNR. According to Joy, self-replication amplifies the danger of GNR: “A bomb is blown up only once—but one bot can become many, and quickly get out of control.” First of all, bombs replicate; they just don’t replicate by themselves. So Joy’s concern must not be with replication, but with self-replication. What is it, then, about robotic self-replication that frightens us? The answer is obvious. Robotic self-replication appears to be out of our control, as compared to our own or other humans’ self-replication. Specifically, Joy fears that robots might replicate and then enslave us; but other humans can do the same thing. In fact, we may increase our survival chances by switching control to more failsafe robots designed and programmed by our minds. While Joy is correct that “uncontrolled self-replication in these newer technologies runs … a risk of substantial damage in the physical world,” so too does the “uncontrolled self-replication” of humans, their biological tendencies, their hatreds, and their ideologies. Joy’s fears are not well-founded because the lack of control over robotic self-replication is not, prima facie, more frightening than the similar lack of control we exert over other humans’ replication.

Furthermore, to what extent do we control our own reproduction? I’d say not much. Human reproduction results from a haphazard set of cultural, geographical, biological, and physiological circumstances; clearly, we exert less control over when, if, and with whom we reproduce than we suppose. And we certainly don’t choose the exact nature of what’s to be reproduced; we don’t replicate perfectly. We could change this situation through genetic engineering, but Joy opposes this technology. He would rather let control over human replication remain in the hands of chance—at least chance as determined by the current state of our technology. But if he fears the lack of control implied by robotic self-replication, why not fear that lack of control over our own replication and apply more control to change this situation? In that way, we could enhance our capabilities and reduce the chance of not being needed.

Of course Joy would reiterate that we ought to leave things as they are now. But why? Is there something perfect or natural about the current state of our knowledge and technology? Or would things be better if we turned the technological clock back to 1950? 1800? 2000 B.C.? I suggest that the vivid contrast Joy draws between the control we wield over our own replication and the lack of it regarding self-replicating machines is illusory. We now have, and may always have, more control over the results of our conscious designs and programs than we do over ourselves or other people, whose programs were written by evolution. If we want to survive and flourish then we ought to engineer ourselves with foresight and, at the same time, engineer machines consistent with these goals.

Joy’s easy access argument claims that 20th century technologies—nuclear, biological, and chemical (NBC)—required access to rare “raw materials and highly protected information,” while 21st century technologies “are widely within the reach of individuals or small groups.” This means that “knowledge alone will enable the use of them,” a phenomenon that Joy terms: “knowledge-enabled mass destruction (KMD).”

Now it is difficult to quibble with the claim that powerful, accessible technologies pose a threat to our survival. Joy might argue that even if we survived the 21st century without destroying ourselves, what of the 22nd or the 23rd centuries when more accessible and powerful KMD becomes possible? Of course we could freeze technology, but it is uncertain that this would be either realistic or advisable. Most likely the trend of cultural evolution over thousands of years will continue—we will gain more control and power over reality.

Now is this more threatening than if we stood still? This is the real question that Joy should ask because there are risks no matter what we do. If we remain at our current level of technology we will survive until we self-destruct or are destroyed by universal forces, say the impact of an asteroid or the sun’s exhaustion of its energy. But if we press forward, we may be able to save ourselves. Sure, we must be mindful of the promises and the perils of future technologies, but nothing Joy says justifies his conclusion that: “we are on the cusp of the further perfection of extreme evil…” Survival is a goal, but I don’t believe that abandonment of new technologies will assure this result or even make it more likely; it just isn’t clear that limiting the access to or discovery of knowledge is, or has ever been, the solution to human woes.

Joy’s poor design abilities argument notes how often we “overestimate our design abilities,” and concludes: “shouldn’t we proceed with great caution?” But he forgets that we sometimes underestimate our design abilities, and sometimes we are too cautious. Go forward with caution, look before you leap—but don’t stand still.

I take the next argument to be his salient one. He claims that scientists dream of building conscious machines primarily because they want to achieve immortality by downloading their consciousness into them. While he accepts this as a distinct possibility, his existential argument asks whether we will still be human after we download: “It seems far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.” The strength of this argument depends on the meaning of “in any sense,” “no sense,” “humanity,” and “lost.” Let’s consider each in turn.

It is simply false that a human consciousness downloaded into a robotic body would not be human “in any sense.” If our consciousness is well-preserved in the transfer, then something of our former existence would remain, namely our psychological continuity, the part most believe to be our defining feature. And if robotic bodies were sufficiently humanlike—why we would want them to be is another question—then there would be a semblance of physical continuity as well. In fact, such an existence would be very much like human existence now if the technologies were sufficiently perfected. So we would still be human to some, if not a great, extent. However, I believe we would come to prefer an existence with less pain, suffering, and death to our current embodied state; and the farther we distanced ourselves from our former lives the happier we will be.

As to whether robots would “in no sense” be our children, the same kind of argument applies. Whatever our descendants become they will, in some sense, be our children in the same way that we are, in some sense, the children of stars. Again notice that the extent to which we would want our descendants to be like us depends upon our view of ourselves. If we think that we now experience the apex of consciousness, then we should mourn our descendants’ loss of humanity. But if we hold that more complex forms of consciousness may evolve from ours, then we will rejoice at the prospect that our descendants might experience these forms, however non-human-like they may be. But then, why would anyone want to limit the kind of consciousness their descendants experience?

As for our “humanity being lost,” this is true in the sense that human nature will evolve beyond its present state, but false in the sense that there will still be a developmental continuity from beings past and present to beings in the future. Joy wants to limit our offspring for the sake of survival, but isn’t mere survival a lowly goal? Wouldn’t many of us prefer death to the infinite boredom of standing still? Wouldn’t we like to evolve beyond humanity? It isn’t obvious that we have achieved the pinnacle of evolution, or that the small amount of space and time we fill satisfies us. Instead it is clear that we are deeply flawed and finite—we age, decay, lose our physical and mental faculties, and then perish. A lifetime of memories, knowledge, and wisdom, lost. Oh, that it could be better! Joy’s nostalgic longings for the past and naïve view that we can preserve the present are misguided, however well they may resonate with those who share similar longings or fear the inevitable future. Our descendants won’t desire to be us any more than we desire to be our long-ago ancestors. As Tennyson proclaims: “How dull it is to pause, to make an end, To rust unburnish’d, not to shine in use!”2

Joy next turns to his other technologies make things worse argument. As for genetic engineering, I know of no reason—short of childish pleas not to play God—to impede our increasing abilities to perfect our bodies, eliminate disease, and prevent deformity. Not to do so would be immoral, making us culpable for an untold amount of preventable suffering and death. And even if there are Gods who have endowed us with intelligence, it would hardly make sense that they didn’t mean for us to use it. As for nanotechnology, Joy eloquently writes of how “engines of creation” may transform into “engines of destruction,” but again it is hard to see why we or the Gods would prefer that we remain ignorant about nanotechnology.

Joy also claims that there is something sinister about the fact that NBC technologies have largely military uses and were developed by governments, while GNR have commercial uses and are being developed by corporations. Unfortunately, Joy gives us no reason whatsoever to share his fear. Are the commercial products of private corporations more likely to cause destruction than the military products of governments? At first glance, the opposite seems more likely to be true, and Joy gives us no reason to reconsider.

Joy’s it’s never been this bad argument asserts: “this is the first moment in the history of our planet when any species by its voluntary actions has become a danger to itself.” But this is false. Homo sapiens have always been a danger to themselves, both by their actions, as in incessant warfare, and by their inaction, as demonstrated by their impotence when facing plague and famine. I also doubt that humans are a greater threat to themselves now than ever before. We have explored and spread ourselves to all parts of the globe, multiplied exponentially, extended our life spans, created culture, and may soon have the power to increase our chance for survival from both celestial and terrestrial forces. This should be a cause for celebration not despair. We no longer need be at the mercy of forces beyond our control, we may soon direct our own evolution.

Joy next quotes Carl Sagan to the effect that the survival of cultures producing technology depends on “what may and what may not be done.” Joy interprets this insight as the essence of common sense or cultural wisdom. Independent of the question of whether this is a good definition of common sense, Joy assumes that Sagan’s phrase applies to an entire century’s technologies, when it more likely applies to only some of them. It is hard to imagine that Sagan, a champion of science, meant for us to forego 21st century technology altogether.

And I vehemently dispute Joy’s claim that science is arrogant in its pursuits; instead, it is the humblest of human pursuits, carefully and conscientiously trying to tease a bit of truth from reality. Its claims are always tentative and amenable to contrary evidence—much more than can be said for most creeds. And what of the charlatans, psychics, cultists, astrologers, and faith-healers? Not to mention the somewhat more respectable priests and preachers. Science humbly refuses to pretend to certain knowledge, which is more than can be said for many who claim it.

And what of his claim that we have no business pursuing robotics and AI when we have “so much trouble …understanding—ourselves?” The reply to this argument is that self-knowledge is the ultimate goal of the pursuit of knowledge. Joy sentimentally notes that his grandmother “had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order,” but this is hopelessly naïve and belies the facts. Would he have us die poor and young, be food for beasts, defenseless against disease, living lives that were, as Hobbes so aptly put it, “nasty, brutish, and short”? The impotence and passivity implied by respecting the natural order has condemned millions to death.3 In fact, the life that Joy and most of the rest of us enjoy was built on the labors of persons who fought mightily with the natural order and the pain, poverty and suffering that nature exudes. Where would we be without Pasteur and Fleming and Salk? As Joy points out, life may be fragile, but it was more so in a past that was nothing like the idyllic paradise he imagines.

Joy’s analogy between the nuclear arms race and possible GNR races is also misplaced, inasmuch as the 20th century arms race resulted as much from a unique historical situation and conflicting ideologies as some unstoppable technological momentum. Evidence for this is to be found in the reduction of nuclear warheads by the superpowers both during and after the cold war. Yes, we need to learn from the past, but its lessons are not necessarily the ones Joy alludes to. Should we not have developed nuclear weapons? Is he sure that the world would be better today had there not been a Manhattan project?

Now it may be that we are chasing our own tails as we try to create defenses against the threats that new technologies pose. Possibly every countermeasure is as dangerous as the technology it was meant to counter. But Joy’s conclusion is curious: “The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.” In the first place, it is unrealistic to believe that we could limit the pursuit of knowledge even if we wanted to and it were a good idea. Second, this “freeze” at current levels of technology would not expunge the danger; the danger exists now.

A basic difficulty with Joy’s article is this: he mistakenly accepts the notion that technology rules people rather than the reverse.4 But if we can control our technology, there is another solution to our dilemmas. We can use our technology to change ourselves; to make ourselves more ethical, cautious, insightful, and intelligent. Surely Joy believes that humans make choices; how else could they choose relinquishment? So why not change ourselves, relinquishing not our pursuit of knowledge but our self-destructive tendencies?

Joy’s hysteria blinds him to the possible fruits of our knowledge, and his pessimism won’t allow him to see our knowledge and its applications as key to our salvation. Instead, he appeals to the ethics of the Dalai Lama to save us, as if yet another religious ethic will offer escape from the less noble angels of our nature. I know of no good evidence that the prescriptions of religious ethics have, on the whole, increased the morality of the human race. No doubt the contrary case could easily be made. Why not then use our knowledge to gain mastery over ourselves? If we do that, mastery of our technology will take care of itself. Joy’s concerns are legitimate, but his solutions are unrealistic. His proposed knowledge stoppage condemns human beings to an existence that cannot improve. And if that’s the case, what is the point of life?

I say forgo Joy’s pessimism; reject all barriers and limitations to our intelligence, health, and longevity. Be mindful of our past accomplishments and appreciative of all that we are, but be driven passionately and creatively forward by the hope of all that we may become. Therein lies the hope of humankind and its descendants. In the words of Walt Whitman:

This day before dawn I ascended a hill,
and look’d at the crowded heaven,
And I said to my Spirit,
When we become the enfolders of those orbs,
and the pleasure and knowledge of everything in them,
shall we be fill’d and satisfied then?
And my Spirit said:
No, we but level that lift,
to pass and continue beyond.
~ Walt Whitman5


1. Kaczynski argues that either: a) machines will make all the decisions, thus rendering humans obsolete; or b) humans will retain control. If b, then only an elite will rule, in which case they will: 1) quickly exterminate the masses; 2) slowly exterminate the masses; or 3) take care of the masses. However, if 3, then the masses will be happy but not free, and life would have no meaning. My questions for Kaczynski are these: Does he really think the only way for humans to be happy is in an agricultural paradise? Does he think an agricultural life was a paradise? A hunter-gatherer life? Are we really less free when we have loosened the chains of our evolutionary heritage, or are we more free? Kaczynski’s vision of a world where one doesn’t work and pursues one’s own interests while being very happy sounds good to me.

2. From Alfred Lord Tennyson’s “Ulysses.”

3. I would argue that had the rise of Christianity in the West not stopped scientific advancement for a thousand years until the Renaissance, we might be immortals already.

4. As in Thoreau’s well-known phrase which appears, not surprisingly, on the Luddite home page: “We do not ride on the railroad; it rides upon us.”

5. From Walt Whitman’s “Song of Myself” in Leaves of Grass.

Summary of Bill Joy’s “Why the future doesn’t need us”


Bill Joy (1954 – ) is an American computer scientist who co-founded Sun Microsystems in 1982, and served as chief scientist at the company until 2003. His now famous Wired magazine essay, “Why the future doesn’t need us,” (2000) sets forth his deep concerns over the development of modern technologies.[i] 

Joy traces his concern to a discussion he had with Ray Kurzweil at a conference in 1998. Taken aback by Kurzweil’s predictions, he read an early draft of The Age of Spiritual Machines: When Computers Exceed Human Intelligence, and found it deeply disturbing. Subsequently he encountered the arguments of the Unabomber, Ted Kaczynski. Kaczynski argued that if machines do all the work, as they inevitably will, then we can either: a) let the machines make all the decisions; or b) maintain human control over the machines.

If we choose “a” then we are at the mercy of our machines. It is not that we would give them control or that they would take control; rather, we might become so dependent on them that we would have to accept their commands. Needless to say, Joy doesn’t like this scenario. If we choose “b” then control would be in the hands of an elite, and the masses would be unnecessary. In that case the tiny elite would either: 1) exterminate the masses; 2) reduce their birthrate so they slowly became extinct; or 3) become benevolent shepherds to the masses. The first two scenarios entail our extinction, but even the third option is no good. In this last scenario the elite would see to it that all the physical and psychological needs of the masses were met, while at the same time engineering the masses to sublimate their drive for power. In this case the masses might be happy, but they would not be free.

Joy finds these arguments convincing and deeply troubling. About this time Joy read Moravec’s book where he found more of the same kind of predictions. He found himself especially concerned by Moravec’s claim that technological superiors always defeat the inferiors, as well as his contention that humans will become extinct as they merge with the robots. Disturbed, Joy consulted other computer scientists who basically agreed with these technological predictions but were themselves unconcerned. Joy was stirred to action.

Joy’s concerns focus on the transforming technologies of the 21st century—genetics, nanotechnology, and robotics (GNR). What is particularly problematic about them is that they have the potential to self-replicate. This makes them inherently more dangerous than 20th-century technologies—nuclear, biological, and chemical weapons—which were expensive to build and required rare raw materials. By contrast, 21st-century technologies allow small groups or even individuals to bring about massive destruction. Joy accepts that we will soon achieve the computing power needed to implement some of the dreams of Kurzweil and Moravec, worrying nevertheless that we overestimate our design abilities. Such hubris may lead to disaster.

Robotics is motivated primarily by the desire for immortality, achieved by downloading ourselves into robots. (Joy uses the terms uploading and downloading interchangeably.) But Joy doesn’t believe that we will still be human after the download, or that the robots would be our children. As for genetic engineering, it will create new crops, plants, and eventually new species, including many variations of the human species, but Joy fears that we do not know enough to conduct such experiments safely. And nanotechnology confronts the so-called “gray goo” problem: self-replicating nanobots out of control. In short, we may be on the verge of killing ourselves! Is it not arrogant, he wonders, to design a robot replacement species when we so often make design mistakes?

Joy concludes that we ought to relinquish these technologies before it’s too late. Yes, GNR may bring happiness and immortality, but should we risk the survival of the species for such goals? Joy thinks not.

Summary – Genetics, nanotechnology, and robotics are too dangerous to pursue. We should relinquish them.


[i] Bill Joy, “Why The Future Doesn’t Need Us,” Wired Magazine, April 2000.

Martin Rees On the Extinction of Our Species

From Our Final Hour: A Scientist’s Warning by Martin Rees, Royal Society Professor at Cambridge and England’s Royal Astronomer.

“Twenty-first century science may alter human beings themselves—not just how they live.” (9)

Rees accepts the common wisdom that the next hundred years will see changes that dwarf those of the past thousand years, but he is skeptical about specific predictions. He gives numerous examples of forecasts that didn’t come true, of technologies that were not forecast, and of the forecasts that were never made—x-rays, nuclear energy, antibiotics, jet aircraft, computers, transistors, the internet, and more. Yet,

… we cannot set limits on what science can achieve, so we should leave our minds open, or at least ajar, to concepts that now seem on the wilder shores of speculative thought. Superhuman robots are widely predicted for mid-century. Even more astonishing advances could eventually stem from fundamentally new concepts in basic science that haven’t yet even been envisioned and which we as yet have no vocabulary to describe. (16)

Rees argues that computing power will not level off and “nanotechnology could extend Moore’s law for up to thirty years further; by that time, computers would match the processing power of a human brain.” (17) He quotes both Kurzweil and Moravec and takes their predictions seriously.

Rees accepts as reasonable speculative claims concerning the malleability of our physical and psychic selves. He also acknowledges that immortality may be possible. He discusses reverse-engineering a brain in order to download its contents into a machine, saying: “If present trends continue unimpeded, then … some people now living could attain immortality—in the sense of having a lifespan that is not constrained by their present bodies.” (18-19)

Rees also believes that superintelligent machines might destroy us: “Once machines have surpassed human intelligence, they could themselves design and assemble a new generation of even more intelligent ones. This could then repeat itself, with technology running towards a cusp, or ‘singularity’.” Still, Rees admits this is all speculative.

I see Rees as forging a middle path. He recognizes the potential of scientific knowledge to transform reality, but he cautions that some predictions are fanciful. Many forecasts will be shown to be mistaken, and many things we don’t forecast will happen. Moreover, there are social, religious, political, ethical, economic, and other considerations that impede the swift development of new technologies.

Rees also carefully considers extinction scenarios: “Throughout most of human history, the worst disasters have been inflicted by environmental forces—floods, earthquakes, volcanoes, and hurricanes—and by pestilence. But the greatest catastrophes of the 20th century were directly induced by human agency…” (25) He estimates that nearly two hundred million persons were killed by war, massacre, persecution, famine, etc. in the 20th century alone.

The primary extinction scenarios include: global nuclear war; nuclear mega-terror; bio-threats (the use of chemical and biological weapons); laboratory errors (accidentally creating a new virulent smallpox virus, for example); “grey goo” (nanobots out of control that consume all organic matter); particle physics experiments gone awry; and human-induced environmental or climate change. In addition, there are asteroid impacts, super-volcanic eruptions that block the sun, and more.

Martin Rees is one of the world’s most important living scientists. His worries about the extinction of the species should be carefully considered.