Category Archives: Science & Technology

Are Google and Facebook Evil?


I’ve read two recent pieces which attack the tech giants—Google, Facebook, Apple, Twitter, Microsoft—in various ways. “Silicon Valley Is Not Your Friend,” and “Ashamed to work in Silicon Valley: how techies became the new bankers.”

Let me state unequivocally at the outset that I know almost nothing about how technology works, and I have no expertise in the complex relationship between technology, politics, and society. So what I say is tentative.

First, I have mixed feelings about these attacks, mostly because I’m a transhumanist who believes that only science and technology informed by philosophy can save us. Now obviously a lot of junk gets produced by technology companies, a lot of time is wasted on Facebook, there are a lot of clueless nerds in the world, and a lot of bad stuff happens when lies about politics are spread on the internet.

It is especially disconcerting when you consider that Google could cut off Breitbart, Alex Jones, and neo-Nazi nonsense from search results in a second, but it doesn’t want to be perceived as “left-leaning” or lose advertising money—keeping the advertisers is more important than making sure that people read the truth.

So I do think tech companies have huge responsibilities. Perhaps a model like Wikipedia's, where they take their civic responsibilities seriously rather than focusing solely on profit, would benefit us all. Of course, this depends on the creation of a new economic model, since the drive for profit, as opposed to increasing societal good, is by definition a large part of the problem.

As for jobs lost to tech, I’ve written about this multiple times, and I say again that we need a new economic system, one that doesn’t encourage despoiling the natural environment and climate or leave vast wealth in the hands of a very few.

Still, tech companies do research and produce things—some of the most important possible research like AI and robotics and longevity is done by tech companies. So in that way, they have the potential to do enormous good too. That also provides quite a contrast to Wall Street, the whole point of which is mostly to scam money off others without producing anything or contributing to society.

Finally, let me say that ideally sci-tech research should be mostly funded by the public sector with accountability toward the public good rather than private profit. Or at least there should be enough government regulation to ensure that the private sector operates in the public interest.

And again, a disclaimer. I am not an expert in such matters and these are complex issues.

Summary of “How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist”

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, November 11, 2016.)

I recently read an article in The Atlantic by Tristan Harris, a former Product Manager at Google who studies the ethics of how the design of technology influences people’s psychology and behavior. The piece was titled “The Binge Breaker,” and it covers similar ground to his previous piece “How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist.”

Harris is also a leader in the “Time Well Spent” movement which favors “technology designed to enhance our humanity over additional screen time. Instead of a ‘time spent’ economy where apps and websites compete for how much time they take from people’s lives, Time Well Spent hopes to re-structure design so apps and websites compete to help us live by our values and spend time well.”

Harris’ basic thesis is that “our collective tech addiction” results more from the technology itself than “on personal failings, like weak willpower.” Our smartphones, tablets, and computers seize our brains and control us, hence Harris’ call for a “Hippocratic oath” that implores software designers not to exploit “psychological vulnerabilities.” Harris and his colleague Joe Edelman compare “the tech industry to Big Tobacco before the link between cigarettes and cancer was established: keen to give customers more of what they want, yet simultaneously inflicting collateral damage on their lives.”

[I think this analogy is extraordinarily weak. The tobacco industry made a well-documented effort to make its physically deadly products more addictive, while there is no compelling evidence of any similarly sinister plot by software companies, nor are their products deadly. Tobacco will literally kill you; your smartphone will not.]

The social scientific evidence for Harris’ insights began when he was a member of the Stanford Persuasive Technology Lab. “Run by the experimental psychologist B. J. Fogg, the lab has earned a cult-like following among entrepreneurs hoping to master Fogg’s principles of ‘behavior design’—a euphemism for what sometimes amounts to building software that nudges us toward the habits a company seeks to instill.” As a result:

Harris learned that the most-successful sites and apps hook us by tapping into deep-seated human needs … [and] He came to conceive of them as ‘hijacking techniques’—the digital version of pumping sugar, salt, and fat into junk food in order to induce bingeing … McDonald’s hooks us by appealing to our bodies’ craving for certain flavors; Facebook, Instagram, and Twitter hook us by delivering what psychologists call “variable rewards.” Messages, photos, and “likes” appear on no set schedule, so we check for them compulsively, never sure when we’ll receive that dopamine-activating prize.

[Note, though, that the fact that we may become addicted to technology, and to many other things too, doesn’t mean that someone is intentionally addicting us to it. For example, you may become addicted to your gym or to jogging, but that doesn’t mean the gym or the running-shoe store has nefarious intentions.]

Harris worked on Gmail’s Inbox app and is “quick to note that while he was there, it was never an explicit goal to increase time spent on Gmail.” In fact,

His team dedicated months to fine-tuning the aesthetics of the Gmail app with the aim of building a more ‘delightful’ email experience. But to him that missed the bigger picture: Instead of trying to improve email, why not ask how email could improve our lives—or, for that matter, whether each design decision was making our lives worse?

[This is an honorable view, but it is extraordinarily idealistic. First of all, improving email does minimally improve our lives, as anyone in the past who waited weeks or months for correspondence would surely attest. If the program works, allows us to communicate with our friends, etc., then it makes our lives a bit better. Of course, email doesn’t directly help us obtain beauty, truth, goodness or world peace if that’s your goal, but that seems to be a lot to ask of an email program! Perhaps then it is a case of lowering our expectations of what a technology company, or any business, is supposed to do. Grocery stores make our lives go better, even if grocers are mostly concerned with profit. I’m not generally a fan of Smith’s “invisible hand,” but sometimes the idea provides insight. Furthermore, if Google or any company tried to improve people’s lives without showing a profit, they would soon go out of business. The only way to ultimately improve the world is to effect change in the world in which we live, not in some idealistic one that doesn’t exist.]

Harris makes a great point when he notes that “Never before in history have the decisions of a handful of designers (mostly men, white, living in SF, aged 25–35) working at 3 companies”—Google, Apple, and Facebook—“had so much impact on how millions of people around the world spend their attention … We should feel an enormous responsibility to get this right.”

Google responded to Harris’ concerns: he met with CEO Larry Page, the company organized internal Q&A sessions, and he was given a job researching ways that Google could adopt ethical design. But he says he came up against “inertia.” Product roadmaps had to be followed, and fixing tools that were obviously broken took precedence over systematically rethinking services. Despite these problems, he justified his decision to work there with the logic that, since Google controls three interfaces through which millions engage with technology—Gmail, Android, and Chrome—the company was the “first line of defense.” Getting Google to rethink those products, as he’d attempted to do, had the potential to transform our online experience.

[This is one of the most insightful things that Harris says. Again, the only way to change the world is to begin with the world you find yourself in, for you really can’t begin in any other place. I agree with what Erich Fromm taught me long ago: that we should be measured by what we are, not by what we have. But, on the other hand, if we have nothing, we have nothing to give.]

Harris’ hope is that:

Rather than dismantling the entire attention economy … companies will … create a healthier alternative to the current diet of tech junk food … As with organic vegetables, it’s possible that the first generation of Time Well Spent software might be available at a premium price, to make up for lost advertising dollars. “Would you pay $7 a month for a version of Facebook that was built entirely to empower you to live your life?,” Harris says. “I think a lot of people would pay for that.” Like splurging on grass-fed beef, paying for services that are available for free and disconnecting for days (even hours) at a time are luxuries that few but the reasonably well-off can afford. I asked Harris whether this risked stratifying tech consumption, such that the privileged escape the mental hijacking and everyone else remains subjected to it. “It creates a new inequality. It does,” Harris admitted. But he countered that if his movement gains steam, broader change could occur, much in the way Walmart now stocks organic produce. Even Harris admits that often when your phone flashes with a new text message it is hard to resist. It is hard to feel like you are in control of the process.

[There is much to say here. First of all, there are many places to spend time well on the internet. I’d like to think that some readers of this blog find something substantive here. I also believe that “mental hijacking” is a loaded term. It implies an intent on the part of the hijacker that may not be present. Yes, Facebook, or something much worse like the sewer of alt-right politics, might hijack our minds, but religious belief, football on TV, reading, stamp collecting, or even compulsive meditating could be construed as hijacking our minds. In the end, we may have to respect individual autonomy. A few prefer to read my summaries of the great philosophers; others prefer reading about the latest Hollywood gossip.]

Concluding Reflections – I begin with a disclaimer. I know almost nothing about software product design. But I did teach philosophical issues in computer science for many years in the computer science department at UT-Austin, and I have an abiding interest in the philosophy of technology. So let me say a few things.

All technologies have benefits and costs. Air conditioning makes summer endurable, but it has the potential to release hydrofluorocarbons into the air. Splitting the atom unleashes great power, but that power can be used for good or ill. Robots put people out of work, but give people potentially more time to do what they like to do. On balance, I find email a great thing, and in general, I think technology, which is applied science, has been the primary force for improving the lives of human beings. So my prejudice is to withhold critique of new technology. Nonetheless, the purpose of technology should be to improve our lives, not make us miserable. Obviously.

Finally, as for young people considering careers, if you want to make a difference in the world I can think of no better place than at any of the world’s high-tech companies. They have the wealth, power and influence to actually change the world if they see fit. Whether they do that or not is up to the people who work there. So if you want to change the world, join in the battle. But whatever you do, given the world as it is, you must take care of yourself. For if you don’t do that, you will not be able to care for anything else either. Good luck.

Critique of Bill Joy’s “Why the future doesn’t need us”

Dr. John Messerly

(This article was originally published in Computers and Society, Volume 32: Issue 6, June 2003. It was later reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 24, 2016.)


In his well-known piece, “Why the future doesn’t need us,” Bill Joy argues that 21st century technologies—genetic engineering, robotics, and nanotechnology (GNR)—will extinguish human beings as we now know them, a prospect he finds deeply disturbing. I find his arguments deeply flawed and critique each of them in turn.

Joy’s unintended consequences argument cites a passage by the Unabomber Ted Kaczynski. According to Joy, the key to this argument is the notion of unintended consequences, which is “a well-known problem with the design and use of technology…” Independent of the strength of Kaczynski’s anti-technology argument—which I also find flawed—it is hard to quibble about the existence of unintended consequences.1 And it is easy to see why. The consequences of an action are in the future relative to that action and, since the future is unknown, some consequences are unknown. Furthermore, it is self-evident that an unknown future and unknown consequences are closely connected.

However, the strongest conclusion that Joy should draw from the idea of unintended consequences is that we should carefully choose between courses of action; and yet he draws the stronger conclusion that we ought to cease and desist in the research, development, and use of 21st-century technologies. But he cannot draw this stronger conclusion without contradiction if, as he thinks, many unknown, unintended consequences result from our choices. And that’s because he can’t know that abandoning future technologies will produce the intended effects. Thus the idea of unintended consequences doesn’t help Joy’s case since it undermines the justification for any course of action. In other words, the fact of unintended consequences tells us nothing about what we ought to choose, and it certainly doesn’t give us any reason to abandon technology. Of course, Joy might reply that new, powerful technologies make unintended consequences more dangerous than in the past, but as I’ve just shown, he cannot know this. It may well be that newer technologies will lead to a safer world.

Joy’s big fish eat little fish argument quotes robotics pioneer Hans Moravec: “Biological species almost never survive encounters with superior competitors.” Analogously, Joy suggests we will be driven to extinction by our superior robotic descendants. But it isn’t obvious that robots will be superior to us and, even if they were, they may be less troublesome than our neighbors next door. In addition, his vision of the future presupposes that robots and humans will remain separate creatures, a view explicitly rejected by robotics expert Rodney Brooks and others. If Brooks is correct, humans will gradually incorporate technology into their own bodies thus eliminating the situation that Joy envisions. In sum, we don’t know that robots will be the bigger fish, that they will eat us even if they are, or that there will even be distinct fishes.

Joy’s mad scientist argument describes a molecular biologist who “constructs and disseminates a new and highly contagious plague that kills widely but selectively.” Now I have no desire to contract a plague, but Joy advances no argument that this follows from GNR; instead, he plays on our emotions by associating this apocalyptic vision with future technology. (In fact, medical science is the primary reason we have avoided plagues.) The images of a mad scientist or Frankenstein may be popular, but scientists are no madder than anyone else and nightmarish describes only one possible future.

Joy’s lack of control argument focuses on the self-replicating nature of GNR. According to Joy, self-replication amplifies the danger of GNR: “A bomb is blown up only once—but one bot can become many, and quickly get out of control.” First of all, bombs replicate, they just don’t replicate by themselves. So Joy’s concern must not be with replication, but with self-replication. So what is it about robotic self-replication that frightens us? The answer is obvious. Robotic self-replication appears to be out of our control, as compared to our own or other humans’ self-replication. Specifically, Joy fears that robots might replicate and then enslave us; but other humans can do the same thing. In fact, we may increase our survival chances by switching control to more failsafe robots designed and programmed by our minds. While Joy is correct that “uncontrolled self-replication in these newer technologies runs … a risk of substantial damage in the physical world,” so too does the “uncontrolled self-replication” of humans, their biological tendencies, their hatreds, and their ideologies. Joy’s fears are not well-founded because the lack of control over robotic self-replication is not, prima facie, more frightening than the similar lack of control we exert over other humans’ replication.

Furthermore, to what extent do we control our own reproduction? I’d say not much. Human reproduction results from a haphazard set of cultural, geographical, biological, and physiological circumstances; clearly, we exert less control over when, if, and with whom we reproduce than we suppose. And we certainly don’t choose the exact nature of what’s to be reproduced; we don’t replicate perfectly. We could change this situation through genetic engineering, but Joy opposes this technology. He would rather let control over human replication remain in the hands of chance—at least chance as determined by the current state of our technology. But if he fears the lack of control implied by robotic self-replication, why not fear that lack of control over our own replication and apply more control to change this situation? In that way, we could enhance our capabilities and reduce the chance of not being needed.

Of course, Joy would reiterate that we ought to leave things as they are now. But why? Is there something perfect or natural about the current state of our knowledge and technology? Or would things be better if we turned the technological clock back to 1950? 1800? or 2000 B.C.? I suggest that the vivid contrast Joy draws between the control we wield over our own replication and the lack of it regarding self-replicating machines is illusory. We now have and may always have more control over the results of our conscious designs and programs than we do over ourselves or other people whose programs were written by evolution. If we want to survive and flourish then we ought to engineer ourselves with foresight and, at the same time, engineer machines consistent with these goals.

Joy’s easy access argument claims that 20th-century technologies—nuclear, biological, and chemical (NBC)—required access to rare “raw materials and highly protected information,” while 21st-century technologies “are widely within the reach of individuals or small groups.” This means that “knowledge alone will enable the use of them,” a phenomenon that Joy terms “knowledge-enabled mass destruction” (KMD).

Now it is difficult to quibble with the claim that powerful, accessible technologies pose a threat to our survival. Joy might argue that even if we survived the 21st century without destroying ourselves, what of the 22nd or the 23rd centuries when more accessible and powerful KMD becomes possible? Of course, we could freeze technology, but it is uncertain that this would be either realistic or advisable. Most likely the trend of cultural evolution over thousands of years will continue—we will gain more control and power over reality.

Now is this more threatening than if we stood still? This is the real question that Joy should ask because there are risks no matter what we do. If we remain at our current level of technology we will survive until we self-destruct or are destroyed by universal forces, say the impact of an asteroid or the sun’s exhaustion of its energy. But if we press forward, we may be able to save ourselves. Sure, we must be mindful of the promises and the perils of future technologies, but nothing Joy says justifies his conclusion that: “we are on the cusp of the further perfection of extreme evil…” Survival is a goal, but I don’t believe that abandonment of new technologies will assure this result or even make it more likely; it just isn’t clear that limiting the access to or discovery of knowledge is, or has ever been, the solution to human woes.

Joy’s poor design abilities argument notes how often we “overestimate our design abilities,” and concludes: “shouldn’t we proceed with great caution?” But he forgets that we sometimes underestimate our design abilities; and sometimes we are too cautious. Go forward with caution, look before you leap—but don’t stand still.

I take the next argument to be his salient one. He claims that scientists dream of building conscious machines primarily because they want to achieve immortality by downloading their consciousness into them. While he accepts this as a distinct possibility, his existential argument asks whether we will still be human after we download: “It seems far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.” The strength of this argument depends on the meaning of “in any sense,” “no sense,” “humanity,” and “lost.” Let’s consider each in turn.

It is simply false that a human consciousness downloaded into a robotic body would not be human “in any sense.” If our consciousness is well-preserved in the transfer, then something of our former existence would remain, namely our psychological continuity, the part most believe to be our defining feature. And if robotic bodies were sufficiently humanlike—why we would want them to be is another question—then there would be a semblance of physical continuity as well. In fact, such an existence would be very much like human existence now if the technologies were sufficiently perfected. So we would still be human to some, if not a great, extent. However, I believe we would come to prefer an existence with less pain, suffering, and death to our current embodied state; and the farther we distanced ourselves from our former lives the happier we would be.

As to whether robots would “in no sense” be our children, the same kind of argument applies. Whatever our descendants become they will, in some sense, be our children in the same way that we are, in some sense, the children of stars. Again notice that the extent to which we would want our descendants to be like us depends upon our view of ourselves. If we think that we now experience the apex of consciousness, then we should mourn our descendants’ loss of humanity. But if we hold that more complex forms of consciousness may evolve from ours, then we will rejoice at the prospect that our descendants might experience these forms, however non-human-like they may be. But then, why would anyone want to limit the kind of consciousness their descendants experience?

As for our “humanity being lost,” this is true in the sense that human nature will evolve beyond its present state, but false in the sense that there will still be a developmental continuity from beings past and present to beings in the future. Joy wants to limit our offspring for the sake of survival, but isn’t mere survival a lowly goal? Wouldn’t many of us prefer death to the infinite boredom of standing still? Wouldn’t we like to evolve beyond humanity? It isn’t obvious that we have achieved the pinnacle of evolution, or that the small amount of space and time we fill satisfies us. Instead, it is clear that we are deeply flawed and finite—we age, decay, lose our physical and mental faculties, and then perish. A lifetime of memories, knowledge, and wisdom, lost. Oh, that it could be better! Joy’s nostalgic longings for the past and naïve view that we can preserve the present are misguided, however well they may resonate with those who share similar longings or fear the inevitable future. Our descendants won’t desire to be us any more than we do to be our long-ago ancestors. As Tennyson proclaims: “How dull it is to pause, to make an end, To rust unburnish’d, not to shine in use!”2

Joy next turns to his other technologies make things worse argument. As for genetic engineering, I know of no reason—short of childish pleas not to play God—to impede our increasing abilities to perfect our bodies, eliminate disease, and prevent deformity. To not do so would be immoral, making us culpable for an untold amount of preventable suffering and death. And even if there are Gods who have endowed us with intelligence, it would hardly make sense that they didn’t mean for us to use it. As for nanotechnology, Joy eloquently writes of how “engines of creation” may transform into “engines of destruction,” but again it is hard to see why we or the Gods would prefer that we remain ignorant about nanotechnology.

Joy also claims that there is something sinister about the fact that NBC technologies have largely military uses and were developed by governments, while GNR have commercial uses and are being developed by corporations. Unfortunately, Joy gives us no reason whatsoever to share his fear. Are the commercial products of private corporations more likely to cause destruction than the military products of governments? At first glance, the opposite seems more likely to be true, and Joy gives us no reason to reconsider.

Joy’s it’s never been this bad argument asserts: “this is the first moment in the history of our planet when any species by its voluntary actions has become a danger to itself.” But this is false. Homo sapiens have always been a danger to themselves, both by their actions, as in incessant warfare, and by their inaction, as demonstrated by their impotence when facing plague and famine. I also doubt that humans are a greater threat to themselves now than ever before. We have explored and spread ourselves to all parts of the globe, multiplied exponentially, extended our lifespans, created culture, and may soon have the power to increase our chance for survival from both celestial and terrestrial forces. This should be a cause for celebration, not despair. We no longer need be at the mercy of forces beyond our control, we may soon direct our own evolution.

Joy next quotes Carl Sagan to the effect that the survival of cultures producing technology depends on “what may and what may not be done.” Joy interprets this insight as the essence of common sense or cultural wisdom. Independent of the question of whether this is a good definition of common sense, Joy assumes that Sagan’s phrase applies to an entire century’s technologies when it is more likely that it applies only to some of them. It is hard to imagine that Sagan, a champion of science, meant for us to forego 21st-century technology altogether.

And I vehemently dispute Joy’s claim that science is arrogant in its pursuits; instead, it is the humblest of human pursuits. Many human pursuits are more arrogant than science, which carefully and conscientiously tries to tease a bit of truth from reality. Its claims are always tentative and amenable to contrary evidence—much more than can be said for most creeds. And what of the charlatans, psychics, cultists, astrologers, and faith-healers? Not to mention the somewhat more respectable priests and preachers. Science humbly does not pretend to know with certainty, which is more than can be said of many of its detractors.

And what of his claim that we have no business pursuing robotics and AI when we have “so much trouble …understanding—ourselves?” The reply to this argument is that self-knowledge is the ultimate goal of the pursuit of knowledge. He sentimentally notes that his grandmother “had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order,” but this is hopelessly naïve and belies the facts. Would he have us die poor and young, be food for beasts, defenseless against disease, living lives that were, as Hobbes so aptly put it, “nasty, brutish, and short?” The impotence and passivity implied by respecting the natural order has condemned millions to death.3 In fact, the life that Joy and most of the rest of us enjoy was built on the labors of persons who fought mightily with the natural order and the pain, poverty, and suffering that nature exudes. Where would we be without Pasteur and Fleming and Salk? As Joy points out, life may be fragile, but it was more so in a past that was nothing like the idyllic paradise that he imagines.

Joy’s analogy between the nuclear arms race and possible GNR races is also misplaced, inasmuch as the 20th-century arms race resulted as much from a unique historical situation and conflicting ideologies as some unstoppable technological momentum. Evidence for this is to be found in the reduction of nuclear warheads by the superpowers both during and after the cold war. Yes, we need to learn from the past, but its lessons are not necessarily the ones Joy alludes to. Should we not have developed nuclear weapons? Is he sure that the world would be better today had there not been a Manhattan project?

Now it may be that we are chasing our own tails as we try to create defenses for the threats that new technologies pose. Possibly, every countermeasure is as dangerous as the technology for which it was meant to counter. But Joy’s conclusion is curious: “The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.” In the first place, it is unrealistic to believe that we could limit the pursuit of knowledge even if we wanted to and it was a good idea. Second, this “freeze” at current levels of technology does not expunge the danger; the danger exists now.

A basic difficulty with Joy’s article is this: he mistakenly accepts the notion that technology rules people rather than the reverse.4 But if we can control our technology, there is another solution to our dilemmas. We can use our technology to change ourselves; to make ourselves more ethical, cautious, insightful, and intelligent. Surely Joy believes that humans make choices, how else could they choose relinquishment? So why not change ourselves, relinquishing not our pursuit of knowledge, but our self-destructive tendencies?

Joy’s hysteria blinds him to the possible fruits of our knowledge, and his pessimism won’t allow him to see our knowledge and its applications as key to our salvation. Instead, he appeals to the ethics of the Dalai Lama to save us, as if another religious ethics will offer an escape from the less noble angels of our nature. I know of no good evidence that the prescriptions of religious ethics have, on the whole, increased the morality of the human race. No doubt the contrary case could easily be made. Why not then use our knowledge to gain mastery over ourselves? If we do that, mastery of our technology will take care of itself. Joy’s concerns are legitimate, but his solutions are unrealistic. His planned knowledge stoppage condemns human beings to an existence that cannot improve. And if that’s the case, what is the point of life?

I say forgo Joy’s pessimism; reject all barriers and limitations to our intelligence, health, and longevity. Be mindful of our past accomplishments, appreciative of all that we are, but be driven passionately and creatively forward by the hope of all that we may become. Therein lies the hope of humankind and its descendants. In the words of Walt Whitman:

This day before dawn I ascended a hill,
and look’d at the crowded heaven,
And I said to my Spirit,
When we become the enfolders of those orbs,
and the pleasure and knowledge of everything in them,
shall we be fill’d and satisfied then?
And my Spirit said:
No, we but level that lift,
to pass and continue beyond.


1. Kaczynski argues that either: a) machines will make all the decisions, rendering humans obsolete; or b) humans will retain control. If (b), only an elite will rule, in which case they will either: 1) quickly exterminate the masses; 2) slowly exterminate the masses; or 3) take care of the masses. If (3), the masses will be happy but not free, and life would have no meaning. My questions for Kaczynski are these: Does he really think the only way for humans to be happy is in an agricultural paradise? Does he think an agricultural life was a paradise? A hunter-gatherer life? Are we really less free when we have loosened the chains of our evolutionary heritage, or are we freer? Kaczynski’s vision of a world where one doesn’t work and pursues one’s own interests while being very happy sounds good to me.

2. From Alfred, Lord Tennyson’s “Ulysses.”

3. I would argue that had the rise of Christianity in the West not stopped scientific advancement for a thousand years until the Renaissance, we might be immortals already.

4. As in Thoreau’s well-known phrase which appears, not surprisingly, on the Luddite home page: “We do not ride on the railroad; it rides upon us.”

5. From Walt Whitman’s “Song of Myself” in Leaves of Grass.

The Supposed Dangers of Techno-Optimism

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, August 21, 2015)

In his recent article, “Why Techno-Optimism Is Dangerous,” the philosopher Nicholas Agar argues that we should not pursue radical human enhancement. (Professor Agar has made the same basic argument in three recent books: 1) The Sceptical Optimist: Why Technology Isn’t the Answer to Everything; 2) Truly Human Enhancement: A Philosophical Defense of Limits; and 3) Humanity’s End: Why We Should Reject Radical Enhancement.)

Agar says that when we imagine a better future, assuming that there is one, most of us believe that better technology will play a key role in bringing that future about. Agar dubs such believers techno-optimists, and their ranks include Matt Ridley, David Deutsch, K. Eric Drexler, and Peter Diamandis. Techno-optimists acknowledge the dangers of technology, but they believe that technology’s potential to improve human life makes the risks worth taking.

Hedonistic Normalization

Agar is skeptical of the power of new technologies to improve individual well-being because “hedonic normalization aligns our subjective experiences to our objective circumstances.” Humans living a thousand years ago were hedonically normalized to their environment, just as beings living a thousand years hence will be normalized to theirs. From our perspective, living in the Middle Ages seems terrible, and living in the far future seems incredible, because we are normalized to life today. But that does not mean that individuals living in the past were more unhappy than we are, says Agar, or that individuals living in the future will be happier than we are. So even if the technology of the future is great from our vantage point, our descendants will take such things for granted.

Agar also argues that “overlooking hedonic normalization leads us to exaggerate the joyfulness of the future and to overstate the joylessness of the past.” We would hate to go back to a time of primitive dentistry and inedible food, he says, but for our ancestors bad dentistry and food were normal. Our descendants may look back with horror at our death and suffering, but we don’t feel that level of disgust. Now Professor Agar is correct that we become accustomed to the technology that surrounds us. I don’t think daily about how modern medicine eliminated many childhood diseases, that antibiotics cure my infections, or that the dentist can fix my teeth, whereas people of a century ago would have been unimaginably happy at those prospects.

But Agar is mistaken that we become completely normalized to the advances that technology provides. I am happy that dentists use Novocaine, that I can communicate instantly over long distances, that I can be warm when it is cold outside, and that antibiotics treat infections and enabled me to forgo an amputation! I may not be as awed by such things as people from the past would be, but I am happy to have these technologies nonetheless. It takes only a bit of reflection on the past for me to count my lucky stars. So Agar’s “your descendants won’t be as happy as you think they will” argument doesn’t provide sufficient reason to abandon the pursuit of new technologies.

Objective Goods

It also does not follow from Agar’s argument that preventing physical pain or infant mortality isn’t a good thing. It is. It is better, from an objective perspective, not to live and die in pain. The fact that we become somewhat accustomed to modern medicine may cause us to overestimate how happy our descendants will be when technology improves their lives even more, but those descendants will still be thankful that we pursued the technologies that help them live better and longer lives. I know I am happy not to die young and miserable.

Technology Is Risky 

Agar’s other main argument against techno-optimism is “that technological progress comes with risks.” There are many unintended consequences of advancing technology, he says, and we can’t just assume that future generations will be able to solve the problems we leave them. We may even destroy ourselves.

Agar is correct that new technology poses risks, but there is no risk-free way to proceed into the future. If we don’t pursue new technologies, then asteroids, climate change, nuclear war, environmental degradation, or deadly viruses and bacteria will eventually destroy us. We should not be reckless, but we should not be overly conservative either. For if we are too timid, we will die just as surely. In the end, only science and technology properly applied have the power to save us.

The Global Brain

Opte Project visualization of routing paths through a portion of the Internet. The connections and pathways of the internet could be seen as the pathways of neurons and synapses in a global brain.

I began using analytical tools in February 2014 to track my readership. I have now exceeded 50,000 page views! (About 40,000 visitors.) I would like to thank all the readers, especially my regular readers. I would also like to thank those who take the time to write responses, all of which I read and reflect upon.

And thanks to the engineers and computer scientists who designed and maintain the various parts of the World Wide Web. It amazes me that I reach people all around the globe, as my site stats confirm. Science and technology, the true bringers of miracles, are creating a global brain.

The global brain is a conceptualization of the worldwide network formed by all the people on this planet together with the information and communication technologies that connect them into an intelligent, self-organizing system. As the internet becomes faster, more intelligent, and more encompassing, it increasingly ties its users together into a single information processing system, which functions like a nervous system for the planet Earth. The intelligence of this network is collective or distributed: it is not centralized or localized in any particular individual, organization or computer system. It rather emerges from the dynamic networks of interactions between its components, a property typical of complex adaptive systems.1
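The claim that intelligence can emerge from local interactions rather than from any central component can be made concrete with a toy simulation. The sketch below is purely illustrative (the function name and the tiny ring network are my own invention, not a model of the actual internet): each node knows only its own value and those of its neighbors, yet repeated local averaging drives every node toward the global mean, a quantity no node ever computes directly.

```python
import random

def gossip_average(values, edges, rounds=2000):
    """Repeated pairwise averaging over the given edges.

    Each round, one random edge (i, j) is chosen and both endpoints
    adopt the average of their two values -- a purely local exchange.
    """
    values = list(values)
    for _ in range(rounds):
        i, j = random.choice(edges)
        avg = (values[i] + values[j]) / 2
        values[i] = values[j] = avg  # only neighbors' information is used
    return values

# A small ring network of 6 nodes holding different "opinions".
random.seed(42)  # for reproducibility of this illustration
edges = [(i, (i + 1) % 6) for i in range(6)]
opinions = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]

result = gossip_average(opinions, edges)
# Every node ends up near the global mean (25.0), though no node,
# and no central coordinator, ever computed that mean directly.
```

Pairwise gossip of this kind is a standard textbook example of distributed computation in a complex adaptive system: the collective answer emerges from the dynamics of the network of interactions, which is the sense in which the passage above says the global brain’s intelligence is "not centralized or localized in any particular individual, organization or computer system."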

  1. Phister, Paul W., Jr. “Cyberspace: The Ultimate Complex Adaptive System.” The International C2 Journal. Retrieved 25 August 2012.