Category Archives: Futurism

Is The Singularity A Religious Doctrine?

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, April 23, 2016.)

A colleague forwarded John Horgan's recent Scientific American article, "The Singularity and the Neural Code." Horgan argues that the intelligence augmentation and mind uploading that would lead to a technological singularity depend upon cracking the neural code. The problem is that we don't understand our neural code, the software or algorithms that transform neurophysiology into the stuff of minds such as perceptions, memories, and meanings. In other words, we know very little about how brains make minds.

As Horgan sees it, the neural code is science's deepest, most consequential problem. If researchers crack the code, they might solve such ancient philosophical conundrums as the mind-body problem and the riddle of free will. A solution to the neural code could also, in principle, give us unlimited power over our brains and hence minds. Science fiction—including mind-control, mind-reading, bionic enhancement and even psychic uploading—could become reality. But the most profound problem in science is also by far the hardest.

But it does appear "that each individual psyche is fundamentally irreducible, unpredictable, inexplicable," which suggests that it would be exceedingly difficult to extract that uniqueness from a brain and transfer it to another medium. Such considerations lead Horgan to conclude that "The Singularity is a religious rather than a scientific vision … a kind of rapture for nerds …" As such it is one of many "escapist, pseudoscientific fantasies …"

I don't agree with Horgan's conclusion. He believes that belief in technological or religious immortality springs from a "yearning for transcendence," which he takes to suggest that what is longed for is pseudoscientific fantasy. But the fact that a belief results from a yearning doesn't mean the belief is false. I can want things to be true that turn out to be true.

More importantly, I think Horgan mistakenly conflates religious and technological notions of immortality, thereby denigrating ideas of technological immortality by association. But religious beliefs about immortality are based exclusively on yearning, without any evidence of their truth. In fact, every moment of every day the evidence points away from the truth of religious immortality. We don't talk to the dead and they don't talk to us. On the other hand, technological immortality is based on scientific possibilities. The article admits as much, since cracking the neural code may lead to technological immortality. So while both types of immortality may be based on a longing or yearning, only one has the advantage of being based on science.

Thus the idea of a technological singularity is for the moment science fiction, but it is not pseudoscientific. Undoubtedly there are other ways to prioritize scientific research, and perhaps trying to bring about the Singularity isn't a top priority. But it doesn't follow from anything that Horgan says that we should abandon trying to crack the neural code, or the Singularity to which that might lead. Doing so may solve most of our other problems, and usher in the Singularity too.

Critique of Bill Joy’s “Why the future doesn’t need us”

“I’m Glad the Future Doesn’t Need Us: A Critique of Joy’s Pessimistic Futurism”
(Originally published in Computers and Society, Volume 32: Issue 6, June 2003. This article was later reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 24, 2016.)

ABSTRACT

In his well-known piece, “Why the future doesn’t need us,” Bill Joy argues that 21st century technologies—genetic engineering, robotics, and nanotechnology (GNR)—will extinguish human beings as we now know them, a prospect he finds deeply disturbing. I find his arguments deeply flawed and critique each of them in turn.

Joy's unintended consequences argument cites a passage by the Unabomber, Ted Kaczynski. According to Joy, the key to this argument is the notion of unintended consequences, which is "a well-known problem with the design and use of technology…" Independent of the strength of Kaczynski's anti-technology argument—which I also find flawed—it is hard to quibble about the existence of unintended consequences.1 And it is easy to see why. The consequences of an action are in the future relative to that action and, since the future is unknown, some consequences are unknown. Furthermore, it is self-evident that an unknown future and unknown consequences are closely connected.

However, the strongest conclusion that Joy is entitled to draw from the idea of unintended consequences is that we should carefully choose between courses of action; and yet he draws a far stronger conclusion, that we ought to cease and desist in the research, development, and use of 21st century technologies. But he cannot draw this stronger conclusion without contradiction if, as he thinks, many unknown, unintended consequences result from our choices. And that's because he can't know that abandoning future technologies will produce the intended effects. Thus the idea of unintended consequences doesn't help Joy's case, since it undermines the justification for any course of action. In other words, the fact of unintended consequences tells us nothing about what we ought to choose, and it certainly doesn't give us any reason to abandon technology. Of course Joy might reply that new, powerful technologies make unintended consequences more dangerous than in the past, but as I've just shown, he cannot know this. It may well be that newer technologies will lead to a safer world.

Joy's big fish eat little fish argument quotes robotics pioneer Hans Moravec: "Biological species almost never survive encounters with superior competitors." Analogously, Joy suggests we will be driven to extinction by our superior robotic descendants. But it isn't obvious that robots will be superior to us and, even if they were, they may be less troublesome than our neighbors next door. In addition, his vision of the future presupposes that robots and humans will remain separate creatures, a view explicitly rejected by robotics expert Rodney Brooks and others. If Brooks is correct, humans will gradually incorporate technology into their own bodies, thus eliminating the situation that Joy envisions. In sum, we don't know that robots will be the bigger fish, that they will eat us even if they are, or that there will even be distinct fishes.

Joy's mad scientist argument describes a molecular biologist who "constructs and disseminates a new and highly contagious plague that kills widely but selectively." Now I have no desire to contract a plague, but Joy advances no argument that this follows from GNR; instead, he plays on our emotions by associating this apocalyptic vision with future technology. (In fact, medical science is the primary reason we have avoided plagues.) The images of the mad scientist or Frankenstein may be popular, but scientists are no madder than anyone else, and "nightmarish" describes only one possible future.

Joy's lack of control argument focuses upon the self-replicating nature of GNR. According to Joy, self-replication amplifies the danger of GNR: "A bomb is blown up only once—but one bot can become many, and quickly get out of control." First of all, bombs replicate; they just don't replicate by themselves. So Joy's concern must not be with replication, but with self-replication. So what is it about robotic self-replication that frightens us? The answer is obvious. Robotic self-replication appears to be out of our control, as compared to our own or other humans' self-replication. Specifically, Joy fears that robots might replicate and then enslave us; but other humans can do the same thing. In fact, we may increase our survival chances by switching control to more failsafe robots designed and programmed by our minds. While Joy is correct that "uncontrolled self-replication in these newer technologies runs … a risk of substantial damage in the physical world," so too does the "uncontrolled self-replication" of humans, their biological tendencies, their hatreds, and their ideologies. Joy's fears are not well-founded because the lack of control over robotic self-replication is not, prima facie, more frightening than the similar lack of control we exert over other humans' replication.

Furthermore, to what extent do we control our own reproduction? I'd say not much. Human reproduction results from a haphazard set of cultural, geographical, biological, and physiological circumstances; clearly, we exert less control over when, if, and with whom we reproduce than we suppose. And we certainly don't choose the exact nature of what's to be reproduced; we don't replicate perfectly. We could change this situation through genetic engineering, but Joy opposes this technology. He would rather let control over human replication remain in the hands of chance—at least chance as determined by the current state of our technology. But if he fears the lack of control implied by robotic self-replication, why not fear that lack of control over our own replication and apply more control to change this situation? In that way, we could enhance our capabilities and reduce the chance of not being needed.

Of course Joy would reiterate that we ought to leave things as they are now. But why? Is there something perfect or natural about the current state of our knowledge and technology? Or would things be better if we turned the technological clock back to 1950? 1800? or 2000 B.C.? I suggest that the vivid contrast Joy draws between the control we wield over our own replication and the lack of it regarding self-replicating machines is illusory. We now have and may always have more control over the results of our conscious designs and programs than we do over ourselves or other people whose programs were written by evolution. If we want to survive and flourish then we ought to engineer ourselves with foresight and, at the same time, engineer machines consistent with these goals.

Joy's easy access argument claims that 20th century technologies—nuclear, biological, and chemical (NBC)—required access to rare "raw materials and highly protected information," while 21st century technologies "are widely within the reach of individuals or small groups." This means that "knowledge alone will enable the use of them," a phenomenon that Joy terms "knowledge-enabled mass destruction" (KMD).

Now it is difficult to quibble with the claim that powerful, accessible technologies pose a threat to our survival. Joy might argue that even if we survived the 21st century without destroying ourselves, what of the 22nd or the 23rd centuries when more accessible and powerful KMD becomes possible? Of course we could freeze technology, but it is uncertain that this would be either realistic or advisable. Most likely the trend of cultural evolution over thousands of years will continue—we will gain more control and power over reality.

Now is this more threatening than if we stood still? This is the real question that Joy should ask, because there are risks no matter what we do. If we remain at our current level of technology, we will survive until we self-destruct or are destroyed by universal forces, say the impact of an asteroid or the sun's exhaustion of its energy. But if we press forward, we may be able to save ourselves. Sure, we must be mindful of the promises and the perils of future technologies, but nothing Joy says justifies his conclusion that "we are on the cusp of the further perfection of extreme evil…" Survival is a goal, but I don't believe that abandonment of new technologies will assure this result or even make it more likely; it just isn't clear that limiting access to or the discovery of knowledge is, or has ever been, the solution to human woes.

Joy's poor design abilities argument notes how often we "overestimate our design abilities," and concludes: "shouldn't we proceed with great caution?" But he forgets that we sometimes underestimate our design abilities, and sometimes we are too cautious. Go forward with caution; look before you leap—but don't stand still.

I take the next argument to be his salient one. He claims that scientists dream of building conscious machines primarily because they want to achieve immortality by downloading their consciousness into them. While he accepts this as a distinct possibility, his existential argument asks whether we will still be human after we download: "It seems far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost." The strength of this argument depends on the meaning of "in any sense," "no sense," "humanity," and "lost." Let's consider each in turn.

It is simply false that a human consciousness downloaded into a robotic body would not be human "in any sense." If our consciousness is well-preserved in the transfer, then something of our former existence would remain, namely our psychological continuity, the part most believe to be our defining feature. And if robotic bodies were sufficiently humanlike—why we would want them to be is another question—then there would be a semblance of physical continuity as well. In fact, such an existence would be very much like human existence now if the technologies were sufficiently perfected. So we would still be human to some, if not a great, extent. However, I believe we would come to prefer an existence with less pain, suffering, and death to our current embodied state; and the farther we distanced ourselves from our former lives, the happier we would be.

As to whether robots would “in no sense” be our children, the same kind of argument applies. Whatever our descendants become they will, in some sense, be our children in the same way that we are, in some sense, the children of stars. Again notice that the extent to which we would want our descendants to be like us depends upon our view of ourselves. If we think that we now experience the apex of consciousness, then we should mourn our descendants’ loss of humanity. But if we hold that more complex forms of consciousness may evolve from ours, then we will rejoice at the prospect that our descendants might experience these forms, however non-human-like they may be. But then, why would anyone want to limit the kind of consciousness their descendants experience?

As for our "humanity being lost," this is true in the sense that human nature will evolve beyond its present state, but false in the sense that there will still be a developmental continuity from beings past and present to beings in the future. Joy wants to limit our offspring for the sake of survival, but isn't mere survival a lowly goal? Wouldn't many of us prefer death to the infinite boredom of standing still? Wouldn't we like to evolve beyond humanity? It isn't obvious that we have achieved the pinnacle of evolution, or that the small amount of space and time we fill satisfies us. Instead it is clear that we are deeply flawed and finite—we age, decay, lose our physical and mental faculties, and then perish. A lifetime of memories, knowledge, and wisdom, lost. Oh, that it could be better! Joy's nostalgic longing for the past and naïve view that we can preserve the present are misguided, however well they may resonate with those who share similar longings or fear the inevitable future. Our descendants won't desire to be us any more than we desire to be our long-ago ancestors. As Tennyson proclaims: "How dull it is to pause, to make an end, To rust unburnish'd, not to shine in use!"2

Joy next turns to his other technologies make things worse argument. As for genetic engineering, I know of no reason—short of childish pleas not to play God—to impede our increasing abilities to perfect our bodies, eliminate disease, and prevent deformity. Not to do so would be immoral, making us culpable for an untold amount of preventable suffering and death. And even if there are Gods who have endowed us with intelligence, it would hardly make sense that they didn't mean for us to use it. As for nanotechnology, Joy eloquently writes of how "engines of creation" may transform into "engines of destruction," but again it is hard to see why we or the Gods would prefer that we remain ignorant about nanotechnology.

Joy also claims that there is something sinister about the fact that NBC technologies have largely military uses and were developed by governments, while GNR have commercial uses and are being developed by corporations. Unfortunately, Joy gives us no reason whatsoever to share his fear. Are the commercial products of private corporations more likely to cause destruction than the military products of governments? At first glance, the opposite seems more likely to be true, and Joy gives us no reason to reconsider.

Joy's it's never been this bad argument asserts: "this is the first moment in the history of our planet when any species by its voluntary actions has become a danger to itself." But this is false. Homo sapiens have always been a danger to themselves, both by their actions, as in incessant warfare, and by their inaction, as demonstrated by their impotence when facing plague and famine. I also doubt that humans are a greater threat to themselves now than ever before. We have explored and spread ourselves to all parts of the globe, multiplied exponentially, extended our life spans, created culture, and may soon have the power to increase our chances of survival against both celestial and terrestrial forces. This should be a cause for celebration, not despair. We no longer need be at the mercy of forces beyond our control; we may soon direct our own evolution.

Joy next quotes Carl Sagan to the effect that the survival of cultures producing technology depends on "what may and what may not be done." Joy interprets this insight as the essence of common sense or cultural wisdom. Independent of the question of whether this is a good definition of common sense, Joy assumes that Sagan's phrase applies to an entire century's technologies, when it is more likely that it applies to only some of them. It is hard to imagine that Sagan, a champion of science, meant for us to forego 21st century technology altogether.

And I vehemently dispute Joy's claim that science is arrogant in its pursuits; instead, it is the humblest of human pursuits. Many human pursuits are more arrogant than science, which carefully and conscientiously tries to tease a bit of truth from reality. Its claims are always tentative and amenable to contrary evidence—much more than can be said for most creeds. And what of the charlatans, psychics, cultists, astrologers, and faith-healers? Not to mention the somewhat more respectable priests and preachers. Science humbly does not pretend to know with certainty, which is more than can be said of many of its detractors.

And what of his claim that we have no business pursuing robotics and AI when we have "so much trouble … understanding—ourselves"? The reply to this trying to understand mind won't help you understand the mind argument notes that self-knowledge is the ultimate goal of the pursuit of knowledge. He sentimentally notes that his grandmother "had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order," but this is hopelessly naïve and belies the facts. Would he have us die poor and young, be food for beasts, defenseless against disease, living lives that were, as Hobbes so aptly put it, "nasty, brutish, and short"? The impotence and passivity implied by respecting the natural order have condemned millions to death.3 In fact, the life that Joy and most of the rest of us enjoy was built on the labors of persons who fought mightily with the natural order and the pain, poverty, and suffering that nature exudes. Where would we be without Pasteur and Fleming and Salk? As Joy points out, life may be fragile, but it was more so in a past that was nothing like the idyllic paradise he imagines.

Joy's analogy between the nuclear arms race and possible GNR races is also misplaced, inasmuch as the 20th century arms race resulted as much from a unique historical situation and conflicting ideologies as from some unstoppable technological momentum. Evidence for this is found in the reduction of nuclear warheads by the superpowers both during and after the Cold War. Yes, we need to learn from the past, but its lessons are not necessarily the ones Joy alludes to. Should we not have developed nuclear weapons? Is he sure that the world would be better today had there not been a Manhattan Project?

Now it may be that we are chasing our own tails as we try to create defenses against the threats that new technologies pose. Possibly, every countermeasure is as dangerous as the technology it was meant to counter. But Joy's conclusion is curious: "The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge." In the first place, it is unrealistic to believe that we could limit the pursuit of knowledge even if we wanted to and it were a good idea. Second, this "freeze" at current levels of technology does not expunge the danger; the danger exists now.

A basic difficulty with Joy's article is this: he mistakenly accepts the notion that technology rules people rather than the reverse.4 But if we can control our technology, there is another solution to our dilemmas. We can use our technology to change ourselves: to make ourselves more ethical, cautious, insightful, and intelligent. Surely Joy believes that humans make choices; how else could they choose relinquishment? So why not change ourselves, relinquishing not our pursuit of knowledge, but our self-destructive tendencies?

Joy's hysteria blinds him to the possible fruits of our knowledge, and his pessimism won't allow him to see our knowledge and its applications as key to our salvation. Instead, he appeals to the ethics of the Dalai Lama to save us, as if another religious ethic will offer escape from the less noble angels of our nature. I know of no good evidence that the prescriptions of religious ethics have, on the whole, increased the morality of the human race. No doubt the contrary case could easily be made. Why not then use our knowledge to gain mastery over ourselves? If we do that, mastery of our technology will take care of itself. Joy's concerns are legitimate, but his solutions are unrealistic. His planned knowledge stoppage condemns human beings to an existence that cannot improve. And if that's the case, what is the point of life?

I say forego Joy's pessimism; reject all barriers and limitations to our intelligence, health, and longevity. Be mindful of our past accomplishments, appreciative of all that we are, but be driven passionately and creatively forward by the hope of all that we may become. Therein lies the hope of humankind and its descendants. In the words of Walt Whitman:

This day before dawn I ascended a hill,
and look'd at the crowded heaven,
And I said to my Spirit,
When we become the enfolders of those orbs,
and the pleasure and knowledge of everything in them,
shall we be fill’d and satisfied then?
And my Spirit said:
No, we but level that lift,
to pass and continue beyond.5

__________________________________________________________________

1. Kaczynski argues that machines will either: a) make all the decisions, thus rendering humans obsolete; or b) humans will retain control. If b, then only an elite will rule, in which case they will: 1) quickly exterminate the masses; 2) slowly exterminate the masses; or 3) take care of the masses. However, if 3, then the masses will be happy but not free, and life would have no meaning. My questions for Kaczynski are these: Does he really think the only way for humans to be happy is in an agricultural paradise? Does he think an agricultural life was a paradise? A hunter-gatherer life? Are we really less free when we have loosened the chains of our evolutionary heritage, or are we more free? Kaczynski's vision of a world where one doesn't work and pursues one's own interests while being very happy sounds good to me.

2. From Alfred, Lord Tennyson's "Ulysses."

3. I would argue that had the rise of Christianity in the West not stopped scientific advancement for a thousand years until the Renaissance, we might be immortals already.

4. As in Thoreau’s well-known phrase which appears, not surprisingly, on the Luddite home page: “We do not ride on the railroad; it rides upon us.”

5. From Walt Whitman’s “Song of Myself” in Leaves of Grass.

Summary of Bill Joy's, "Why the future doesn't need us"


Bill Joy (1954 – ) is an American computer scientist who co-founded Sun Microsystems in 1982, and served as chief scientist at the company until 2003. His now famous Wired magazine essay, “Why the future doesn’t need us,” (2000) sets forth his deep concerns over the development of modern technologies.[i] 

Joy traces his concern to a discussion he had with Ray Kurzweil at a conference in 1998. Taken aback by Kurzweil's predictions, he read an early draft of The Age of Spiritual Machines: When Computers Exceed Human Intelligence, and found it deeply disturbing. Subsequently he encountered arguments by the Unabomber, Ted Kaczynski. Kaczynski argued that if machines do all the work, as they inevitably will, then we can: a) let the machines make all the decisions; or b) maintain human control over the machines.

If we choose "a" then we are at the mercy of our machines. It is not that we would give them control or that they would take control; rather, we might become so dependent on them that we would have to accept their commands. Needless to say, Joy doesn't like this scenario. If we choose "b" then control would be in the hands of an elite, and the masses would be unnecessary. In that case the tiny elite would: 1) exterminate the masses; 2) reduce their birthrate so they slowly become extinct; or 3) become benevolent shepherds to the masses. The first two scenarios entail our extinction, but even the third option is no good. In this last scenario the elite would see to it that all physical and psychological needs of the masses are met, while at the same time engineering the masses to sublimate their drive for power. In this case the masses might be happy, but they would not be free.

Joy finds these arguments convincing and deeply troubling. About this time Joy read Hans Moravec's book Robot: Mere Machine to Transcendent Mind, where he found more of the same kind of predictions. He found himself especially concerned by Moravec's claim that technological superiors always defeat inferiors, as well as his contention that humans will become extinct as they merge with robots. Disturbed, Joy consulted other computer scientists, who basically agreed with these technological predictions but were themselves unconcerned. Joy was stirred to action.

Joy's concerns focus on the transforming technologies of the 21st century—genetics, nanotechnology, and robotics (GNR). What is particularly problematic about them is their potential to self-replicate. This makes them inherently more dangerous than 20th century technologies—nuclear, biological, and chemical weapons—which were expensive to build and required rare raw materials. By contrast, 21st century technologies allow small groups or individuals to bring about massive destruction. Joy accepts that we will soon achieve the computing power to implement some of the dreams of Kurzweil and Moravec, worrying nevertheless that we overestimate our design abilities. Such hubris may lead to disaster.

Robotics is primarily motivated by the desire to be immortal—by downloading ourselves into robots. (The terms uploading and downloading are used interchangeably.) But Joy doesn't believe that we will be human after the download or that the robots would be our children. As for genetic engineering, it will create new crops, plants, and eventually new species, including many variations of the human species, but Joy fears that we do not know enough to conduct such experiments safely. And nanotechnology confronts the so-called "gray goo" problem—self-replicating nanobots out of control. In short, we may be on the verge of killing ourselves! Is it not arrogant, he wonders, to design a robot replacement species when we so often make design mistakes?

Joy concludes that we ought to relinquish these technologies before it's too late. Yes, GNR may bring happiness and immortality, but should we risk the survival of the species for such goals? Joy thinks not.

Summary – Genetics, nanotechnology, and robotics are too dangerous to pursue. We should relinquish them.

________________________________________________________

[i] Bill Joy, “Why The Future Doesn’t Need Us,” Wired Magazine, April 2000.

Summary of Jaron Lanier’s, “One Half A Manifesto”

Jaron Lanier (1960 – ) is a pioneer in the field of virtual reality who left Atari in 1985 to found VPL Research, Inc., the first company to sell VR goggles and gloves. In the late 1990s Lanier worked on applications for Internet2, and in the 2000s he was a visiting scholar at Silicon Graphics and various universities. More recently he has acted as an advisor to Linden Lab on their virtual world product Second Life, and as “scholar-at-large” at Microsoft Research where he has worked on the Kinect device for Xbox 360.

Lanier's "One Half A Manifesto" opposes what he calls "cybernetic totalism," the view of Kurzweil and others, which proposes to transform the human condition more than any previous ideology has. The following beliefs characterize cybernetic totalism.

  1. That cybernetic patterns of information provide the ultimate and best way to understand reality.
  2. That people are no more than cybernetic patterns.
  3. That subjective experience either doesn’t exist, or is unimportant because it is some sort of peripheral effect.
  4. That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all creativity and culture.
  5. That qualitative as well as quantitative aspects of information systems will be accelerated by Moore’s Law. And
  6. That biology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial; achieving the supposed nature of computer software. Furthermore, all of this will happen very soon! Since computers are improving so quickly they will overwhelm all the other cybernetic processes, like people, and fundamentally change the nature of what’s going on in the familiar neighborhood of Earth at some moment when a new “criticality” is achieved—maybe in about the year 2020. To be a human after that moment will be either impossible or something very different than we now can know.[i]

Lanier responds to each belief in detail. A summary of those responses follows:

  1. Culture cannot be reduced to memes, and people cannot be reduced to cybernetic patterns.
  2. Artificial intelligence is a belief system, not a technology.
  3. Subjective experience exists, and it separates humans from machines.
  4. Cybernetic totalists claim that Darwin provides the "algorithm for creativity" which explains how computers will become smarter than humans. But the fact that nature didn't require anything "extra" to create people doesn't mean that computers will evolve on their own.
  5. There is little reason to think that software is getting better, and no reason at all to think it will get better at a rate like hardware.

The sixth belief, the heart of cybernetic totalism, terrifies Lanier. Yes, computers might kill us, preserve us in a matrix, or be used by evil humans to do harm to the rest of us. It is variations on this latter scenario that most frighten Lanier, for it is easy to imagine that a wealthy few would become a near godlike species while the rest of us remain relatively the same. And Lanier expects immortality to be very expensive unless software gets much better. For example, if you were to use biotechnology to try to make your flesh into a computer, you would need excellent software without glitches to achieve such a thing. But this would be extraordinarily costly.

Lanier grants that there will indeed be changes in the future, but they should be brought about by humans not by machines. To do otherwise is to abdicate our responsibility. Cybernetic totalism, if left unchecked, may cause suffering like so many other eschatological visions have in the past. We ought to remain humble about implementing our visions.

Summary – Cybernetic totalism is philosophically and technologically problematic.

____________________________________________________________________

[i] Jaron Lanier, “One Half A Manifesto”

Summary of Marshall Brain’s, “The Day You Discard Your Body”

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 21, 2016.)

Marshall Brain (1961 – ) is an author, public speaker, and entrepreneur. He earned an MS in computer science from North Carolina State University, where he taught for many years, and is the founder of the website HowStuffWorks, which was sold in 2007 to Discovery Communications for $250,000,000. He also maintains a website where his essays on transhumanism, robotics, and naturalism can be found. His essay "The Day You Discard Your Body" presents a compelling case that sometime in this century the technology will be available to discard our bodies.[i] And when the time comes, most of us will do so.

Why would we want to discard our bodies? The answer is that by doing so we would achieve an unimaginable level of freedom and longevity. Consider how vulnerable your body is. If you fall off a horse or dive into a too-shallow pool of water, your body can become completely useless. If this happened to you, you would gladly discard your body. And something like this happens to all of us as we age—our bodies generally kill our brains—creating a tragic loss of knowledge and experience. Our brains die because our bodies do.

Consider also how few of us are judged to have beautiful bodies, and how the beauty we do have declines with age. If you could have a more beautiful body, you would gladly discard your body. Additionally, your body has to go to the bathroom, it smells, it becomes obese easily, it takes time for it to travel through space, it cannot fly or swim underwater for long, and it cannot perform telekinesis. As for the aging of our bodies, most would happily dispense with it, discarding their bodies if they could.

Why would the healthy discard their bodies? Consider that healthy people play video games in staggering numbers. As these games become more realistic, we can imagine people wanting to live and be immersed in them. Eventually you would want to connect your biological brain to your virtual body inside the virtual reality. And your virtual body could be so much better than your biological body—it could be perfect. Your girlfriend or boyfriend who made the jump to the virtual world would have a perfect body, and they would ask you to join them. All you would have to do is undergo a painless surgery to connect your brain to its new body in the virtual reality. There you could see anything in the world without having to take a plane ride (or go through security). You could visit the Rome or Greece of two thousand years ago, fight in the Battle of Stalingrad, talk to Charles Darwin, or live the life of Superman. You could be at any time and any place, you could overcome all limitations, you could have great sex! Once your virtual body was better in every respect than your biological body, you would discard the latter.

Initially your natural brain may still be housed in your natural body, but eventually your brain will be disconnected from your body and housed in a safe brain storage facility. Your transfer will be complete—you will live in a perfect virtual reality without your cumbersome physical body, and the limitations it imposes.

Summary – We will be able to discard our bodies and live in a much better virtual reality relatively soon. We should do so.

______________________________________________________________

[i] Marshall Brain, “The Day You Discard Your Body”