
John Searle’s Critique of Ray Kurzweil

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 8, 2016.)

John Searle (1932 – ) is currently the Slusser Professor of Philosophy at the University of California, Berkeley. He received his PhD from Oxford University. He is a prolific author and one of the most important living philosophers.

According to Searle, Kurzweil’s book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, is an extensive reflection on the implications of Moore’s law.[i] The essence of Kurzweil’s argument is that smarter-than-human computers will arrive and that we will download ourselves into this smart hardware, thereby guaranteeing our immortality. Searle attacks this fantasy by focusing on the chess-playing computer “Deep Blue” (DB), which defeated world chess champion Garry Kasparov in 1997.

Kurzweil thinks DB is a good example of the way computers have begun to exceed human intelligence. But DB’s brute-force method of searching through possible moves differs dramatically from how human brains play chess. To clarify, Searle offers his famous Chinese Room Argument. If I sit in a room with a program that lets me answer questions posed in Chinese even though I understand no Chinese, the fact that I can output the correct answers in Chinese does not mean I understand the language. Similarly, DB does not understand chess, and Kasparov was really playing a team of programmers, not a machine. Thus Kurzweil is mistaken if he believes that DB was thinking.

According to Searle, Kurzweil confuses a computer’s seeming to be conscious with its actually being conscious, something we should worry about if we are proposing to download ourselves into it! Just as a computer simulation of digestion cannot eat pizza, so a computer simulation of consciousness is not conscious. Computers manipulate symbols or simulate brains through neural nets—but this is not the same as duplicating what the brain is doing. To duplicate what the brain does, an artificial system would have to work the way the brain works. Thus Kurzweil confuses simulation with duplication.

Another confusion is between observer-independent (OI) features of the world and observer-dependent (OD) features of the world. The former include features studied by, for example, physics and chemistry, while the latter are things like money, property, governments, and all things that exist only because there are conscious observers of them. (Paper has objective physical properties, but paper is money only because persons relate to it that way.)

Searle says that he is more intelligent than his dog and his computer in some absolute, OI sense because he can do things his dog and computer cannot. It is only in the OD sense that you could say that computers and calculators are more intelligent than we are. You can use “intelligence” in the OD sense provided you remember that this does not mean a computer is more intelligent in the OI sense. The same goes for computation. Machines compute analogously to the way we do, but they do not compute intrinsically at all—they know nothing of human computation.

The basic problem with Kurzweil’s book, according to Searle, is its assumption that increased computational power leads to consciousness. But the increased computational power of machines gives us no reason to believe they are duplicating consciousness. The only way to build conscious machines would be to duplicate the way brains work, and we do not know how they work. In sum, behaving as if one is conscious is not the same as actually being conscious.

Summary – Computers cannot be conscious.

______________________________________________________________________

[i] John Searle, “I Married A Computer,” review of The Age of Spiritual Machines, by Ray Kurzweil, New York Review of Books, April 8, 1999.

Ray Kurzweil’s Basic Ideas


I and many other scientists now believe that in around twenty years we will have the means to reprogram our bodies’ stone-age software so we can halt, then reverse, aging. Then nanotechnology will let us live forever. ~ Ray Kurzweil

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 4, 2016.)

Ray Kurzweil (1948 – ) is an author, inventor, futurist, and currently Director of Engineering at Google. He has worked in fields such as optical character recognition, text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments; he is the author of several books on health, artificial intelligence, transhumanism, the technological singularity, and futurism; and he may be the world’s most prominent advocate of using technology to transform humanity.

In his book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Kurzweil argues that in the next one hundred years machines will surpass human intelligence. Computers already surpass humans at playing chess, diagnosing certain medical conditions, buying and selling stocks, guiding missiles, and solving complex mathematical problems. Yet, unlike human intelligence, machine intelligence cannot describe objects on a table, write a term paper, tie shoes, distinguish a dog from a cat, or appreciate humor. One reason for this is that computers are simpler than the human brain, about a million times simpler. But this difference will disappear as computers continue to double in speed every twelve months, achieving the memory capacity and computing speed of the human brain around 2020.

Still, this won’t allow computers to match the flexibility of human intelligence, because the software of intelligence is as important as the hardware. One way to mirror the brain’s software is reverse engineering—scanning a human brain and copying its neural circuitry into a neural computer of sufficient capacity. If computers reach a human level of intelligence through such technologies, they will then go beyond it. They already remember and process information better than we do, recalling trillions of facts perfectly while we struggle with a few phone numbers. The combination of human-level intelligence with superior speed, accuracy, and memory will push computers beyond human intelligence. A main reason is that our neurons are slow compared with electronic circuits, and most of their complexity supports life processes rather than computation and information analysis. Thus, while many of us think of evolution as a billion-year drama leading inexorably to human intelligence, the creation of greater-than-human intelligence will quickly dispel that notion.

Kurzweil supports his case with a number of observations about cosmic evolution and human history. For most of the history of the universe, cosmically significant events were separated by eons; but as the universe aged, the interval between them grew ever shorter. We can see this in the pace of cosmic evolution: about ten billion years until the earth’s formation; a few billion more for life to evolve; hundreds of millions of years until the emergence of primates; millions of years until the emergence of hominids; and the emergence of Homo sapiens a mere 200,000 years ago. In short, transformation is speeding up; the interval between salient events is shrinking.

Now technology drives this process. Technology—the fashioning and use of ever more sophisticated tools—is simply another means of evolution, one that expedites change considerably. Consider that Homo sapiens sapiens appeared only about 90,000 years ago and became the sole surviving hominids a mere 30,000 years ago. Still, it took our ancestors tens of thousands of years to figure out that sharpening both ends of a stone made it a more effective tool! Needless to say, the pace of technological change has accelerated remarkably since then. The 19th century saw technology advance at a dramatic rate compared to the 18th, and at an almost unbelievable rate compared to the 12th. In the 20th century, major technological shifts began to happen in decades or, in some cases, in a few years. A little more than a hundred years ago there was no flight or radio, and a mere fifty years ago there were no wireless telephones or personal computers, much less cell phones or the internet. Today your phone and computer seem obsolete within months.

Technology has enabled our species to dominate the earth, exercise some control over our environment, and survive. Perhaps the most important of these technological innovations has been computation: the ability of machines to remember, compute, and solve problems. So far computers have been governed by Moore’s law: every two years or so the surface area of a transistor is halved, putting twice as many transistors on an integrated circuit. The implication is that every two years you get twice the computing power for the same amount of money. This trend should continue for another fifteen years or so, after which it will break down, once transistor insulators are but a few atoms wide. (At that point quantum computing may move the process forward in fantastic ways.) To really understand what will happen in the 21st century and beyond, we need to look at the exponential growth of technology, which will bring about vast changes in the near future.
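To make the doubling arithmetic concrete, here is a minimal illustrative sketch in Python (mine, not Kurzweil’s); the two-year doubling period and the sample horizons are simply the figures from the paragraph above.

# Illustrative sketch of the compounding behind Moore's law as described
# above; the horizons chosen below are assumptions for the example only.

def compute_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """How many times computing power per dollar grows after `years`,
    assuming it doubles once every `doubling_period` years."""
    return 2 ** (years / doubling_period)

if __name__ == "__main__":
    for years in (2, 10, 15, 30):
        print(f"After {years:2d} years: roughly {compute_multiplier(years):,.0f}x "
              f"the computing power for the same money")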

Crucial to Kurzweil’s analysis is what he calls the “law of time and chaos.” He asks why some processes begin fast and then slow down—salient events in cosmic evolution, or the biological development of an organism—while others start slowly and then speed up—the evolution of life forms, or of technology. The law of time and chaos explains this relationship: if there is a lot of chaos or disorder in a system, the time between salient events is great; as chaos decreases and order increases, the time between salient events shrinks. The “law of accelerating returns” describes the latter phenomenon and is essential to Kurzweil’s argument. (You might say his entire philosophy is a meditation on accelerating returns, or exponential growth.) He argues that though the universe as a whole increases in disorder or entropy, evolution produces increasing pockets of order (information for the purpose of survival) and complexity. Technological evolution is evolution by means other than biology, and it constantly speeds up because it builds upon itself.
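Kurzweil states these two laws only in prose. As a rough formalization (my own notation and simplification, not his), they amount to something like the following:

% Illustrative formalization only; the symbols below are my shorthand,
% not Kurzweil's notation.
\[
  T_{\text{salient}} \;\propto\; \text{chaos},
  \qquad
  \frac{dO}{dt} \;\propto\; O(t)
  \;\Longrightarrow\;
  O(t) = O_0\, e^{kt}, \quad k > 0,
\]
% where T_salient is the interval between salient events and O(t) is the
% order (survival-relevant information) an evolutionary process has
% accumulated by time t. As order grows exponentially, the interval between
% salient events shrinks: the returns accelerate.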

We might reconstruct his basic argument as follows: (a) evolution builds on itself; thus (b) in an evolutionary process, order increases exponentially; thus (c) the returns accelerate. This law of accelerating returns drives cultural and technological evolution forward, with the returns building on themselves to create still higher returns. Thus the entire process changes and grows exponentially, meaning that the near future will be radically different from the present.

… evolution has found a way around the computational limitations of neural circuitry. Cleverly, it has created organisms that in turn invented a computational technology a million times faster than carbon-based neurons … Ultimately, the computing conducted on extremely slow mammalian neural circuits will be ported to a far more versatile and speedier electronic (and photonic) equivalent.[i]

This will eventually lead to reverse engineering the human brain by scanning it, mapping it, and downloading our minds into computers. Your mind (software) would then no longer be dependent on your body (hardware). Moreover, your evolving mind file would not be stuck with the circuitry of the brain; it could be transferred from one medium to another, just as files are transferred from one computer to another. Then “our immortality will be a matter of being sufficiently careful to make frequent backups. If we’re careless about this, we’ll have to load an old backup copy and be doomed to repeat our recent past.”[ii]

We could download our personal evolving mind files into our original bodies, upgraded bodies, nanoengineered bodies, or virtual bodies. Since we are currently further along with body transformation than with brain transformation—titanium devices, artificial skin, heart valves, pacemakers—we might first want to rebuild our bodies completely using genetic therapies. But this will only go so far because of the limitations of DNA-based cells, which depend on protein synthesis. No matter how well we enhance our bodies in this way, they would still be second-rate robots.

Instead, Kurzweil suggests we use nanotechnology to rebuild the world atom by atom. The holy grail of nanotechnology would be intelligent, self-replicating nanomachines capable of manipulating things at the nanolevel. (The great physicist Richard Feynman originally explained the possibility of nanotechnology in the 1950s; today theorists like Eric Drexler and Ralph Merkle have argued for the feasibility of self-replicating nanobots, and nanotechnology programs are now common in major universities.) The possibilities for nanotechnology to transform the world are extraordinary. It could build inexpensive solar cells to replace fossil fuels, or be launched into our bloodstream to improve the immune system, destroy pathogens, eradicate cancer cells, and reconstruct bodily organs and systems. It might even be used to reverse engineer human neurons or any other cell in the human body. Will people use this technology?

There is a clear incentive to go down this path. Given a choice, people will prefer to keep their bones from crumbling, their skin supple, and their life systems strong and vital. Improving our lives through neural implants on the mental level, and nanotech-enhanced bodies on the physical level, will be popular and compelling. It is another one of those slippery slopes—there is no obvious place to stop this progression until the human race has largely replaced the brains and bodies that evolution first provided.[iii]

Kurzweil also argues that “the law of accelerating returns” applies to the entire universe. He conjectures that life may exist elsewhere in the universe and proceed through various thresholds: the evolution of life forms; of intelligence; of technology; of computation; and finally the merging of a species with its technology—all driven by accelerating returns. Of course there are many things that can go wrong—nuclear war, climate change, asteroids, bacteria, self-replicating nanobots, and software viruses. Still, he remains optimistic.

Kurzweil ends his book by arguing that intelligence is not impotent against the mighty forces of the universe. Though its density in a vast cosmos is vanishingly small, intelligence thwarts gravity and manipulates other physical phenomena. If intelligence increases exponentially with time, it will become a worthy competitor for the great universal forces. He concludes: “The laws of physics are not repealed by intelligence, but they effectively evaporate in its presence… the fate of the Universe is a decision yet to be made, one which we will intelligently consider when the time is right.”[iv]

_____________________________________________________________________

[i] Ray Kurzweil, The Age of Spiritual Machines (New York: Penguin, 1999), 101-102.
[ii] Kurzweil, The Age of Spiritual Machines, 129.
[iii] Kurzweil, The Age of Spiritual Machines, 141.
[iv] Kurzweil, The Age of Spiritual Machines, 260.

Death is an Ultimate Evil

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 2, 2016.)

The story of Ivan Ilyich indicates an inseparable connection between death and meaning. The precise connection is unclear, but surely it depends in large part on whether death is the end of our consciousness. While beliefs in immortality have been widespread among humans, such beliefs are extremely difficult to defend rationally.

If death is the end of an individual human life, the question naturally arises whether this is a good, bad, or indifferent thing. The argument of Epicurus states that being dead cannot be bad for someone, and thus the fear of death is misplaced. Deprivationists argue that we can be harmed by things we don’t experience, but it is hard to see how someone can be harmed if that someone is non-existent. And even if the deprivationists are correct, their view implies the counter-intuitive conclusion that we should regret that we did not exist before birth. In reply, deprivationists try to explain this asymmetry by pointing out that most of us do care more about the future than the past. After considering the arguments, Barry says that death probably is bad for us and that nihilism is a real possibility. Nonetheless he concludes that we give life subjective meaning by reflecting on our life and death.

Rosenbaum replies that being dead cannot be bad for the dead person—the Epicurean argument is sound—and that fears about death, while explainable, are unfounded. Hanfling stakes out the middle ground, acknowledging the pall that death casts over life while accepting the Epicurean view as palliative; in the end we simply do not know what role death plays in the meaning of life. Pitcher defends the claim that a dead person can be wronged and harmed, with the caveat that the harm is to the ante-mortem rather than the post-mortem individual. However, it is not clear that this undercuts the Epicurean argument, since that argument concerns the post-mortem individual. Luper defends the badness of death with the simple observations that few would reject the offer to live longer, and that most believe they could accomplish more if they had more time. These observations make it clear that almost everyone does think death is an unmitigated disaster, and that the Epicurean argument is of limited value.

Benatar relies on an asymmetry to claim that it is better never to have been born, and that it would be good if the human race became extinct. Despite its philosophical subtlety, it is hard to believe that Benatar believes his own argument. Can one really prefer eternal nothingness to the possibility of a good life? If I prefer to remain alive, am I not implicitly accepting that life is better than non-life? Does it really make sense to dedicate a book to the parents who harmed you by bringing you into existence? Still, Benatar’s arguments are persuasive enough that Leslie cannot find any knock-down arguments against them, although he cautions us against accepting philosophical prescriptions that, if followed, would result in the death of the species. Surely we ought to tread carefully here despite the power of Benatar’s claims.

These considerations lead to another question: if life is worth something, as most of us generally believe, then why not have as much of it as we like? Lenman rejects immortality for multiple reasons, primarily because immortals would no longer be human. It is easy to see how young philosophers could advocate such a view, thinking they have enough time to do what they want, but few older, healthy persons could think such a thing. (Lenman wrote this piece when quite young.) For them, aging makes the smell of death more real, powerful, and putrid. As for losing our humanity, that humanity was gained in the course of our long evolutionary history, and we will, hopefully, transcend it.

Bostrom picks up the argument here, arguing forcefully that death is evil. Some tell us we will be born again or that death is good or natural, but all such explanations are cases of adaptive preferences. If we cannot do anything about death, we adapt and say we prefer it; but when we can do something about it, almost everyone will rejoice. When the elixir is real, you can be sure it will be used. At the moment we do not know how to prevent death, but we have some scientific insights that could lead in that direction. If some individuals still want to die when death is preventable, we should respect their autonomy, but for those of us who do not want to die, our autonomy should be honored as well. Thus we agree with Bostrom; we should rid ourselves of the dragon—death should be optional.

At the moment, however, death is not optional. Given our predicament—the problem of life that we discussed in the introduction—we have little choice, then, but to face death stoically, bravely, optimistically. The optimistic attitude prescribed by Michael and Caldwell violates no principles of reason and is practical to boot. A similar kind of optimism was captured in a famous passage from William James’s essay “The Will To Believe”:

We stand on a mountain pass in the midst of whirling snow and blinding mist, through which we get glimpses now and then of paths which may be deceptive. If we stand still we shall be frozen to death. If we take the wrong road we shall be dashed to pieces. We do not certainly know whether there is any right one. What must we do? ‘Be strong and of a good courage.’ Act for the best, hope for the best, and take what comes. … If death ends all, we cannot meet death better.[1]

A comparable viewpoint was relayed to me in a handwritten letter (remember those?) in the mid-1990s from my friend and graduate school mentor, Richard J. Blackwell. Replying to my queries about the meaning of life, he wrote:

As to your “what does it all mean” questions, you do not really think that I have strong clear replies when no one else since Plato has had much success! It may be more fruitful to ask about what degree of confidence one can expect from attempted answers, since too high expectations are bound to be dashed. It’s a case of Aristotle’s advice not to look for more confidence than the subject matter permits. At any rate, if I am right about there being a strong volitional factor here, why not favor an optimistic over a pessimistic attitude, which is something one can control to some degree? This is not an answer, but a way to live.

This seems right. We really have nothing to lose by being optimistic and, given the current reality of death, this is a wise option. But that does not change the fact that death is bad. Bad because it puts an end to something which at its best is beautiful; bad because all the knowledge, insight, and wisdom of that person is lost; bad because of the harm it does to the living; bad because it causes people to be unconcerned about the future beyond their short lifespan; and bad because we know in our bones that, if we had the choice, and if our lives were going well, we would choose to live on. That death is generally bad—especially so for the physically, morally, and intellectually vigorous—is nearly self-evident.

But most of all, death is bad because it renders completely meaningful lives impossible. It is true that longer lives do not guarantee meaningful ones, but all other things being equal, longer lives are more meaningful than shorter ones. (Both the quality and the quantity of a life are relevant to its meaning; both are necessary though not sufficient conditions for meaning.) An infinite life can be without meaning, but a life with no duration must be meaningless. Thus the possibility of greater meaning increases proportionately with the length of a lifetime.

Yes, there are indeed fates worse than death, and in some circumstances death may be welcomed even if it extinguishes the further possibility of meaning. Nevertheless, death is one of the worst fates that can befall us, despite the consolations offered by the deathists—the lovers of death. We may become bored with eternal consciousness, but as long as we can end our lives if we want, as long as we can opt out of immortality, who wouldn’t want the option to live forever?

Only if we can choose whether to live or die are we really free. Our lives are not our own if they can be taken from us without our consent, and, to the extent death can be delayed or prevented, further possibilities for meaning ensue. Perhaps with our hard-earned knowledge we can slay the dragon tyrant, thereby opening up the possibility for more meaningful lives. This is perhaps the fundamental imperative for our species. For now the best we can do is to remain optimistic in the face of the great tragedy that is death.

[1] William James, Pragmatism and Other Writings (New York: Penguin, 2000), x.


Michaelis Michael & Peter Caldwell’s “The Consolations of Optimism”

Michaelis Michael is a senior lecturer at the University of New South Wales in Sydney, Australia, and Peter Caldwell is a lecturer at the University of Technology, Sydney. In their insightful piece “The Consolations of Optimism” (2004), they argue for adopting an attitude of optimism regarding the meaning of life.

The optimist and pessimist may agree on the facts, but not on their attitude toward those facts. “This nicely sketches what our thesis is: optimism is an attitude, not a theoretical position; moreover, there are reasons why one ought to be an optimist.”[i]  The reasons for preferring optimism have nothing to do with how the world is—optimism is not a better description of reality. Instead it is that a reasonable optimism is best for ourselves and those around us. To better understand reasonable optimism, the authors turn to the Stoics.

The Happiness of the Stoic Sage – Stoics are often characterized as emotionless, indifferent individuals who simply put up with their fate, accepting that life is bad. Such a picture is uninspiring: resignation toward the dreadfulness of life is cynical and pessimistic, and that is not how the authors interpret Stoicism. The Stoics counsel us to embrace what we cannot change rather than fight against it and, in the process, to embrace reality. Thus Stoicism is realistic, not cynical.

For the Stoics, emotions follow from beliefs. For example, if we believe that death is bad, the emotion of fear or dread may follow. But Stoics generally hold that the belief that death is bad is unjustified, and hence such negative emotions should not follow. Now consider cheerfulness. There are good reasons to be cheerful and happy—it feels better than being unhappy. This is the reason to be cheerfully optimistic. But can we adopt this optimistic attitude? Is it psychologically feasible? The authors think optimism is both feasible and reasonable. While the pessimist might object that optimism provides little consolation, optimism contributes to a happier existence, and that is a reason to adopt it. Optimism is more than a small consolation.

But optimism is not a set of beliefs about how reality is; rather, it is a response to reality. A stoical attitude does not mean not caring about, or being indifferent to, unpleasant things; it means not adding lamentation to one’s caring. Stoics do not deny that pain and suffering exist—because that is to deny reality—but they accept such evils without resenting them. The Stoics reject responding to situations with strong, irrational emotions that would cloud judgment, counseling us instead to remain calm and optimistic. “This way of experiencing pains without losing equanimity is the key to stoical optimism.”[ii] Optimism leads to happiness and is therefore reasonable.

The Rationality of Beliefs – Beliefs represent how things are to us. If we find that our beliefs do not adequately do this, we ought to reject them; if they represent the world well, we ought to keep them. In addition to believing things about the world, we might desire, expect, hope, fear, or want things. If we expect something, we believe it will happen; if we hope for, desire, want, or fear something, we need not believe it will happen, only that it might. In all of these cases our attitudes concern possibilities that are rational to entertain. But what makes a belief rational? Here we can distinguish between strongly rational beliefs, for which the evidence is nearly irrefutable, and weakly rational beliefs, which are not certain but which we must hold as a practical necessity in order to act in the world. So the test of a belief system may be whether it is practical in this way.

Optimism & Pessimism – Again, optimists and pessimists do not necessarily disagree about how the world is, although they could; rather, they project differing attitudes toward it. Since optimism is an attitude, it does not assume any cluster of beliefs and thus cannot be undermined as irrational in the way a belief can. Pessimism is an attitude that demands things from reality and resents that reality does not conform to its wishes; optimists are typically more accepting of the limitations of the world. Of course optimists may lose their optimism when bad fortune strikes, but we are all happier when we are optimistic and less happy when we are pessimistic—this is the rational ground for optimism.

Yet optimism is not wishful thinking. Wishful thinking involves beliefs that are false, whereas optimism is an attitude that does not necessarily involve false beliefs. Furthermore, optimism has positive results, as Hume’s attitude toward his impending death shows. Diagnosed with a fatal disease, Hume began his ruminations on his situation thus: “I was ever more disposed to see the favorable than unfavorable side of things: a turn of mind which it is more happy to possess, than to be born to an estate of ten thousand a year… It is difficult to be more detached from life than I am at the present.”[iii] While many fear death or react to it in ways that disturb their tranquility, “Hume’s calm and sanguine resignation stands like a beacon of reasonableness, calling out for emulation.”[iv] Optimism is a reasonable and beneficial response to the human condition.

Summary – We do not know if life is meaningful or not. For now we might as well be optimistic though, especially when facing death.

_____________________________________________________________

[i] Michaelis Michael & Peter Caldwell, “The Consolations of Optimism,” in Life, Death, and Meaning, ed. David Benatar (Lanham, MD: Rowman & Littlefield, 2004), 383.
[ii] Michael & Caldwell, “The Consolations of Optimism,” 386.
[iii] Michael & Caldwell, “The Consolations of Optimism,” 389.
[iv] Michael & Caldwell, “The Consolations of Optimism,” 390.

Summary of Nick Bostrom’s “The Fable of the Dragon-Tyrant”

Nick Bostrom (1973 – ) holds a PhD from the London School of Economics (2000). He is a co-founder of the World Transhumanist Association (now called Humanity+) and co-founder of the Institute for Ethics and Emerging Technologies. He was on the faculty of Yale University until 2005, when he was appointed Director of the newly created Future of Humanity Institute at Oxford University. He is currently Professor, Faculty of Philosophy & Oxford Martin School; Director, Future of Humanity Institute; and Director, Program on the Impacts of Future Technology, all at Oxford University.

Bostrom’s article, “The Fable of the Dragon-Tyrant,” tells the story of a planet ravaged by a dragon (death) that demands a tribute of thousands of people each day. Neither priests with curses, nor warriors with weapons, nor chemists with concoctions could defeat the dragon. The elders were selected for sacrifice, though they were often wiser than the young, on the grounds that they had at least lived longer than the youth. Here is a description of their situation:

Spiritual men sought to comfort those who were afraid of being eaten by the dragon (which included almost everyone, although many denied it in public) by promising another life after death, a life that would be free from the dragon-scourge. Other orators argued that the dragon has its place in the natural order and a moral right to be fed. They said that it was part of the very meaning of being human to end up in the dragon’s stomach. Others still maintained that the dragon was good for the human species because it kept the population size down. To what extent these arguments convinced the worried souls is not known. Most people tried to cope by not thinking about the grim end that awaited them.[i]

Given the ceaselessness of the dragon’s consumption, most people did not fight it and accepted the inevitable. A whole industry grew up to study and delay the process of being eaten by the dragon, and a large portion of the society’s wealth was used for these purposes. As their technology grew, some suggested that they would one day build flying machines, communicate over great distances without wires, or even be able to slay the dragon. Most dismissed these ideas.

Finally, a group of iconoclastic scientists figured out that a projectile could be built to pierce the dragon’s scales. However, building this technology would cost vast sums of money, and they would need the king’s support. (Unfortunately, the king was busy waging war against tigers, which cost the society vast sums of wealth and accomplished little.) The scientists then began to educate the public about their proposal, and the people became excited about the prospect of killing the dragon. In response the king convened a conference to discuss the options.

First to speak was a scientist who explained carefully how research should yield a solution to the problem of killing the dragon in about twenty years. But the king’s moral advisors said that it is presumptuous to think one has a right not to be eaten by the dragon; finitude is a blessing, they said, and removing it would remove human dignity and debase life. Nature decrees that dragons eat people, and people should be eaten. Next to speak was a spiritual sage who told the people not to be afraid of the dragon, but a little boy crying about his grandma’s death moved most toward the anti-dragon position.

However, when the people realized that millions would die before the research was completed, they frantically sought financing for anti-dragon research, and the king complied. This started a technological race to kill the dragon, though the process was painstakingly slow and filled with mishaps. Finally, after twelve years of research, the king launched a successful dragon-killing missile. The people were happy, but the king was saddened that they had not started their research years earlier—millions had died unnecessarily. As to what was next for his civilization, the king proclaimed: “Today we are like children again. The future lies open before us. We shall go into this future and try to do better than we have done in the past. We have time now—time to get things right, time to grow up, time to learn from our mistakes, time for the slow process of building a better world…”[ii]

Summary – We should try to overcome the tyranny of death with technology.

_____________________________________________________________________

[i] Nick Bostrom, “The Fable of the Dragon-Tyrant,” Journal of Medical Ethics (2005) Vol. 31, No. 5: 273.
[ii] Bostrom, “The Fable of the Dragon-Tyrant,” 277.