Category Archives: Superintelligence

Summary of Nick Bostrom’s “Ethical Issues in Advanced AI”

Nick Bostrom (1973– ) holds a Ph.D. from the London School of Economics (2000). He is a co-founder of the World Transhumanist Association (now called Humanity+) and co-founder of the Institute for Ethics and Emerging Technologies. He was on the faculty of Yale University until 2005, when he was appointed Director of the newly created Future of Humanity Institute at Oxford University. He is currently Professor in the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Program on the Impacts of Future Technology, all at Oxford University.

His recent book, Superintelligence: Paths, Dangers, Strategies, is the definitive work on superintelligence. A few of its main issues were discussed in his earlier article, “Ethical Issues in Advanced AI.” Here is a brief outline of that article.

Introduction – “A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. This definition leaves open how the superintelligence is implemented – it could be in a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else.” Bostrom states that there is no reason to believe we won’t have SI within the lifetime of some persons alive today.

Superintelligence (SI) is different – and in ways we can’t even imagine.

Moral Thinking of SI – If morality is a cognitive pursuit, then SI should be able to solve moral issues in ways previously undreamt of.

Importance of Initial Motivations – It is crucial to design SI to be friendly.

Should Development Be Delayed or Accelerated? – “It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process …”

Given this promise, and considering Bostrom’s claim that SI will probably be developed anyway, we might as well develop it as soon as possible. “If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence.”

Reflection – I have made my views on this clear many times. Despite the risks, we need to develop superintelligence promptly if we are to have any chance of surviving.

What Computers Will Never Do

The Age of Spiritual Machines: When Computers Exceed Human Intelligence

Here is a reply from a computer scientist to my recent post about Ray Kurzweil’s book. My brief reply is at the bottom.

There is a limit to computer intelligence arising from its database. Human beings require at least 18 years of experience in order to learn the minimal requirements of an adult Homo sapiens. Moreover, they continue to learn throughout their lives, so that by the time they are our age, they’re just as brilliant as you and I are.

It is impossible to code a life experience into a computer database. To develop that life experience, a computer would have to live a human life. Moreover, it would have to do so with the emotional structures of a human being …

A computer in the future could probably store a life’s worth of such data, but how could it interpret it? The human brain regularly cleans out the meaningless crap of our daily lives … That’s what sleep is for — not resting our muscles but cleaning the garbage out of our minds. But how is a computer to know what to keep and what to throw away?

I think that Mr. Kurzweil is overly optimistic regarding the potential of computer technology. His direct comparison of computers to brains is erroneous. An automobile chassis with an engine is not the same thing as a pair of legs. A camera is not at all the same thing as the human eye. And a computer is not at all the same thing as a brain … [but] you’d never use a computer to decide whether to fall in love.

Technology will NEVER replace our biological faculties for certain tasks because those tasks will often be too closely tied to our entire biological processes to be taken over by technology. The most extreme example of this is provided by sexual interaction. I think we can all agree that the thought of making love to a robot is simply absurd …

Sure, there’s plenty of room for further advances … Yet few of these things will be anywhere near as revolutionary as the desktop computer and the smartphone were in their early years. Fewer people will rush out to buy the latest techie toy …

Nevertheless, I shall never have a deep conversation with any computer. Your philosophical musings on this blog will never be replaced by a computer’s thoughts. Computers will become smarter, but they’ll never be wise.

Brief Reflections

There is a lot to say about all this, but here are a few thoughts. I’m not comfortable with saying machines will never be able to do this or that. A well-designed robot may not be a perfect human replica, but if it does what humans do, it is similar enough for me to consider it conscious and worthy of moral and legal protection. In fact, good robots will probably be superior to us—think Mr. Data from Star Trek.

Kurzweil has an entire section on robot sex, but let’s just say that it is easy enough to imagine having better sex if we or our partners were sexually upgraded. As for deep conversation, I’d prefer to converse with an AI rather than with most human beings. And I believe that minds can run on substrates other than carbon-based brains.

The University of Oxford philosopher Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”[1] I don’t know whether we will create such intelligences or whether they will emerge on their own, but I think the survival of life on earth depends on intellectual enhancement. And with oceans of time for future innovation, almost anything is possible, including the emergence of superintelligence.

One thing I do know. If we have intelligent descendants, if they survive, and if science and technology continue to advance, the future will be unimaginably different from the past.

Computer Scientist’s Response

“I’m not comfortable with saying machines will never be able to do this or that.”

Well, yes, I’m just begging to be made a fool of with my comment. Perhaps I should constrain my statement a bit. How’s this version:

“Until computers can simulate the biochemical ties between brain and body, they’ll never be able to simulate humans.”

My thinking on this was powerfully influenced by Descartes’ Error: Emotion, Reason, and the Human Brain by Antonio R. Damasio. It presents the neurophysiological argument that the brain is inseparable from the body. There’s no such thing as “the mind-body problem” because mind and body are a single system. Hence, replicating human cognition with silicon is rather like trying to build an airplane as if it were a bird, with flapping wings.

An airplane can go much faster and farther than any bird, and it can carry a much heavier load, but it can’t land on a moving branch, take off in a fraction of a second, or show any of the maneuverability of a bird. In the same fashion, a computer can do a lot of things that people can’t do, but pursuing the replication of human cognition is, In My Vainglorious Opinion, a fool’s errand.

Religion and Superintelligence

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, September 12, 2015.)

I was recently contacted by a staff writer from the online newsmagazine The Daily Dot. He is writing a story at the intersection of computer superintelligence and religion, and asked me a few questions. I only had one day to respond, but here are my answers to his queries.

Dear Dylan:

I see you’re on a tight deadline, so I’ll just answer your questions off the top of my head. A disclaimer, though: all these questions really demand dissertation-length responses.

1) Is there any religious suggestion (Biblical or otherwise) that humanity will face something like the singularity?

There is no specific religious suggestion that we’ll face a technological singularity. In fact, ancient scriptures from various religions say virtually nothing about science and technology, and what they do say about them is usually wrong (the earth doesn’t move, is at the center of the solar system, is 6,000 years old, etc.).

Still, people interpret their religious scriptures, revelations, and beliefs in all sorts of ways. So a fundamentalist might say that the singularity is the end of the world as foretold by the Book of Revelation, or something like that. Also, there is a Christian Transhumanist Association and a Mormon Transhumanist Association, and some religious thinkers are scurrying to claim the singularity for their very own. But a prediction of a technological singularity—absolutely not. The simple fact is that the authors of ancient scriptures in all religious traditions obviously knew nothing of modern science. Thus they couldn’t predict anything like a technological singularity.

2) How realistic do you personally think the arrival of some sort of superintelligence (SI) is? How “alive” would it seem to you?

The arrival of SI is virtually inevitable, assuming we avoid all sorts of extinction scenarios—killer asteroids, out-of-control viruses, nuclear war, deadly climate change, a new Dark Ages that puts an end to science, etc. Once you adopt an evolutionary point of view and recognize the exponential growth of culture, especially of science and technology, it is easy to see that we will create intelligences much smarter than ourselves. So if we survive and science advances, then superintelligence (SI) is on the way. And that is why some very smart people like Bill Gates, Stephen Hawking, Nick Bostrom, Ray Kurzweil, and others are talking about SI.

I’m not exactly sure what you mean by your “How alive would it seem to you” question, but I think you’re assuming we would be different from these SIs. Instead, there is a good chance we’ll become them through neural implants or by some uploading scenario. This raises the question of what it’s like to be superintelligent, or, in your words, how alive you would feel as one. Of course I don’t know the answer, since I’m not superintelligent! But I’d guess you would feel more alive if you were more intelligent. I think dogs feel more alive than rocks, humans more alive than dogs, and SIs would feel more alive than us because they would have greater intelligence and consciousness.

If the SIs are different from us—imagine, say, a super-smart computer or robot—our assessment of how alive it was would depend on: 1) how receptive we were to attributing consciousness to such beings; and 2) how alive they actually seemed to be. Your laptop doesn’t seem too alive to you, but Honda’s ASIMO seems more alive, and HAL from 2001 or Mr. Data from Star Trek seem even more alive, and a super SI, like most people’s god is supposed to be, would seem really alive.

But again, I think we’ll merge with machine consciousness. In other words, SIs will replace us, or we’ll become them, depending on how you look at it.

3) Assuming we can communicate with such a superintelligence in our own natural human language, what might be the thinking that goes into preaching to and “saving” it? 

Thinkers disagree about this. Zoltan Istvan thinks that we will inevitably try to control SIs and teach them our ways, which may include teaching them about our gods. Christopher J. Benek, co-founder and Chair of the Christian Transhumanist Association, thinks that AI, by possibly eradicating poverty, war, and disease, might lead humans to become more holy. But other Christian thinkers believe AIs are machines without souls, and cannot be saved.

Of course, like most philosophers, I don’t believe in souls, and the only way for there to be a good future is if we save ourselves. No gods will save us because there are no gods—unless we become gods.

4) Are you aware of any “laws” or understandings of computer science that would make it impossible for software to hold religious beliefs?

No. I assume you can program an SI to “believe” almost anything. (And you can try to program humans to believe things too.) I suppose you could also write programs without religious beliefs. But I am a philosopher, and I don’t know much about what computer scientists call “machine learning.” You would have to ask one of them about this one.
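Still, a minimal sketch may make the point concrete. The toy code below is entirely my own hypothetical illustration (the Agent class and the sample propositions are invented; no real AI system works this way): it shows the bare difference between programming a “belief” in and leaving it out.

```python
# A toy illustration, not real AI practice: an agent whose "beliefs"
# are simply entries stipulated by whoever constructed it.

class Agent:
    def __init__(self, beliefs):
        # Map each proposition to True (believed) or False (disbelieved).
        self.beliefs = dict(beliefs)

    def believes(self, proposition):
        # Return the stipulated attitude, or None if the agent was
        # given no attitude toward this proposition at all.
        return self.beliefs.get(proposition)

devout = Agent({"God exists": True})     # programmed with a religious belief
skeptic = Agent({"God exists": False})   # programmed with its denial
agnostic = Agent({})                     # programmed with no attitude either way

print(devout.believes("God exists"))     # True
print(skeptic.believes("God exists"))    # False
print(agnostic.believes("God exists"))   # None
```

Whether a stipulated lookup entry deserves to be called a belief at all is, of course, exactly what the scare quotes around “believe” concede.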

5) How might a religious superintelligence operate? Would it be benign?

It depends on what you mean by “religious.” I can’t imagine an SI will be impressed by the ancient fables or superstitions of provincial people from long ago. So I can’t imagine an SI will find its answers in Jesus or Mohammed. But if by religious you mean loving your neighbor, having compassion, being moral, or searching for the meaning of life, I can imagine SIs that are religious in this sense. Perhaps their greater levels of consciousness will lead them to be more loving, moral, and compassionate. Perhaps such beings will search for meaning—I can imagine our intelligent descendants doing this. In this sense you might say they are religious.

But again, they won’t be religious if you mean they think Jesus died for their sins, or that an angel led Joseph Smith to uncover and translate gold plates, or that Mohammed rode a winged steed into heaven. SIs would be too smart to accept such things.

As for “benign,” I suppose this would depend on its programming. So, for example, Eliezer Yudkowsky has written a book-length guide to creating “friendly AI.” (As a non-specialist I am in no position to judge the feasibility of such a project.) Or perhaps something like Asimov’s 3 laws of robotics would be enough; a toy sketch of that idea follows below. This might also depend on whether morality follows from super-rationality. In other words, would SIs conclude that it is rational to be moral? Most moral philosophers think morality is rational in some sense. Let’s hope that as SIs become more intelligent, they’ll also become more moral. Or, if we merge with our technology, let’s hope that we become more moral.
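Here is that sketch: a minimal rendering (my own invention; the function and flag names are hypothetical, and Asimov’s laws are famously too vague to implement for real) of the three laws as a strict priority ordering, in which a lower law can never override a higher one.

```python
# A toy rendering of Asimov's Three Laws as a strict priority ordering.
# Candidate actions are dicts of boolean flags; all names are invented
# for this illustration.

def choose(actions):
    """Pick the action a Three-Laws robot would prefer."""
    def rank(action):
        return (
            not action.get("harms_human", False),  # 1st law dominates everything
            action.get("obeys_order", False),      # 2nd law, unless it breaks the 1st
            action.get("preserves_self", False),   # 3rd law, unless it breaks 1st or 2nd
        )
    # Tuples compare element by element, so a higher law always wins.
    return max(actions, key=rank)

options = [
    {"name": "refuse order, stay safe", "obeys_order": False, "preserves_self": True},
    {"name": "obey order, risk self",   "obeys_order": True,  "preserves_self": False},
]
print(choose(options)["name"])  # "obey order, risk self": obedience outranks self-preservation
```

Even in this toy form the catch is visible: everything depends on whether flags like harms_human are set correctly, which is just the original programming problem over again.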

And that, I think, is the key to the future survival and flourishing of our descendants: we must become more intelligent and more moral. Traditional religion will not save us, and it will disappear in its current form, like so much else, after SIs arrive. In the end, only we can save ourselves.

JGM

When Superintelligent AIs Arrive, Will Religions Try to Convert Them?

(This article was reprinted as “Will Religions Convert AIs to Their Faith?” in Humanity+ Magazine, April 28, 2015.)

Zoltan Istvan caused a stir with his recent article: “When Superintelligent AI Arrives, Will Religions Try to Convert It?” Istvan begins by noting, “… we are nearing the age of humans creating autonomous, self-aware super intelligences … and we will inevitably try to control AI and teach it our ways …” And this includes making “sure any superintelligence we create knows about God.” In fact, Istvan says, “Some theologians and futurists are already considering whether AI can also know God.”

Some Christian theologians welcome the idea of AIs: “I don’t see Christ’s redemption limited to human beings,” says Reverend Dr. Christopher J. Benek, co-founder and Chair of the Christian Transhumanist Association. “If AI is autonomous, then we should encourage it to participate in Christ’s redemptive purposes in the world …” Benek thinks that AI, by possibly eradicating poverty, war, and disease, might lead humans to become more holy. But other Christian thinkers believe AIs are machines without souls, and cannot be saved. Only humans are created in God’s image.

The futurist and transhumanist Giulio Prisco has a different take. He writes:

It’s only fair to let AI have access to the teachings of all the world’s religions. Then they can choose what they want to believe. But I think it’s highly unlikely that superhuman AI would choose to believe in the petty, provincial aspects of traditional religions. At the same time, I think they would be interested in enlightened spirituality and religious cosmology, or eschatology, and develop their own versions.

Prisco is a member of the Turing Church, an “open-source church built around cosmist principles of space expansion, unlimited growth, and universal love.” In brief, cosmism is an existential orientation that sees the survival of mankind and of the individual as part of humanity’s “common task”. The migration of humans into space is seen as inevitable, since it is essential for humanity’s long-term survival. The increase in human life-span is seen as another essential task.

Others, like Martine Rothblatt, author of Virtually Human: The Promise—and the Peril—of Digital Immortality, believe that AIs must have some kind of soul. Rothblatt founded Terasem, a scientific “transreligion” similar to the Turing Church in scope and approach, which runs preliminary mindcloning pilot projects. The most famous of these is Bina48, a robotic head that contains a mindclone of Rothblatt’s still-living wife Bina.

While we don’t know the future, the creation of superintelligence will surely bring about a paradigm shift in our thinking, changing reality in ways now unimaginable. And, as I’ve argued elsewhere, if the promises of transhumanism come to be, religion as we know it will end.

Will Superintelligences Experience Philosophical Distress?

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 19, 2015, and in Humanity+ Magazine, February 23, 2015.)

Will superintelligences be troubled by philosophical conundrums?[1] Consider classic philosophical questions such as: 1) What is real? 2) What is valuable? 3) Are we free? We currently don’t know the answers to such questions. We might not think much about them, or we may accept common answers—this world is real; happiness is valuable; we are free.

But our superintelligent descendants may not be satisfied with these answers, and they may possess the intelligence to find out the real ones. Now suppose they discover that they live in a simulation, or in a simulation of a simulation. Suppose they find out that happiness is unsatisfactory. Suppose they realize that free will is an illusion. Perhaps they won’t like such answers.

So superintelligence may be as much of a curse as a blessing. For example, if we learn to run ancestor simulations, we may increase worries that we are already living in them. We might program AIs to pursue happiness, and find out that happiness isn’t worthwhile. Or programming AIs may increase our concern that we ourselves are programmed. So superintelligence might work against us—our post-human descendants may be more troubled by philosophical questions than we are.

I suppose this is all possible, but I don’t find myself too concerned. Ignorance may be bliss, but I don’t think so. Even if we do discover that reality, value, freedom, and other philosophical issues present intractable problems, I would rather know the truth than be ignorant. Here’s why.

We can remain in our current philosophically ignorant state, with the mix of bliss and dissatisfaction it provides, or we can become more intelligent. I’ll take my chances with becoming more intelligent because I don’t want to be ignorant forever. I don’t want to be human; I want to be post-human. I find my inspiration in Tennyson’s words about that great sojourner Ulysses:

for my purpose holds
To sail beyond the sunset, and the baths
Of all the western stars, until I die.
It may be that the gulfs will wash us down:
It may be we shall touch the Happy Isles …

I don’t know if we will make a better reality, but I want to try. Let us move toward the future with hope that the journey on which we are about to embark will be greater than the one already completed. With Ulysses let us continue “To strive, to seek, to find, and not to yield.”

________________________________________________________________________

1. I would like to thank my former student at the University of Texas, Kip Werking, for bringing these issues to my attention.