Nick Bostrom (1973 – ) holds a Ph.D. from the London School of Economics (2000). He is a co-founder of the World Transhumanist Association (now called Humanity+) and co-founder of the Institute for Ethics and Emerging Technologies. He was on the faculty of Yale University until 2005, when he was appointed Director of the newly created Future of Humanity Institute at Oxford University. He is currently Professor, Faculty of Philosophy & Oxford Martin School; Director, Future of Humanity Institute; and Director, Program on the Impacts of Future Technology; all at Oxford University.
His recent book, Superintelligence: Paths, Dangers, Strategies, is the definitive work on superintelligence. A few of its main issues were discussed in his earlier article, “Ethical Issues in Advanced Artificial Intelligence.” Here is a brief outline of that article.
Introduction – “A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. This definition leaves open how the superintelligence is implemented – it could be in a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else.” Bostrom states that there is no reason to believe we won’t have superintelligence (SI) within the lifetime of some persons alive today.
Superintelligence is different – and in ways we can’t even imagine.
Moral Thinking of SI – If morality is a cognitive pursuit, then SI should be able to solve moral issues in ways previously undreamt of.
Importance of Initial Motivations – It is crucial to design SI to be friendly.
Should Development Be Delayed or Accelerated? – “It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process […].”
Given this promise, and considering Bostrom’s claim that SI will probably be developed anyway, we might as well do it as soon as possible. “If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence.”
Reflection – I have made my views on this clear many times. Despite the risks, we need to develop superintelligence promptly if we are to have any chance of surviving.
An important consideration: intelligence is not merely processing power; it also requires data in the form of knowledge of the universe. The most brilliant mind in the universe that is not informed about the universe cannot think brilliant thoughts. I suggest that the data we feed into a superintelligence will have a greater effect than the magnitude of its intelligence.
We could perhaps address this problem by simply feeding everything that is known into our superintelligent computer. But that would certainly require a very, very large computer. The Library of Congress contains 16 million books and 120 million other artifacts. The Internet contains over 1,000 petabytes of data. That’s a lot of stuff to store.
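To get a feel for those numbers, here is a back-of-envelope sketch in Python. The ~1 MB of plain text per book is my assumption, not a figure from the text; the 16 million books and the 1,000 petabytes come from the paragraph above.

```python
# Back-of-envelope storage estimate; all figures are rough assumptions.

TB = 10**12          # bytes in a terabyte
PB = 10**15          # bytes in a petabyte

books = 16_000_000           # Library of Congress book count (from the text)
bytes_per_book = 1_000_000   # ~1 MB of plain text per book (assumed average)

library_bytes = books * bytes_per_book
print(f"LoC books as plain text: ~{library_bytes / TB:.0f} TB")   # ~16 TB

internet_bytes = 1_000 * PB  # "over 1,000 petabytes" (from the text)
print(f"Internet estimate: ~{internet_bytes / library_bytes:,.0f}x the LoC books")
```

On these assumptions the books alone are only about 16 terabytes; it is the Internet-scale figure, tens of thousands of times larger, that makes the storage problem interesting.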
But then we get into really tricky questions: how do we prioritize the information that our superintelligence is given? Should the latest ad for a wonder device that enlarges penis size get in earlier than the analysis of the chemical properties of bismuth? What about the differences in the reliability of different sources of information? What if the Russians try to feed lots of misinformation into the superintelligence? Who decides what is true in educating the superintelligence?
Or how about this: our knowledge of the universe is a huge webwork. You can’t understand Special Relativity until you have first digested Newton’s Laws. You can only appreciate the nature of magnetism after you’ve learned Special Relativity. Our ability to learn builds on our existing knowledge. But our existing knowledge also slants our current learning. If you did a twin experiment in which you educated two equally intelligent students using different sequences of teaching, you would surely get different perceptions of reality in the two students. Thus, we don’t simply pile knowledge up in a huge undifferentiated heap. Instead, we assemble a webwork of associative knowledge, and the sequence we use to present that knowledge determines the shape of the webwork, and hence the learner’s perception of reality.
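A minimal sketch of the “webwork” idea: treat knowledge as a dependency graph, where many different teaching sequences are consistent with the same prerequisites. The topics and edges below are illustrative assumptions, not a real curriculum.

```python
# Knowledge as a dependency graph: any topological order is a valid
# teaching sequence, but different orders build differently shaped webworks.
from graphlib import TopologicalSorter

# topic -> prerequisites that must be learned first (illustrative only)
curriculum = {
    "Newton's Laws": set(),
    "Electrostatics": set(),
    "Special Relativity": {"Newton's Laws"},
    "Magnetism": {"Special Relativity", "Electrostatics"},
}

order = list(TopologicalSorter(curriculum).static_order())
print(order)
# One valid order; swapping independent topics (e.g. Electrostatics before
# or after Newton's Laws) yields a different, equally valid sequence --
# and, per the argument above, a different webwork in the learner.
```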
Another fun idea: would two superintelligences necessarily agree on everything? Surely yes if they were taught the same way; but what if they were given exactly the same information, with somewhat different sequences and priorities? If two superintelligences disagree, what conclusions can we draw?
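As a toy illustration of that possibility (and nothing more), the sketch below trains two identical one-pass perceptrons on the same examples in opposite orders; they end up with different weights and give different verdicts on a new case. The data points are arbitrary assumptions chosen to make the effect visible.

```python
# Two identical online learners, same facts, different order of presentation.
def predict(w, b, x):
    """Sign of the linear score: +1 or -1."""
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1

def train(examples):
    """Single online pass of the classic perceptron mistake-driven update."""
    w, b = [0.0, 0.0], 0.0
    for x, y in examples:
        if predict(w, b, x) != y:
            w = [w[0] + y*x[0], w[1] + y*x[1]]
            b += y
    return w, b

data = [((1, 1), 1), ((1, 0), -1), ((0, 1), 1)]   # made-up training facts
wa, ba = train(data)
wb, bb = train(list(reversed(data)))              # same facts, reversed order

probe = (2, 0)                    # a new case neither learner saw in training
print(predict(wa, ba, probe))     # -1
print(predict(wb, bb, probe))     # +1: same information, different sequence,
                                  # different verdict on the new case
```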
Fun, fun, fun!
“A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.” … “It is hard to think of any problem that a superintelligence could not either solve…” … “could give us indefinite lifespan” …
The above definition is, for all practical purposes, a definition of “God”. It even carries the promise of living forever, presumably in a “heaven” created by “God”. The belief that humans can create this thing as defined above is, in effect, a form of religion. And this religion is just as silly as most of the other religions that humans have invented down through the ages.
The key difference is that we create heaven and become gods, rather than merely imagining that they exist. Some similarities, some differences.
I think that Jim is right in saying that the quote repeated below IS God.
“A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.” … “It is hard to think of any problem that a superintelligence could not either solve…” … “could give us indefinite lifespan” …
To me, as one who believes in God, such an intelligence, coupled with a benevolent, loving form, already exists.
It would be wonderful if we, as humans, could access it globally.
However, without a globally benevolent morality, superintelligence put in our hands is extremely prone to abuse, such as the dictatorial exploitation of one another or the annihilation of our civilisation.
I would like to see the moral evolution of humanity catch up with its technological greatness before we gain access to superintelligence, so that a sound global morality can be assured to guide it and protect it from falling into the hands of ignorant maniacs.
As far as I am concerned, for example, Nietzsche was both a genius of the intellect and a moral idiot.
I would hate for this to happen to superintelligence.
I’m not sure I’ve mentioned it here before, but I think we already live with superintelligences.
Using networks of processors, each as powerful as a human brain, these SIs engage in activities of higher quality and quantity than any human could achieve alone, and reach around the globe with a speed and thoroughness that no human could hope to accomplish in a single lifetime.
We don’t think of them as intelligent because their pace is glacial. They do what they do inexorably, but slowly. They are called publicly traded corporations.
Unfortunately, as Mr. Heks put it, these intellects are moral idiots. A great many have fallen into a version of the “paperclip optimization” trap. They apply their vast power and resources to one goal: greater profit, every three months. Human misery en route to this goal is an inconvenience, not a fatal flaw.
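A toy sketch of that trap, purely for illustration (the actions and numbers are invented, and no real firm is modeled): a single-objective optimizer that maximizes quarterly profit alone, while an externality it never considers accumulates unchecked.

```python
# Single-objective ("paperclip-style") optimization: misery is tracked by us,
# but never enters the optimizer's decision, so it grows quarter after quarter.
actions = [
    # (name, profit, misery) -- all numbers are made up for illustration
    ("automate and lay off", 9, 7),
    ("invest in safety",     3, -2),
    ("raise prices",         6, 4),
]

profit_total, misery_total = 0, 0
for quarter in range(1, 5):
    # The choice rule looks only at profit; the misery column is invisible to it.
    name, profit, misery = max(actions, key=lambda a: a[1])
    profit_total += profit
    misery_total += misery
    print(f"Q{quarter}: {name:22s} profit={profit_total:3d} misery={misery_total:3d}")
```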
While we are inclined to assign blame for this moral lapse to the leadership of a corporation, ask yourself how much power a single executive has. The collective CULTURE of the typical boardroom overwhelms its human components, and tends to exclude or punish those who stray too far from the center of gravity.
Individual humans think faster, but with less agency. We see the cliff edge we’re collectively walking toward, but only in moments of true crisis do we gather the collective power to affect our path.
Social stratification and climate change are growing in significance. When the true crisis arrives, let’s hope we can reprogram these SIs to develop a better ethic without too much human sacrifice.
Len – I agree with almost everything you say here. The profit motive of corporations is destructive to life and the planet. There are also SIs in another sense: AIs and networks of computers that outperform humans at multiple tasks, such as playing chess, modeling the future of the climate, and solving great math problems. Also, you might be interested in the Global Brain Institute’s work.
Following his initial description of superintelligence, Professor Bostrom states: “On this definition, Deep Blue is not a superintelligence, since it is only smart within one narrow domain (chess), and even there it is not vastly superior to the best humans. Entities such as corporations or the scientific community are not superintelligences either. Although they can perform a number of intellectual feats of which no individual human is capable, they are not sufficiently integrated to count as “intellects”, and there are many fields in which they perform much worse than single humans.”
I think Professor Bostrom was discussing a more general, all-encompassing entity, with capabilities that far exceed those of any entity that currently exists.
Jim – You have it correct. Thanks. JGM