Category Archives: Evolution & Futurism

Implanting Moral Chips

(This article was republished in Humanity+ Magazine, May 6, 2014)

In a recent post I argued that humans need to become more intelligent and moral if they are to survive and flourish. In other words, they must evolve.

A few perceptive readers raised objections about the nature of morality and about the techniques to be used to maximize moral behavior. As for the nature of morality, I claimed that “the essence of morality lies in … the benefits of mutual cooperation and the destructiveness of ethical egoism.” I think this is right, and a careful analysis of many ethical systems, including religious ones, points to such a conclusion. (I admitted previously that this view is controversial, and I cannot adumbrate a theory of ethics in this limited space.) Still, the idea that human morality is an extension of biologically advantageous behaviors like kin selection and reciprocal altruism is the prevailing view among most scientists and many philosophers. Morality can in large part be understood as arising from the evolution of cooperation.1

As for actually getting people to be moral, I argued that “we need to utilize technology … including education, genetic engineering, biotechnology, and the use of artificial intelligence (AI). This would include controversial techniques like implanting moral chips within our brains.” The moral chip was not necessarily meant to be taken literally, since the problems with such an approach are apparent. Who implants the chip? What does it do? Can you refuse it? Rather, it was meant to convey the idea that humans must make serious choices in order to survive and flourish. An ape-like brain prescribing ape-like behaviors to creatures armed with nuclear weapons is a prescription for disaster. Nonetheless, a moral implant, like a happiness-boosting or intelligence-boosting implant, is an idea worth considering.

Low-tech means of making people moral—coercion, education, religion—have not entirely achieved their purpose. They might if given enough time, and Steven Pinker has recently argued that we are becoming less violent.2 But whether our moral evolution will keep pace with our power to destroy ourselves is questionable. Ideally, as stated previously, “As we became more intelligent, we would recognize the rationality of morality.” However, there are no guarantees that our intelligence will evolve quickly enough or that rationality will ground morality. At some point, if we are to survive, we will probably be forced to use every technology at our disposal to change our natures.

But if we engineer ourselves to be more moral, are we still free? This question deserves a book-length response, but remember, none of us are very free now. We are genomes in environments, with no more than a sliver of free will. Perhaps we can design free will into our cognitive systems, although I admit this is a strange and counterintuitive idea. Or perhaps this so-called freedom, if it even exists, isn’t worth the havoc it causes. Better to be wired and get along than free and at each other’s throats.

At any rate, I still agree with the basic idea of my previous post—to survive and flourish we must evolve, ultimately by transcending our current nature. The following books touch on this evolution:

1.  The Evolution of Cooperation: Revised Edition

2. The Better Angels of Our Nature: Why Violence Has Declined


We Must Evolve

 (This article was reprinted in Humanity+ Magazine, April 29, 2014)

To better the world, humans need to be more intelligent and moral. They must evolve.

In the intellectual realm we need to utilize technology to augment our intelligence (IA) by any means possible—including education, genetic engineering, biotechnology, and artificial intelligence (AI). We also need to use technology in the moral realm. This includes controversial techniques like implanting moral chips in our brains. What does making ourselves more moral entail? After reading, teaching, and writing about ethics for almost thirty years, it is clear to me that the answer to this question is controversial. But I think the essence of morality lies in understanding the benefits of cooperation and the costs of ethical egoism. This is illuminated by the “prisoner’s dilemma” (PD), which reveals that we would all do better, and none of us would do worse, if we all cooperated. This insight also helps resolve the multi-person PD known as the “tragedy of the commons.” In this version of the dilemma, each party acting in its apparent self-interest brings about disastrous consequences for all. The effects of situations with the structure of a PD resonate throughout the world, in problems ranging from insufficient public funding to threats of environmental disaster and nuclear annihilation.
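To make the structure of the dilemma concrete, here is a minimal sketch in Python. The payoff numbers are illustrative assumptions of my own (any values satisfying temptation > reward > punishment > sucker’s payoff produce the same dilemma), not figures from the argument above.

```python
# A minimal sketch of the one-shot prisoner's dilemma.
# The payoff numbers below are illustrative assumptions; any values with
# T > R > P > S produce the same dilemma structure.

PAYOFFS = {
    # (my_move, other_move): (my_payoff, other_payoff)
    ("cooperate", "cooperate"): (3, 3),   # R: reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),   # S, T: sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # P: punishment for mutual defection
}

def best_reply(other_move):
    """Return the move that maximizes my payoff against a fixed other_move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFFS[(my_move, other_move)][0])

if __name__ == "__main__":
    # Defection is the best reply to either move the other player might make...
    print(best_reply("cooperate"), best_reply("defect"))   # defect defect
    # ...yet mutual defection (1, 1) leaves both players worse off than mutual
    # cooperation (3, 3), which is the heart of the dilemma.
    print(PAYOFFS[("defect", "defect")], PAYOFFS[("cooperate", "cooperate")])
```

Running it shows that defection is the best reply no matter what the other player does, yet mutual defection leaves everyone worse off than mutual cooperation.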

But of course the knowledge that we all do better if we all cooperate is undermined by the fact that each does better individually by not cooperating, regardless of what others do. (At least in the one-shot version of the interaction; in the iterated game defection is no longer clearly the best strategy.) Hobbes’ solution was coercive governmental power that ensured individuals complied with their agreements. Other solutions include disablement strategies, in which the non-cooperative move is eliminated. Ulysses having himself tied to the mast of his ship so as not to be seduced by the Sirens is an example of disablement. It may be necessary to wire our brains or utilize other technologies so that we must cooperate.
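A rough sketch, in the spirit of Axelrod’s iterated-dilemma tournaments, of why repeated interaction changes the calculus. The strategies and payoff values below are illustrative assumptions rather than anything specified above: a conditional cooperator (tit for tat) and an unconditional defector play the same payoff matrix over many rounds.

```python
# A rough sketch of the iterated prisoner's dilemma, in the spirit of
# Axelrod's tournaments. Strategy names and payoff numbers are illustrative
# assumptions, not taken from the article.

R, S, T, P = 3, 0, 5, 1  # standard ordering: T > R > P > S

def payoff(me, other):
    """Payoff to 'me' given both moves ('C' = cooperate, 'D' = defect)."""
    if me == "C" and other == "C": return R
    if me == "C" and other == "D": return S
    if me == "D" and other == "C": return T
    return P

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return total payoffs for two strategies over repeated rounds."""
    hist_a, hist_b = [], []   # each strategy sees the other's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += payoff(move_a, move_b)
        score_b += payoff(move_b, move_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))        # (30, 30)
    print(play(always_defect, always_defect))    # (10, 10)
    print(play(tit_for_tat, always_defect))      # (9, 14): exploited once
```

Over ten rounds two conditional cooperators earn 30 points each, while two unconditional defectors earn only 10 each; the defector exploits the cooperator exactly once before cooperation collapses. Repetition, like Hobbesian coercion or Ulysses’ rope, is one more way of making cooperation the rational choice.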

Ideally, increasing intelligence and morality would cross-fertilize. As we became more intelligent, we would recognize the rationality of morality.1 We would see that the benefits of mutual cooperation outweigh the benefits of non-cooperation. (This was Hobbes’ insight: we all do best by avoiding the state of nature.) As we became more moral, we would understand the need for greater intelligence to assure our flourishing and survival. We would accept that increased intelligence is indispensable to a good future. Eventually we would reach the higher states of being and consciousness so desired by transhumanists.

But then again we may destroy ourselves.

___________________________________________________________________________

1. Assuming morality is rational. If it isn’t, we’d need another approach, such as engineering people to be more sympathetic. Or we could conclude that morality is for suckers and try to kill, imprison, torture, or enslave everyone else, like Hitler, Stalin, Tom DeLay, Dick Cheney, or similar psychopaths.

Evolutionary Visions

Now that we have examined grand evolutionary visions in previous posts about Teilhard, Huxley, and Wilson, we can draw some tentative conclusions.

We affirm that a study of cosmic evolution supports the claim that life has become increasingly meaningful, a claim buttressed primarily by the emergence of beings with conscious purposes and meanings. Where there once was no meaning or purpose—in a universe without mind—there now are meanings and purposes. These meanings have their origin in the matter which coalesced into stars and planets, and which in turn supported organisms that evolved bodies with brains and their attributes—behavior, consciousness, personal identity, freedom, value, and meaning. Meaning has emerged in the evolutionary process. It came into being when complexly organized brains, consisting of constitutive parts and the interactive relationships between those parts, intermingled with physical and then cultural environments. This relationship was reciprocal—brains affected biological and cognitive environments, which in turn affected those brains. The result of this interaction between organisms and environments was a reality that became, among other things, infused with meaning.

But will meaning continue to emerge as evolution moves forward? Will progressive evolutionary trends persevere to complete or final meaning, or at least approach meaning as a limit? Will the momentum of cognitive development make such progress nearly inevitable? These are different questions—ones which we cannot answer confidently. We could construct an inductive argument, that the future will resemble the past in this regard, but such an argument is not convincing. For who knows what will happen in the future? The human species might bring about its own ruin tomorrow or go extinct due to some biological, geophysical, or astronomical phenomenon. We cannot bridge the gap between what has happened and what will happen.

And this leads naturally to another question. Is the emergence of meaning a good thing? It is easy enough to say that conscious beings create meaning, but it is altogether different to say that this is a good thing. Before consciousness no one derived meaning from torturing others, but now they sometimes do. Although we can establish the emergence of meaning, we cannot establish that this is good.

Still, we fantasize that our scientific knowledge will improve both the quality and quantity of life. We will make ourselves immortal, build ourselves better brains, and transform our moral natures—making life better and more meaningful, perhaps fully meaningful. We will become pilots worthy of steering evolution to fantastic heights, toward creating a heaven on earth or in simulated realities of our design. If meaning and value continue to emerge we may find meaning by partaking in, and hastening along, that meaningful process. As the result of past meanings and as the conduit for the emergence of future ones, we could be the protagonists of a great epic that ascends higher, as Huxley and Teilhard had hoped.

In our imagination we exist as links in a golden chain leading onward and upward toward greater levels of being, consciousness, joy, beauty, goodness, and meaning—perhaps even to their apex. As part of such a glorious process we would find meaning instilled into our lives from previously created meaning, and we would reciprocate by emanating meaning back into a universe with which we are ultimately one. Evolutionary thought, extended beyond its normal bounds, is an extraordinarily speculative, quasi-religious metaphysics in which a naturalistic heaven appears on the horizon.

In my next post I will consider whether such optimism is warranted.