In a recent post I argued that humans need to become more intelligent and moral if they are to survive and flourish. In other words, they must evolve.
A few perceptive readers raised objections about the nature of morality and the techniques to be used to maximize moral behavior. As for the nature of morality, I claimed that “the essence of morality lies in … the benefits of mutual cooperation and the destructiveness of ethical egoism.” I think this is right, and a careful analysis of many ethical systems, including religious ones, points to such a conclusion. (I admitted previously that this view is controversial, and I cannot adumbrate a theory of ethics in this limited space.) Still, the idea that human morality is an extension of biologically advantageous behaviors like kin selection and reciprocal altruism is the prevailing view among most scientists and many philosophers. Morality can in large part be understood as arising from the evolution of cooperation.1
As for actually getting people to be moral, I argued that “we need to utilize technology … including education, genetic engineering, biotechnology, and the use of artificial intelligence (AI). This would include controversial techniques like implanting moral chips within our brains.” The moral chip was not necessarily meant to be taken literally, since the problems with such an approach are apparent. Who implants the chip? What does it do? Can you refuse it? Rather, it was meant to convey the idea that humans must make serious choices in order to survive and flourish. An ape-like brain prescribing ape-like behaviors to creatures armed with nuclear weapons is a prescription for disaster. Nonetheless, a moral implant, like a happiness- or intelligence-boosting implant, is an idea to be considered.
Low-tech means of making people moral—coercion, education, religion—have not entirely achieved their purpose. They might if given enough time, and Steven Pinker has recently argued that we are becoming less violent.2 But whether our moral evolution will keep pace with our power to destroy ourselves is questionable. Ideally, as I stated previously, “As we became more intelligent, we would recognize the rationality of morality.” However, there are no guarantees that our intelligence will evolve quickly enough, or that rationality will ground morality. At some point, if we are to survive, we will probably be forced to use every technology at our disposal to change our natures.
But if we engineer ourselves to be more moral, are we still free? This question deserves a book-length response, but remember, none of us are very free now. We are genomes in environments, with no more than a sliver of free will. Perhaps we can design free will into our cognitive systems—although I admit this is a strange and counter-intuitive idea. Or perhaps this so-called freedom—if it even exists—isn’t worth the havoc it causes. Better to be wired and get along than free and at each other’s throats.
At any rate, I still agree with the basic idea of my previous post—to survive and flourish we must evolve, ultimately by transcending our current nature. The following books touch on this evolution:
1. The Evolution of Cooperation: Revised Edition
2. The Better Angels of Our Nature: Why Violence Has Declined
To many, the concept of morality has no grounds in rationality, because what they base their morality on is not in itself rational. Or the bigger issue is that morality is subjective. I am not sure there is a way to come to a concept of morality that everyone agrees on. Example: the concept of killing. We (as people) cannot even agree on what qualifies as killing another—or even on what qualifies a person as a “person” who can be killed in the first place. Most would say that killing is morally wrong, but what does “wrong” mean? The concept of moral wrongness is also subjective. For example, a large number of people in our culture support killing those who have been found guilty of a heinous crime. Some of course do not, but at least it is familiar to us as a concept. Conversely, in other parts of the world, forms of killing that we find appalling, such as honor killings, are accepted as part of the mainstream moral code.
This is actually giving me a headache.
I do agree that something has to give, or we (the collective we) will succeed in blowing ourselves off the map. I don’t have a lot of confidence, though—I recently emerged from a meeting in which six highly educated people could not agree on a specific type of pen. Pens should be an easier subject to tackle than morality.
Morality has to be replaced by situational ethics—not that situational ethics are positive. It is that conventional morality, CM, doesn’t succeed in the 21st century.
CM worked well in the 1950s, yet now life is too complicated: 1000 channels to choose from; 1000 flavors of ice cream…
CM does work in a simple society; the Amish can be (and sometimes are) moral in their simple, A Separate Peace enclaves.