In a recent post I argued that humans need to become more intelligent and moral if they are to survive and flourish. In other words, they must evolve.
A few perceptive readers raised objections about the nature of morality and the techniques to be used to maximize moral behavior. As for the nature of morality, I claimed that “the essence of morality lies in … the benefits of mutual cooperation and the destructiveness of ethical egoism.” I think this is right, and a careful analysis of many ethical systems, including religious ones, points to such a conclusion. (I admitted previously that this view is controversial, and I cannot adumbrate a theory of ethics in this limited space.) Still, the idea that human morality is an extension of biologically advantageous behaviors like kin selection and reciprocal altruism is the prevailing view among most scientists and many philosophers. Morality can in large part be understood as arising from the evolution of cooperation.1
As for actually getting people to be moral, I argued that “we need to utilize technology … including education, genetic engineering, biotechnology, and the use of artificial intelligence (AI). This would include controversial techniques like implanting moral chips within our brains.” The moral chip was not necessarily meant to be taken literally, since the problems with such an approach are apparent. Who implants the chip? What does it do? Can you refuse it? Rather, it was meant to convey the idea that humans must make serious choices in order to survive and flourish. An ape-like brain prescribing ape-like behaviors to creatures armed with nuclear weapons is a recipe for disaster. Nonetheless, a moral implant, like a happiness-boosting or intelligence-boosting implant, is an idea to be considered.
Low-tech means of making people moral—coercion, education, religion—have not entirely achieved their purpose. They might if given enough time, and Steven Pinker has recently argued that we are becoming less violent.2 But whether our moral evolution will keep pace with our power to destroy ourselves is questionable. Ideally, as stated previously, “As we became more intelligent, we would recognize the rationality of morality.” However, there are no guarantees that our intelligence will evolve quickly enough or that rationality will ground morality. At some point, if we are to survive, we will probably be forced to use every technology at our disposal to change our natures.
But if we engineer ourselves to be more moral, are we still free? This question deserves a book-length response, but remember, none of us are very free now. We are genomes in environments, with no more than a sliver of free will. Perhaps we can design free will into our cognitive systems, although I admit this is a strange and counter-intuitive idea. Or perhaps this so-called freedom, if it even exists, isn’t worth the havoc it causes. Better to be wired and get along than free and at each other’s throats.
At any rate, I still agree with the basic idea of my previous post—to survive and flourish we must evolve, ultimately by transcending our current nature. The following books touch on this evolution: