Ethical Theory as Applied Engineering?

We are discussing no small matter, but how we ought to live. ~ Socrates, in Plato’s Republic

INTRODUCTION

There are many theories that deny morality: nihilism; determinism; skepticism; relativism; egoism; etc. In my view ethicists too easily dismiss these theories—they have philosophical merit. Nihilism just “feels” wrong, but all of the others are at least partly true and appeal to me to varying degrees.

Most ethical theories try to justify morality.1 Typically this justification has been supplied by: self-interest—theories deriving from Plato and Hobbes; sympathy—theories deriving from Hume and Mill; nature—theories deriving from Aristotle and Aquinas; or reason—theories deriving from Kant and Locke. Let us briefly consider each in turn.

Some contemporary thinkers, Darwall and Gewirth come to mind, have tried to justify morality following Kant. However, few philosophers believe this project has been successful. At most, I would argue, these theories show that morality is weakly rational, i.e., that morality is not clearly irrational. But I don’t see how they can show that another person’s interests give me a reason to do anything.

Few contemporary thinkers have advanced natural law theories in the tradition of Aristotle and Aquinas. Contemporary thinkers try to bridge the is/ought gap with an evolutionary ethics or moral psychology utilizing knowledge of human nature unavailable to ancient and medieval philosophers.2  These projects show more promise.

Theories deriving from considerations of sympathy are also promising. Mill’s utilitarianism was based on a “social feeling,” Hume thought sympathy the basis of morality, Darwin had an entire theory of moral sentiments, and the contemporary philosopher Kai Nielsen places great emphasis on the role of sympathy in morality. It is hard to imagine a justification of the moral life without a role for sympathy.  

Theories deriving from self-interest are promising, and contemporary contract and game theorists, particularly Gauthier, have gone a long way toward sustaining and revitalizing the Hobbesian project. Nonetheless, their results are inconclusive and it is not clear that this approach can resolve the compliance problem. However, combining a contract approach with considerations of our evolutionary nature and ingrained or acquired human sympathies may have more promise.

Finally, there are ethical theories associated with religious and metaphysical views, but a lack of agreement about these views precludes any hope of grounding morality in them. (Of course the same may be said about one or another of our moral theories—that they all suppose some metaphysic and that the dispute about ethics depends on resolving metaphysical issues first.)

SO, WHY BE MORAL?

Let’s explore the issue of self-interest vs. morality—what we might call the hard question of morality—in more detail. (To put it another way, should we care about others less, the same, or more than we care about ourselves?) Hobbes answered the question of why we should be moral—it is in all of our interests. Still, the question of why I should be moral remains unanswered. This is the challenge originally set forth in Plato’s Republic—why should I be moral if I have a ring that makes me invisible? Why be moral if self-interest (SI) demands an immoral course? In short, doesn’t it pay to steal candy when no one is looking and you want candy?

Let’s begin with the prisoner’s dilemma (PD). It is easy to see that self-interest demands defection, a supposedly non-moral move, in a one-time PD. So here self-interest and ordinary morality conflict. The fact that both parties do better through mutual cooperation somewhat ameliorates this conclusion, but it does not change the fact that defection is the better move for each party no matter what the other does.
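The dominance reasoning here can be made concrete with a small sketch. The payoff values (T=5, R=3, P=1, S=0) are the standard illustrative choices from the game-theory literature, not values from this essay:

```python
# One-shot prisoner's dilemma: defection strictly dominates cooperation.
# Standard illustrative payoffs (an assumption): T > R > P > S.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

# Row player's payoff, indexed by (my move, other's move); C = cooperate, D = defect.
payoff = {
    ("C", "C"): R,  # both cooperate
    ("C", "D"): S,  # I cooperate, you defect
    ("D", "C"): T,  # I defect, you cooperate
    ("D", "D"): P,  # both defect
}

# Whatever the other player does, defecting pays more for me:
for other in ("C", "D"):
    assert payoff[("D", other)] > payoff[("C", other)]

# Yet both parties do better under mutual cooperation than mutual defection:
assert payoff[("C", "C")] > payoff[("D", "D")]
print("Defection dominates, but mutual cooperation beats mutual defection.")
```

The two assertions capture the tension in the paragraph above: each player's best reply is defection, yet the outcome of mutual defection is worse for both than mutual cooperation.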

The situation changes when the PD is iterated, since tit-for-tat (TFT) has been shown to be a robust strategy. But recent work by Ken Binmore has challenged this assumption. (The folk theorem is also relevant here.) It is not that TFT is a bad strategy, but that real life is more complex than iterated PDs can model. There may be an infinite number of strategies that are robust, calling into question whether we can even determine what is in our self-interest. And if we don’t know what’s in our interest, how can self-interest ground morality?
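How iteration changes the picture can be sketched in a toy tournament. This is a hypothetical illustration (with the same assumed payoffs as before), not a reconstruction of Axelrod’s tournaments or Binmore’s analysis:

```python
# Iterated prisoner's dilemma: tit-for-tat (TFT) vs. always-defect.
T, R, P, S = 5, 3, 1, 0  # assumed illustrative payoffs

def payoffs(a, b):
    """Return (player1 score, player2 score) for one round."""
    table = {("C", "C"): (R, R), ("C", "D"): (S, T),
             ("D", "C"): (T, S), ("D", "D"): (P, P)}
    return table[(a, b)]

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat1, strat2, rounds=100):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)  # each strategy sees the other's history
        p1, p2 = payoffs(m1, m2)
        s1, s2 = s1 + p1, s2 + p2
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): TFT loses only the first round
```

TFT against itself locks into mutual cooperation, while against an unconditional defector it is exploited exactly once and then defects thereafter. This is the sense in which TFT is "robust"; the point in the paragraph above is that real interactions admit far more strategies and noise than this clean model captures.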

Well, to begin to answer these questions, consider again our candy stealers. They may arrogantly assume they won’t get caught or suffer the pangs of conscience, that the cameras aren’t rolling, or that we won’t perceive their true motivations and exclude them from cooperation. In short, they can’t determine what is in their self-interest. But can they make an educated guess? Not really. It is too difficult to know the repercussions of their acts and impossible to predict what adopting a disposition to behave immorally will cost them in the long run.3 The complexity of the situation makes complete assessment impossible and reliable judgment unlikely, raising doubts about applying any moral theory to a complex world of interactions with other agents whose psychologies, motives, dispositions, and intents are difficult to determine, if not opaque.

Thus it is unlikely that self-interest can ground morality or immorality since self-interest can’t be determined with accuracy. So where to from here? In large part, I find myself agreeing with the contemporary philosopher Kai Nielsen.4

Nielsen wants to know if we have good reasons to assume the immoralist is mistaken. He accepts the view that morality entails sympathy and sensitivity to others, but some people are not moved by such considerations. So, why should those people be moral? Surely pursuing self-interest to the exclusion of morality is not irrational, despite the fact that philosophers as varied as Hobbes, Plato, and Aristotle tell us that the moral life and the happy life are synonymous. But can we really be so sure? Nielsen maintains that whether the bad guys are happy or not depends on what kinds of persons they are, and I agree. Neither rationality nor happiness requires morality: we must simply decide for ourselves how we should act and what sort of persons we will strive to be or become.

This means that considerations of reason, happiness, and self-interest, in the absence of sympathy and a commitment to a moral life, cannot adjudicate between morality and self-interest.5 While both Nielsen and I find this situation somewhat depressing, we accept that we cannot get to morality with intellect alone. From an objective point of view, reason is impotent to determine our values, and thus the moral life demands a non-rational, voluntary commitment. In other words, the moral life and the immoral one are Kantian antinomies, and the choice between them is interfused with existential angst. In the end, we simply choose … and hope.

CAN WE SAY MORE?

I don’t think so. Can we say anything with confidence? We may be able to say that moral rules are contracts or agreements between self-interested people for mutual benefit assuming that others will reciprocate. And that these rules resulted from a protracted process of bargaining and power-struggling from the original biological foundations in reciprocal altruism and kin selection. Of course, we can revolt against biology; we can abandon our children. We choose our own destiny. But moral behavior, like all behavior, always has part of its explanation in its origins. And what can we say regarding what morality should be? Maybe that moral rules ought to promote human flourishing? This strikes me as intuitive, but then, as Wilson told us, we consult our emotions like hidden oracles. And why think our intuitions supply insight into the truth? Perhaps it is better, as Wittgenstein suggested, to remain silent about that which we don’t know.

WHERE TO?

Still, the problem remains. Some individuals don’t comply with their agreements, and viciously flaunt their disregard for the social contract. Does it help to know that we can’t give good self-interested reasons to comply with the social contract? It doesn’t seem so. Traditionally we relied on moral education as a way of ensuring that persons became cooperators. And if they didn’t, we penalized them. Maybe punishment would resolve the problem? Maybe the expansion of cameras will eliminate the invisibility that encourages immoralism. Or maybe, as Aristotle imagined, we can structure society so as to inculcate in persons the kinds of habitual behaviors that benefit us all? Of course, if Aristotle had been aware of behavior modification, mind control, and genetic engineering, he might have advocated more drastic measures to ensure human flourishing.

In fact, if mutual cooperation becomes important enough, ethics may become a branch of applied engineering. We may have to engineer ourselves, removing tendencies adaptive for foragers, but suicidal for beings with technology. Of course, we would lose the freedom to, say, release chemical or nuclear weapons, but this may be a small price to pay for security. And maybe engineering ourselves won’t entail a loss of freedom, but instead free us from some residual effects of our evolution, from overt aggressions and other tendencies that are now anachronistic in a technological world. But whatever we choose to do, one thing is certain, we alone are the stewards of the future of life and mind on this small outpost in an infinite cosmos. We alone must decide where we want to go.

CONCLUSION

Remember that none of the above implies that it is irrational to be moral, only that rationality alone can’t get us to morality. This isn’t to say there aren’t good reasons to be moral. There are. Immoralists might be punished and lose the benefits of cooperation, and moralists don’t have to be looking over their shoulders and may have more friends. All we have said is that we can’t show that the reasons to be moral outweigh the reasons to be immoral if you benefit from and can get away with immorality.

And we have also suggested that it is becoming increasingly within our power to remake the world and ourselves in such a way that no one can benefit from or get away with immorality. While some will object that nightmarish scenarios will follow from our increasing control of immoral behavior, it is quite likely that we will all benefit from a world in which peaceful living can be secured by the application of our knowledge. Ironically, our inability to convincingly answer the “why should I be moral” question in theory will lead to our answering it in practice. In short, there never have been completely convincing reasons to be moral, as evidenced by the barbarism of human history, but, desperately in need of morality for our survival and flourishing, we will freely choose to transform ourselves by all means at our disposal.

In retrospect, biology and evolutionary stable strategies imposed early moral constraints, philosophical and religious education furthered the project, governments provided the muscle that conscience lacked, and now it is up to us to continue the project so that immorality doesn’t kill us. So we will be the ones who ultimately create the answer to the “why be moral” question.

Notes

  1. Morality is here defined as a system demanding that persons express care, concern, and interest in others, exemplified by moral rules such as “don’t kill, lie, cheat, or steal” and “help others.”
  2. Virtue ethics, with roots in Plato, Aristotle, the Stoics and Epicureans, and some early Christians, has enjoyed renewed success in the late 20th century through the work of Anscombe and MacIntyre. But I view virtue ethics as part of some other overall theory of ethics and not as a complete theory in itself.
  3. In addition, our passions often cause us to misread situations. Maybe I think there is little chance of being caught because I am a compulsive candy stealer.
  4. Nielsen, Kai. “Why Should I Be Moral?—Revisited,” American Philosophical Quarterly 21 (January 1984).
  5. There is, however, one possible way out of this conundrum. SI justifications of morality are especially difficult because we work with isolated senses of self. If the self is separate from others, if our games are mostly zero-sum, it is hard to see why we should care about others. But if the other is an extension of ourselves, then helping others is SI by definition. In that case, zero-sum games are illusory. The problem is that this broad view of self is counter-intuitive.
  6. This essay was composed over a single weekend; it is not a substitute for sustained philosophical reflection and research.
  7. For an excellent introduction to ethical theory, see Louis P. Pojman’s Ethical Theory: Classical and Contemporary Readings or James Rachels’ The Elements of Moral Philosophy.