Moral Enhancement

A recent comment on my piece, “The Future of Religion,” offered a few good objections to my transhumanism. I’d like to offer some brief replies to those objections.

First, Dr. Arnold stated, “One of your assumptions seems to be that enhancing human well-being will be the fundamental driver of our technologies.” This may be true, but it isn’t what I’m saying.

For me, it’s the other way around. I’m saying that future technologies have the potential to greatly enhance human (and posthuman) well-being. It is easy to see how technology does this today by considering sanitation, clean water, vaccines, labor-saving devices, antibiotics, dentistry, comfortable furniture to sit and sleep on, well-designed shelter, heating and cooling, etc. Just consider living in the Middle Ages, when life expectancy was about 25 years and, for the most part, people died miserably. So we can imagine that future technologies will enhance human flourishing even more.

Now I do agree with the professor that many technologies: 1) benefit only those who can afford them (that was true in the past, is true now, and may well be true in the future); 2) are designed only for profit (and thus may or may not benefit people); and 3) can have disastrous results (nuclear weapons are an obvious example).

Dr. Arnold also says that “technologies will be developed with broader concerns of human well-being as the fundamental driving force.” I suppose some will and some won’t and some will be a mixed bag. For example, medical research is often driven both by profit and the researcher’s sincere desire to do good. Such issues raise economic considerations about the extent to which, for example, the supposed “invisible hand” (pursuing your self-interest in the market helps others) is operative.

The extent to which the common good motivates support for science and technology probably depends on the cultural milieu. In the Scandinavian countries, for instance, there is a better chance that concerns about the common good motivate scientific research and economic policy than in the USA, where those in power (especially Republicans) increasingly don’t care about the common good. (And when government does try to help its citizens, say by passing an “affordable care act,” private interests and the political parties they control work to undermine the promotion of the general welfare.)

So I agree when Dr. Arnold says, “I see little reason to be particularly hopeful that we will manage this [develop technologies aimed at increasing human flourishing unless] … we will have either (a) effective democratic control over technology developments that bend them to universalizable (or general) human needs, or (b) an enlightened core of leaders who develop them in these ways.” This insight reinforces my view that some countries are better than others in promoting the flourishing of their citizens.

I also agree with Dr. Arnold that capitalism creates desires for things we don’t need: “Our big tech companies, more than anything, seem driven by figuring out ways to market stuff to us that we don’t need.” (I actually think the issue with tech companies relates more to the data they collect, which can be used for good or ill.) Again, all this raises complicated issues about capitalism, the wealth inequality it creates, the destruction of the environment it encourages, and more.

But the market is a human creation, so this again leads me back to the fundamental problem—humans have reptilian brains forged in the Pleistocene. We are so deeply flawed morally (territoriality, aggression, dominance hierarchies, etc.) and intellectually (thousands of brain bugs) that in order to avoid being destroyed (climate change, nuclear war, environmental degradation, pandemics, asteroids, etc.) we need a radical upgrade. And technologies like genetic engineering, artificial intelligence, neural implants, and nanotechnology hold the promise of, for the first time in human history, upgrading our programming.

Naturally, there are obvious risks involved and no risk-free way to proceed. I’m just not sure whether we can slowly become more educated and improve culture fast enough to survive. But perhaps using technology to expedite the process of reprogramming ourselves will lead to our extinction too. I just don’t know.

I’m not even sure where the weight of reason lies on this issue or if we now possess the intellectual wherewithal to know how best to proceed. Maybe reason is mostly “the slave of the passions,” because of our evolutionary history.

What I know is this—only intellectual and moral virtue will save us.

___________________________________________________________

Note: I’ll try to do more research on this topic soon. For more see:

Unfit for the Future: The Need For Moral Enhancement (Uehiro Series in Practical Ethics) (Oxford University Press)

The Ethics of Human Enhancement: Understanding the Debate (Oxford University Press)

Brain Bugs: How the Brain’s Flaws Shape Our Lives (W. W. Norton & Company)

And here are a few articles on the topic:

“Reply to commentators on Unfit for the Future” – Ingmar Persson and Julian Savulescu
“The Moral Agency Argument Against Moral Bioenhancement” – Massimo Reichlin
“Is It Desirable to Be Able to Do the Undesirable? Moral Bioenhancement and the Little Alex Problem” – Michael Hauskeller
“Why Is It Hard for Us to Accept Moral Bioenhancement?: Comment on Savulescu’s Argument” – Masahiro Morioka
“Would Moral Enhancement Limit Freedom?” – Carissa Veliz

6 thoughts on “Moral Enhancement”

  1. I’d like to address the common belief that companies offer us things that we don’t need. If that were true, who would buy it? Technology provides us with lots of examples. Obviously, we didn’t need smartphones before Apple introduced the iPhone in 2007. We were doing just fine with conventional cellphones. But Apple “forced” us to buy smartphones, even though we didn’t need them, right?
    Of course not! Smartphones aren’t necessary to life, but they’re fabulously useful. Indeed, we stopped needing things by 1950; everybody had the minimum of food, clothing, and shelter. Everything else we buy is just icing on the cake. So in that sense, we don’t need any of the stuff we spend so much money on.
    Things get messier when we bring the irrational proclivities of Homo sapiens into it. Advertising a car by draping a sexy girl over the hood works. What do sexy girls have to do with cars? Nothing. But Homo sapiens is stupid enough that the association sells cars. The same goes for lots of other things: advertisers know how to create associations that sell products. Brand X soap is made from natural ingredients, so it must be healthier. Brand Y paper is made from trees grown in an ecologically responsible manner, so buying Brand Y helps save the environment. And on and on.
    Yes, it’s irrational, but the blame doesn’t fall on the marketing people–it falls on the stupid people who are susceptible to this nonsense.
    Here’s another way of looking at it. I bought my dog a Kong; it’s a big chewy lump of hard rubber that she chews on but never manages to destroy. This thing provides absolutely nothing in the way of nutrition. It is completely superfluous to her well-being. Why do I waste money on something that she obviously doesn’t need? Because she likes it, that’s why! And if some dumb teenager thinks that he can get laid by buying the latest cool sunglasses, who are we to condemn him or the person who sells him the sunglasses? Let him who is without idiosyncratic desires cast the first stone.

  2. I must say I largely agree with this. And if people are so stupid, let’s find ways to enhance their intelligence. Education is low tech, so maybe we need high tech to help too?

  3. Chris: I never stated that companies force us to buy things. I stated essentially that they devise strategies to manipulate us into buying things we don’t need. In your response, you use “need” in two different ways. First you say that if we didn’t need the things marketed, we wouldn’t buy them. (Specifically you write, “If that were true [that they offer us things we don’t need], who would buy it?”) But then you later concede that we don’t really need the iPhones, etc., that companies spend time trying to sell us, but that they benefit our lives. (You note “they aren’t necessary to life.”)

    My basic point is that most companies don’t care about improving our lives per se. They care about generating profits. Studies of how big data is being used show that many companies are quite good at finding out how to place products with us, knowing intricately, as they do, what appeals to us. Now, some things we want are indeed things that improve our lives. But many are not, even if some of us might think they are. So we might get amusement from some plastic or battery-operated products, but I’m guessing a cost/benefit analysis that highlights social costs would show that, in very many cases, the waste produced by such products is not worth the amusement they bring. I’m thinking of the North Pacific Gyre, for example, or any of the other five gyres formed from single-use plastics. Besides that, I’m thinking of surveillance technologies that might indeed make us safer, at some times and places, but that carry great future risks.

    About your comment regarding stupid people: I think most of us are much more susceptible to marketing than we’d care to admit. And there’s plenty of blame to go around. Marketing products that pollute the earth long-term and interfere with global sustainability isn’t what I would characterize as smart, even if you can line your pockets with billions of dollars by doing it. In fact, I’d argue that we would be better off with regulations that limit the types of plastics used for packaging and production. The market then might reflect the social costs of cleaning up the garbage left behind. We need cradle-to-cradle production in order to have a sustainable future. Further, I think we have more than enough information to show that relying on a market mechanism to deal with this environmental threat isn’t working. Consumers may indeed slowly be waking up and demanding more environmentally sustainable products, packaging, production techniques, etc. And that is slowly giving rise to new production, etc. But we’re facing issues of historic proportions. And the market shows little willingness to self-regulate quickly or radically enough.

    Regarding enhancement technologies, these might be developed and marketed to the wealthy who care to purchase them. But for the type of transformation that John is talking about, it seems that you would need initiatives that make such things (if they’re found desirable for many at all) more broadly available. Otherwise, you would only end up with a small sub-class of individuals with access to them. I believe John talks of the future of humanity, not merely a sub-class of it.

    More generally, my view is that people will develop technologies that reflect their own values. If the prevailing values of those developing technologies are primarily about profit, I’m skeptical that we’ll get the type of transformation that John hopes for.

    Stepping back from that further, I strongly doubt the future envisioned by transhumanists would be all it’s cracked up to be. But that’s a further argument.

  4. Well, yes, OF COURSE companies are out to make a profit. So are workers; if companies stopped paying wages, how many people would come to work? This has all been explained very clearly by Adam Smith. There’s nothing wrong with the fact that people try to improve their lot by making money.
    You’re certainly right that companies are gathering information about us in order to sell us things we might want. But would you prefer the old ways, which had me watching commercials for “cross your heart” bras and detergents that gave whiter whites? If they’re going to bug me with their damn commercials, the least they can do is make the commercials vaguely relevant to me.
    Besides, those data-collecting algorithms are great fun to mess with. Every now and then I post something on Facebook about how I LOVE to eat chocolate-covered rutabagas at 3 in the morning, or how much fun it is to collect old bubble gum wrappers. That oughtta give those algorithms a run for their money.
    The many environmental problems caused by business activity are indeed a serious problem, but wagging our fingers at them accomplishes nothing. A much better solution was proposed more than a century ago; it’s called a “Pigovian tax”. The basic idea is that the government calculates how much damage any business inflicts upon society and levies a tax sufficient to compensate society for that damage. It’s a simple, easy-to-implement concept. The fact that it hasn’t been enacted is not the fault of business; blame your government for that.
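    A rough sketch of how the arithmetic might work, with purely hypothetical numbers (the per-unit damage estimate and the sales volume below are invented for illustration, not taken from any real case):

        # Illustrative sketch of a Pigovian tax (all figures hypothetical).
        # The tax per unit is set equal to the estimated external damage per unit,
        # so the producer's costs internalize the social cost of production.

        def pigovian_tax(units_sold: float, damage_per_unit: float) -> float:
            """Total tax owed: estimated social damage caused by production."""
            return units_sold * damage_per_unit

        # Example: a firm sells 1,000,000 plastic items, and cleanup/damage
        # is estimated at $0.03 per item.
        units = 1_000_000
        damage = 0.03
        print(f"Tax owed: ${pigovian_tax(units, damage):,.2f}")  # Tax owed: $30,000.00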

  5. of course the problem with implementing the tax is that businesses and plutocrats own government.

  6. “of course the problem with implementing the tax is that businesses and plutocrats own government.”

    Greenwashers Incorporated International.
