Hospitals and Bodies

In the last few weeks I have spent a lot of time visiting others in hospitals. There is something about being in a hospital, even as a mere visitor, that transports you to a different world. Of course you could say the same of churches, casinos, sports stadiums, or jails. Churches are filled with both hope and absurdity, casinos and stadiums with mindless distractions, jails with utter hopelessness and despair. Perhaps where they take you is better or worse than where you came from.

But hospitals are unique; they smell of death, disease, and dysfunction. Within their walls you encounter the consequences of being bodies; you encounter the earthiness and the ugliness of human bodies and the minds they generate. You encounter humanity. Let no one deceive you; the encounter is at once humbling and distasteful. Imagine then what it must be like to be a patient. Yes, there are good people trying to help you, but eventually they will fail. You have, perhaps for the first time, noticed your mortality. As a patient, you have been transported from the world of the living to the world of the dying.

It is easy to see, then, why our culture idolizes youthful, vital bodies and minds. They glow; they seem immortal. Their skin has no wrinkles, their backs are not hunched, their hair has not thinned, their brains work quickly. But those youthful bodies and brains are decaying before our eyes, and even if some wisdom and patience do come to them, they will ultimately fail. The process is not pretty; aging is not for sissies.

Being in a hospital makes me wonder why people are so attached to their bodies. Tell them you are a transhumanist who looks forward to a genetically engineered or robotic body, or to a life without a body in a computer-generated reality, and they retreat in horror. I think if they spent more time in hospitals, they might change their minds. There is nothing noble about having the bodies and brains of modified monkeys; nothing much good about being controlled by bodies and brains forged in the Pleistocene. Perhaps that’s why human beings deceive themselves: they don’t want to know what they really are; they want to believe they are angels. But they are not. As Shakespeare put it:

But man, proud man,
Drest in a little brief authority,
Most ignorant of what he’s most assur’d;
His glassy essence, like an angry ape,
Plays such fantastic tricks before high heaven,
As make the angels weep.

The world should make us weep; it would make the gods and angels weep if there were any. But there are none. There are only modified apes with authority over the survival of an entire planet. I want to be more than a modified monkey. How I wish we could all be more. Let us not pause, then; let us go forward. I’ll let Walt Whitman have the last word.

This day before dawn I ascended a hill,
and look’d at the crowded heaven,
And I said to my Spirit,
When we become the enfolders of those orbs,
and the pleasure and knowledge of everything in them,
shall we be fill’d and satisfied then?
And my Spirit said:
No, we but level that lift,
to pass and continue beyond.

Will Transhumanism Lead to Greater Freedom?

(This article was reprinted in Ethics & Emerging Technologies, July 26, 2014.)

A friend emailed me to say that he believed transhumanists should strive to be free, if free will doesn’t currently exist, or strive to be freer, if humans currently possess some modicum of free will. He also suggested that becoming transhuman would expedite either process. In short, he was claiming that transhumanists should desire more freedom.

I’ll begin with a disclaimer. I have not done much with the free will problem beyond teaching the issue in introductory philosophy courses over the years. I have also penned two brief summaries of the free will issue, “The Case Against Free Will,” which summarizes the modern scientific objections to the existence of free will, and “Freedom and Determinism,” which summarizes some positions and counter-positions on the topic. But that is all, so my knowledge of the issue is rudimentary. I will note that, by a wide margin, most contemporary philosophers are compatibilists; they believe that free will and determinism are compatible. Here are the stats: compatibilism 59.1%; libertarianism 13.7%; no free will 12.2%; other 14.9%.

I am sympathetic to my friend’s thinking that transhumanists should want free will. Transhumanism is about overcoming all human limitations, including psychological ones, and I think psychological determinism is an obvious limitation. We are limited if we don’t have free will. (Yes, all these terms need to be carefully defined.) That makes sense to me, at least at first glance. If I can’t freely choose to desire psychological health or inner peace, or can’t desire to become transhuman, explore new ideas, or experience new types of consciousness, then I am limited. And transhumanists don’t believe in limitations.

If the majority of philosophers are correct that we now possess a bit of free will because we have highly complex brains—something that rocks, trees and worms don’t have—then why can’t more and better consciousness and intelligence make us more free? Perhaps consciousness and freedom are emergent properties of evolution. And if free will could emerge through natural selection, then why can’t we design ourselves, robots, or superintelligences to be freer?

I think the problem comes in explaining how you do this. Designing yourself or a robot to be free seems counterintuitive. Maybe you have to increase the intelligence of a system and freedom will emerge naturally. But it is hard to see how implanting, say, a moral chip in your brain would make you more free. Still, as we become transhuman, freedom and consciousness will hopefully increase.

Perhaps there is even a connection between intelligence and freedom. Maybe more intelligence makes you freer because you have more choices—you know more and can do more. For example, if I were omniscient I could think anything, and if I were omnipotent I could do anything. So as we evolve progressively toward transhuman and post-human states, our ability to make choices unconstrained by genes and environment will naturally increase. Why wouldn’t it, if we could bypass genes or choose environments? And yes, I think all of this would be a good thing. (An aside: we also aren’t truly free if we have to die, so defeating death would go a long way toward making us freer.)

All of this raises questions that E. O. Wilson raised almost 40 years ago in the final chapter of On Human Nature. Where do we want to go as a species? What goals are desirable? As I’ve stated multiple times in this blog, we should move toward a reality with more knowledge, freedom, beauty, truth, goodness, and meaning; and away from a reality with more of their opposites. We should overcome all pain, suffering and death and create a heaven on earth. We have a long way to go, but that is the only worthwhile goal for beings worthy of existence.

Evolution and Philosophy: Things I Learned From Richard J. Blackwell

An email correspondence with Ed Gibney about the influence of evolutionary theory on philosophy got me to thinking about my graduate school mentor, Richard J. Blackwell. I was a student in a number of his graduate seminars in the 1980s, all of which had a profound and continuing influence on my thinking.

In his course “Concepts of Time” I first pondered that enigmatic continuum which we all experience but cannot define. I remember my particular fascination with J. M. E. McTaggart’s famous article “The Unreality of Time.” The only thing I knew about time when I left this seminar was that it was mysterious.

In “Evolutionary Ethics” and “Evolutionary Epistemology” I came to believe that knowledge and morality weren’t static; rather, both evolve as conscious beings move through time. And in “The Seventeenth Century Scientific Revolution” I was introduced to a dramatic historical example of intellectual evolution.

A synthesis of some of these ideas occurred when I took an independent seminar with Professor Blackwell on “Aristotle’s Metaphysics.” I wondered if Aristotle’s view of teleology—that reality strives unconsciously toward ends—could be reconciled with modern evolutionary theory, which is decidedly non-teleological.

In response to my queries Professor Blackwell introduced me to the concept of evolution in Jean Piaget. [For more see my book, Piaget’s Conception of Evolution, or my summary of Piaget’s biological theorizing in Chapter 4 of The Cambridge Companion to Piaget.] What I found in Piaget was a theory of evolution that was quasi-teleological. His concept of equilibrium was the biological analogue of the quasi-teleology that I was looking for. Thus I found myself able to believe in a free, non-deterministic orthogenesis without resorting to Aristotle’s idea of final causation.

Furthermore, the evidence for orthogenesis was derived from an a posteriori analysis of cosmic evolution—order did emerge from chaos. A concrete example of orthogenesis can be found by simply observing how the potentials for language and thought are actualized in the maturing child. Teleology/equilibrium is strong enough to steer the development of the child’s language and cognitive faculties, but weak enough to allow for creative freedom.

In essence, what I learned from Professor Blackwell was that reality is unfolding in a progressive direction, and that human life has meaning amidst the process of change.

Since that time I have somewhat hedged my bets. Perhaps life’s traumas have dampened my youthful optimism. In “Cosmic Evolution and the Meaning of Life” I concluded that the best we can do is to hope that life is meaningful, inasmuch as the evidence that life is meaningful is mixed. For the moment I’ll stick with hope in life’s meaning, as the only intellectually honest response to the conflicting messages we get from whatever reality or apparent reality in which we are enmeshed.

But the only way to ensure a meaningful reality is by continuing the project of transhumanism. Only when we change ourselves for the better will we be able to change reality for the better. As for Professor Blackwell, I can only reiterate the dedication of my book, Piaget’s Conception of Evolution:

To Richard J. Blackwell
an exemplar of moral and intellectual virtue

Evolution and Ethics

(reprinted in the Institute for Ethics & Emerging Technologies, July 15, 2015)

I have been interested in the above topic since taking a wonderful graduate seminar on the subject about 30 years ago from Richard J. Blackwell at St. Louis University. Recently a friend introduced me to a paper on the topic, “Bridging the Is-Ought Divide: Life is. Life ought to act to remain so,” by Edward Gibney, who argues (roughly) that the naturalistic fallacy has no force. Gibney is not a professional philosopher, but I found myself receptive to his argument nonetheless.

Like most philosophers I was introduced early in my career to the naturalistic fallacy—the idea that you can’t get an ought from an is—but I have never found the argument convincing. This quote from Daniel Dennett expresses my view clearly.

If ‘ought’ cannot be derived from ‘is,’ just what can it be derived from?…ethics must be somehow based on an appreciation of human nature—on a sense of what a human being is or might be, and on what a human being might want to have or want to be.

While it is obvious that our moral behaviors arose in our evolutionary history, philosophers typically object that this is a fact about ethics that doesn’t imply any values. But again, I have never found this objection satisfying. If facts about our nature don’t tell us something about what we should value, then where might we get ethics from? I understand that a straightforward deduction of ought from is doesn’t follow, but surely we can infer something about what we ought to do from what is. However I acknowledge that I am in a minority on this question, as most philosophers accept the naturalistic fallacy. Perhaps they just don’t like more of their field being taken over by scientists!

In the end evolutionary ethics is an extension of evolutionary theory into another realm. Our bodies and our minds are now understood best from an evolutionary perspective, and so too should our behaviors in the moral realm. I think that evolutionary epistemology helps resolve the mind/body problem, and now evolutionary ethics helps resolve the is/ought problem.

Still, philosophers would object to a number of issues in the paper, including Gibney’s basic syllogism:

1) p exists;
2) p wants to continue to exist; thus
3) p ought to act to aid its continued existence.

First, they might object that “just because p is doesn’t mean that p ought to be,” and that by simply asserting the move from 2) to 3), Gibney is begging the question.

Second, they might say, “if p wants to exist, it should act in ways that help it continue to exist, but this is a survival imperative, not a moral imperative. And those aren’t the same thing.” In other words, Gibney is confusing behaviors that help us survive with moral behaviors. While the two sometimes coincide, often they don’t. (Killing you quickly before you kill me might aid my survival but not be moral.)

I agree that there is more to moral imperatives than survival imperatives; nonetheless, survival imperatives are a prerequisite for moral imperatives. In other words, oughts that aid survival are necessary but not sufficient conditions for morality. So while we can’t deduce morality from human nature, we can infer a large part of it.
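To make the shape of this dispute explicit, here is a minimal formalization of the syllogism. This is my own reconstruction, not Gibney’s notation; the predicates E, W, and O are shorthand I am introducing purely for illustration:

\[
\begin{array}{ll}
\text{P1:} & E(p) \qquad \text{(p exists)} \\
\text{P2:} & W(p) \qquad \text{(p wants to continue to exist)} \\
\text{P3:} & \forall x\,[\,W(x) \rightarrow O(x)\,] \qquad \text{(tacit bridge premise: whatever wants to continue to exist ought to act to preserve itself)} \\
\text{C:} & O(p) \qquad \text{(p ought to act to aid its continued existence)}
\end{array}
\]

Without something like P3 the conclusion does not follow deductively, which is the force of the first objection; and even granting P3, it yields only a survival-ought, which is the force of the second.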

Religion and Superintelligence

I was recently contacted by a staff writer from the online newsmagazine The Daily Dot. He was writing a story at the intersection of computer superintelligence and religion and asked me a few questions. I only had one day to respond, but here are my answers to his queries.

Dear Dylan:

I see you’re on a tight deadline so I’ll just answer your questions off the top of my head. A disclaimer though: all these questions really demand a dissertation-length response.

1) Is there any religious suggestion (Biblical or otherwise) that humanity will face something like the singularity?

There is no specific religious suggestion that we’ll face a technological singularity. In fact, ancient scriptures from various religions say virtually nothing about science and technology, and what they do say about them is usually wrong (the earth doesn’t move, is at the center of the solar system, is 6,000 years old, etc.).

Still, people interpret their religious scriptures, revelations, and beliefs in all sorts of ways. So a fundamentalist might say that the singularity is the end of the world as foretold by the Book of Revelation, or something like that. There are also a Christian Transhumanist Association and a Mormon Transhumanist Association, and some religious thinkers are scurrying to claim the singularity for their very own. But a prediction of a technological singularity—absolutely not. The simple fact is that the authors of ancient scriptures in all religious traditions obviously knew nothing of modern science. Thus they couldn’t predict anything like a technological singularity.

2) How realistic do you personally think the arrival of some sort of superintelligence (SI) is? How “alive” would it seem to you?

The arrival of SI is virtually inevitable, assuming we avoid all sorts of extinction scenarios—killer asteroids, out-of-control viruses, nuclear war, deadly climate change, a new Dark Ages that puts an end to science, etc. Once you adopt an evolutionary point of view and recognize the exponential growth of culture, especially of science and technology, it is easy to see that we will create intelligences much smarter than ourselves. So if we survive and science advances, then superintelligence is on the way. And that is why some very smart people like Bill Gates, Stephen Hawking, Nick Bostrom, Ray Kurzweil, and others are talking about SI.

I’m not exactly sure what you mean by your “how alive would it seem to you” question, but I think you’re assuming we would be different from these SIs. Instead there is a good chance we’ll become them, through neural implants or by some uploading scenario. This raises the question of what it’s like to be superintelligent, or in your words, how alive you would feel as one. Of course I don’t know the answer, since I’m not superintelligent! But I’d guess you would feel more alive if you were more intelligent. I think dogs feel more alive than rocks, humans more alive than dogs, and SIs would feel more alive than us because they would have greater intelligence and consciousness.

If the SIs are different from us—imagine, say, a super-smart computer or robot—our assessment of how alive they were would depend on: 1) how receptive we were to attributing consciousness to such beings; and 2) how alive they actually seemed to be. Your laptop doesn’t seem too alive to you, but Honda’s ASIMO seems more alive, HAL from 2001 or Mr. Data from Star Trek seem even more alive, and a super SI, like most people’s god is supposed to be, would seem really alive.

But again I think we’ll merge with machine consciousness. In other words SIs will replace us or we’ll become them, depending on how you look at it.

3) Assuming we can communicate with such a superintelligence in our own natural human language, what might be the thinking that goes into preaching to and “saving” it? 

Thinkers disagree about this. Zoltan Istvan thinks that we will inevitably try to control SIs and teach them our ways, which may include teaching them about our gods. Christopher J. Benek, co-founder and Chair of the Christian Transhumanist Association, thinks that AI, by possibly eradicating poverty, war, and disease, might lead humans to become more holy. But other Christian thinkers believe AIs are machines without souls and so cannot be saved.

Of course, like most philosophers, I don’t believe in souls, and the only way for there to be a good future is if we save ourselves. No gods will save us because there are no gods—unless we become gods.

4) Are you aware of any “laws” or understandings of computer science that would make it impossible for software to hold religious beliefs?

No. I assume you can program an SI to “believe” almost anything. (And you can try to program humans to believe things too.) I suppose you could also write programs without religious beliefs. But I am a philosopher, and I don’t know much about what computer scientists call “machine learning.” You would have to ask one of them about this one.

5) How might a religious superintelligence operate? Would it be benign?

It depends on what you mean by “religious.” I can’t imagine an SI will be impressed by the ancient fables or superstitions of provincial people from long ago. So I can’t imagine an SI will find its answers in Jesus or Mohammed. But if by religious you mean loving your neighbor, having compassion, being moral, or searching for the meaning of life, I can imagine SIs that are religious in this sense. Perhaps their greater levels of consciousness will lead them to be more loving, moral, and compassionate. Perhaps such beings will search for meaning—I can imagine our intelligent descendants doing this. In this sense you might say they are religious.

But again they won’t be religious if you mean they think Jesus died for their sins, or an angel led Joseph Smith to uncover and translate gold plates, or that Mohammed flew into heaven in a chariot. SIs would be too smart to accept such things.

As for “benign,” I suppose this would depend on its programming. For example, Eliezer Yudkowsky has written a book-length guide to creating “friendly AI.” (As a non-specialist I am in no position to judge the feasibility of such a project.) Or perhaps something like Asimov’s three laws of robotics would be enough. This might also depend on whether morality follows from super-rationality; in other words, would SIs conclude that it is rational to be moral? Most moral philosophers think morality is rational in some sense. Let’s hope that as SIs become more intelligent, they’ll also become more moral. Or, if we merge with our technology, let’s hope that we become more moral.

And that is what the future survival and flourishing of our descendants requires: we must become more intelligent and more moral. Traditional religion will not save us, and it will disappear in its current form, like so much else, after SIs arrive. In the end, only we can save ourselves.

JGM