Summary of Charles T. Rubin’s “Artificial Intelligence and Human Nature”

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 17, 2016.)

Charles T. Rubin is a professor of political science at Duquesne University. His 2003 article, “Artificial Intelligence and Human Nature,” is a systematic attack on the thinking of Ray Kurzweil and Hans Moravec, thinkers we have discussed in recent posts.[i]

Rubin finds nearly everything about the futurism of Kurzweil and Moravec problematic. It involves metaphysical speculation about evolution, complexity, and the universe; technical speculation about what may be possible; and philosophical speculation about the nature of consciousness, personal identity, and the mind-body problem. Yet Rubin avoids attacking the futurists, whom he calls “extinctionists,” on the issue of what is possible, focusing instead on their claim that a future robotic-type state is necessary or desirable.

Rubin argues that the case for an evolutionary necessity of our extinction seems thin. Why should we expedite our own extinction? Why not destroy the machines instead? And the argument for the desirability of this vision raises another question: what is so desirable about a post-human life? The answer to this question, for Kurzweil, Moravec, and the transhumanists, is the power over human limitations that would ensue. The rationale underlying this desire is the belief that we are but an evolutionary accident to be improved upon, transformed, and remade.

But this leads to another question: will we preserve ourselves after uploading into our technology? Rubin objects that there is a disjunction between us and the robots we want to become. Robots will bear little resemblance to us, especially after we have shed the bodies so crucial to our identities, making the preservation of a self all the more tenuous. Given this discontinuity, how can we know that we would want to live in this new world, or whether it would be better, any more than one of our primate ancestors could have imagined what a good human life would be like? Those primates would be as uncomfortable in our world as we might be in the post-human world. We really have no reason to think we can understand what a post-human life would be like, and it is not out of the question that the situation will be nightmarish.

Yet Rubin acknowledges that technology will continue to evolve, driven by military, medical, commercial, and intellectual incentives; hence it is unrealistic to try to halt technological development. The key to stopping, or at least slowing, the trend is to educate individuals about the unique characteristics of being human that surpass machine life in so many ways. Love, courage, charity, and a host of other human virtues may themselves be inseparable from our finitude. Evolution may hasten our extinction, but even if it does, there is no need to pursue the process, because there is no reason to think the post-human world will be better than our present one. If we pursue such Promethean visions, we may end up worse off than before.

Summary – We should reject transhumanist ideals and accept our finitude.


[i] Charles T. Rubin, “Artificial Intelligence and Human Nature,” The New Atlantis, no. 1 (Spring 2003).
