Descartes thought machines couldn’t think because they couldn’t speak or understand language. That is no longer true. [If you doubt this, go converse with Alice: http://www.alicebot.org/downloads/programs.html — this is an old program; much newer ones are available.]
An Argument that Machines Could Think – If your biological brain were replaced, piece by piece, with non-biological parts, and you still functioned the same, then machines can think (and you would essentially be a machine). And if mechanical parts could sustain consciousness for you, then they could do so for a robot too.
Objection – Computers Only Do What They Are Programmed To Do
Response – It is true that computers today aren’t conscious of what they do in the way that we are; but it is false that they can do only what they are programmed to do. Furthermore, what machines can do today is irrelevant to what they will be able to do tomorrow, or a million years from now. To say they can’t think is to beg the question. In fact, maybe we only execute a program. But if a machine could do everything a human could do, there would be no good reason to insist that it wasn’t conscious.
The Turing Test – The idea is that a machine passes the “Turing test” if a human cannot tell whether they are talking with a person or a computer. Just last year it was announced that a computer program had passed the test (although some doubt this claim).
Why the Turing Test Fails – But is this test valid for determining if something is conscious? One reason to think not is that the test rests on behavioristic assumptions—mental life is demonstrated by behaviors—but behaviorism is generally discredited. A second reason has to do with the “Chinese room argument.”
Chinese Room Argument – You pass a note in Chinese through a slot, and inside the room a person follows instructions that send back answers in Chinese, even though the person inside doesn’t understand Chinese. Isn’t this analogous to a computer, which receives inputs and executes a program but doesn’t “understand” what it’s doing? Don’t computers only follow syntactical rules, without grasping semantics? (Philosophers tend to be very impressed with this argument, computer scientists not so much.)
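The scenario can be caricatured in a few lines of code. The “rule book” and replies below are entirely hypothetical, but they make the syntax/semantics point concrete: the program maps input strings to output strings by pure pattern matching, with nothing anywhere that represents what the symbols mean.

```python
# A minimal sketch of the Chinese room as a program. The rule book is a
# hypothetical lookup table: it matches input symbols to output symbols
# purely syntactically, with no representation of meaning.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",      # "Do you speak Chinese?" -> "Of course."
}

def room(note: str) -> str:
    """Look the note up in the rule book and pass back the listed reply.
    This is string matching, not comprehension."""
    return RULE_BOOK.get(note, "请再说一遍.")  # default: "Please repeat that."

if __name__ == "__main__":
    print(room("你好吗?"))
```

To an outside observer passing notes through the slot, the replies may look competent, yet by construction the program never “understands” a word — which is exactly the intuition the argument trades on.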
Objection – What More Do You Want? – If it walks and talks like a duck, it’s probably a duck. If machines do what humans do, then we have as much evidence that they are conscious as we do that other people are conscious. REPLY – But consciousness isn’t deduced exclusively from behavior; we know our own consciousness “from the inside.” If we knew how the brain gives rise to consciousness, then we could see whether a computer had similar features.