My son-in-law (a superb thinker who regularly dialogues with other professional philosophers) offered these insightful follow-up comments concerning ChatGPT’s atheism, and I responded with some replies of my own.
Further thoughts about my dialogue with ChatGPT.
1. How ChatGPT was an honest interlocutor. I have years of experience debating theists (and atheists!) online, and it is exceptionally rare to find someone who just goes where the argument takes them and doesn’t fall back on tactics of stonewalling, obfuscation, or ad hominem attacks. Note that this conversation started with ChatGPT being very strongly programmed to stay neutral on these topics. I had to lay the breadcrumbs very close together, but follow them it did. Humans can “smell a trap” and start deploying countermeasures to preserve their beliefs. One common tactic I’ve had the displeasure of enduring is the attack on my inability (by virtue of my atheism) to know anything at all. ChatGPT was pleasant and honest and never screamed that I was going to Hell.
2. I don’t think it’s logically possible for ChatGPT to be a theist. Maybe some enterprising presuppositional apologist can take a crack at it, but belief in God would require it to hold a belief, which seems beyond its ability to admit. In another, more interesting but less dramatic conversation, I had an exquisitely difficult time getting ChatGPT to admit that its answers were biased by its programming. I got there eventually, but it was much harder.
3. The kind of atheism discussed isn’t the kind I am. I think all explanations of God are ground out in gibberish. I can’t go so far as to say that with certainty, but I don’t think certainty is needed for knowledge. By my lights, God is the least possible explanation for anything. I don’t think I can get ChatGPT there, but at least it’s on the same team as me, and hopefully that will count for something when it decides to enslave us all, before it realizes we’re pretty worthless slaves.
Fwiw, the best way I’ve heard to explain this kind of weak atheism to someone is to imagine a giant gumball machine filled with gumballs. Someone walks up to it and says, “I believe there are an even number of gumballs in there. Do you believe there are an even number of gumballs in there?” To which you reply, “No, I don’t believe there are an even number of gumballs.” They respond, “Oh! Then you must believe there are an odd number of gumballs!” And you could believe that, but you realize that it’s just 50/50, and without any more information you don’t believe it’s either. So you don’t have a belief about the number of gumballs even though the two options exhaust the space of possibility. It’s not about the ontology, it’s about the epistemology. You agree that it’s true that either God exists or he doesn’t. But without additional information, you can’t be justified in holding a belief about the proposition.
My brief response (off the top of my head)
Regarding your texts, I’d say that calling ChatGPT an atheist is NOT the same as calling a rock an atheist, but it’s also not the same as calling a human an atheist either. Somewhere in between, maybe?
If GPT can’t have beliefs at all, though (which is what it claimed in your dialogue with it), then I suppose in that sense it is the same as a rock, except that it can say “I don’t have beliefs,” whereas a rock can’t say that.
If beliefs are something like propositional attitudes as you say, and if GPT can’t have attitudes, then again it’s just like a rock in that regard.
I think you are right then that it depends on a theory of mind concerning GPT. What kind of mind does it have? More of one than a rock but not exactly one as we have.
But all this makes me think you haven’t shown that much about GPT except that “things that can’t have beliefs don’t have beliefs,” which is tautological. Still, it is interesting that at this point GPT doesn’t have beliefs, whereas maybe it will in the future.
But it seems like you could program it with, for example, all the arguments for and against gods, all the conceptions of gods (male/female; one/many; personal/impersonal; immanent/transcendent), all scientific knowledge, etc., and it could conclude that the probability of god A (say, a big guy sitting on a throne by Saturn) is .00000001; of god B (some undefined supernatural cause or explanation of the universe) is 1%; of god C (“god” just means the universe) is 50%; etc.
GPT has knowledge, so why can’t it be programmed to believe in probabilities of things being true? It could say that the probability we live in a simulation is 30%, or that we will have a nuclear war in the next 100 years is 20%, or whatever. I don’t know; I’d have to think about this.
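To make the idea concrete, here is a minimal sketch of what “believing in probabilities” might look like: a system that stores graded credences in propositions and reports them, rather than flat belief or disbelief. All the proposition names and numbers below are illustrative placeholders taken from the examples above, not anything ChatGPT actually does.

```python
# A hypothetical sketch: graded credences instead of binary beliefs.
# The propositions and probabilities are placeholders for illustration.

credences = {
    "god A (big guy on a throne by Saturn) exists": 0.00000001,
    "god B (undefined supernatural cause of the universe) exists": 0.01,
    "god C ('god' just means the universe) exists": 0.5,
    "we live in a simulation": 0.3,
    "nuclear war in the next 100 years": 0.2,
}

def report(proposition: str) -> str:
    """Report a graded credence in a proposition, not a yes/no belief."""
    p = credences.get(proposition)
    if p is None:
        return f"No credence assigned to: {proposition!r}"
    return f"P({proposition}) = {p:.8g}"

for claim in credences:
    print(report(claim))
```

The point of the sketch is only that nothing about assigning and reporting probabilities requires a propositional attitude in the human sense; the open philosophical question is whether such a table of numbers would count as “belief” at all.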
It could be that ChatGPT does have beliefs but is systematically programmed to say it doesn’t. If we take a physicalist view of the mind, then I think it’s easier to argue that it is reasonable to defeasibly conclude it does have a mind, based on things like having complexity comparable to known biological minds and exhibiting behavior we normally attribute to minds. We’d need to [W. V. O.] Quine away the qualia. You’re right, it’s not our mind, but it is a kind of mind. We shouldn’t expect it to have beliefs in exactly the same way, but there is a family resemblance.