In my previous post, I claimed to be a fallibilist. This is a somewhat technical philosophical term which I have described to students as “the belief that any idea I have could be wrong.” Here is a more precise definition.
Fallibilism (from medieval Latin fallibilis, “liable to err”) is the philosophical principle that human beings could be wrong about their beliefs, expectations, or understanding of the world, and yet still be justified in holding their incorrect beliefs. In the most commonly used sense of the term, this consists in being open to new evidence that would disprove some previously held position or belief, and in the recognition that “any claim justified today may need to be revised or withdrawn in light of new evidence, new arguments, and new experiences.” This position is taken for granted in the natural sciences.
FALLIBILISM AND SKEPTICISM
Perhaps the most important issue is to distinguish fallibilism from skepticism—the doctrine that no idea, view, or claim is ever well justified or is definitely known. Generally, skepticism is thought to be a stronger claim than fallibilism. Skepticism implies that we should assert nothing, suspend all judgment, or doubt the reliability of the senses, whereas fallibilists generally accept the existence of knowledge or justified belief. (Most contemporary epistemologists are fallibilists, not skeptics.)
But how can we reconcile these two views? Can we consistently say that our ideas might be mistaken, yet that we are still justified in believing them? If John claims to know x but admits that x might be false, then how is what he claims to know in fact knowledge? To say you know something while at the same time admitting you might be in error seems contradictory.
At this point, the reader may wish to consider sophisticated replies to this problem, such as David Lewis on epistemic contextualism or Patrick Rysiew on “concessive knowledge attributions,” i.e., sentences of the form ‘S knows that p, but it is possible that q’ (where q entails not-p).
FALLIBILISM AS CRITICAL THINKING
But let’s approach this issue more simply. If you buy a lottery ticket and the odds of winning are 1 in 10 million, do you know you won’t win? No, you don’t know this for sure, but you can be very confident you won’t win. If you buy two tickets you have a slightly greater chance of winning, but again you can be very confident you won’t win. The same goes if you buy a thousand tickets. (Of course, if you buy every possible combination you will almost certainly win, although maybe not: the lottery could be rigged!) Even with a thousand tickets you can justifiably say, “I know I won’t win,” if by know you mean being very, very confident.
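The lottery arithmetic above can be sketched in a few lines of Python. The 1-in-10-million odds are the post’s illustrative figure, and the function name is my own; this assumes each ticket is a distinct combination among equally likely ones:

```python
def win_probability(tickets: int, combinations: int = 10_000_000) -> float:
    """Chance of winning a 1-in-`combinations` lottery while holding
    `tickets` distinct number combinations."""
    return min(tickets, combinations) / combinations

# One ticket: 1 in 10 million.
assert win_probability(1) == 1e-07
# Two tickets: slightly better, still hopeless.
assert win_probability(2) == 2e-07
# A thousand tickets: still only a 0.01% chance.
assert win_probability(1_000) == 0.0001
# Every combination: a guaranteed win (barring a rigged lottery).
assert win_probability(10_000_000) == 1.0
```

The point of the numbers is the shape of the curve: confidence that you will lose stays overwhelmingly justified until you approach buying every combination.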
Now if I say that I know that evolutionary, quantum, atomic, relativity, or gravitational theories are true, this is shorthand for “they are true beyond any reasonable doubt,” meaning they are true unless gods, intelligent aliens, or a computer simulation are deceiving my cognitive and sensory apparatus; in other words, they are true unless something really weird is going on. Of course, something weird could be going on, and aliens may be having fun at my expense, say by making evolution look true when it isn’t. There may be deceiver gods or aliens or computer programs. But no one should believe this.
This is the essence of good thinking: proportioning our assent to the evidence. There is overwhelming evidence for the framework ideas of modern science, but no evidence that people who play the lottery generally win. In fact, the evidence shows that almost everyone who plays the lottery loses. A well-developed mind learns to distinguish the almost certainly true from the probably true, the equally likely to be true or false, the probably false, and the almost certainly false. To better understand, consider some simple examples.
Suppose I say, as someone born in the US and a current resident of Seattle, WA, one of the following:
1. I have been to Jupiter.
2. I have been to the South Pole multiple times.
3. I have been to the South Pole once.
4. I have been to Russia.
5. I have been to Europe.
6. I have been to Portland.
7. I have been to Seattle.
It should be easy to see that as we proceed down the list, the probability that I have been to each of these places increases. At the top of the list, the chance is practically zero, although as a fallibilist you must concede that I may be an alien who has taken human form and has in fact been to Jupiter. At the bottom of the list, the chance is essentially 100%, unless I’m lying to you or being deceived by gods, aliens, or simulations as to my whereabouts. If I tell you #1, then you know (beyond a reasonable doubt) that the claim is false. If I tell you #7 while standing next to you at the Space Needle, then you know (beyond a reasonable doubt) that the claim is true. Finally, if I tell you #4, you just don’t know and have to examine the evidence to determine the probability that my claim is true.
And this is how one can be a fallibilist and claim to know things simultaneously. Any idea I have could be wrong, but I feel amazingly confident that #1 is false and #7 is true. If I am justified in being amazingly confident by the evidence, that counts as knowledge. And here is another example. Suppose I say:
1. If they play a football game, the Seattle Seahawks will beat a Pop Warner team.
2. If they play a football game, the Seattle Seahawks will beat a high school team.
3. If they play a football game, the Seattle Seahawks will beat a college team.
4. If they play a football game, the Seattle Seahawks will beat an NFL team.
5. If they play a football game, the Seattle Seahawks will beat a team of omnipotent, omniscient, godlike, super-duper football players.
You should say to me: I know #1 is true beyond a reasonable doubt (although the Seahawks could lose on purpose, all simultaneously have heart attacks during the game, or die in an accident on the way to the game and forfeit), and I know #5 is false beyond a reasonable doubt, because the Seahawks’ opponents are godlike football players.
So I am a fallibilist. Any idea I have could be wrong, but some ideas are more likely to be true than others. All one can do, as a rational person, is proportion one’s assent to the evidence. You might win the lottery, I might have been to Jupiter, and the Pop Warner team might beat the Seahawks … but don’t bet on it.
- Kompridis, Nikolas. “Two Kinds of Fallibilism.” In Critique and Disclosure. Cambridge, MA: MIT Press, 2006, 180.
- Kuhn, Thomas S. The Structure of Scientific Revolutions. 3rd ed. Chicago: University of Chicago Press, 1996.