
What is Fallibilism?

FALLIBILISM

In a previous post, I claimed to be a fallibilist. This technical philosophical term refers (roughly) to “the belief that any idea we have could be wrong.” Or, more precisely,

Fallibilism (from medieval Latin fallibilis, “liable to err”) is the philosophical principle that human beings could be wrong about their beliefs, expectations, or their understanding of the world, and yet still be justified in holding their incorrect beliefs. In the most commonly used sense of the term, this consists in being open to new evidence that would disprove some previously held position or belief, and in the recognition that “any claim justified today may need to be revised or withdrawn in light of new evidence, new arguments, and new experiences.”[1] This position is taken for granted in the natural sciences.[2]

FALLIBILISM AND SKEPTICISM

Perhaps the most important issue is to distinguish fallibilism from skepticism—the doctrine that no idea, belief, or claim is ever well justified or is definitely known. Generally, skepticism is thought to be a stronger claim than fallibilism. Skepticism implies that we should assert nothing, suspend all judgment, or doubt the reliability of the senses, whereas fallibilists generally accept the existence of knowledge or justified belief. 

But how can we reconcile these two views? Can we consistently say that our ideas might be mistaken, yet that we are still justified in believing them? If John claims to know x but admits that x might not be true, then how is what he claims to know knowledge? To say you know something while at the same time admitting you might be in error seems contradictory.

[The reader is welcome to consider sophisticated replies to this problem, such as David Lewis on “epistemic contextualism” or P. Rysiew on “concessive knowledge attributions”—i.e., sentences of the form ‘S knows that p, but it is possible that q’ (where q entails not-p).]

FALLIBILISM AS CRITICAL THINKING

But let’s approach this issue more simply. If you buy a lottery ticket and the odds of winning are 1 in 10 million, do you know you won’t win? You don’t know this with 100% certainty, but you do know, with a very high degree of probability, that you won’t win. If you buy two tickets you have a slightly greater chance of winning, but you can still be very confident you won’t win. The same holds if you buy a thousand tickets. Even then you can justifiably say, “I know I won’t win,” if by “know” you mean being very, very certain.
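
To make the arithmetic concrete, here is a minimal sketch in Python of the probabilities involved. It assumes a single drawing with 10 million equally likely outcomes and that each ticket you buy covers a distinct outcome; the function name and numbers are illustrative, not from the original post.

# A minimal sketch of the lottery arithmetic above, assuming a single
# drawing with 10 million equally likely outcomes and that each ticket
# covers a distinct outcome (illustrative numbers, not the author's).

TOTAL_OUTCOMES = 10_000_000

def win_probability(tickets_bought: int) -> float:
    # Each distinct ticket covers exactly one equally likely outcome.
    return tickets_bought / TOTAL_OUTCOMES

for n in (1, 2, 1000):
    p = win_probability(n)
    print(f"{n:>5} tickets: P(win) = {p:.6%}, P(lose) = {1 - p:.6%}")

# Output:
#     1 tickets: P(win) = 0.000010%, P(lose) = 99.999990%
#     2 tickets: P(win) = 0.000020%, P(lose) = 99.999980%
#  1000 tickets: P(win) = 0.010000%, P(lose) = 99.990000%

Even with a thousand tickets, the probability of losing remains 99.99%, which is why “I know I won’t win” is justified in the everyday, fallibilist sense of “know.”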

Now when I say that I know that evolutionary, quantum, atomic, relativistic, or gravitational theories are true, this is shorthand for “they are true beyond any reasonable doubt,” meaning they are true unless gods, intelligent aliens, or computer simulations are deceiving my cognitive and sensory apparatus, i.e., unless something really weird is going on. Now something weird could be going on, and aliens may be having fun at our expense, say by making evolution look true when it isn’t. There may be gods or aliens or computer programs or something else deceiving us. But no one should believe this.

This is the essence of good thinking: proportioning our assent to the evidence. There is overwhelming evidence for the basic ideas of modern science, but no evidence that people who play the lottery generally win. In fact, the evidence shows that almost everyone who plays the lottery loses. A well-developed mind learns to distinguish the almost certainly true from the probably true, the equally likely to be true, the probably not true, and the almost certainly false. To better understand, consider some simple examples.

EXAMPLES

Suppose that I, someone born in the US and currently residing in Seattle, WA, say one of the following:

1. I have been to Jupiter.
2. I have been to the South Pole.
3. I have been to Russia.
4. I have been to Europe.
5. I have been to Portland.
6. I have been to Seattle.

It is easy to see that as we proceed down the list, the probability that I have been to one of these places increases. At the top of the list, the chance is practically zero, although as a fallibilist you should concede that I may be an alien who has been to Jupiter. At the bottom of the list, the chance is 100% that I’ve been there, unless I’m lying to you or am being deceived by gods, aliens, simulations, etc., as to my whereabouts. If I tell you #1, then you know (beyond a reasonable doubt) that the claim is false. If I tell you #6 while standing next to you at the Space Needle, then you know (beyond a reasonable doubt) that the claim is true. Finally, if I tell you #2 through #5, then you don’t know and have to examine the evidence to determine the probability that my claim is true.

And this is how one can be a fallibilist and claim to know things at the same time. Any idea I have could be wrong, but I feel amazingly confident that #1 is false and #6 is true in the above examples. If the evidence justifies my being amazingly confident, then that counts as knowledge.

Here is another example. Suppose I say:

1. If they play a football game, the Seattle Seahawks will beat a Pop Warner team.
2. If they play a football game, the Seattle Seahawks will beat a high school team.
3. If they play a football game, the Seattle Seahawks will beat a college team.
4. If they play a football game, the Seattle Seahawks will beat an NFL team.
5. If they play a football game, the Seattle Seahawks will beat a team of omnipotent, omniscient football players.

You should say to me that you know #1 is true beyond a reasonable doubt (although the Seahawks could lose on purpose, all simultaneously have heart attacks during the game, or die in an accident on the way to the game and forfeit, etc.), and that #5 is false beyond a reasonable doubt, because the Seahawks can’t beat godlike football players.

So I am a fallibilist. Any idea I have could be wrong, but some ideas are more likely to be true than others. All a rational person can do is proportion assent to the evidence. You might win the lottery, I might have been to Jupiter, and the Pop Warner team might beat the Seahawks … but don’t bet on it.

__________________________________________________________________________

  1. Nikolas Kompridis, “Two Kinds of Fallibilism,” in Critique and Disclosure (Cambridge, MA: MIT Press, 2006), 180.
  2. Thomas S. Kuhn, The Structure of Scientific Revolutions, 3rd ed. (Chicago: University of Chicago Press, 1996).