What is Fallibilism?


In a previous post, I claimed to be a fallibilist. This technical philosophical term refers (roughly) to “the belief that any idea we have could be wrong.” Or, more precisely,

Fallibilism (from medieval Latin fallibilis, “liable to err”) is the philosophical principle that human beings could be wrong about their beliefs, expectations, or their understanding of the world, and yet still be justified in holding their incorrect beliefs. In the most commonly used sense of the term, this consists in being open to new evidence that would disprove some previously held position or belief, and in the recognition that “any claim justified today may need to be revised or withdrawn in light of new evidence, new arguments, and new experiences.”[1] This position is taken for granted in the natural sciences.[2]


Perhaps the most important issue is to distinguish fallibilism from skepticism—the doctrine that no idea, belief, or claim is ever well justified or is definitely known. Generally, skepticism is thought to be a stronger claim than fallibilism. Skepticism implies that we should assert nothing, suspend all judgment, or doubt the reliability of the senses, whereas fallibilists generally accept the existence of knowledge or justified belief. 

But how can we reconcile these two views? May we say, with consistency, that our ideas might be mistaken, yet that we are still justified in believing them? If John claims to know x but admits that x might not be true, then how is what he claims to know knowledge? To say you know something, yet at the same time admit you might be in error, seems mistaken.

[The reader is welcome to consider sophisticated replies to this problem, such as David Lewis on “epistemic contextualism” or P. Rysiew on “concessive knowledge attributions,” i.e., sentences of the form ‘S knows that p, but it is possible that q’ (where q entails not-p).]


But let’s approach this issue more simply. If you buy a lottery ticket and the odds of winning are 1 in 10 million, do you know you won’t win? You don’t know it with 100% certainty, but you do know, with a very high degree of probability, that you won’t win. Buy two tickets and you have a slightly greater chance of winning, but you can still be very confident you won’t win. The same goes for a thousand tickets: even then you can justifiably say, “I know I won’t win,” if by know you mean very, very certain.
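To make the lottery arithmetic concrete, here is a minimal sketch (the 1-in-10-million odds are the figure from the paragraph above; with n distinct tickets out of N equally likely outcomes, the chance of winning is simply n/N):

```python
# Probability of winning a lottery with N equally likely outcomes
# when holding n distinct tickets: n / N.
N = 10_000_000  # odds of 1 in 10 million, as in the example above

for n in (1, 2, 1000):
    p_win = n / N
    print(f"{n:>4} ticket(s): P(win) = {p_win:.6%}, P(lose) = {1 - p_win:.6%}")
```

Even holding a thousand tickets, the probability of losing is 99.99%, which is the sense in which one can justifiably say “I know I won’t win.”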

Now if I say that I know that evolutionary, quantum, atomic, relativistic, or gravitational theories are true, this is shorthand for “they are true beyond any reasonable doubt,” meaning they are true unless gods, intelligent aliens, or computer simulations are deceiving my cognitive and sensory apparatuses, i.e., unless something really weird is going on. Now something weird could be going on, and aliens may be having fun at our expense, say by making evolution look true when it isn’t. There may be gods or aliens or computer programs or something else deceiving us. But no one should believe this.

This is the essence of good thinking: proportioning our assent to the evidence. There is overwhelming evidence for the basic ideas of modern science, but no evidence that people who play the lottery generally win. In fact, the evidence shows that almost everyone who plays the lottery loses. A well-developed mind learns to distinguish the almost certainly true from the probably true, from the as likely as not, from the probably not true, from the almost certainly false. To better understand, consider some simple examples.


Suppose I say, as one born in the US and a current resident of Seattle, WA, one of the following:

1. I have been to Jupiter.
2. I have been to the South Pole.
3. I have been to Russia.
4. I have been to Europe.
5. I have been to Portland.
6. I have been to Seattle.

It is easy to see that as we proceed down the list the probability that I have been to one of these places increases. At the top of the list, the chance is practically zero (although as a fallibilist you should concede that I may be an alien who has been to Jupiter). At the bottom of the list, the chance is 100% that I’ve been there, unless I’m lying to you or am being deceived by gods, aliens, simulations, etc., as to my whereabouts. If I tell you #1, then you know (beyond a reasonable doubt) that the claim is false. If I tell you #6 while standing next to you at the Space Needle, then you know (beyond a reasonable doubt) that the claim is true. Finally, if I tell you #2 through #5, then you don’t know, and you have to examine the evidence to determine the probability that my claim is true.

And this is how one can be a fallibilist and claim to know things simultaneously. Any idea I have could be wrong, but I feel amazingly confident that #1 is false and #6 is true in the above examples. If I am justified in being amazingly confident by the evidence, that counts as knowledge.

Here is another example. Suppose I say:

1. If they play a football game, the Seattle Seahawks will beat a Pop Warner team.
2. If they play a football game, the Seattle Seahawks will beat a high school team.
3. If they play a football game, the Seattle Seahawks will beat a college team.
4. If they play a football game, the Seattle Seahawks will beat an NFL team.
5. If they play a football game, the Seattle Seahawks will beat a team of omnipotent, omniscient football players.

You should say to me: I know #1 is true beyond a reasonable doubt (although the Seahawks could lose on purpose, all simultaneously have heart attacks during the game, or die in an accident on the way to the game and forfeit, etc.), and that #5 is false beyond a reasonable doubt, because the Seahawks can’t beat godlike football players.

So I am a fallibilist. Any idea I have could be wrong, but some ideas are more likely to be true than others. All one can do, as a rational person, is proportion one’s assent to the evidence. You might win the lottery, I might have been to Jupiter, and the Pop Warner team might beat the Seahawks … but don’t bet on it.


  1. Nikolas Kompridis, “Two kinds of fallibilism”, Critique and Disclosure (Cambridge: MIT Press, 2006), 180.
  2. Thomas S. Kuhn, The Structure of Scientific Revolutions, 3rd ed. (Chicago: University of Chicago Press, 1996).

4 thoughts on “What is Fallibilism?”

  1. This piece left me blinking: as you point out, fallibilism is taken for granted in the sciences. With my training in science, I couldn’t understand why any rational person would not embrace fallibilism. This triggered a lengthy cogitation: why is this an issue? The answer I concocted was inspired by J. C. Maxwell’s statement “The logic of this world is the calculus of probabilities”. He was using ‘calculus’ not in the current sense of differentiation and integration but rather in the older sense of ‘calculation’.

    Upper division physics majors are taught Schroedinger’s Equation, which shows that EVERYTHING that happens is probabilistic. Absolutely nothing is certain. Any conceivable event has a probability of happening. This comment that I am just now writing could transmogrify into the Encyclopaedia Britannica on its electronic journey to your website — but the probability of that happening is really, really close to zero.

    So life is simpler for the physicist than for the philosopher: it’s all just a matter of probability. Every possible statement could be true and every possible statement could be false. Each statement has a probability of being either true or false.

    You can get really wild and talk about the probability that your calculation of the probability of the truth of a statement is itself correct. Even better, you could talk about the uncertainty in the probability calculation — the standard deviation. And you could go to an even deeper level of abstraction by talking about the standard deviation of the standard deviation, and so on ad infinitum. What is the uncertainty of the uncertainty of the uncertainty of our knowledge? 😜

  2. Chris – You are of course correct about science but unfortunately many students no longer take any science. You would be amazed at how difficult such an obvious idea is to many students. JGM

  3. Hi John:

    I like your piece on fallibilism, but it seems to me that the examples you provide in the last part of the piece don’t really show fallibilism but “abduction.”

    Peirce originated both of these ideas, I believe (though with some differences in meaning from those they have since taken on). The ideas, in Peirce and others, are connected to one another. But I think that connection needs to be clearer in your piece.


    Various critical rationalists, like Hans Albert, argue that fallibilism requires that there be some empirical content that can be shown to be true or false. He is critical of much of what goes on in the field of economics, for example, because economists often make statements “ceteris paribus” and then argue that any counter-evidence can be explained away because of this or that special condition. Various metaphysical ideas, on this count, are not really fallibilistic either. For that reason, Albert argues that we should reject them.

    I’m supposing others would argue that though we cannot have knowledge about those things, faith would still be possible. Such non-fallibilist, metaphysical ideas could still be reasonable or plausible.

    Extended to the realm of metaphysics, we have to question whether there could be a good enough reason to believe in such ideas (supported by plausible arguments) when opposing arguments (as Kant shows) may be equally plausible.


    Dr. Darrell Arnold

    Assistant Professor of Philosophy

    Arts and Philosophy

    Miami Dade College, North Campus
