Internalist views could be said to raise normativity as an issue in Epistemology: that is, what we ‘should’ or ‘ought’ to believe in various situations, such as when defining justification or knowledge. For example, should we dismiss scenarios like the Evil Demon one below simply because we have to grant ourselves certain beliefs or foundations to even talk about Epistemology in any sensible manner? Let’s start by considering a problem that arises if we take justification to be internal to us.
Evil Demon Problem
One problem for Internalists is that of the Evil Demon. Internalists want to hold that justification is something internal to the subject and not dependent on external factors. But under such conditions, we can imagine a scenario in which I have many justified beliefs (for example, from my senses and reasoning) that are entirely internal to me, yet all of them are false. The example is well illustrated here (taken from The Stanford Encyclopedia of Philosophy):
First, consider an evil demon victim’s false belief that he has hands. By the victim’s own lights, it certainly looks as though he has hands. Surely, the victim would take himself to know that he has hands. Since he has no hands, he is mistaken in thinking he knows he has hands. His failure to know, however, is not directly recognizable to him. For unless his evidential situation were to change radically, no amount of reflection will enable him to figure out that he has no hands. So because of the truth condition, it is not always directly recognizable whether or not one knows. Knowledge, therefore, is essentially external.
By this argument, it seems as though we need some external method of verifying we aren’t just being tricked.
Epistemic Regress Problem
On internalism about justification, we justify our beliefs by mental processes or in some other way that does not rely on external sources. But some of the justified beliefs that we form depend on other beliefs. For example (from the Oxford Handbook of Epistemology, p210):
“Suppose I tell you as you approach your fiftieth birthday that you will shortly go insane. I offer as my evidence that you have a genetic defect that, like a time bomb, goes off at the age of 50. Naturally alarmed, you ask me what reason I have for concluding that you have the gene. I respond that it is just a hunch on my part. As soon as you discover that I have no epistemic justification at all for believing that you have the gene, you will immediately conclude that my bizarre conclusion about your impending insanity is wildly irrational.”
This seems to lead to an infinite regress, because each belief in turn requires another belief to justify it. In Philosophy, this is usually a sure sign that something has gone horribly amiss. (Note: For externalists, this needn’t be a problem, because what justifies one’s belief is something independent of us. A belief doesn’t rely on other beliefs for its justification).
So, how to answer this problem? We can use one of the following methods:
Foundationalism: The regress stops with something that is not a belief itself or stops with a belief that can justify other beliefs without needing to be justified itself.
Coherentism: The regress goes in a loop so that there are sets of beliefs that justify each other. Another way to think of it is that when considered together, the set of beliefs is coherent and justified. (Singularly considered, the beliefs might not be justified).
Infinitism: ‘Biting the bullet’, as it is known in Philosophy. We just accept that the regress is infinite and that every belief is in turn justified by another. This seems highly unlikely, however, as the human mind is finite, so it doesn’t seem possible that it could contain an infinite number of beliefs.
The idea in Foundationalism is that there are some basic beliefs that are ‘non-inferentially justified’. The idea could be expressed something like this: my belief that I see a crow is justified by my seeming to see a crow. In other words, there isn’t a further reason behind it, we have to just take it as self-justifying. This sort of statement is regarded as empiricist (as it is a statement about observation – discernible by means of the senses). A rationalist such as Descartes would say that certain beliefs can be held as primary because reason can tell us that the belief has certain characteristics that make it primary. For example: ‘I think, therefore I am’.
The problem with foundationalism is that it seems rather vague how we can know which sorts of beliefs are self-justifying, and the view appears to dodge the demand for real justification. In other words, we can ask: what reasons can the foundationalist give to show us that these ‘seemings’ or basic beliefs are true? There seem to be none, other than the bare assertion that they are self-justifying, which looks like an epistemically irresponsible reply. The foundationalist is arguing that ‘seemings’ count as epistemic reasons for believing something. Can we really accept this bizarre way of justifying our beliefs?
Consider: I am justified in believing I see a black crow because I ‘seem’ to see a black crow. On what basis am I justified in believing this? For the foundationalist, none, because it is a basic belief. But it seems we must already hold a belief about the reliability of our sense perception in order to arrive at the belief that we see a crow. To put this clearly:
(P1) I seem to see a black crow
(C) So I am justified in having a belief that I see a black crow
(Tacit assumption) My senses are reliable and I am not being deceived
This is very similar to the circularity of the Closure argument. Without assuming my senses are reliable, I can’t justify my seeming to see anything. The blindingly obvious solution to this sort of circularity is to say that we are entitled to trust certain things, such as our senses. This is the doxastic assumption (doxa = belief). All the information that we hold about the world is contained within our beliefs. If we don’t accept this then we cannot account for anything at all, so there cannot be an exit from this circle. Basically, we have to accept certain things as primitive.
The argument continues that, generally speaking, we’re no worse off assuming such things are true even if they are not. This is the idea of a dominant strategy. That is, if nature is as we perceive it, then trusting our senses is the best way to proceed. But if nature isn’t quite as we perceive it, we are still better off trusting them anyway, because we couldn’t know anything if we didn’t.
This is pragmatic justification (i.e. what we ‘ought’ to do or ‘should’ do). So in accepting this sort of argument, we are saying that epistemic justification is constrained by pragmatic justification – in order to continue talking about what is epistemically justified, one must use pragmatic reason to escape becoming trapped.
Coherentism seeks to avoid this problem by denying that justification proceeds belief by belief. It is a holistic process, so no single belief justifies itself. Justification comes from the fact that a set of beliefs all mutually support and cohere with each other (and hence, the coherentist claims, this avoids the mistake of circular justification).
This theory seems rather plausible and in line with our common sense. When we hold lots of similar beliefs about something together, the reliability of each belief tends to increase. The difficulty is that we can’t just say that logically consistent beliefs are coherent, as two completely unrelated beliefs may be logically consistent. Furthermore, coherentism is in danger of the ‘Isolation Objection’. We can easily imagine scenarios where a set of beliefs all cohere but are not real (i.e. a work of fiction). Indeed, it’s the ‘imagining’ part that causes the problem. Once again we’re left struggling to find a way to show that the beliefs in our mind match the world, which seems to point to externalism.
And what’s more, why take the belief that I see a black crow to be any more coherent than the belief that I see a white crow? In other words, what’s stopping me guessing about things and having my guesses cohere as a set? It seems again that we need some sort of foundationalism to show that certain of my beliefs can be taken as basic.
The coherentist (such as Davidson) might try to reply to this sort of problem by saying that we need to apply a principle of charity when interpreting sets of beliefs: that is, we assume that most of a person’s beliefs are true. The idea is that the majority of a person’s beliefs are highly likely to be true, even if some crazy beliefs manage to cohere with them. So if most of my beliefs are true, then each individual belief is likely to be true.
But this still seems odd. Why say that the majority of a person’s beliefs are true? We could imagine a person epistemically challenged in some way, such that she forms mostly false beliefs; the charitable assumption that most of her beliefs are true would then simply be mistaken. We still need to show that we are in a good epistemic situation (i.e. not being deceived by an Evil Demon or the like).
What about a compromise between foundationalism and coherentism? Susan Haack gives a good analogy. We can have a weak foundationalism that provides the very basic beliefs (sensory perceptions and the like), and then coherentism to fit them all together. The basic beliefs are like crossword clues, and the grid in which they all fit is coherentism. Alternatively, we could compromise by saying that coherence fits best in our account of justification, but that we need to be externalists about knowledge.