We have our basic Justified True Belief (JTB) account of knowledge.
Subject (S) knows something (p) iff:
(P1) p is true
(P2) S believes that p
(P3) S is justified in believing that p.
Note: ‘iff’ means ‘if and only if’
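For compactness, the JTB schema can be put in epistemic-logic shorthand (the operators $K_S$, $B_S$, $J_S$ are my own labels for ‘S knows/believes/is justified in believing’, not notation from the original account):

```latex
% JTB analysis of knowledge
% K_S p : S knows that p;  B_S p : S believes that p;  J_S p : S is justified in believing that p
K_S\,p \;\iff\; p \,\land\, B_S\,p \,\land\, J_S\,p
```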
(P1) concerns an external fact: p’s being true. (P2) concerns an internal fact: S’s believing p. But (P3) is more difficult. Is justification external or internal? What makes the difference between mere true belief and knowledge? Here we will look at theories aiming to show that justification – the extra condition needed for knowledge – is external to the believer’s mind. One consequence is that Closure will fail. By Closure, if S knows p and correctly deduces q from p, then S should know q too. But varieties of Externalism will show that this connection can sometimes fail.
Below are the main types of externalism (under bold headings). I haven’t addressed the Causal account due to time constraints:
**Reliabilism**

The basic idea of Justification-Reliabilism is that whether a belief of ours is justified depends on the method or process that produced it: we need to ask whether that belief-forming process is reliable. So this version of Reliabilism keeps the JTB account of knowledge, but seeks to analyse the justification condition.
Contrast this with Knowledge-Reliabilism, which says that to have knowledge one needn’t have justification, only a true belief produced by a reliable belief-forming process. Why do away with justification? Well, consider this: we talk of animals as knowing things (a dog might know how to give its paw, or a parrot how to talk). But it seems strange to go as far as saying that animals are justified in their beliefs. So why keep justification as a condition for knowledge?
This can be used as an argument against internalism, but one might reply with something like this: we could distinguish between animal knowledge and human knowledge. The former is just reliably formed true belief, but the latter – human, reflective knowledge – is internally justified true belief, available only to beings like us who are capable of mental reflection.
The sources of our beliefs are many and varied. Where can our beliefs come from? We can gain beliefs through our senses, or from somebody telling us something (testimony). We can use our reason to work things out and arrive at beliefs, and we can draw on memory. This isn’t an exhaustive list, but it should give an idea of the belief-forming processes we have. Now the problem: how do we determine their reliability?
For example (from The Internet Encyclopedia of Philosophy):
“[U]sing vision to determine the color of an object which is well-lit and relatively near is a reliable belief-forming process for a person with normal vision, but not for a color-blind person. Forming beliefs on the basis of the testimony of an expert is likely to yield true beliefs, but forming beliefs on the basis of the testimony of compulsive liars is not. In general, if a belief is the result of a cognitive process which reliably (most of the time – we still want to leave room for human fallibility) leads to true beliefs, then that belief is justified.”
It is this business of trying to explain how we can ensure our beliefs are reliably formed that leads us to the Generality Problem. Any particular belief is produced by a token process – a concrete, one-off event, such as my seeing this apple now – and that token falls under many process types (seeing; seeing under daylight conditions; seeing a nearby, unobstructed object; and so on). Reliability attaches to types. I could pick a very specific type – say, seeing an apple under daylight conditions without obstruction by other objects – which would be very reliable.

But this process type is so specific that it applies to little beyond apples under just these conditions. We need some way of generalising types so we can use them more practically, yet as a type becomes broader (seeing, or smelling, in general) its reliability drops, and some beliefs will come out unjustified. Precisely how to specify the right degree of generality is the problem.
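The shape of the problem can be sketched schematically (the notation here is illustrative, my own rather than the literature’s): a single token process $t$ instantiates many types at once, and reliability varies across them:

```latex
% t : the token process producing my belief (e.g. my seeing this apple now)
% T_1, T_2, ... : process types t instantiates (seeing; seeing in daylight; ...)
% R(T) : the reliability (truth-ratio) of beliefs formed by type T
t \in T_1 \cap T_2 \cap \cdots \qquad \text{with } R(T_1) \ne R(T_2) \ne \cdots
% Which T_i determines whether my belief is justified? Reliabilism owes an answer.
```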
Another big problem for Reliabilism (and indeed, all externalist theories) is the Truetemp case:
Truetemp has a temperature sensor implanted in his head (unbeknownst to him). This device gives him accurate (true) beliefs about the temperature whenever he wonders to himself what the temperature is. By Reliabilism’s lights, Truetemp knows the temperature, since his beliefs come from a highly reliable process external to him (the sensor). Yet intuitively he is not justified in these beliefs – he has no idea why he is so good at this. Hence we have a problem for Reliabilism: the theory says he knows, but we want to say he doesn’t.
See Nozick’s truth-tracking for a detailed account.
**Relevant Alternatives**

One alternative externalist account of knowledge is Dretske’s ‘Relevant Alternatives’ approach. Like Nozick’s tracking account, this theory entails that Closure fails. Why is this? Recall Closure:
If subject (S) knows that p, and believes q by correctly deducing it from her belief that p, then S knows that q.
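In epistemic-logic shorthand this is often written as follows (writing $K_S$ for ‘S knows that’ – a common formulation, though strictly the version above routes through S’s correct deduction rather than S’s knowledge of the entailment):

```latex
% Closure under known entailment
(K_S\,p \,\land\, K_S(p \rightarrow q)) \;\rightarrow\; K_S\,q
```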
If I want to say that Closure holds and that I know p (some proposition), I have to be able to rule out every alternative to p. This is because, under Closure, p entails q (q can be correctly deduced from p), so every alternative to q is also an alternative to p. Remember (from Closure):
“So in our Closure Argument, if we say p entails q and if for some reason q turns out to be false, p is false too.”
Since we don’t want to have to exclude all alternatives to p, any theory on which we need only exclude the relevant alternatives will not sit well with Closure. The suggestion is that there might be relevant alternatives to q that are not relevant alternatives to p.
A relevant alternatives account needs to show two things:
1) What exactly a ‘relevant alternative’ is
2) That knowing p requires ruling out not all alternatives to p, but only the relevant ones
Let’s look at an example (from Wikipedia):
You take your son to the zoo, see several zebras, and, when questioned by your son, tell him they are zebras. Do you know they are zebras? Well, most of us would have little hesitation saying that we did know this. We know what zebras look like, and, besides, this is the city zoo and the animals are in a pen clearly marked “Zebras.” Yet, something’s being a zebra implies that it is not a mule and, in particular, not a mule cleverly disguised by the zoo authorities to look like a zebra. Do you know that these animals are not mules cleverly disguised by the zoo authorities to look like zebras?
Here, the possibility that the animal is a cleverly painted mule is irrelevant to whether I know I’m looking at a zebra – and, on Dretske’s view, it remains so even when the possibility is pointed out to me. Here are our p’s and q’s:

p: the animals are zebras

I correctly deduce q from my belief that p.

q: the animals are not mules cleverly disguised to look like zebras

Under Closure, knowing p would have to give me knowledge of q. And p really does entail q. But my ordinary evidence – the look of the animals, the pen marked “Zebras” – does nothing to rule out the disguised-mule hypothesis: a cleverly painted mule would look just the same to me. So the disguised-mule possibility is a relevant alternative to q, but not a relevant alternative to p. On the Relevant Alternatives account, then, I know p, I correctly deduce q from p, and yet I fail to know q. Closure fails.
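The structure of the zebra case can be summarised schematically (z and m are my own labels for the propositions involved, and K abbreviates ‘I know that’):

```latex
% z : the animals are zebras
% m : the animals are cleverly disguised mules
% Entailment:          z \rightarrow \lnot m          (being a zebra rules out being a mule)
% Closure would give:  (K\,z \land K(z \rightarrow \lnot m)) \rightarrow K\,\lnot m
% Dretske's verdict:   K\,z holds, yet K\,\lnot m fails -- so Closure is false
```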
At first glance, then, the Relevant Alternatives approach seems to refute Closure, but trickier arguments can be constructed that support Closure too! So perhaps Dretske’s view isn’t so reliable after all. For a more detailed discussion, see The Stanford Encyclopedia of Philosophy: Relevant Alternatives.