Epistemology

The term “epistemology” comes from the Greek words “episteme” and “logos”. “Episteme” can be translated as “knowledge” or “understanding” or “acquaintance”, while “logos” can be translated as “account” or “argument” or “reason”. Just as each of these different translations captures some facet of the meaning of these Greek terms, so too does each translation capture a different facet of epistemology itself. Although the term “epistemology” is no more than a couple of centuries old, the field of epistemology is at least as old as any in philosophy.[1] In different parts of its extensive history, different facets of epistemology have attracted attention. Plato’s epistemology was an attempt to understand what it was to know, and how knowledge (unlike mere true opinion) is good for the knower. Locke’s epistemology was an attempt to understand the operations of human understanding, Kant’s epistemology was an attempt to understand the conditions of the possibility of human understanding, and Russell’s epistemology was an attempt to understand how modern science could be justified by appeal to sensory experience. Much recent work in formal epistemology is an attempt to understand how our degrees of confidence are rationally constrained by our evidence, and much recent work in feminist epistemology is an attempt to understand the ways in which interests affect our evidence, and affect our rational constraints more generally. In all these cases, epistemology seeks to understand one or another kind of cognitive success (or, correspondingly, cognitive failure). This entry surveys the varieties of cognitive success, and some recent efforts to understand some of those varieties.

1. The Varieties of Cognitive Success

There are many different kinds of cognitive success, and they differ from one another along various dimensions. Exactly what these various kinds of success are, and how they differ from each other, and how they are explanatorily related to each other, and how they can be achieved or obstructed, are all matters of controversy. This section provides some background to these various controversies.

1.1 What Kinds of Things Enjoy Cognitive Success?

Cognitive successes can differ from each other by virtue of qualifying different kinds of things. For instance, a cognitive success—like that of making a discovery—may be the success of a person (e.g., Marie Curie), or of a laboratory (Los Alamos), or of a people (the Hopi), or even, perhaps, of a psychological fragment of a person (the unconscious). But some kinds of cognitive success—like that of having successfully cultivated a highly discriminating palate, say—may be the success of a person, and perhaps even of a people, but cannot be the success of a laboratory or of a psychological fragment. And other kinds of cognitive success—like that of being conclusively established by all the available evidence—may be the success of a theory, but cannot be the success of a person—or like that of being epistemically fruitful—may be the success of a research program, or of a particular proof-strategy, but not of a theory. Indeed, there is a vast range of things, spanning different metaphysical categories, that can enjoy one or another kind of cognitive success: we can evaluate the cognitive success of a mental state (such as that of believing a particular proposition) or of an act (such as that of drawing a particular conclusion), or of a procedure (such as a particular procedure for revising degrees of confidence in response to evidence, or a particular procedure for acquiring new evidence), or of a relation (such as the mathematical relation between an agent’s credence function in one evidential state and her credence function in another evidential state, or the relation of trust between one person and another).

Some of the recent controversies concerning the objects of cognitive success concern the metaphysical relations among the cognitive successes of various kinds of objects: Does the cognitive success of a process involve anything over and above the cognitive success of each state in the succession of states that comprise the execution of that process?[2] Does the cognitive success of a particular mental state, or of a particular mental act, depend upon its relation to the larger process in which it exists?[3] Is the cognitive success of an organization constituted merely by the cognitive successes of its members, or is it something over and above those individual successes?[4] Is the cognitive success of a doxastic agent completely explicable in terms of the successes of its doxastic states, or vice versa? And either way, what sorts of doxastic states are there, and with respect to what kinds of possible success are they assessable? The latter dispute has been especially active in recent years, with some epistemologists regarding beliefs as metaphysically reducible to high credences,[5] while others regard credences as metaphysically reducible to beliefs the content of which contains a probability operator (see Buchanan and Dogramaci forthcoming), and still others regard beliefs and credences as related but distinct phenomena (see Kaplan 1996, Neta 2008).

Other recent controversies concern the issue of whether it is a metaphysically fundamental feature of the objects of cognitive success that they are, in some sense, supposed to enjoy the kind of cognitive success in question. For instance, we might think that what it is for some group of people to constitute a laboratory is that the group is, in some sense, supposed to make discoveries of a certain kind: that is the point of bringing that group into collaboration in a particular way, even if the individuals are spread out across different continents and their funding sources are diverse. But even if a laboratory is plausibly characterized by a norm to which it is answerable, is something analogous true of the other objects that can enjoy cognitive success? Is it, for instance, a metaphysically fundamental feature of a belief that it is, in some sense, supposed to be knowledge?[6] Or can belief be metaphysically characterized without appeal to this norm? Is it, for instance, a metaphysically fundamental feature of a person that such a creature is, in some sense, supposed to be rational?[7] Or can persons be metaphysically characterized without appeal to this norm? Similar disputes arise for the other objects of cognitive success: to what extent can we understand what these objects are without appeal to the kinds of success that they are supposed to enjoy?

In speaking, as we have just now, of the kinds of success that objects are “supposed” to enjoy, we have left it open in what sense the objects of cognitive success are “supposed” to enjoy their success: is it that their enjoyment of that success is good? (If so, then how is it good?) Or is it rather that their enjoyment of that success is demanded? (If so, then what demands it, and why?) We turn to that general topic next.

1.2 Demands and Values

Some kinds of cognitive success involve compliance with a demand, while others involve the realization or promotion of values. We can contrast these two kinds of success by contrasting the associated kinds of failure: failure to comply with a demand results in impermissibility, whereas failure to realize some values results in sub-optimality.[8] Of course, if sub-optimality is always impermissible and vice versa, then the extension of these two categories ends up being the same, even if the two categories are not themselves the same. But it is implausible to regard all sub-optimality as epistemically impermissible: cognitive success does not require us to be perfectly cognitively optimal in every way. If cognitive success is ever achievable even in principle, then at least some degree of cognitive sub-optimality must be permissible. Achieving greater optimality than what’s required for cognitive permissibility could then be understood as cognitive supererogation. If such supererogation is possible, at least in principle, then the permissible can fall short of the optimal.

Recent controversies concern not merely the relation between permissibility and optimality, but also the metaphysical basis of each kind of success. In virtue of what is some state, or act, or process, or relation, epistemically permissible? And in virtue of what is it optimal to whatever degree it is? Epistemic consequentialists take the answer to the former question to be determined by appeal to the answer to the latter. For instance, one popular form of epistemic consequentialism claims that a particular way of forming one’s beliefs about the world is epistemically permissible just in so far as it promotes the possession of true belief and the avoidance of false belief.[9] Another form of consequentialism, consistent with but distinct from the first, says that a “credence function” (i.e., a function from propositions to degrees of confidence) is optimal just in so far as it promotes a single parameter—overall accuracy—which is measured in such a way that, the higher one’s confidence in true propositions and the lower one’s confidence in false propositions, the greater one’s overall accuracy.[10] There are also some forms of epistemic consequentialism according to which optimality involves promotion of ends that are practical rather than simply alethic.[11] An important controversy in the recent literature concerns the question of whether epistemic consequentialism is true (see Berker 2013, which develops a line of argument found in Firth 1978 [1998]). Another prominent controversy is carried on among consequentialists themselves, and concerns the question of what values are such that their realization or promotion constitutes optimality.
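To illustrate the kind of measure at issue (this is only a sketch of one standard choice, a Brier-style score, offered here for illustration rather than as a commitment of the entry itself), an accuracy-firster might score a credence function \(c\) at a world \(w\) as follows:

\[
\mathcal{A}(c, w) \;=\; -\sum_{i=1}^{n} \bigl(c(p_i) - w(p_i)\bigr)^2,
\]

where \(c(p_i)\) is the agent’s degree of confidence in proposition \(p_i\), and \(w(p_i)\) is 1 if \(p_i\) is true at \(w\) and 0 if it is false. On any such measure, overall accuracy increases exactly as confidence in true propositions rises and confidence in false propositions falls, which is the feature described above.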

We’ve used the term “constraint” to denote the bounds of what is epistemically permissible. Of course, as a matter of deontic logic, what is permissible must include at least what is required: for a condition to be required is simply for the complement of that condition to not be permissible. But this leaves it open whether, in a particular domain, what is permissible includes more than what is required. Permissivists argue that it does (see Schoenfield 2014 and Titelbaum and Kopec 2019 for defenses of permissivism), while anti-permissivists argue that it does not (see White 2005 and Schultheis 2018 for arguments against permissivism). Anti-permissivists concerning constraints on our credences are sometimes described as holding a “uniqueness” view, but this label can easily mislead. A philosopher who thinks that the range of permissible credences is no wider than the range of required credences is an anti-permissivist—but an anti-permissivist view, so understood, is consistent with the claim that the credences we are required to have are not point-valued but are rather interval-valued. Such a philosopher could, for instance, claim that there is only one credence that you are permitted to assign to the proposition that the cat is on the mat, and this required credence is neither .6 nor .7, but is rather the open interval (.6, .7).
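Put in the standard notation of deontic logic (a gloss added here for convenience, with \(O\) for “required” and \(P\) for “permissible”), the point about requirement and permission is the familiar duality

\[
O\varphi \;\leftrightarrow\; \neg P \neg\varphi,
\]

that is, a condition is required just in case its complement is not permitted. The permissivist’s further claim is then that, for at least some bodies of evidence, more than one doxastic response is permitted; the anti-permissivist denies this.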

1.3 Substantive and Structural

Compare the following two rules:

MP-Narrow
If you believe that p is true, and you believe that if p is true then q is true, then you ought to believe that q is true.

MP-Wide
You ought not: believe that p is true, believe that if p is true then q is true, and yet not believe that q is true.

The first rule, MP-Narrow, is obviously not a rule with which we ought to comply: if q is obviously false, then it’s not the case that I ought to believe that q is true—not even if I believe that p is true, and that if p is true then q is true. Nonetheless, if q is obviously false, then (perhaps) I ought not both believe that p is true and also believe that if p is true then q is true. That’s because, even if MP-Narrow is not a rule with which we ought to comply, MP-Wide may still be such a rule. The difference between the two rules is in the scope of the “ought”: in MP-Narrow, its scope includes only one belief (viz., the belief that q is true), whereas in MP-Wide, its scope includes a combination of two beliefs (viz., that p is true, and that if p is true then q is true) and one lack of belief (viz., that q is true).
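Writing \(B\) for “believes” and \(O\) for “ought” (shorthand introduced here only to display the point), the scope difference can be exhibited as follows:

\[
\text{MP-Narrow:}\quad \bigl(Bp \wedge B(p \rightarrow q)\bigr) \rightarrow O(Bq)
\]

\[
\text{MP-Wide:}\quad O\,\neg\bigl(Bp \wedge B(p \rightarrow q) \wedge \neg Bq\bigr)
\]

In the first formulation the “ought” governs only the belief that q; in the second it governs the whole combination of the two beliefs and the lack of belief.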

This linguistic distinction between wide scope and narrow scope “oughts” is one expression of a general metaphysical distinction between two kinds of cognitive success. On one side of this distinction are those kinds of cognitive success that qualify particular objects, e.g., a particular belief, or a particular procedure, or a particular credence function, or a particular research program. Examples of such success include a belief’s being justified, a procedure’s being rationally required, a credence function’s being optimal. In each case, some object enjoys a particular cognitive success, and this success obtains by virtue of various features of that object: the features in question may be intrinsic or relational, synchronic or diachronic, biological or phenomenological, etc. We can call such cognitive successes “substantive”.

On the other side of this distinction are those kinds of cognitive success that qualify the relations between various things, each of which is itself individually assessable for cognitive success: e.g., the relation between a set of beliefs all held by the same agent at a particular time, or the relation between the use of a particular procedure, on the one hand, and one’s beliefs about that procedure, on the other, or the relation between an agent’s credence function just before receiving new evidence, and her credence function just after receiving new evidence. Examples of this latter kind of success include an agent’s beliefs at a moment all being consistent, or the coherence between the procedures an agent uses and her beliefs about which procedures she ought to use. In each case, a particular cognitive success qualifies the relations among various objects, quite independently of whether any particular one of those objects itself enjoys substantive cognitive success. We can call such cognitive successes “structural”. Some epistemologists have attempted to reduce substantive successes of a particular kind to structural successes.[12] Others have attempted to reduce structural successes of some kind to substantive ones (see, for instance, Kiesewetter 2017, Lasonen-Aarnio 2020, and Lord 2018). And still others have denied that any such reduction is possible in either direction (see, for instance, Worsnip 2018 and Neta 2018). In recent years, this controversy has been most active in connection with rational permissibility of beliefs, or of credences. But such a controversy could, in principle, arise concerning any of the varieties of cognitive success that we’ve distinguished so far.

1.4 What Explains What?

Many epistemologists attempt to explain one kind of cognitive success in terms of other kinds. For instance, Chisholm tries to explain all cognitive success notions in terms of just one primitive notion: that of one attitude being more reasonable than another, for an agent at a time (see Chisholm 1966). Williamson, in contrast, treats knowledge of facts as an explanatory primitive, and suggests that other kinds of cognitive success be explained in terms of such knowledge (see Williamson 2002). Several prominent philosophers treat the notion of a normative reason as primitive (see Scanlon 1998). And so on. In each case, what is at issue is which kinds of cognitive success are explicable in terms of which other kinds of cognitive success. Of course, whether this issue is framed as an issue concerning the explication of some concepts in terms of other concepts, or in terms of the grounding of some properties by other properties, or in some other terms still, depends on the metaphilosophical commitments of those framing the issue.

The issue of which kinds of cognitive success explain which other kinds of cognitive success is orthogonal to the issue of which particular cognitive successes explain which other particular cognitive successes. The former issue concerns whether, for instance, the property of knowledge is to be explained in terms of the relation of one thing being a reason for another, or whether the relation of being a reason for is to be explained in terms of knowledge. But the latter issue concerns whether, for instance, I am justified in holding some particular belief—say, that the cat is on the mat—in virtue of my knowing various specific things, e.g., that my vision is working properly under the present circumstances, and that the object that I am looking at now is a cat, etc. This latter issue is at the heart of various epistemological regress puzzles, and we will return to it below. But those regress puzzles are largely independent of the issue of metaphysical priority being discussed here.

1.5 What Makes It Success?

What makes it the case that something counts as a form of cognitive success? For instance, why think that knowing the capital of Pakistan is a cognitive success, rather than just another cognitive state that an agent can occupy, like having 70% confidence that Islamabad is the capital of Pakistan? Not every cognitive state enjoys cognitive success. Knowing, understanding, mastering—these are cognitive successes. But being 70% confident in a proposition is not, in and of itself, a cognitive success, even if that state of confidence may be partly constitutive of an agent’s cognitive success when the agent holds it in the right circumstances and for the right reason. What makes the difference?

Recent work on this issue tends to defend one of the following three answers to this question: contractualism, consequentialism, or constitutivism. The contractualist says that a particular cognitive state counts as a kind of success because the practice of so counting it serves certain widely held practical interests. For instance, according to Craig (1990), we describe a person as “knowing” something as a way of signaling that her testimony with respect to that thing is to be trusted. This contractualist view is elaborated more fully in Dogramaci 2012, and employed to solve a puzzle about deductive reasoning in Dogramaci 2015. The consequentialist says that a particular cognitive state counts as a kind of success because it tends to constitute or tends to promote some crucial benefit. According to some consequentialists, the benefit in question is that of having true beliefs and lacking false beliefs (see BonJour 1985, Audi 1993, Singer 2023). According to others, it is the benefit of having a comprehensive understanding of reality. According to others, it is a benefit that is not narrowly epistemic, e.g., living a good life, or being an effective agent (see Gibbard 2008), or spreading one’s gene pool (see Lycan 1988). Finally, the constitutivist may say that a particular cognitive state counts as a kind of success if it is the constitutive aim of some feature of our lives to achieve that state (see Korsgaard 2009 for a defense of constitutivism concerning norms of rationality). For instance, the constitutivist might say that knowledge is a kind of cognitive success by virtue of being the constitutive aim of belief, or that understanding is a kind of cognitive success by virtue of being the constitutive aim of reasoning, or that practical wisdom is a kind of cognitive success by virtue of being the constitutive aim of all human activity. Of course, there are philosophers who count as “constitutivists” by virtue of thinking, say, that knowledge is the constitutive aim of belief, or that the generation of knowledge is the constitutive aim of assertion (see Kelp and Simion 2021)—but these same philosophers are not thereby committed to the constitutivism described here, since they are not committed to this explanation of what makes knowledge a kind of cognitive success.

Of course, it’s possible that one of the three answers mentioned above is correct for some kinds of success, while another of the three answers is correct for other kinds of success. Consider, for instance, the difference between the kind of success involved in having a state that is fitting (for instance, holding a belief knowledgeably), and the kind of success involved in having a state that is valuable (for instance, holding a belief the holding of which is beneficial). Perhaps the constitutivist can explain the former kind of success better than the consequentialist can, but the consequentialist can explain the latter kind of success better than the constitutivist can. Of course, if and when the demands of these different kinds of success conflict, the agent will face the question of how to proceed. Much recent work in epistemology has attempted to adjudicate that question, or to interrogate the assumption of possible conflict that gives rise to it (see, for instance, Marušić 2015, McCormick 2015, and Rinard 2017a and 2019b).

These different ways of understanding cognitive success each give rise to a different understanding of the range of ways in which cognitive success can be obstructed, and so a different understanding of the range of ways in which agents may be harmed, and sometimes even wronged, by such obstructions. For instance, on the contractualist view, epistemic harms may be built into the terms of the “contract”. That is to say, such harms may be done not merely by the specific ways in which we interpret or implement our practice of epistemic appraisal, but rather by the fundamental features of that practice itself. For instance, a practice that grants the status of knowledge to a belief formed on the basis of clearly conceptualized sense perception, but not to a belief formed on the basis of a less clearly conceptualized sense of a personal need, is a practice that systematically discredits beliefs formed by exercises of empathy, relative to beliefs formed in other ordinary ways.[13]

1.6 Epistemic Harms and Epistemic Wrongs

Obstructing an agent’s cognitive success constitutes an epistemic harm. Wrongly obstructing an agent’s cognitive success constitutes an epistemic wrong. In a situation in which false testimony would be an epistemic harm, dishonest testimony would be an epistemic wrong. But the range of epistemic harms and epistemic wrongs can be much broader than those involving falsehood and deception. Insinuation, inattention, and indoctrination can all constitute epistemic harms or epistemic wrongs: each one can obstruct, and sometimes wrongly obstruct, an agent’s cognitive success. For instance, I can mislead you into drawing false conclusions even if what I say is true: when I say “the victims were killed by an immigrant”, even if what I say is literally true, it can mislead my hearer into thinking that the killer’s being an immigrant was in some way explanatorily relevant to her crime. (See Gardiner 2022 for a discussion of such cases.) Alternatively, I can harm you, and perhaps even wrong you, by not paying attention to what you think or say, and thereby getting you to think poorly of your own capacity to grasp a subject. And finally, I can harm you, and perhaps even wrong you, by indoctrinating you in a view so strongly that you lose the ability to consider alternative views.

The epistemic harms and wrongs that we’ve just mentioned occur frequently in the course of daily life, and they are typically constituted by some particular act that we perform (e.g., lending greater credence to the word of a man over that of a woman, or using rhetorical devices to insinuate things that one doesn’t know to be true). But some of these harms and wrongs are constituted not by any particular act, but rather by the procedures that give rise to those acts: for instance, when a research program in the life sciences implicitly assumes an ideologically-driven conception of human nature (see Longino 1990 and Anderson 2004 for fascinating case studies). And sometimes, the harms and wrongs might even be built into our practice of epistemic appraisal—perhaps into a tendency that is somehow constitutive of that very practice. Suppose, for instance, that it is constitutive of our practice of epistemic appraisal to count someone as knowing a fact only if they possess concepts adequate to conceptualize that fact. Whatever may be said in favor of our practice’s having such a feature, one of its effects is clear: those individuals who are cognitively most sensitive to facts for which adequate conceptual resources have not yet been devised (e.g., someone living long before Freud who is sensitive to facts about repression, or someone living in the nineteenth century who is sensitive to facts about sexual harassment) will find that the deliverances of their unique cognitive sensitivities are not counted as knowledge. And so, these same individuals will not be granted the same authority or credibility as other individuals, even when the latter are less cognitively sensitive to the range of facts in question. Recent work in feminist epistemology has helped us to gain an appreciation of just how widespread this phenomenon is, and of its varieties (see the seminal discussion of epistemic injustice in M. Fricker 2007, and the development of that account in Dotson 2014).

2. What is Knowledge?

Knowledge is among the many kinds of cognitive success that epistemology is interested in understanding. Because it has attracted vastly more attention in recent epistemology than any other variety of cognitive success, we devote the present section to considering it in some detail. But the English word “knowledge” lumps together various states that are distinguished in other languages: for instance, the verb “to know” can be translated into French either as “connaitre” or as “savoir”, and the noun “knowledge” can be translated into Latin either as “cognitio” or as “scientia”. Exactly how to individuate the various kinds of cognitive success is not something that can be determined solely by appeal to the lexicon of any particular natural language. The present section provides a brief survey of some of the kinds of cognitive success that are indicated by the use of “knowledge” in English, but this is not intended to signal that these kinds of cognitive success are all species of some common genus. Neither, however, is it intended to signal that these kinds of cognitive success are not all species of some common genus: at least some philosophers have taken there to be a genus, awareness, of which the various kinds of knowledge are all species, and with respect to which these various kinds may all be explained (see Silva 2019 for a defense of “awareness first” epistemology).

2.1 Knowing Individuals

Even if you know many facts about Napoleon, it doesn’t follow that you know Napoleon. You couldn’t ever have known Napoleon, since he died long before you were born. But, despite not having ever known Napoleon, you could still know a great many facts about Napoleon—perhaps you know even more facts about Napoleon than did those who knew him most intimately. This shows that knowing a person is not the same as knowing a great many facts about the person: the latter is not sufficient for the former. And perhaps the former is not even sufficient for the latter, since I might know my next door neighbor, and yet not realize that he is an undercover agent, and that almost everything he tells me about himself is false.

Knowing a person is a matter of being acquainted with that person, and acquaintance involves some kind of perceptual relation to the person. What kind of perceptual relation? Clearly, not just any perceptual relation will do: I see and hear thousands of people while walking around a bustling city, but it doesn’t follow that I am acquainted with any of them. Must acquaintance involve an ability to distinguish that individual from others? It depends upon what such an ability amounts to. I am acquainted with my next door neighbor, even though, in some sense, I cannot distinguish him from his identical twin: if they were together I couldn’t tell who was who.

Just as we can be acquainted with a person, so too can we be acquainted with a city, a species of bird, a planet, 1960s jazz music, Watson and Crick’s research, transphobia, and so on. If it’s not clear precisely what acquaintance demands in the case of people, it’s even less clear what it demands across all of these various cases. If there is a genus of cognitive success expressed by the verb “to know” with a direct object, or by the French “connaitre”, we have not yet understood that genus.

2.2 Knowing How

In his groundbreaking book, The Concept of Mind, Gilbert Ryle argued that knowing how to do something must be different from knowing any set of facts. No matter how many facts you might know about swimming, say, it doesn’t follow from your knowledge of these facts that you know how to swim. And, of course, you might know how to swim even without knowing very many facts about swimming. For Ryle, knowing how is fundamentally different from knowing that.

This Rylean distinction between knowing how and knowing that has been prominently challenged, beginning in 1975 with the publication of Carl Ginet’s Knowledge, Perception, and Memory. Ginet argued that knowing how to do something was simply knowing that a particular act was a way to do that thing. This challenge was extended and systematized by Boër and Lycan (1975), who argued that knowing who, knowing which, knowing why, knowing where, knowing when, and knowing how—all of the varieties of knowing wh-, as they called it—were all just different forms of knowing that. To know who is F, for instance, was simply to know that a particular person is F. To know why p is simply to know that a particular thing is the reason why p. And to know how to F was simply to know that a particular act is a way to F. This view was elaborated in considerable detail by Stanley and Williamson 2001, and then challenged or refined by many subsequent writers (see, for instance, the essays in Bengson and Moffett 2011, and also Pavese 2015 and 2017).

2.3 Knowing Facts

Whenever a knower (S) knows some fact (p), several conditions must obtain. A proposition that S doesn’t even believe cannot be, or express, a fact that S knows. Therefore, knowledge requires belief.[14] False propositions cannot be, or express, facts, and so cannot be known. Therefore, knowledge requires truth. Finally, S’s being correct in believing that p might merely be a matter of luck. For example, if Hal believes he has a fatal illness, not because he was told so by his doctor, but solely because as a hypochondriac he can’t help believing it, and it turns out that in fact he has a fatal illness, Hal’s being right about this is merely accidental: a matter of luck (bad luck, in this case).[15] Therefore, knowledge requires a third element, one that excludes the aforementioned luck, and so that involves S’s belief being, in some sense, justifiably or appropriately held. If we take these three conditions on knowledge to be not merely necessary but also sufficient, then: S knows that p if and only if p is true and S justifiably believes that p. According to this account, the three conditions—truth, belief, and justification—are individually necessary and jointly sufficient for knowledge of facts.[16]
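Schematically (writing \(K_S p\) for “S knows that p”, \(B_S p\) for “S believes that p”, and \(J_S p\) for “S is justified in believing that p”, a shorthand introduced here only for convenience), the tripartite account reads:

\[
K_S p \;\leftrightarrow\; \bigl(p \;\wedge\; B_S p \;\wedge\; J_S p\bigr).
\]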

Recall that the justification condition is introduced to ensure that S’s belief is not true merely because of luck. But what must justification be, if it can ensure that? It may be thought that S’s belief that p is true not merely because of luck when it is reasonable or rational, from S’s own point of view, to take p to be true. Or it may be thought that S’s belief is true not merely because of luck if that belief has a high objective probability of truth, that is, if it is formed or sustained by reliable cognitive processes or faculties. But, as we will see in the next section, if justification is understood in either of these ways, it cannot ensure against luck.

It turns out, as Edmund Gettier showed, that there are cases of JTB that are not cases of knowledge. JTB, therefore, is not sufficient for knowledge. Cases like that—known as Gettier cases[17]—arise because neither the possession of adequate evidence, nor origination in reliable faculties, nor the conjunction of these conditions, is sufficient for ensuring that a belief is not true merely because of luck. Consider the well-known case of barn-facades: Henry drives through a rural area in which what appear to be barns are, with the exception of just one, mere barn facades. From the road Henry is driving on, these facades look exactly like real barns. Henry happens to be looking at the one and only real barn in the area and believes that there’s a barn over there. So Henry’s belief is true, and furthermore his visual experience makes it reasonable, from his point of view, to hold that belief. Finally, his belief originates in a reliable cognitive process: normal vision of ordinary, recognizable objects in good lighting. Yet Henry’s belief is true in this case merely because of luck: had Henry noticed one of the barn-facades instead, his belief would have been false. There is, therefore, broad agreement among epistemologists that Henry’s belief does not qualify as knowledge.[18]

To state conditions that are jointly sufficient for knowledge, what further element must be added to JTB? This is known as the Gettier problem. Some philosophers attempt to solve the Gettier problem by adding a fourth condition to the three conditions mentioned above, while others attempt to solve it by either replacing or refining the justification condition. How we understand the contrast between replacing the justification condition and refining it depends, of course, on how we understand the justification condition itself, which is the topic of the next section.

Some philosophers reject the Gettier problem altogether: they reject the aspiration to understand knowledge by trying to add to JTB. Some such philosophers try to explain knowledge in terms of virtues: they say that to know a fact is for the truth of one’s belief to manifest epistemic virtue (see Zagzebski 1996 and Sosa 1997). Other such philosophers try to explain knowledge by identifying it as a genus of many familiar species: they say that knowing is the most general factive mental state (see Williamson 2002). And still other such philosophers try to explain knowledge by explaining its distinctive role in some other activity. According to some, to know a fact is for that fact to be a reason for which one can do or think something.[19] According to others, to know a fact is to be entitled to assert that fact (see Unger 1975, Williamson 2002, DeRose 2002 for defenses of this view; see Brown 2008b and 2010 for dissent). According to still others, to know a fact is to be entitled to use it as a premise in reasoning (see Hawthorne & Stanley 2008 for defense of this view; see Neta 2009 and Brown 2008a for dissent). And according to still others, to know a fact is to be a trustworthy informant concerning whether that fact obtains. Finally, there are those who think that the question “what is it to know a fact?” is misconceived: the verb “to know” does not do the work of denoting anything, but does a different kind of work altogether, for instance, the work of assuring one’s listeners concerning some fact or other, or the work of indicating to one’s audience that a particular person is a trustworthy informant concerning some matter (see Lawlor 2013 for an articulation of the assurance view, and Craig 1990 for an articulation of the trustworthy informant view).

3. What is Justification?

Whatever precisely is involved in knowing a fact, it is widely recognized that some of our cognitive successes fall short of knowledge: an agent may, for example, conduct herself in a way that is intellectually unimpeachable, and yet still end up thereby believing a false proposition. Julia has every reason to believe that her birthday is July 15: it says so on her birth certificate and all of her medical records, and everyone in her family insists that it is July 15. Nonetheless, if all of this evidence is the result of some time-keeping mistake made at the time of her birth, her belief about her birthday could be false, despite being so thoroughly justified. Debates concerning the nature of justification[20] can be understood as debates concerning the nature of such non-knowledge-guaranteeing cognitive successes as the one that Julia enjoys in this example.[21]

3.1 Deontological and Non-Deontological Justification

How is the term “justification” used in ordinary language? Here is an example: Tom asked Martha a question, and Martha responded with a lie. Was she justified in lying? Jane thinks she was, for Tom’s question was an inappropriate one, the answer to which was none of Tom’s business. What might Jane mean when she thinks that Martha was justified in responding with a lie? A natural answer is this: She means that Martha was under no obligation to refrain from lying. Due to the inappropriateness of Tom’s question, it wasn’t Martha’s duty to tell the truth. This understanding of justification, commonly labeled deontological, may be defined as follows: S is justified in doing x if and only if S is not obliged to refrain from doing x.[22]

If, when we apply the word justification not to actions but to beliefs, we mean something analogous, then the following holds:

Deontological Justification (DJ)
S is justified in believing that p if and only if S is not obliged to refrain from believing that p.[23]

What kinds of obligations are relevant when we wish to assess whether a belief, rather than an action, is justified or unjustified? Whereas when we evaluate an action, we are interested in assessing the action from either a moral or a prudential point of view, when it comes to beliefs, what matters may be something else,[24] e.g., the pursuit of truth, or of understanding, or of knowledge.

Exactly what, though, must we do in the pursuit of some such distinctively epistemic aim? According to one answer, the one favored by evidentialists, we ought to believe in accord with our evidence.[25] For this answer to be helpful, we need an account of what our evidence consists of, and what it means to believe in accord with it. Other philosophers might deny this evidentialist answer, but still say that the pursuit of the distinctively epistemic aims entails that we ought to follow the correct epistemic norms. If this answer is going to help us figure out what obligations the distinctively epistemic aims impose on us, we need to be given an account of what the correct epistemic norms are.[26]

The deontological understanding of the concept of justification is common to the way philosophers such as Descartes, Locke, Moore and Chisholm have thought about justification. Recently, however, two chief objections have been raised against conceiving of justification deontologically. First, it has been argued that DJ presupposes that we can have a sufficiently high degree of control over our beliefs. But beliefs—this objection alleges—are akin not to actions but rather to things such as digestive processes, sneezes, or involuntary blinkings of the eye. The idea is that beliefs simply arise in or happen to us. Therefore, beliefs are not suitable for deontological evaluation (see Alston 1985 & 1988; also, see Chrisman 2008). To this objection, some advocates of DJ have replied that lack of control over our beliefs is no obstacle to thinking of justification as a deontological status (see R. Feldman 2001a). Other advocates of DJ have argued that we enjoy no less control over our beliefs than we do over our intentional actions (see Ryan 2003; Sosa 2015; Steup 2000, 2008, 2012, 2017; and Rinard 2019b).

According to the second objection to DJ, deontological justification cannot suffice for an agent to have a justified belief. This claim is typically supported by describing cases involving either a benighted, culturally isolated society or subjects who are cognitively deficient. Such cases involve subjects whose cognitive limitations make it the case that they are under no obligation to refrain from believing as they do, but whose limitations nonetheless render them incapable of forming justified beliefs (for a response to this objection, see Steup 1999).

Those who reject DJ think of justification not deontologically, but rather as a property that a belief has when it is, in some sense, sufficiently likely to be true.[27] We may, then, define justification as follows:

Sufficient Likelihood Justification (SLJ)
S is justified in believing that p if and only if S believes that p in a way that makes it sufficiently likely that her belief is true.

If we wish to pin down exactly what the likelihood at issue amounts to, we will have to deal with a variety of tricky issues.[28] For now, let us just focus on the main point. Those who prefer SLJ to DJ would say that sufficient likelihood of truth and deontological justification can diverge: it’s possible for a belief to be deontologically justified without being sufficiently likely to be true. This is just what cases involving benighted cultures or cognitively deficient subjects are designed to show (for elaboration on the non-deontological concept of justification, see Alston 1988).

3.2 What Justifies Belief?

What makes a belief that p justified, when it is? Whether a belief is justified or unjustified, there is something that makes it so. Let’s call the things that make a belief justified or unjustified J-factors. Which features of a belief are J-factors?

According to “evidentialists”, it is the believer’s possession of evidence for p. What is it, though, to possess evidence for p? Some evidentialists (though not all) would say it is to have an experience that presents p as being true. According to these evidentialists, if the coffee in your cup tastes sweet to you, then you have evidence that the coffee is sweet. If you feel a throbbing pain in your head, you have evidence that you have a headache. If you have a memory of having had cereal for breakfast, then you have evidence about what you had for breakfast. And when you clearly “see” or “intuit” that the proposition “If Jack had more than four cups of coffee, then Jack had more than three cups of coffee” is true, then you have evidence for that proposition. On this view, evidence consists of perceptual, introspective, memorial, and intuitional experiences, and to possess evidence is to have an experience of that kind. So according to this “experientialist” version of evidentialism, what makes you justified in believing that p is your having an experience that represents p as being true (see Conee and Feldman 2008 and McCain 2014 for defenses of such a view). Other versions of evidentialism might identify other factors as your evidence, but would still insist that those factors are the J-factors.

Evidentialism is often contrasted with reliabilism, which is the view that a belief is justified by resulting from a reliable source, where a source is reliable just in case it tends to result in mostly true beliefs. Reliabilists, of course, can also grant that the experiences mentioned in the previous paragraph can matter to the justification of your beliefs. However, they deny that justification is essentially a matter of having suitable experiences. Rather, they say, those experiences matter to the justification of your beliefs not merely by virtue of being evidence in support of those beliefs, but more fundamentally, by virtue of being part of the reliable source of those beliefs. Different versions of reliabilism have been defended: some philosophers claim that what justifies a belief is that it is produced by a process that is reliable (for instance, see Goldman 1986), others claim that what justifies a belief is that it is responsive to grounds that reliably covary with the truth of that belief, others claim that what justifies a belief is that it is formed by the virtuous exercise of a capacity, and so on.

3.3 Internal vs. External

Consider a science fiction scenario concerning a human brain that is removed from its skull, kept alive in a vat of nutrient fluid, and electrochemically stimulated to have precisely the same total series of experiences that you have had. Call such a brain a “BIV”: a BIV would believe everything that you believe, and would (it is often thought) be justified in believing those things to precisely the same extent that you are justified in believing them. Therefore, justification is determined solely by those internal factors that you and your envatted brain doppelganger share. This view is what has come to be called “internalism” about justification.[29]

Externalism is simply the denial of internalism. Externalists say that what we want from justification is the kind of likelihood of truth needed for knowledge, and the internal conditions that you share with your BIV doppelganger do not generate such likelihood of truth. So justification involves external conditions.[30]

Among those who think that justification is internal, there is no unanimity on how to understand the notion of internality—i.e., what it is about the factors that you share with your BIV doppelganger that makes those factors relevant to justification. We can distinguish between two approaches. According to the first, justification is internal because we enjoy a special kind of access to J-factors: they are always recognizable on reflection.[31] Hence, assuming certain further premises (which will be mentioned momentarily), justification itself is always recognizable on reflection.[32] According to the second approach, justification is internal because J-factors are always mental states (see Conee and Feldman 2001). Let’s call the former accessibility internalism and the latter mentalist internalism.

Evidentialism is typically associated with internalism of at least one of these two varieties, and reliabilism with externalism.[33] Let us see why. Evidentialism says, at a minimum, two things:

E1
Whether one is justified in believing that p depends on one’s evidence regarding p.

E2
One’s evidence consists of one’s mental states.

E2 seems to make evidentialism a version of mentalist internalism. Note, however, that the conjunction of E1 and E2 is not always internalist. Williamson (2002) defends a version of evidentialism on which evidence is not shared by you and your corresponding BIV. Whether evidentialism is also an instance of accessibility internalism is a more complicated issue. The conjunction of E1 and E2 by itself implies nothing about the accessibility of justification. But mentalist internalists who endorse the first principle below will also be committed to accessibility internalism, and evidentialists who also endorse the second principle below will be committed to the accessibility of justification:

Luminosity
One’s own mind is cognitively luminous: Whenever one is in a particular mental state, one can always recognize on reflection what mental states one is in, and in particular, one can always recognize on reflection what evidence one possesses.[34]

Necessity
The principles that determine what is evidence for what are a priori recognizable.[35] Relying on a priori insight, one can therefore always recognize on reflection whether, or the extent to which, a particular body of evidence is evidence for p.[36]

Although E1 and E2 by themselves do not imply access internalism, their conjunction with Luminosity and Necessity may imply access internalism.[37]

Next, let us consider why reliabilism is an externalist theory. Reliabilism says that the justification of one’s beliefs is a function of the reliability of one’s belief sources, such as memorial, perceptual, and introspective states and processes. Even if the operations of these sources are mental states, their reliability is not itself a mental state. Therefore, reliabilists reject mentalist internalism. Moreover, insofar as the reliability of one’s belief sources is not itself recognizable by means of reflection, how could reflection enable us to recognize when such justification obtains?[38] Reliabilists who take there to be no good answer to this question also reject access internalism.[39]

4. The Structure of Knowledge and Justification

Anyone who knows anything necessarily knows many things. Our knowledge forms a body, and that body has a structure: knowing some things requires knowing other things. But what is this structure? Epistemologists who think that knowledge involves justification tend to regard the structure of our knowledge as deriving from the structure of our justifications. We will, therefore, focus on the latter.

4.1 Foundationalism

According to foundationalism, our justified beliefs are structured like a building: they are divided into a foundation and a superstructure, the latter resting upon the former. Beliefs belonging to the foundation are basic. Beliefs belonging to the superstructure are nonbasic and receive justification from the justified beliefs in the foundation.[40]

Before we evaluate this foundationalist account of justification, let us first try to spell it out more precisely. What is it for a justified belief to be basic? According to one approach, what makes a justified belief basic is that it doesn’t receive its justification from any other beliefs. The following definition captures this thought:

Doxastic Basicality (DB)
S’s justified belief that p is basic if and only if S’s belief that p is justified without owing its justification to any of S’s other beliefs.

Let’s consider what would, according to DB, qualify as an example of a basic belief. Suppose you notice (for whatever reason) someone’s hat, and you also notice that that hat looks blue to you. So you believe

(B) That hat looks blue to me.

Unless something very strange is going on, (B) is an example of a justified belief. DB tells us that (B) is basic if and only if it does not owe its justification to any other beliefs of yours. So if (B) is indeed basic, there might be some item or other to which (B) owes its justification, but that item would not be another belief of yours. We call this kind of basicality “doxastic” because it makes basicality a function of how your doxastic system (your belief system) is structured.

Let us turn to the question of where the justification that attaches to (B) might come from, if we think of basicality as defined by DB. Note that DB merely tells us how (B) is not justified. It says nothing about how (B) is justified. DB, therefore, does not answer that question. What we need, in addition to DB, is an account of what it is that justifies a belief such as (B). According to one strand of foundationalist thought, (B) is justified because it can’t be false, doubted, or corrected by others. On such a view, (B) is justified because (B) carries with it an epistemic privilege such as infallibility, indubitability, or incorrigibility (for a discussion of various kinds of epistemic privilege, see Alston 1971 [1989]).

Note that (B) is a belief about how the hat appears to you. So (B) is a belief about a perceptual experience of yours. According to the version of foundationalism just considered, a subject’s basic beliefs are introspective beliefs about the subject’s own mental states, of which perceptual experiences make up one subset. Other mental states about which a subject can have basic beliefs may include such things as having a headache, being tired, feeling pleasure, or having a desire for a cup of coffee. Beliefs about external objects cannot qualify as basic, according to this kind of foundationalism, for it is impossible for such beliefs to enjoy the kind of epistemic privilege necessary for being basic.

According to a different version of foundationalism, (B) is justified by some further mental state of yours, but not by a further belief of yours. Rather, (B) is justified by the very perceptual experience that (B) itself is about: the hat’s looking blue to you. Let “(E)” represent that experience. According to this alternative proposal, (B) and (E) are distinct mental states. The idea is that what justifies (B) is (E). Since (E) is an experience, not a belief of yours, (B) can, according to DB, still be basic.

Let’s call the two versions of foundationalism we have distinguished privilege foundationalism and experiential foundationalism. Privilege foundationalism is generally thought to restrict basic beliefs so that beliefs about contingent, mind-independent facts cannot be basic, since beliefs about such facts are generally thought to lack the privilege that attends our introspective beliefs about our own present mental states, or our beliefs about a priori necessities. Experiential foundationalism is not restrictive in the same way. Suppose instead of (B), you believe

(H) That hat is blue.

Unlike (B), (H) is about the hat itself, and not the way the hat appears to you. Such a belief is not one about which we are infallible or otherwise epistemically privileged. Privilege foundationalism would, therefore, classify (H) as nonbasic. It is, however, quite plausible to think that (E) justifies not only (B) but (H) as well. If (E) is indeed what justifies (H), and (H) does not receive any additional justification from any further beliefs of yours, then (H) qualifies, according to DB, as basic.

Experiential Foundationalism, then, combines two crucial ideas: (i) when a justified belief is basic, its justification is not owed to any other belief; (ii) what in fact justifies basic beliefs are experiences.

Under ordinary circumstances, perceptual beliefs such as (H) are not based on any further beliefs about one’s own perceptual experiences. It is not clear, therefore, how privilege foundationalism can account for the justification of ordinary perceptual beliefs like (H).[41] Experiential foundationalism, on the other hand, has no trouble at all explaining how ordinary perceptual beliefs are justified: they are justified by the perceptual experiences that give rise to them. This could be viewed as a reason for preferring experiential foundationalism to privilege foundationalism.

DB articulates one conception of basicality. Here’s an alternative conception:

Epistemic Basicality (EB)
S’s justified belief that p is basic if and only if S’s justification for believing that p does not depend on any justification S possesses for believing a further proposition, q.[42]

EB makes it more difficult for a belief to be basic than DB does. To see why, we turn to the chief question (let’s call it the “J-question”) that advocates of experiential foundationalism face:

The J-Question
Why are perceptual experiences a source of justification?

One way of answering the J-question is as follows: perceptual experiences are a source of justification only when, and only because, we have justification for taking them to be reliable.[43] Note that your having justification for believing that p doesn’t entail that you actually believe p. Thus, your having justification for attributing reliability to your perceptual experiences doesn’t entail that you actually believe them to be reliable.

What might give us justification for thinking that our perceptual experiences are reliable? That’s a complicated issue. For our present purposes, let’s consider the following answer: We remember that they have served us well in the past. We are supposing, then, that justification for attributing reliability to your perceptual experiences consists of memories (M) of perceptual success. On this view, a perceptual experience (E) justifies a perceptual belief only when, and only because, you have suitable track-record memories (M) that give you justification for considering (E) reliable. (Of course, this raises the question why those memories give us justification, but there are many different approaches to this question, as we’ll see more fully below.)

If this view is correct, then it is clear how DB and EB differ. Your having justification for (H) depends on your having justification for believing something else in addition to (H), namely that your visual experiences are reliable. As a result, (H) is not basic in the sense defined by EB. However, (H) might still be basic in the sense defined by DB. If you are justified in believing (H) and your justification is owed solely to (E) and (M), neither of which includes any beliefs, then your belief is doxastically—though not epistemically—basic.

We’ve considered one possible answer to the J-question, and considered how EB and DB differ if that answer is correct. But there are other possible answers to the J-question. Another answer is that perceptual experiences are a source of justification when, and because, they are of types that reliably produce true beliefs.[44] Another answer is that perceptual experiences are a source of justification when, and because, they are of types that reliably indicate the truth of their content. Yet another answer is that perceptual experiences are a source of justification when, and because, they have a certain phenomenology: that of presenting their content as true.[45]

To conclude this section, let us briefly consider how justification is supposed to be transferred from basic to nonbasic beliefs. There are two options: the justificatory relation between basic and nonbasic beliefs could be deductive or non-deductive. If we take the relation to be deductive, each of one’s nonbasic beliefs would have to be such that it can be deduced from one’s basic beliefs. But if we consider a random selection of typical beliefs we hold, it is not easy to see from which basic beliefs they could be deduced. Foundationalists, therefore, typically conceive of the link between the foundation and the superstructure in non-deductive terms. They would say that, for a given set of basic beliefs, B, to justify a nonbasic belief, B*, it isn’t necessary that B entails B*. Rather, it is sufficient that the inference from B to B* is a rational one—however such rationality is to be understood.[46]

4.2 Coherentism

Foundationalism says that knowledge and justification are structured like a building, consisting of a superstructure that rests upon a foundation. According to coherentism, this metaphor gets things wrong. Knowledge and justification are structured like a web where the strength of any given area depends on the strength of the surrounding areas. Coherentists, then, deny that there are any basic beliefs. As we saw in the previous section, there are two different ways of conceiving of basicality. Consequently, there are two corresponding ways of construing coherentism: as the denial of doxastic basicality or as the denial of epistemic basicality. Consider first coherentism as the denial of doxastic basicality:

Doxastic Coherentism
Every justified belief receives its justification from other beliefs in its epistemic neighborhood.

Let us apply this thought to the hat example we considered in Section 3.1. Suppose again you notice someone’s hat and believe

(H) That hat is blue.

Let’s agree that (H) is justified. According to coherentism, (H) receives its justification from other beliefs in the epistemic vicinity of (H). They constitute your evidence or your reasons for taking (H) to be true. Which beliefs might make up this set of justification-conferring neighborhood beliefs?

We will consider two approaches to answering this question. The first is known as inference to the best explanation. Such inferences generate what is called explanatory coherence (see chapter 7 in Harman 1986). According to this approach, we must suppose you form a belief about the way the hat appears to you in your perceptual experiences, and a second belief to the effect that your perceptual experience, the hat’s looking blue to you, is best explained by the hypothesis that (H) is true. So the relevant set of beliefs is the following:

(1) I am having a visual experience (E): the hat looks blue to me.
(2) My having (E) is best explained by the hypothesis that (H) is true.

There are of course alternative explanations of why you have (E). Perhaps you are hallucinating that the hat is blue. Perhaps an evil demon makes the hat look blue to you when in fact it is red. Perhaps you are the sort of person to whom hats always look blue. An explanatory coherentist would say that, compared with these, the hat’s actual blueness is a superior explanation. That’s why you are justified in believing (H). Note that an explanatory coherentist can also explain the lack of justification. Suppose you remember that you just took a hallucinatory drug that makes things look blue to you. That would prevent you from being justified in believing (H). The explanatory coherentist can account for this by pointing out that, in the case we are considering now, the truth of (H) would not be the best explanation of why you are having experience (E). Rather, your having taken the hallucinatory drug would explain your having (E) at least as well as the hypothesis (H) would explain it. That’s why, according to the explanatory coherentist, in this variation of our original case you are not justified in believing (H).

One challenge for explanatory coherentists is to explain what makes one explanation better than another. Let’s use the evil demon hypothesis to illustrate this challenge. What we need is an explanation of why you are having (E). According to the evil demon hypothesis, you are having (E) because the evil demon is causing you to have (E), in order to trick you. The explanatory coherentist would say that, if the bulk of our beliefs about the mind-independent world are justified, then this “evil demon” hypothesis is a bad explanation of why you are having (E). But why is it bad? What we need to answer this question is a general and principled account of what makes one explanation better than another. Suppose we appeal to the fact that you are not justified in believing in the existence of evil demons. The general idea would be this: If there are two competing explanations, E1 and E2, and E1 consists of or includes a proposition that you are not justified in believing whereas E2 does not, then E2 is better than E1. The problem with this idea is that it puts the cart before the horse. Explanatory coherentism is supposed to help us understand what it is for beliefs to be justified. It doesn’t do that if it accounts for the difference between better and worse explanations by making use of the difference between justified and unjustified belief. If explanatory coherentism were to proceed in this way, it would be a circular, and thus uninformative, account of justification. So the challenge that explanatory coherentism must meet is to give an account, without using the concept of justification, of what makes one explanation better than another.

Let us move on to the second way in which the coherentist approach might be carried out. Recall what a subject’s justification for believing p is all about: possessing a link between the belief that p and p’s truth. Suppose the subject knows that the origin of her belief that p is reliable. So she knows that beliefs coming from this source tend to be true. Such knowledge would give her an excellent link between the belief and its truth. So we might say that the neighborhood beliefs which confer justification on (H) are the following:

(1) I am having a visual experience (E): the hat looks blue to me.
(3) Visual experiences like (E) are reliable.

Call coherentism of this kind reliability coherentism. If you believe (1) and (3), you are in possession of a good reason for thinking that the hat is indeed blue. So you are in possession of a good reason for thinking that the belief in question, (H), is true. That’s why, according to reliability coherentism, you are justified in believing (H).

Like explanatory coherentism, this view faces a circularity problem. If (H) receives its justification in part because you also believe (3), (3) itself must be justified. But where would your justification for (3) come from? One answer would be: from your memory of perceptual success in the past. You remember that your visual experiences have had a good track record. They have rarely led you astray. The problem is that you can’t justifiably attribute a good track record to your perceptual faculties without using your perceptual faculties. So if reliability coherentism is going to work, it would have to be legitimate to use a faculty for the very purpose of establishing the reliability of that faculty itself. But it is not clear that this is legitimate.[47]

We have seen that explanatory coherentism and reliability coherentism each face its own distinctive circularity problem. Since both are versions of doxastic coherentism, they both face a further difficulty: Do people, under normal circumstances, really form beliefs like (1), (2), and (3)? It would seem they do not. It could be objected, therefore, that these two versions of coherentism make excessive intellectual demands of ordinary subjects who are unlikely to have the background beliefs that, according to these versions of coherentism, are needed for justification. This objection could be avoided by stripping coherentism of its doxastic element. The result would be the following version of coherentism, which results from rejecting EB (the epistemic conception of basicality):

Dependence Coherentism
Whenever one is justified in believing a proposition p1, one’s justification for believing p1 depends on justification one has for believing some further propositions, p2, p3, …, pn.

An explanatory coherentist might say that, for you to be justified in believing (H), it’s not necessary that you actually believe (1) and (2). However, it is necessary that you have justification for believing (1) and (2). It is your having justification for (1) and (2) that gives you justification for believing (H). A reliability coherentist might make an analogous point. She might say that, to be justified in believing (H), you need not believe anything about the reliability of your belief’s origin. You must, however, have justification for believing that your belief’s origin is reliable; that is, you must have justification for (1) and (3). Both versions of dependence coherentism, then, rest on the supposition that it is possible to have justification for a proposition without actually believing that proposition.

Dependence coherentism is a significant departure from the way coherentism has typically been construed by its advocates. According to the typical construal of coherentism, a belief is justified only if the subject has certain further beliefs that constitute reasons for the given belief. Dependence coherentism rejects this. According to it, justification need not come in the form of beliefs. It can come in the form of introspective and memorial experience, so long as such experience gives a subject justification for beliefs about either reliability or explanatory coherence. In fact, dependence coherentism allows for the possibility that a belief is justified, not by receiving any of its justification from other beliefs, but solely by suitable perceptual experiences and memory experience.[48]

Next, let us examine some of the reasons provided in the debate over foundationalism and coherentism.

4.3 Why Foundationalism?

The main argument for foundationalism is called the regress argument. It’s an argument from elimination. With regard to every justified belief, B1, the question arises of where B1’s justification comes from. If B1 is not basic, it would have to come from another belief, B2. But B2 can justify B1 only if B2 is justified itself. If B2 is basic, the justificatory chain would end with B2. But if B2 is not basic, we need a further belief, B3. If B3 is not basic, we need a fourth belief, and so forth. Unless the ensuing regress terminates in a basic belief, we get two possibilities: the regress will either loop back to B1 or continue ad infinitum. According to the regress argument, both of these possibilities are unacceptable. Therefore, if there are justified beliefs, there must be basic beliefs.[49]
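
The structure of this elimination can be displayed in outline. The schema below is merely a compressed restatement of the reasoning just given, with J(B) for “B is justified”:

\[
J(B_1)\ \Longrightarrow\ \textit{Basic}(B_1)\ \vee\ \exists B_2\,\big[\,J(B_2)\ \wedge\ B_2\ \text{justifies}\ B_1\,\big]
\]

Reapplying the schema to \(B_2\), then to \(B_3\), and so on, generates a chain \(B_1 \leftarrow B_2 \leftarrow B_3 \leftarrow \dots\) that must terminate in a basic belief, loop back on itself, or continue without end; the regress argument rules out the last two options and concludes that justified belief requires basic belief.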

This argument suffers from various weaknesses. First, we may wonder whether the alternatives to foundationalism are really unacceptable. In the recent literature on this subject, we actually find an elaborate defense of the position that infinitism is the correct solution to the regress problem.[50] Nor should circularity be dismissed too quickly. The issue is not whether a simple argument of the form “p, therefore p” can justify the belief that p. Of course it cannot. Rather, the issue is ultimately whether, in the attempt to show that trust in our faculties is reasonable, we may make use of the input our faculties deliver. Whether such circularity is as unacceptable as a p-therefore-p inference is an open question. Moreover, the avoidance of circularity does not come cheap. Experiential foundationalists claim that perception is a source of justification. Hence they need to answer the J-question: Why is perception a source of justification? As we saw above, if we wish to answer this question without committing ourselves to the kind of circularity dependence coherentism involves, we must choose between externalism and an appeal to brute necessity.

The second weakness of the regress argument is that its conclusion merely says this: If there are justified beliefs, there must be justified beliefs that do not receive their justification from other beliefs. Its conclusion does not say that, if there are justified beliefs, there must be beliefs whose justification is independent of any justification for further beliefs. So the regress argument, if it were sound, would merely show that there must be doxastic basicality. Dependence coherentism, however, allows for doxastic basicality. So the regress argument merely defends experiential foundationalism against doxastic coherentism. It does not tell us why we should prefer experiential foundationalism to dependence coherentism.

Experiential foundationalism can be supported by citing cases like the blue hat example. Such examples make it plausible to assume that perceptual experiences are a source of justification. But they do not arbitrate between dependence coherentism and experiential foundationalism, since both of those views appeal to perceptual experiences to explain why perceptual beliefs are justified.

Finally, foundationalism can be supported by advancing objections to coherentism. One prominent objection is that coherentism somehow fails to ensure that a justified belief system is in contact with reality. This objection derives its force from the fact that fiction can be perfectly coherent. Why think, therefore, that a belief system’s coherence is a reason for thinking that the beliefs in that system tend to be true? Coherentists could respond to this objection by saying that, if a belief system contains beliefs such as “Many of my beliefs have their origin in perceptual experiences” and “My perceptual experiences are reliable”, it is reasonable for the subject to think that her belief system brings her into contact with external reality. This looks like an effective response to the no-contact-with-reality objection. Moreover, it is not easy to see why foundationalism itself should be better positioned than coherentism when contact with reality is the issue. What is meant by “ensuring” contact with reality? If foundationalists expect a logical guarantee of such contact, basic beliefs must be infallible. That would make contact with reality a rather expensive commodity. Given its price, foundationalists might want to lower their expectations. According to an alternative construal, we expect merely the likelihood of contact with reality. But if coherentists account for the epistemic value of perception in any way, then they can meet that expectation as well as foundationalists can.

Since coherentism can be construed in different ways, it is unlikely that there is one single objection that succeeds in refuting all possible versions of coherentism. Doxastic coherentism, however, seems particularly vulnerable to criticisms coming from the foundationalist camp. One of these we considered already: It would seem that doxastic coherentism makes excessive intellectual demands on believers. When dealing with the mundane tasks of everyday life, we don’t normally bother to form beliefs about the explanatory coherence of our beliefs or the reliability of our belief sources. According to a second objection, doxastic coherentism fails by being insensitive to the epistemic relevance of perceptual experiences. Foundationalists could argue as follows. Suppose Kim is observing a chameleon that rapidly changes its colors. A moment ago it was blue, now it’s purple. Kim still believes it’s blue. Her belief is now unjustified because she believes the chameleon is blue even though it looks purple to her. Then the chameleon changes its color back to blue. Now Kim’s belief that the chameleon is blue is justified again because the chameleon once again looks blue to her. The point would be that what’s responsible for the changing justificatory status of Kim’s belief is solely the way the chameleon looks to her. Since doxastic coherentism does not attribute epistemic relevance to perceptual experiences by themselves, it cannot explain why Kim’s belief is first justified, then unjustified, and eventually justified again.[51]

4.4 Why Coherentism?

Coherentism is typically defended by attacking foundationalism as a viable alternative. To argue against privilege foundationalism, coherentists pick an epistemic privilege they think is essential to foundationalism, and then argue that either no beliefs, or too few beliefs, enjoy such a privilege. Against experiential foundationalism, different objections have been advanced. One line of criticism is that perceptual experiences don’t have propositional content. Therefore, the relation between a perceptual belief and the perceptual experience that gives rise to it can only be causal. But it is not clear that this is correct. When you see the hat and it looks blue to you, doesn’t your visual experience—its looking blue to you—have the propositional content that the hat is blue? If it does, then why not allow that your perceptual experience can play a justificatory role?[52]

Another line of thought is that, if perceptual experiences have propositional content, they cannot stop the justificatory regress because they would then be in need of justification themselves. That, however, is a strange thought. In our actual epistemic practice, we never demand that others justify the way things appear to them in their perceptual experiences. Indeed, such a demand would seem absurd. Suppose I ask you: “Why do you think that the hat is blue?” You answer: “Because it looks blue to me”. There are sensible further questions I might ask at that point. For instance, I might ask: “Why do you think its looking blue to you gives you a reason for believing it is blue?” Or I might ask: “Couldn’t you be mistaken in believing it looks blue to you?” But now suppose I ask you: “Why do you suppose the perceptual experience in which the hat looks blue to you is justified?” In response to that question, you should accuse me of misusing the word “justification”. I might as well ask you what it is that justifies your headache when you have one, or what justifies the itch in your nose when you have one. The latter questions, you should reply, would be as absurd as my request that you give a justifying reason for your perceptual experience.[53]

Experiential foundationalism, then, is not easily dislodged. On what grounds could coherentists object to it? To raise problems for experiential foundationalism, coherentists could press the J-question: Why are perceptual experiences a source of justification? If foundationalists answer the J-question by appealing to evidence that warrants the attribution of reliability to perceptual experiences, experiential foundationalism morphs into dependence coherentism. To avoid this outcome, foundationalists would have to give an alternative answer. One way of doing this would be to adopt the epistemic conception of basicality, and view it as a matter of brute necessity that perception is a source of justification. It remains to be seen whether such a view is sustainable.

5. Sources of Knowledge and Justification

Beliefs arise in people for a wide variety of causes. Among them, we must list psychological factors such as desires, emotional needs, prejudice, and biases of various kinds. Obviously, when beliefs originate in sources like these, they don’t qualify as knowledge even if true. For true beliefs to count as knowledge, it is necessary that they originate in sources we have good reason to consider reliable. These are perception, introspection, memory, reason, and testimony. Let us briefly consider each of these.

5.1 Perception

Our perceptual faculties include at least our five senses: sight, touch, hearing, smell, and taste. We must distinguish between an experience that can be classified as perceiving that p (for example, seeing that there is coffee in the cup and tasting that it is sweet), which entails that p is true, and a perceptual experience in which it seems to us as though p, but where p might be false. Let us refer to this latter kind of experience as perceptual seemings. The reason for making this distinction lies in the fact that perceptual experience is fallible. The world is not always as it appears to us in our perceptual experiences. We need, therefore, a way of referring to perceptual experiences in which p seems to be the case that allows for the possibility of p being false. That’s the role assigned to perceptual seemings. So some perceptual seemings that p are cases of perceiving that p; others are not. When it looks to you as though there is a cup of coffee on the table and in fact there is, the two states coincide. If, however, you hallucinate that there is a cup on the table, you have a perceptual seeming that p without perceiving that p.

One family of epistemological issues about perception arises when we concern ourselves with the psychological nature of the perceptual processes through which we acquire knowledge of external objects. According to direct realism, we can acquire such knowledge because we can directly perceive such objects. For example, when you see a tomato on the table, what you perceive is the tomato itself. According to indirect realism, we acquire knowledge of external objects by virtue of perceiving something else, namely appearances or sense-data. An indirect realist would say that, when you see and thus know that there is a tomato on the table, what you really see is not the tomato itself but a tomato-like sense-datum or some such entity.

Direct and indirect realists hold different views about the structure of perceptual knowledge. Indirect realists would say that we acquire perceptual knowledge of external objects by virtue of perceiving sense data that represent external objects. Sense data enjoy a special status: we know directly what they are like. So indirect realists think that, when perceptual knowledge is foundational, it is knowledge of sense data and other mental states. Knowledge of external objects is indirect: derived from our knowledge of sense data. The basic idea is that we have indirect knowledge of the external world because we can have foundational knowledge of our own mind. Direct realists, in contrast, say that perceptual experiences can give you direct, foundational knowledge of external objects.[54]

We take our perceptual faculties to be reliable. But how can we know that they are reliable? For externalists, this might not be much of a challenge. If the use of reliable faculties is sufficient for knowledge, and if by using reliable faculties we acquire the belief that our faculties are reliable, then we come to know that our faculties are reliable. But even externalists might wonder how they can, via argument, show that our perceptual faculties are reliable. The problem is this. It would seem the only way of acquiring knowledge about the reliability of our perceptual faculties is through memory, by remembering whether they served us well in the past. But may I trust my memory, and may I assume that the episodes of perceptual success I seem to recall really were episodes of perceptual success? To be entitled to answer these questions with “yes”, I would already need reason to view my memory and my perceptual experiences as reliable. It would seem, therefore, that there is no non-circular way of arguing for the reliability of one’s perceptual faculties.[55]

5.2 Introspection

Introspection is the capacity to inspect the present contents of one’s own mind. Through introspection, one knows what mental states one is currently in: whether one is thirsty, tired, excited, or depressed. Compared with perception, introspection appears to have a special status. It is easy to see how a perceptual seeming can go wrong: what looks like a cup of coffee on the table might just be a clever hologram that’s visually indistinguishable from an actual cup of coffee. But can it introspectively seem to me that I have a headache when in fact I do not? It is not easy to see how this could be. Thus introspection is widely thought to enjoy a special kind of immunity to error. But what does this amount to?

First, it could be argued that, when it comes to introspection, there is no difference between appearance and reality; therefore, introspective seemings infallibly constitute their own success. Alternatively, one could view introspection as a source of certainty. Here the idea is that an introspective experience of p eliminates any possible reason for doubt as to whether p is true. Finally, one could attempt to explain the specialness of introspection by examining the way we respond to first-person reports: typically, we attribute a special authority to such reports. According to this approach, introspection is incorrigible: its deliverances cannot be corrected by any other source.

However we construe the special kind of immunity to error that introspection enjoys, such immunity is not enjoyed by perception. Some foundationalists have therefore thought that the foundations of our empirical knowledge can be furnished by introspection of our own perceptual experiences, rather than perception of mind-independent things around us.

Is it really true, however, that, compared with perception, introspection is in some way special? Critics of foundationalism have argued that introspection is not infallible. Might one not mistake an unpleasant itch for a pain? Might I not think that the shape before me appears circular to me when in fact it appears slightly elliptical to me? If it is indeed possible for introspection to mislead, then it is not clear in what sense introspection can constitute its own success, provide certainty, or be incorrigible. Yet it isn’t easy to see how, if one clearly and distinctly feels a throbbing headache, one could be mistaken about that. Introspection, then, turns out to be a mysterious faculty. On the one hand, it does not seem to be an infallible faculty; on the other hand, it is not easy to see how error is possible in many specific cases of introspection.[56]

The definition of introspection as the capacity to know the present contents of one’s own mind leaves open the question of how much the different exercises of this capacity may differ from one another. According to some epistemologists, when we exercise this capacity with respect to our sensations, we are doing something very different from what we do when we exercise this capacity with respect to our own conscious beliefs, intentions, or other rationally evaluable states of mind: our exercises of this capacity with respect to our own conscious, rationally evaluable states of mind are, they claim, partly constitutive of our being in those very states. In support of this claim, they point out that we sometimes address questions of the form “do you believe that p?” by considering whether it is true that p, reporting our belief concerning p not by inspecting our mind but rather by making up our mind (see Moran 2001 and Boyle 2009 for defenses of this view; see Gertler 2011 for objections to the view).

5.3 Memory

Memory is the capacity to retain knowledge acquired in the past. What one remembers, though, need not be a past event. It may be a present fact, such as one’s telephone number, or a future event, such as the date of the next elections. Memory is, of course, fallible. Not every experience as of remembering that p is an instance of correctly remembering that p. We should distinguish, therefore, between remembering that p (which entails the truth of p) and seeming to remember that p (which does not entail the truth of p).

What makes memorial seemings a source of justification? Is it a necessary truth that, if one has a memorial seeming that p, one has thereby prima facie justification for p? Or is memory a source of justification only if, as coherentists might say, one has reason to think that one’s memory is reliable? Or is memory a source of justification only if, as externalists would say, it is in fact reliable? Also, how can we respond to skepticism about knowledge of the past? Memorial seemings of the past do not guarantee that the past is what we take it to be. We think that we are older than five minutes, but it is logically possible that the world sprang into existence just five minutes ago, complete with our dispositions to have memorial seemings of a more distant past and items such as apparent fossils that suggest a past going back millions of years. Our seeming to remember that the world is older than a mere five minutes does not entail, therefore, that it really is. Why, then, should we think that memory is a source of knowledge about the past?[57]

5.4 Reason

Some beliefs are (thought to be) justified independently of experience. Justification of that kind is said to be a priori. A standard way of defining a priori justification is as follows:

A Priori Justification
S is justified a priori in believing that p if and only if S’s justification for believing that p does not depend on any experience.

Beliefs justified in this way, when they also amount to knowledge, are instances of a priori knowledge.[58]

What exactly counts as experience? If by “experience” we mean just perceptual experiences, justification deriving from introspective or memorial experiences would count as a priori. For example, I could then know a priori that I’m thirsty, or what I ate for breakfast this morning. While the term “a priori” is sometimes used in this way, the strict use of the term restricts a priori justification to justification derived solely from the use of reason. According to this usage, the word “experiences” in the definition above includes perceptual, introspective, and memorial experiences alike. On this narrower understanding, paradigm examples of what I can know a priori are conceptual truths (such as “All bachelors are unmarried”) and truths of mathematics, geometry, and logic.

Justification and knowledge that are not a priori are called “a posteriori” or “empirical”. For example, in the narrow sense of “a priori”, whether I’m thirsty or not is something I know empirically (on the basis of introspective experiences), whereas I know a priori that 12 divided by 3 is 4.

Several important issues arise about a priori knowledge. First, does it exist at all? Skeptics about apriority deny its existence. They don’t mean to say that we have no knowledge of mathematics, geometry, logic, and conceptual truths. Rather, what they claim is that all such knowledge is empirical.[59]

Second, if a priori justification is possible, exactly what does it involve? What makes a belief such as “All bachelors are unmarried” justified? Is it an unmediated grasp of the truth of this proposition? Or does it consist of grasping that the proposition is necessarily true? Or is it the purely intellectual state of “seeing” (with the “eye of reason”) or “intuiting” that this proposition is true (or necessarily true)? (see Bengson 2015 and Chudnoff 2013 for sophisticated defenses of this view). Or is it, as externalists would suggest, the reliability of the cognitive process by which we come to recognize the truth of such a proposition?

Third, if a priori knowledge exists, what is its extent? Empiricists have argued that a priori knowledge is limited to the realm of the analytic: propositions that are true solely by virtue of our concepts and so convey no information about the world. Propositions that convey genuine information about the world are called synthetic. A priori knowledge of synthetic propositions, empiricists would say, is not possible. Rationalists deny this. They might appeal to a proposition such as “If a ball is green all over, then it doesn’t have black spots” as an example of a proposition that is both synthetic and yet knowable a priori (see Ichikawa and Jarvis 2009 and Malmgren 2011 for a discussion of the content of such a priori justified judgments; for literature on a priori knowledge, see BonJour 1998; BonJour in BonJour & Devitt 2005 [2013]; Boghossian and Peacocke 2000; Casullo 2003; Jenkins 2008, 2014; and Devitt 2014).

5.5 Testimony

Testimony differs from the sources we considered above because it isn’t distinguished by having its own cognitive faculty. Rather, to acquire knowledge of p through testimony is to come to know that p on the basis of someone’s saying that p. “Saying that p” must be understood broadly, as including ordinary utterances in daily life, postings by bloggers on their blogs, articles by journalists, delivery of information on television, radio, tapes, books, and other media. So, when you ask the person next to you what time it is, and she tells you, and you thereby come to know what time it is, that’s an example of coming to know something on the basis of testimony. And when you learn by reading the Washington Post that the terrorist attack in Sharm el-Sheikh of 22 July 2005 killed at least 88 people, that, too, is an example of acquiring knowledge on the basis of testimony.

The epistemological puzzle testimony raises is this: Why is testimony a source of knowledge? An externalist might say that testimony is a source of knowledge if, and because, it comes from a reliable source. But here, even more so than in the case of our faculties, internalists will not find that answer satisfactory. Suppose you hear someone saying “p”. Suppose further that person is in fact utterly reliable with regard to the question of whether p is the case or not. Finally, suppose you have no clue whatever as to that person’s reliability. Wouldn’t it be plausible to conclude that, since that person’s reliability is unknown to you, that person’s saying “p” does not put you in a position to know that p? But if the reliability of a testimonial source is not sufficient for making it a source of knowledge, what else is needed? Thomas Reid suggested that, by our very nature, we accept testimonial sources as reliable and tend to attribute credibility to them unless we encounter special contrary reasons. But that’s merely a statement of the attitude we in fact take toward testimony. What is it that makes that attitude reasonable? It could be argued that, in one’s own personal experiences with testimonial sources, one has accumulated a long track record that can be taken as a sign of reliability. However, when one considers the sheer breadth of the knowledge we derive from testimony, one may wonder whether one’s personal experiences constitute an evidence base rich enough to justify the attribution of reliability to the totality of the testimonial sources one tends to trust (see E. Fricker 1994 and M. Fricker 2007 for more on this issue). An alternative to the track record approach would be to declare it a necessary truth that trust in testimonial sources is at least prima facie justified. While this view has been prominently defended, it requires an explanation of what makes such trust necessarily prima facie justified. Such explanations have proven to be controversial.[60]

6. The Limits of Cognitive Success

6.1 General Skepticism and Selective Skepticism

Much of modern epistemology aims to address one or another kind of skepticism. Skepticism is a challenge to our pre-philosophical conception of ourselves as cognitively successful beings. Such challenges come in many varieties. One way in which these varieties differ concerns the different kinds of cognitive success that they target: skepticism can challenge our claims to know, or our claims to believe justifiably, or our claims to have justification for believing, or our claims to have any good reasons for belief whatsoever. But another way in which these varieties differ is in whether the skepticism in question is fully general—targeting the possibility of enjoying any instance of the relevant cognitive success—or is selective—targeting the possibility of enjoying the relevant cognitive success concerning a particular subject matter (e.g., the past, the minds of others, the world beyond our own consciousness) or concerning beliefs formed by a particular method (e.g., perception, memory, reasoning, etc.). General skepticism and selective skepticism pose very different sorts of challenges, and use very different kinds of arguments. General skepticism is motivated by reasoning from some apparently conflicting features of the kind of cognitive success in question. For instance, a general skeptic might claim that justification requires a regress of justifiers, but then argue that this regress of justifiers cannot be contained in any finite mind—and thus, the skeptic might conclude, no finite being can be justified in believing anything. Alternatively, a general skeptic might claim that knowledge requires certainty, and that nobody can be certain of something unless there is nothing of which she could be even more certain—thus, the skeptic might conclude, we can know virtually nothing (see Unger 1975).

Selective skepticism, in contrast, is typically motivated by appeal to one or another skeptical hypothesis. A skeptical hypothesis is a hypothesis according to which the facts that you claim to know (whether these facts concern the past, or the minds of others, or the mind-independent world, or what have you) may, for all you can tell, be radically different from how they appear to you to be. Thus, a skeptical hypothesis is a hypothesis that distinguishes between the way things appear to you, on the one hand, and the way they really are, on the other; and this distinction is deployed in such a way as to pose a challenge to your cognitive success concerning the latter. Here are some famous examples of skeptical hypotheses:

The Brain-in-a-Vat (BIV) Hypothesis: You are a disembodied brain floating in a vat, wired to a supercomputer that feeds you perceptual experiences exactly like those of a normally embodied person.
The Evil Demon Hypothesis: A powerful demon systematically deceives you, so that the world is radically different from the way your experiences represent it as being.

Skeptics can make use of such hypotheses in constructing various arguments that challenge our pre-philosophical picture of ourselves as cognitively successful. Consider, for instance, the BIV hypothesis, and some ways in which this hypothesis can be employed in a skeptical argument.

Here is one way of doing so. According to the BIV hypothesis, the experiences you would have as a BIV and the experiences you have as a normal person are perfectly alike, indistinguishable, so to speak, “from the inside”. Thus, although it appears to you as if you are a normally embodied human being, everything would appear exactly the same way to a BIV. Thus, the way things appear to you cannot provide you with knowledge that you are not a BIV. But if the way things appear to you cannot provide you with such knowledge, then nothing can give you such knowledge, and so you cannot know that you’re not a BIV. Of course, you already know this much: if you are a BIV, then you don’t have any hands. If you don’t know that you’re not a BIV, then you don’t know that you’re not in a situation in which you don’t have any hands. But if you don’t know that you’re not in a situation in which you don’t have any hands, then you don’t know that you’re not handless. And to not know that you’re not handless is simply to not know that you have hands. We can summarize this skeptical argument as follows:

The BIV-Knowledge Closure Argument (BKCA)

(C1) I don’t know that I’m not a BIV.
(C2) If I don’t know that I’m not a BIV, then I don’t know that I have hands.

Therefore:

(C3) I don’t know that I have hands.

As we have just seen, (C1) and (C2) are very plausible premises. It would seem, therefore, that BKCA is sound. If it is, we must conclude we don’t know we have hands. But surely that conclusion can’t be right: if it turns out that I don’t know that I have hands, that must be because of something very peculiar about my cognitive relation to the issue of whether I have hands—not because of the completely anodyne considerations mentioned in BKCA. So we are confronted with a difficult challenge: The conclusion of the BKCA seems plainly false, but on what grounds can we reject it?[61]
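
It is worth making the logical skeleton of BKCA explicit, since its force depends on its being formally valid. Writing Kp for “I know that p” (a gloss of ours, not an addition to the argument), BKCA is a simple modus ponens:

\[
\begin{array}{ll}
\text{(C1)} & \neg K(\neg \textit{BIV})\\[2pt]
\text{(C2)} & \neg K(\neg \textit{BIV}) \rightarrow \neg K(\textit{hands})\\[2pt]
\hline
\text{(C3)} & \neg K(\textit{hands})
\end{array}
\]

Given that I know that having hands entails not being a (handless) BIV, (C2) is in effect the contrapositive of an instance of the closure principle \(\big(Kp \wedge K(p \rightarrow q)\big) \rightarrow Kq\), with \(p\) = “I have hands” and \(q\) = “I am not a BIV”; this is why rejecting (C2), as some of the responses below do, amounts to restricting closure.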

Here are some other ways of using the BIV hypothesis to generate a skeptical argument.

The BIV-Justification Underdetermination Argument(BJUA)

Therefore:

The BIV-Knowledge Defeasibility Argument (BKDA)

Therefore:

Therefore:

The BIV-Epistemic Possibility Argument (BEPA)

Therefore:

Obviously, this list of skeptical arguments could be extended by varying either (a) the skeptical hypothesis employed, or (b) the kind of cognitive success being challenged, or (c) the epistemological principles that link the hypothesis in (a) and the challenge in (b). Some of the resulting skeptical arguments are more plausible than others, and some are historically more prominent than others, but there isn’t space for a comprehensive survey. Here, we will review some of the more influential replies to BKCA, BJUA, BKDA, and BEPA.

6.2 Responses to the Closure Argument

Next, we will examine various responses to BKCA. According to the first, we can see that (C2) is false if we distinguish between relevant and irrelevant alternatives. An alternative to a proposition p is any proposition that is incompatible with p. Your having hands and your being a BIV are alternatives: if the former is true, the latter is false, and vice versa. According to the thought that motivates the second premise of the BIV argument, you know that you have hands only if you can discriminate between your actually having hands and the alternative of being a (handless) BIV. But, by hypothesis, you can’t discriminate between these. That’s why you don’t know that you have hands. In response to such reasoning, a relevant alternatives theorist would say that your inability to discriminate between these two is not an obstacle to your knowing that you have hands, and that’s because your being a BIV is not a relevant alternative to your having hands. What would be a relevant alternative? This, for example: your arms ending in stumps rather than hands, or your having hooks instead of hands, or your having prosthetic hands. But these alternatives don’t prevent you from knowing that you have hands—not because they are irrelevant, but rather because you can discriminate between these alternatives and your having hands. The relevant alternatives theorist holds, therefore, that you do know that you have hands: you know it because you can discriminate your having hands from relevant alternatives, like your having stumps rather than hands.

Thus, according to relevant alternatives theorists, you know that you have hands even though you don’t know that you are not a BIV. There are two chief problems for this approach. The first is that dismissing the BIV alternative as irrelevant is ad hoc unless it is supplemented with a principled account of what makes one alternative relevant and another irrelevant. The second is that (C2) is highly plausible. To deny it is to allow that the following conjunction can be true:

Abominable Conjunction
I know that I have hands but I do not know that I am not a (handless) BIV.

Many epistemologists would agree that this conjunction is indeed abominable because it blatantly violates the basic and extremely plausible intuition that you can’t know you have hands without knowing that you are not a BIV.[62]

Next, let us consider a response to BKCA according to which it’s not the second but the first premise that must be rejected. G. E. Moore has pointed out that an argument succeeds only to the extent that its premises are more plausible than the conclusion. So if we encounter an argument whose conclusion we find much more implausible than the denial of the premises, then we can turn the argument on its head. According to this approach, we can respond to the BIV argument as follows:

Counter BIV

(1) I know that I have hands.
(2) If I don’t know that I’m not a BIV, then I don’t know that I have hands.

Therefore:

(3) I know that I’m not a BIV.

Unless we are skeptics or opponents of closure, we would have to concede that this argument is sound. It is valid, and its premises are true. Yet few philosophers would agree that Counter BIV amounts to a satisfying response to the BIV argument. It fails to explain how one can know that one is not a BIV. The observation that the premises of the BIV argument are less plausible than the denial of its conclusion doesn’t help us understand how such knowledge is possible. That’s why the Moorean response, unsupplemented with an account of how one can know that one is not a BIV, is widely thought to be an unsuccessful rebuttal of BKCA.[63]

We have looked at two responses to BKCA. The relevant alternatives response implausibly denies the second premise. The Moorean response denies the first premise without explaining how we could possibly have the knowledge that the first premise claims we don’t have. Another prominent response, contextualism, avoids both of these objections. According to the contextualist, the precise contribution that the verb “to know” makes to the truth-conditions of the sentences in which it occurs varies from one context to another: in contexts in which the BIV hypothesis is under discussion, an agent counts as “knowing” a fact only if she can meet some extremely high (typically unachievable) epistemic standard, and this is why (C1) is true. But in contexts in which the BIV hypothesis is not under discussion, an agent can count as “knowing” a fact even if her epistemic position vis-à-vis that fact is much more modest, and this is why (C3), taken in isolation, appears false.
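
The contextualist diagnosis can be made vivid with a deliberately simplified two-context model (a toy illustration of ours, not any particular contextualist’s semantics), indexing the knowledge operator to the standard in play: \(K_{hi}\) for the demanding standard of BIV-discussing contexts, \(K_{lo}\) for the ordinary standard.

\[
\text{High-standards context:}\quad \neg K_{hi}(\neg \textit{BIV})\ \ \text{and}\ \ \neg K_{hi}(\textit{hands})
\]
\[
\text{Ordinary context:}\quad K_{lo}(\textit{hands})\ \ \text{and}\ \ K_{lo}(\neg \textit{BIV})
\]

Relative to any single standard, BKCA never has true premises and a false conclusion; on this diagnosis, the appearance that it forces us to abandon our ordinary knowledge claims arises from sliding between \(K_{hi}\) and \(K_{lo}\) in mid-argument.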

The contextualist literature has grown vastly over the past two decades: different contextualists have different accounts of how features of context affect the meaning of some occurrence of the verb “to know”, and each proposal has encountered specific challenges concerning the semantic mechanisms that it posits, and the extent to which it explains the whole range of facts about which epistemic claims are plausible under which conditions.[64]

6.3 Responses to the Underdetermination Argument

Both the contextualist and the Moorean responses to BKCA, as discussed in the previous section, leave out one important detail. Both say that one can know that one isn’t a BIV (though contextualists grant this point only for the sense of “know” operational in low-standards contexts), but neither view explains how one can know such a thing. If, by hypothesis, a BIV has all the same states of mind that I have—including all the same perceptual experiences—then how can I be justified in believing that I’m not a BIV? And if I can’t be justified in believing that I’m not a BIV, then how can I know that I’m not?

Of course, the question about how I can be justified in believing that I’m not a BIV is not especially hard for externalists to answer. From the point of view of an externalist, the fact that you and the BIV have the very same states of mind need not be at all relevant to the issue of whether you’re justified in believing that you’re not a BIV, since such justification isn’t fully determined by those mental states anyway.

The philosophers who have had to do considerable work to answer the question how I can be justified in believing that I’m not a BIV have typically done this work not directly in reply to BKCA, but rather in reply to BJUA.

What might justify your belief that you’re not a BIV? According to some philosophers, you are justified in believing that you’re not a BIV because, for instance, you know perfectly well that current technology doesn’t enable anyone to create a BIV. The proponent of the BIV hypothesis might regard this answer as no better than the Moorean response to BKCA: if you are allowed to appeal to (what you regard as your) knowledge of current technology to justify your belief that you’re not a BIV, then why can’t the Moorean equally well rely on his knowledge that he has hands to justify his belief that he’s not a BIV? Philosophers who accept this objection, but who don’t want to ground your justification for believing that you’re not a BIV in purely externalistic factors, may instead claim that your belief is justified by the fact that your own beliefs about the external world provide a better explanation of your sense experiences than does the BIV hypothesis (see Russell 1912 and Vogel 1990 for influential defenses of this argument against skepticism, and see Neta 2004 for a rebuttal).

6.4 Responses to the Defeasibility Argument

The most influential reply to BKDA is to say that, when I acquire evidence that I don’t have hands, such evidence makes me cease to know that I have hands. On this view, when I acquire such evidence, the argument above is sound. But prior to my acquiring such evidence, (4) is false, and so the argument above is not sound. Thus, the truth of (4), and consequently the soundness of this argument, depends on whether or not I have evidence that I don’t have hands. If I do have such evidence, then the argument is sound, but of course it has no general skeptical implications: all it shows is that I can’t know some fact whenever I have evidence that the fact doesn’t obtain (versions of this view are defended by Harman 1973 and Ginet 1980).

Plausible as this reply has seemed to most philosophers, it has been effectively challenged by Lasonen-Aarnio (2014b). Her argument is this: presumably, it’s possible to have more than enough evidence to know some fact. But if it’s possible to have more than enough evidence to know some fact, it follows that one might still know that fact even if one acquires some slight evidence against it. And yet, it would be wrong to leave one’s confidence entirely unaffected by the slight evidence that one acquires against that fact: though the evidence might be too slight to destroy one’s knowledge, it cannot be too slight to diminish one’s confidence even slightly. So long as one could continue to know a fact while rationally diminishing one’s confidence in it in response to new evidence, the most popular reply to the defeasibility argument fails.
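
Lasonen-Aarnio’s point can be illustrated with a toy threshold model; the threshold and the numbers are purely illustrative and form no part of her argument. Suppose, for the sake of the example, that knowing a fact requires one’s evidence to support it to degree at least 0.95, and that before any defeater arrives one’s evidence E supports the fact to degree 0.99:

\[
\Pr(\textit{hands} \mid E) = 0.99
\qquad\longrightarrow\qquad
\Pr(\textit{hands} \mid E \wedge d) = 0.97 \ \geq\ 0.95
\]

Here \(d\) is a slight piece of counterevidence. Rationality requires the drop in confidence from 0.99 to 0.97, yet the post-defeat support still clears the hypothesized threshold; on this model one both rationally diminishes one’s confidence and continues to know, which is precisely the combination that, according to Lasonen-Aarnio, undermines the popular reply to the defeasibility argument.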

Other replies to the defeasibility argument include the denial of premise (2),[65] the denial of (4) (McDowell 1982, Kern 2006 [2017]), and the claim that the context-sensitivity of “knows” means that (4) is true only relative to contexts in which the possibility of future defeaters is relevant (see Neta 2002). But none of these replies has yet received widespread assent.

6.5 Responses to the Epistemic Possibility Argument

The most common reply to BEPA is either to deny premise (1), or to deny that we are justified in believing that premise (1) is true. Most writers would deny premise (1), and would do so on whatever grounds they have for thinking that I can know that I’m not a BIV: knowing that something is not the case excludes that thing’s being epistemically possible for you.[66]

But a couple of influential writers—most notably Rogers Albritton and Thompson Clarke (see Albritton 2011 and Clarke 1972)—do not claim that premise (1) is false. Rather, they deny that we are justified in believing that premise (1) is true. According to these writers, what normally justifies us in believing that something or other is epistemically possible is that we can conceive of discovering that it is true. For instance, what justifies me in believing that it’s possible that Donald Trump has resigned is that I can clearly conceive of discovering that Donald Trump has resigned. But if I attempt to conceive of discovering that I’m a BIV, it’s not clear that I can succeed in this attempt. I may conceive of coming upon some evidence that I’m a BIV—but, insofar as this evidence tells in favor of the hypothesis that I’m a BIV, doesn’t it also undermine its own credibility? In such a case, is there anything at all that would count as “my evidence”? (see Neta 2019 for an elaboration of this point). Unless I can answer this question in the affirmative, it’s not clear that I can conceive of anything that would amount to discovering that I’m a BIV. Of course, from the fact that I cannot conceive of anything that would amount to discovering that I’m a BIV, it doesn’t follow that I’m not a BIV—and so it doesn’t even follow that it’s not possible that I’m a BIV. But, whether or not it is possible that I’m a BIV, I can’t be justified in thinking that it is. And that’s to say that I can’t be justified in accepting premise (1) of BEPA.