Abstract and Keywords
What is the significance of consciousness? This book argues that consciousness has unique epistemic significance: all epistemic justification ultimately depends on consciousness. Section 1.1 clarifies the concept of consciousness by invoking Ned Block’s distinction between phenomenal consciousness and access consciousness. Section 1.2 raises a challenge to the significance of consciousness by arguing that unconscious creatures—zombies—can conceivably do everything conscious creatures can do. Section 1.3 situates this challenge in the context of David Chalmers’s distinction between the hard problem of explaining phenomenal consciousness and the easy problem of explaining its associated psychological functions. Section 1.4 explores the research program of putting phenomenal consciousness first: that is, explaining psychological functions in terms of phenomenal consciousness. Section 1.5 outlines the program developed in this book, which explains epistemic justification in terms of phenomenal consciousness. Section 1.6 concludes with chapter summaries and some guidelines for reading the book.
Consciousness is a puzzling phenomenon. In fact, it is puzzling in at least two different ways. First, there are puzzles about the nature of consciousness. And second, there are puzzles about the significance of consciousness. Puzzles of the first kind are about what consciousness is, while puzzles of the second kind are about what consciousness does.
We can illustrate both of these puzzles by considering what philosophers like to call zombies.1 Philosophical zombies are not the same as Hollywood zombies. They look and act just like you or me. The key difference is that you and I are conscious, whereas zombies are not. By definition, a zombie is an unconscious creature—that is, a creature that has no conscious states at all. As we might say, there is “nothing it is like” to be a zombie.
Zombies generate a puzzle about the nature of consciousness. It seems possible that a perfect physical duplicate of mine could be a zombie—that is, a creature entirely lacking in consciousness. But if so, then how can consciousness be a physical phenomenon? Physicalism says that every phenomenon is a physical phenomenon. And yet the apparent possibility of zombies suggests that consciousness is a counterexample. Hence, we seem forced to deny that consciousness is a physical phenomenon after all.
Zombies also generate a puzzle about the significance of consciousness. It seems possible that a perfect physical duplicate of mine could do everything that I can do, but without the assistance of consciousness. But if so, then what is the point of being conscious? What can conscious creatures do that cannot be done without consciousness? If zombies can do everything we can do, then the answer seems to be: nothing at all. Hence, we seem forced to accept that consciousness plays no indispensable role in our lives.
Much recent work in philosophy has been preoccupied with the first puzzle, whereas this book is exclusively concerned with the second. To highlight this distinction, let me contrast the following pair of questions:
(1) A metaphysical question: What is the nature of consciousness and how is it physically realized in the brain?
(2) An epistemological question: What is the role of consciousness as a source of knowledge and justified belief?
This book is about the epistemological question, rather than the metaphysical question: it is about what consciousness does, rather than what consciousness is. It makes no attempt to engage with metaphysical issues about the nature of consciousness or its physical realization in the brain. Instead, it is primarily concerned with epistemological issues about the role of consciousness as a source of knowledge and justified belief. The conclusions of this book are metaphysically neutral in the sense that you can accept them whatever your views about the nature of consciousness and its physical realization in the brain. Even so, I will need to begin by explaining what I mean by the word ‘consciousness’ so that you know what this book is about.
1.1. What Is Consciousness?
If you’re reading this book, then it’s probably safe to assume that you’re conscious. What I mean is not just that you’re awake and alert, although I hope that’s true, but also that you’re the subject of various conscious mental states, including thoughts, feelings, and sensory experiences. These mental states are conscious in the sense that there is “something it’s like” for you to have them. As Thomas Nagel puts the point, “an organism has conscious mental states if and only if there is something it is like to be that organism—something it is like for that organism” (1974: 436).
This is what Ned Block calls the phenomenal concept of consciousness. As he defines it, “Phenomenal consciousness is experience; what makes a state phenomenally conscious is that there is something ‘it is like’ to be in that state” (1995: 228). One problem with this definition, as Block acknowledges, is that if you don’t already know what he means by the word ‘consciousness’, then using synonymous expressions like ‘experience’ or ‘what it’s like’ is unlikely to prove helpful. Moreover, this problem seems unavoidable, since the phenomenal concept of consciousness cannot be defined in more basic terms. Like many other concepts, it is primitive and indefinable. Our only option is to define the concept ostensively—that is, by giving examples that illustrate when it applies and when it doesn’t.
Here are some paradigmatic examples of phenomenal consciousness: consider the experience of feeling pain, visualizing red, or thinking about mathematics. All of these experiences have phenomenal character: that is, there is something it’s like for you to have them. Moreover, they all differ in their phenomenal character, since what it’s like to feel pain is different from what it’s like to visualize red or to think about mathematics. In contrast, there is typically nothing it’s like for you to digest food, to secrete hormones, or to be in a coma. These states are not examples of phenomenal consciousness, since they have no phenomenal character at all.
Block warns against confusing the phenomenal concept of consciousness with any functional concept of consciousness defined in terms of its causal role in cognition or action. This is the main point of his influential distinction between phenomenal consciousness and access consciousness. To a first approximation, a mental state is access conscious just when it is poised for use in the direct control of action, reasoning, or verbal report. The guiding idea is that access consciousness is an “information-processing correlate” of phenomenal consciousness (1997: 384). In other words, a mental state is access conscious when it does what phenomenally conscious mental states normally do. As Block argues, however, it is at least conceivable that phenomenal consciousness and access consciousness can come apart.
Zombies provide the simplest illustration of access consciousness without phenomenal consciousness. By definition, a zombie is not conscious in the phenomenal sense: there is nothing it is like to be a zombie. But it is at least conceivable that a zombie can do everything that you can do: for every phenomenal state in you, there is some corresponding nonphenomenal state in your zombie twin that plays the same kind of causal role in the control of action, reasoning, and verbal report. If so, then your zombie twin is conscious in the functional sense but not the phenomenal sense. Zombies can have access consciousness, but not phenomenal consciousness.
Personally, I doubt that there is any good sense in which zombies are conscious. Block maintains that our ordinary concept of consciousness is a “mongrel concept” that conflates two distinct kinds of consciousness—namely, phenomenal consciousness and access consciousness. On this view, there is one sense in which zombies are conscious and another sense in which they are not. As others have complained, however, this is really quite implausible: there is no ordinary sense in which zombies are conscious.2 If zombies satisfy Block’s definition of access consciousness, then we shouldn’t conclude that there are two kinds of consciousness. Instead, we should conclude that access consciousness is merely an ersatz functional substitute for phenomenal consciousness. In my view, the phenomenal concept is our most basic concept of consciousness and all other concepts of consciousness are defined explicitly or implicitly in terms of this one.3 In any case, this book is about the phenomenal concept of consciousness. Whenever I speak about consciousness without qualification, it is phenomenal consciousness that I have in mind.
What can we learn from Block’s distinction? He may be wrong to claim that there are two kinds of consciousness, but he is quite right to warn us against confusing consciousness—the real thing—with mere ersatz functional substitutes that occupy the same causal role. We shouldn’t confuse consciousness with its function, since it’s conceivable that everything consciousness does can be done equally well without it. This is what zombies teach us. Block’s contribution is to highlight this distinction between what consciousness is and what it does. As I will now explain, we can use this distinction to raise a challenging question about the significance of consciousness and its role in our mental lives.
1.2. The Significance of Consciousness
What is the significance of consciousness? Does consciousness play any significant role in our lives that cannot equally be played without consciousness? If so, then we can say that consciousness has unique significance.
To make this question vivid, imagine that we all suddenly become zombies. Are we thereby guaranteed to lose anything of significance in our lives? How much of our lives could remain intact? As we’ve seen, it’s at least conceivable that zombies can do everything we can do. For example, it’s conceivable that we might wake up tomorrow without consciousness and yet still be able to do everything we could do before. If so, then there is nothing we conscious creatures can do that cannot equally be done without consciousness. This makes it hard to resist the conclusion that consciousness has no unique significance in our lives. For future reference, let’s call this the zombie challenge.
The main aim of this book is to answer the zombie challenge by arguing that consciousness has unique epistemic significance. First, however, I want to clarify the nature of the challenge by addressing two replies: the “scientific” reply and the “metaphysical” reply. This will clarify my project and disentangle it from scientific debates about the function of consciousness and philosophical debates about the metaphysical basis of consciousness. While these debates are interesting and important in their own right, they are largely orthogonal to my central concerns in this book.
The scientific reply to the zombie challenge is that the science of consciousness will tell us what we can and cannot do without consciousness. This is Fred Dretske’s view:
The function of experience, the reason animals are conscious of objects and their properties, is to enable them to do all those things that those who do not have it cannot do. This is a great deal indeed. If . . . there are many things people with experience can do that people without experience cannot do, then that is a perfectly good answer to questions about what the function of experience is. (1997: 14)
Dretske’s main goal is to argue that questions about the function of consciousness are empirical questions that can be answered experimentally by studying the differences between conscious and unconscious vision. His own hypothesis is that the function of conscious vision is to enable visual identification and recognition of objects. As he puts the point, “Remove visual sensations of X and S might still be able to tell where X is, but S will not be able to tell what X is” (1997: 13).
Dretske’s specific hypothesis can be questioned on empirical grounds, but let’s just assume it’s correct for the sake of argument, since the exact details won’t matter.4 Suppose we’re built in such a way that we cannot visually identify and recognize objects without consciousness. Even so, the question remains: couldn’t we have been built differently? It seems possible, at least in principle, that a zombie could identify and recognize objects in the absence of conscious vision. Perhaps there are no such zombies. Still, we can ask, is this just an accident of evolutionary history or is there some principled reason why only conscious vision can play this functional role?
Dretske doesn’t argue that consciousness has unique significance in the sense that nothing else can play the same functional role. Instead, he argues that even if something else can play the same functional role, it doesn’t follow that consciousness is an epiphenomenon that has no function at all. After all, functional roles can be multiply realized. He writes:
Maybe something else besides experience would enable us to do the same things, but this would not show that experience didn’t have a function. All it would show is that there was more than one way to skin a cat—more than one way to get the job done. (1997: 14)
This point is well taken, but whether it constitutes an adequate response to the zombie challenge depends on how exactly the challenge is understood. Dretske is primarily concerned with a scientific question about the function of consciousness: is it merely an epiphenomenon or does it play some causal role in our lives? In contrast, I am primarily concerned with a more distinctively philosophical question about whether consciousness has any unique function: does it play any role in our lives that cannot in principle be played by anything else? The scientific facts about our constitution simply don’t address this question, since they leave open the possibility that we could in principle have been built differently. For this reason, we can now set aside the scientific reply to the zombie challenge.
The metaphysical reply to the zombie challenge is that it has no force because zombies are impossible. If so, then it’s trivial that zombies cannot do everything we can do, since they cannot exist at all. The objection is that we cannot use zombies to raise a challenge for the significance of consciousness without assuming that zombies are possible and thereby taking a controversial stance on the metaphysics of consciousness.
I’ll make two points in response. The first point is that whether zombies are possible depends on what kind of zombies we’re talking about. If zombies are defined as unconscious creatures, then not only can they exist, but in fact they do. Examples include coma patients, human embryos, paramecia, oak trees, and laptop computers. What’s more controversial is whether a zombie can be just like a conscious creature in all other respects.
David Chalmers (1996) argues that there could be a physical zombie who resembles a conscious creature in all physical respects. This is highly controversial, of course, since it implies that physicalism is false. A weaker assumption is that there could be a functional zombie who resembles a conscious creature in abstract functional respects; for instance, a silicon robot that duplicates the functional organization of our brain without being conscious.5 This assumption is much less controversial, but it is still not completely innocuous: it is consistent with physicalism, but inconsistent with functionalist versions of physicalism.
This brings me to my second point. We can remain neutral on these metaphysical issues about the status of physicalism and functionalism by framing the zombie challenge in terms of conceivability rather than possibility.6 Our question is whether consciousness plays any significant role in our mental lives that cannot conceivably be played without it. If zombies are so much as conceivable, then we seem forced to conclude that there is no significant role in our lives that cannot conceivably be played without consciousness. After all, it’s conceivable that zombies can do everything we can do. This makes it hard to resist the conclusion that consciousness has no unique significance.
It’s relatively uncontroversial that zombies are conceivable in the sense that they cannot be ruled out on a priori grounds alone. What is much more controversial is whether there is a valid argument from the premise that zombies are conceivable to the conclusion that zombies are possible. Many proponents of physicalism and functionalism accept the premise about conceivability, while rejecting the inference from conceivability to possibility. The relationship between conceivability and possibility is exactly the kind of disputed metaphysical issue that I want to set aside in this book. That is one reason why I prefer to understand the zombie challenge in terms of conceivability rather than possibility.7
Another reason is that this book is primarily concerned with conceptual questions, rather than metaphysical questions, about the significance of consciousness. My question is whether there is any conceptual, analytic, or a priori connection between the phenomenal concept of consciousness and our other psychological and epistemic concepts, including concepts of mental representation, belief, and knowledge. That is why I’m asking how much of our mental lives could be preserved in zombies. Is it conceivable—in the sense that it’s not ruled out on a priori grounds—that a zombie could have the capacity for mental representation, belief, or knowledge?
Our initial reflections on the zombie challenge suggest that much of our mental life can be preserved in zombies in the absence of consciousness. After all, zombies can do everything we can do. If our psychological and epistemic capacities are functionally defined in terms of their causal roles, then it is inconceivable that zombies lack the same capacities as conscious creatures. I’ll argue, however, that our psychological and epistemic capacities cannot be functionally defined in terms of their causal roles. Instead, they are defined in terms of their connections with phenomenal consciousness. On this view, it is inconceivable that zombies share our mental lives.
In the next section, I’ll situate this proposal in the context of contemporary debates about how to solve the “hard problem” of explaining phenomenal consciousness. As I’ll explain, metaphysical puzzlement about the nature of phenomenal consciousness leads many philosophers to marginalize its role in theories of mental representation, cognition, and knowledge. In this way, metaphysical perplexity tends to result in epistemological distortion. Although this book takes no stand on contemporary debates about the metaphysics of consciousness, it is important to recognize how they have shaped the intellectual background for contemporary debates in epistemology.
1.3. The Hard Problem of Consciousness
For much of the twentieth century, phenomenal consciousness occupied a curious status within the philosophy of mind: it was absolutely central in some ways, and yet largely peripheral in others. On the one hand, much of the preoccupation with the mind-body problem was fueled by metaphysical puzzles about the nature of phenomenal consciousness and its place in the physical world. On the other hand, these metaphysical puzzles provided much of the impetus for a research program of understanding the mind as far as possible without making reference to phenomenal consciousness. One defining characteristic of this research program was the idea that the “hard problem” of explaining phenomenal consciousness could be divorced from the comparatively “easy problems” of explaining mental representation, cognition, and knowledge of the external world.
In a classic discussion, David Chalmers (1996) explains the distinction between the hard and easy problems in terms of a distinction between two concepts of mind. On the one hand, we have the phenomenal concept of mind: this is the concept of mind as conscious experience. A state is mental in the phenomenal sense just in case there is “something it is like” for the subject of that mental state to have it. On the other hand, we have the psychological concept of mind: this is the concept of mind as the causal or explanatory basis of behavior. A state is mental in the psychological sense just in case it plays the right kind of role in the causal explanation of behavior. Chalmers sums up the distinction as follows:
On the phenomenal concept, mind is characterized by the way it feels; on the psychological concept, mind is characterized by what it does. There should be no question of competition between these aspects of mind. Neither of them is the correct analysis of mind. They cover different phenomena, both of which are quite real. (1996: 11)
Consider, for example, the concept of pain. We have a phenomenal concept of pain as a mental state that feels a certain way—it feels painful. But we also have a psychological concept of pain as a mental state that is caused by bodily damage and causes aversive behavior. We use the word ‘pain’ to express both of these concepts. And while these concepts are normally coextensive, they are nevertheless distinct. It is at least conceivable that a zombie might engage in pain behavior without feeling pain or, conversely, that a “madman” (Lewis 1980b) might feel pain without engaging in pain behavior.
According to Chalmers, we can give a functional analysis of our psychological concepts, but not our phenomenal concepts. Psychological states can be functionally defined in terms of their causal roles, which can be abstractly described in nonpsychological terms.8 In contrast, phenomenal states cannot be defined in terms of their causal role, since it’s conceivable that zombies can have nonphenomenal states that play the same causal role as our phenomenal states. The mere conceivability of functional zombies is enough to undermine the functional analysis of phenomenal concepts: no further inference from conceivability to possibility is required.
With this conceptual distinction in hand, Chalmers divides the mind-body problem into a hard problem and an easy problem. Explaining the phenomenal aspects of mind is a hard problem because there is an “explanatory gap” (Levine 1983) between physical facts and phenomenal facts: it is conceivable that the same physical facts could give rise to different phenomenal facts or to none at all. In contrast, explaining the psychological aspects of mind is an easy problem because there is no such explanatory gap between physical facts and psychological facts. We just need to specify a physical mechanism that plays the causal role in terms of which the psychological facts are defined. In the case of phenomenal facts, however, we cannot do this because they are not defined in terms of their causal role. Chalmers sums up the situation like this:
There is no great mystery about how a state might play some causal role, although there are certainly technical problems there for science. What is mysterious is why that state should feel like something; why it should have a phenomenal quality. (1996: 15)
In sum, the problem of explaining psychological aspects of mind is an easy problem because the psychological concept of mind can be functionally defined, whereas the problem of explaining phenomenal consciousness is a hard problem because the phenomenal concept of mind cannot be functionally defined.
Which aspects of mind generate hard problems and which generate easy problems? That depends on which concepts of mind can be functionally defined. As I use the terms, it’s not true by definition that our psychological concepts can be functionally defined. It’s a substantive question—not one that can be settled by terminological stipulation—whether our ordinary psychological concepts (including our concepts of mental representation, belief, and knowledge) can be functionally defined. In the rest of this section, I’ll contrast three distinct theoretical perspectives on this question and explain how they bear on our initial question about the significance of consciousness.9
The first view is bifurcationism: it says that there is no conceptual connection between our phenomenal concepts and our psychological concepts. Although we cannot give any functional definition of our phenomenal concepts, our ordinary psychological concepts can be functionally defined without mentioning phenomenal consciousness at all. On this view, it’s conceivable that there could be a functional zombie with no phenomenal states. However, it’s inconceivable that a functional zombie has no psychological states, since its nonphenomenal states play all the causal roles in terms of which psychological states are defined. As Jaegwon Kim puts the point, “It would be incoherent to withhold states like belief, desire, knowledge, action, and intention from these creatures” (2005: 165).
Bifurcationism has important consequences for the metaphysical project of solving the mind-body problem. If bifurcationism is true, then the hard problem of explaining phenomenal consciousness can be divorced from the easy problems of explaining mental representation, cognition, and knowledge. We don’t need to explain phenomenal consciousness in order to make progress in explaining these other aspects of mind. Instead, we can explain mental representation, cognition, and knowledge in purely causal terms without mentioning phenomenal consciousness at all. Ned Block gives expression to this viewpoint when he writes, “We cannot now conceive how psychology could explain qualia, though we can conceive how psychology could explain believing, desiring, hoping, etc.” (1978: 307).
Bifurcationism also bears on our initial question about the significance of consciousness. If bifurcationism is true, then phenomenal consciousness has no uniquely significant role to play in our psychological lives. It’s conceivable that we can remove phenomenal consciousness entirely while leaving our psychological lives perfectly intact. This is because the psychological roles played by our phenomenal states can conceivably be played by the nonphenomenal states of our functional zombie twins. Hence, bifurcationism implies that there is no role in our psychological lives that cannot conceivably be played without phenomenal consciousness. In other words, phenomenal consciousness has no unique significance in our psychological lives.
If bifurcationism is false, however, then the significance of consciousness begins to look rather different. Bifurcationism contrasts with unificationism, which says that there is some conceptual, analytic, or a priori connection between our phenomenal and psychological concepts of mind. The nature of this connection is disputed: some argue that we should analyze phenomenal consciousness in terms of its psychological role, while others argue that we should analyze our psychological states and processes in terms of their connections with phenomenal consciousness. As I’ll explain, these opposing views have very different consequences for questions about the nature and significance of consciousness.
One version of unificationism says that psychological functions come first. On this view, phenomenal consciousness can be defined in terms of its causal role in the psychological processes that produce action, cognition, or metacognition. Moreover, the psychological concepts used in the definition of phenomenal consciousness can be functionally defined in purely causal terms. Putting these two claims together, the result is that both phenomenal and psychological concepts can be functionally defined in purely causal terms.10
On this view, we cannot divorce the hard problem of explaining phenomenal consciousness from the easy problem of explaining its associated psychological functions. On the contrary, these problems are intimately connected; moreover, they are connected in ways that make the hard problem easier to solve. To explain phenomenal consciousness, we just need to explain the psychological functions in terms of which it is defined. Since explaining these psychological functions is an easy problem, explaining phenomenal consciousness is an easy problem too.
Another consequence of this view is that phenomenal consciousness has a uniquely significant role to play in our psychological lives. On this view, phenomenal consciousness is functionally defined in terms of its causal role in our psychology. Therefore, it’s inconceivable that there can be a functional zombie who does everything we conscious creatures can do. Any functional duplicate of a conscious creature is thereby a psychological duplicate of that creature, and any psychological duplicate of a conscious creature is thereby a phenomenal duplicate of that creature. Hence, the causal role that phenomenal consciousness plays in our psychological lives cannot conceivably be played in zombies.
This is also the main problem with this view: it rules out the conceivability of functional zombies. This is sufficiently implausible that many proponents of functionalism about phenomenal consciousness prefer to recast it as an empirical or metaphysical hypothesis, rather than a conceptual analysis. We needn’t conclude, however, that there is no analytic connection at all between phenomenal and psychological concepts. Instead, we can simply reverse the direction of analysis. Rather than analyzing phenomenal consciousness in terms of its role in our psychology, we can analyze our psychological states and processes in terms of phenomenal consciousness.
This alternative version of unificationism says that phenomenal consciousness comes first. On this view, the phenomenal concept of consciousness is our basic concept of mind. Our concepts of psychological states, including mental representation, belief, and knowledge, are analyzed in terms of their connections with phenomenal consciousness. As John Searle writes, “All of the processes that we think of as especially mental—whether perception, learning, inference, decision making, problem solving, the emotions, etc.—are in one way or another crucially related to consciousness” (1992: 227).
On this view, neither our phenomenal concepts nor our psychological concepts of mind are amenable to functional analysis. If our psychological concepts are analyzed in terms of their relations to phenomenal consciousness, and phenomenal consciousness resists functional definition, then our psychological concepts resist functional definition too. It’s conceivable that there could be a functional zombie, but it’s inconceivable that a functional zombie could have the same psychology as a conscious creature. This is because neither phenomenal states nor psychological states can be functionally defined in terms of their abstract causal structure.
Again, this means that we cannot divorce the hard problem of explaining phenomenal consciousness from the easy problem of explaining its associated psychological functions. These problems are intimately connected. Rather than making the hard problem easier to solve, however, this makes the easy problems much harder to solve. If our psychological states are analyzed in terms of their connections with phenomenal consciousness, then we cannot explain our psychological lives without explaining phenomenal consciousness too.
This view also has the consequence that phenomenal consciousness has a uniquely significant role to play in our psychological lives. This is not because functional zombies are inconceivable, but rather because they cannot conceivably share our psychology. They don’t have the same psychological states and processes as we do, since our psychological states and processes are defined in terms of their connections with phenomenal consciousness. It’s conceivable that functional zombies can do everything we can do, but it’s inconceivable that their psychological lives are just like ours. This is the kind of response to the zombie challenge that I will develop in this book. In the next section, I’ll distinguish several different versions of this response.
1.4. Putting Consciousness First
This section explores the program of putting phenomenal consciousness first. There are many different versions of this program. One version says that zombies cannot represent the world because phenomenal consciousness is the basis of all mental representation. Another version says that zombies cannot think about the world because phenomenal consciousness is the basis of all conceptual thought. My own version says that zombies cannot know anything about the world, since they have no epistemic justification to form beliefs: on this view, phenomenal consciousness is the basis of all epistemic justification. This section explains what’s distinctive about my response to the zombie challenge by distinguishing it from others in the same vicinity.
1.4.1. Mental Representation
Proponents of bifurcationism tend to regard the problem of explaining mental representation as an easy problem, which can be divorced from the hard problem of explaining phenomenal consciousness. Consider the research program of “naturalizing” mental representation: the aim is to analyze mental representation in nonrepresentational terms that can be stated without mentioning consciousness. Naturalistic theories of mental representation typically appeal to internal causal relations between physical states of the brain or external causal relations between physical states of the brain and the external world. According to a simple causal theory, for example, a mental state represents that p just in case it is caused by the fact that p under optimal conditions in which the representational system fulfills its biological function.11
If mental representation can be analyzed without mentioning consciousness, then perhaps consciousness can be analyzed in terms of mental representation. This is the goal of representational theories of consciousness, which come in at least three kinds. First-order representational theories say that a mental state is conscious just in case it represents the external world in a way that plays the right kind of functional role in the control of action, reasoning, and verbal report. Higher-order representational theories say that a mental state is conscious just in case it is the target of some higher-order mental representation, such as a higher-order thought or perception. Finally, self-representational theories say that a mental state is conscious just in case it represents itself in the right way.12
Combining these two ideas yields an influential strategy for solving the hard problem of consciousness. According to the representational solution, we can close the explanatory gap between physical and phenomenal concepts by combining a causal theory of mental representation with a representational theory of phenomenal consciousness. For example, David Armstrong (1968) combines a higher-order representational theory of phenomenal consciousness with a causal-informational theory of mental representation. The first step is that consciousness results from a process of “self-scanning” that carries information about your own informational states. The second step is that physical states carry information about other physical states when there is a systematic causal dependence of the former on the latter. The result is a version of unificationism that puts psychological functions first: it defines phenomenal consciousness in terms of its causal role in the psychological process of self-scanning.
Unfortunately, there are problems with both steps in the representational solution. The main problem with causal theories of mental representation is that they tend to face an underdetermination problem. This is the problem of explaining what makes it the case that a mental representation has the content that it does, rather than some deviant alternative. For example, Quine (1960: ch. 2) asks what makes it the case that our word ‘rabbit’ refers to rabbits, rather than undetached rabbit parts. Similarly, Kripke (1982) asks what makes it the case that our word ‘plus’ refers to the plus function, rather than the deviant quus function. These problems were originally raised for theories of linguistic meaning, but the same problems arise for theories of mental representation. The problem is that causal relations between me and my environment underdetermine whether I refer to rabbits or undetached rabbit parts, plus or quus, and so on. So, if mental representation is grounded in causal relations to the environment, then its content is radically indeterminate.
Even if this problem can be solved, the conceivability of zombies presents a problem for the representational analysis of consciousness. After all, it’s conceivable that a zombie could have representational states that play the same kind of abstract causal role as our own conscious states. In the case of Armstrong’s self-scanning theory, for example, it’s conceivable that zombies could have higher-order states that carry information about their own first-order informational states. Some proponents of representational theories deny that zombies are conceivable, but this is rather hard to swallow. A more popular response is to block the inference from conceivability to possibility—say, by endorsing some version of the phenomenal concept strategy. But this is to abandon the representational strategy for solving the hard problem of consciousness. On this view, we cannot close the explanatory gap by giving a representational analysis of our phenomenal concepts.13
We can summarize these problems in the form of a dilemma. Is it conceivable that zombies can have mental representations or not? If so, then the representational analysis of consciousness is false, since zombies can satisfy all the relevant representational conditions without being conscious. If not, then the causal analysis of mental representation is false, since zombies can satisfy all the relevant causal conditions without having mental representations. Either way, the representational solution to the hard problem fails: we cannot close the explanatory gap by combining a representational analysis of consciousness with a causal analysis of mental representation.
Why does the representational solution to the hard problem fail? One diagnosis is that phenomenal consciousness comes first: mental representation should be analyzed in terms of phenomenal consciousness, rather than vice versa. The argument for this view is that only phenomenal consciousness provides a sufficiently determinate ground of mental representation. Zombies cannot have mental representations because only phenomenal consciousness can ground mental representation in a way that avoids radical indeterminacy. John Searle puts the point in terms of the claim that all intentional states have “aspectual shape”—that is, they present their intentional objects under some aspects, rather than others. He argues, “For a zombie, unlike a conscious agent, there simply is no fact of the matter as to exactly which aspectual shapes its alleged intentional states have” (1990: 595).
On this view, consciousness is the unique source of all mental representation: in other words, all mental representation is either conscious or otherwise grounded in consciousness. As Terry Horgan and George Graham (2012) put the point, consciousness is the “anchor point” for all mental representation. This view provides a striking answer to our question about the significance of consciousness. If we become zombies, then we cannot represent the external world at all.14
In chapter 2, I’ll argue against the proposal that consciousness is the unique source of all mental representation. More specifically, I’ll argue that it cannot account for the indispensable explanatory role of unconscious mental representation in cognitive science. Following Tyler Burge (2010), I’ll recommend that we solve the indeterminacy problems by treating mental representation as an autonomous scientific kind. Even if we cannot give a naturalistic reduction of unconscious mental representation in nonrepresentational terms, we are committed to its existence insofar as it plays an indispensable explanatory role in our best scientific theories. On this view, zombies can have mental representations so long as those representations play an indispensable role in explaining the zombies’ behavior.
Our question about the significance of consciousness therefore remains. Is there any significant distinction to be drawn between the kind of mental representation that has its source in consciousness and the kind that has its source elsewhere? One answer is that consciousness is the ultimate source of our capacity to think about the empirical world. Bertrand Russell (1912) argues that our capacity to think about objects and properties in the external world depends ultimately on conscious acquaintance with our own experience. A contemporary version of this Russellian program says that all our empirical thought depends on demonstrative thought about objects and properties in the external world, and all such demonstrative thought depends upon conscious acquaintance with those objects and properties. On this view, zombies can have nonconceptual representations of the external world, but they have no conceptual capacity to think about the external world.15
In chapter 2, I’ll argue that the epistemic role of consciousness is more fundamental in the order of explanation than the role of consciousness in thought. We can explain the role of consciousness in conceptual thought as a consequence of the epistemic role of consciousness together with epistemic constraints on conceptual thought. This yields a more fundamental answer to the zombie question. My claim is that mental representation provides epistemic justification for belief only if it has its source in consciousness. On this view, zombies can represent the world, but they cannot know anything about the world, since they have no epistemic justification to form beliefs about the world.
To put this proposal into perspective, let’s consider how phenomenal consciousness has figured—or otherwise failed to figure—in contemporary epistemology. What role does phenomenal consciousness play in contemporary theories of knowledge and epistemic justification? According to bifurcationism, the problem of explaining knowledge and epistemic justification is an easy problem that can be divorced from the hard problem of explaining phenomenal consciousness. Here are two prominent examples.
Jerry Fodor (1975) argues that we can explain epistemic rationality in terms of a computational theory of mind. The key idea is that the mind contains mechanisms that are sensitive to the formal properties of symbols. Moreover, there is an isomorphism between the formal properties of symbols and their semantic properties: there is a one-to-one mapping from one set of properties to the other that preserves relations between them. As a result, computational mechanisms that are directly sensitive to the formal properties of symbols are thereby also indirectly sensitive to their semantic properties. So long as we can explain the semantic properties of symbols without mentioning consciousness—say, in terms of a causal theory of mental representation—there is no need to mention consciousness in giving a mechanistic explanation of rational cognition.16
Similarly, Alvin Goldman (1979) and other proponents of reliabilism in epistemology explain knowledge and epistemic justification in terms of reliable connections between the mind and the external world that can be specified without mentioning consciousness. On a simple form of process reliabilism, for example, a belief is epistemically justified just in case it is held on the basis of a process that reliably yields true beliefs. Given this kind of reliabilism, the problem of explaining knowledge and justified belief can be regarded as an easy problem that can be tackled independently of the hard problem of explaining consciousness.17
If epistemic justification can be explained without appealing to phenomenal consciousness, then perhaps phenomenal consciousness can be explained in terms of epistemic justification. This is the goal of epistemic theories of consciousness. Epistemic theories are less popular than representational theories in the recent literature, but they have much the same structure. Higher-order epistemic theories say that a mental state is conscious just in case it serves as a basis for higher-order knowledge or epistemically justified belief that you’re in that mental state. First-order epistemic theories, in contrast, say that a mental state is conscious just in case it serves as a basis for first-order knowledge or epistemically justified belief about the external world. According to Fred Dretske’s epistemic criterion for consciousness, you are conscious of an object just in case information about the object is available to you as a justifying reason for belief or action. He writes, “S is aware of x if and only if information about x is available to S as a reason. It is the availability of information for rationalizing and motivating intentional action . . . that makes it conscious” (2006: 174).
Combining these two ideas yields a new strategy for solving the hard problem. According to the epistemic solution, we can close the explanatory gap by combining an epistemic analysis of phenomenal consciousness with a reliabilist analysis of epistemic justification. Unfortunately, the epistemic solution fails for much the same reason as the representational solution. Again, the problem can be posed in the form of a dilemma. Is it conceivable that zombies have epistemically justified beliefs or not? If so, then the epistemic analysis of consciousness is false, since zombies can satisfy all the relevant epistemic conditions without being conscious. If not, then the reliabilist analysis of epistemic justification is false, since zombies can satisfy all the relevant reliability conditions without having epistemically justified beliefs. Either way, the epistemic solution to the hard problem fails: we cannot close the explanatory gap by combining a reliabilist analysis of epistemic justification with an epistemic analysis of consciousness.
Why does the epistemic solution to the hard problem fail? My own diagnosis is that phenomenal consciousness comes first: we should analyze epistemic justification in terms of phenomenal consciousness, rather than vice versa. It is extremely plausible that there is some analytic connection between consciousness and knowledge: indeed, this connection seems to be encoded in the etymology of the word ‘consciousness’, which derives from the Latin con (together) and scire (to know). We can preserve this connection by giving a phenomenal analysis of epistemic justification, rather than an epistemic analysis of phenomenal consciousness. That is exactly what I will do in this book.
The main aim of this book is to argue that consciousness has unique epistemic significance. On this view, all epistemic justification has its source in consciousness. More precisely, all mental representations that provide epistemic justification for belief are either conscious or otherwise grounded in consciousness. This generates an answer to the zombie challenge. Zombies can represent the world, but they cannot know anything about the world, since they have no epistemic justification to form beliefs about the world. Only conscious creatures can know anything about the world.
Although this book is about the epistemic role of consciousness, I’ve argued elsewhere that the normative significance of consciousness extends in a unified way across epistemic and practical domains.18 Just as consciousness is a unique source of epistemic justification for belief, so it is a unique source of practical justification for action. Conscious experience gives us justifying reasons for belief and action and thereby enables us to believe and act rationally on the basis of those reasons. Moreover, conscious experience is a unique source of epistemic and practical reasons. If we become zombies, then we have no justifying reasons for belief and action, and so we cannot believe or act rationally on the basis of reasons. Let me illustrate the point with a mundane example.
Consider the experience you have when you see a tempting piece of cake and you feel the desire to eat it. Your visual experience seems to present you with cake and thereby gives you an epistemic reason to believe that there is cake before you. Similarly, your affective experience of desire presents the prospect of eating the cake in a positive light and thereby gives you a practical reason to intend to eat the cake. These reasons are defeasible, of course, but the phenomenal character of your experience counts to some degree in favor of eating the cake. In the absence of defeating reasons, these reasons are strong enough to act upon. If you are rational, then you will eat the cake, and you will do so for good reasons.
Now contrast your zombie twin who is in exactly the same external predicament. Your zombie twin has an internal state that represents the presence of cake and an internal state that motivates eating the cake. Even so, there is nothing it is like for the zombie to have these representational and motivational states. The zombie has no visual experience that seems to present it with cake and no experience of desire that seems to present the cake in a positive light. In consequence, the zombie has no epistemic reason to believe there is cake to be eaten and it has no practical reason to intend to eat cake. Of course, we can explain why your zombie twin behaves as it does by citing its unconscious representational and motivational states. We can also explain why acting that way is good for the zombie—for instance, it may need food in order to survive. What we cannot do is to explain the zombie’s behavior in a way that shows it to be rational in light of the zombie’s own reasons for belief or action. This is because the zombie has no conscious experience.
That, in any case, is what I will argue in this book. Your zombie twin doesn’t have the same reasons as you, since consciousness is a unique source of reasons for belief and action. Moreover, it follows—or so I will argue—that your zombie twin doesn’t have the same mental states that you do. By stipulation, your zombie twin has no conscious mental states. Still, we can ask, does it have all the same beliefs, desires, and intentions?
Resolving this question depends on how these mental states are defined. Are they defined by their causal role in motivating belief and action or by their normative role in providing reasons for belief and action? If mental states are defined by their causal role, then your zombie twin has the same mental states as you, since its states play the same causal role as yours. If mental states are defined by their normative role, however, then your zombie twin doesn’t have the same mental states as you, since its states don’t play the same normative role as yours. I’ll now suggest that beliefs, desires, and intentions are defined by their normative roles in providing reasons for belief and action, rather than their causal roles in motivating belief and action.
There is often a mismatch between the causal roles of our mental states and their normative roles: we don’t always believe and act on the basis of our reasons. After all, we’re not ideally rational agents. Nevertheless, ideally rational agents can have beliefs, desires, and intentions, just as we can. Their mental states don’t play the same causal role as our mental states in motivating belief and action, since they are much more rational than we are, but their mental states play the same abstract normative role in providing reasons for belief and action. For example, their beliefs and intentions provide reasons for action and are supported by reasons provided by perception and desire. This suggests that these mental states are defined by their normative roles in providing us with reasons for belief and action, rather than their causal roles in motivating us to believe and act for those reasons. In this sense, the mental is normative.
When I say that the mental is normative, I mean that there are normative roles that are both necessary and sufficient for having certain mental states, such as beliefs, desires, and intentions. I’m not making the stronger claim that the most fundamental characterization of these mental states is given by their normative roles. Presumably, there must be some more fundamental explanation that explains why beliefs, desires, and intentions play different kinds of normative roles. In my view, the most fundamental characterization of these standing attitudes concerns their phenomenal dispositions. Beliefs, desires, and intentions are disposed to cause different kinds of phenomenal experience. These phenomenal dispositions explain why beliefs, desires, and intentions play the normative roles that they do. On this view, phenomenal consciousness comes first in explaining the normativity of the mental.19
If the mental is normative, and consciousness is a unique source of normativity, then it follows that consciousness plays a unique role in our mental lives. On this view, zombies cannot have exactly the same mental states as conscious creatures, since their unconscious states cannot play the same normative role in giving and receiving support from reasons. In this way, we can use the normative significance of consciousness as a premise in arguing that consciousness plays an essential role in our mental lives.
Zombies pose a challenge to the significance of consciousness because they can do everything we conscious creatures can do: the unconscious states of a zombie play the same abstract causal role as our conscious states. My response to this challenge is to explain the significance of consciousness in terms of its normative role, rather than its causal role. Consciousness has no unique causal significance, but it has unique normative significance: it is inconceivable that the normative role of consciousness can be duplicated by any merely ersatz functional substitute for consciousness. This is because the unique normative significance of consciousness depends on its phenomenal character, rather than its causal role. This means that our epistemic and psychological concepts cannot be functionally defined in terms of their abstract causal role. We cannot understand the mind by putting psychological functions first. Instead, we need to put phenomenal consciousness first.20
1.5. An Overview of This Book
This book is about the connection between phenomenal consciousness and epistemic justification. But what is epistemic justification? Like the concept of phenomenal consciousness, the concept of epistemic justification cannot be defined in more basic terms, but only by giving examples. Justified beliefs include those based on perceptual experience, or inferred from other justified beliefs by good deductive and inductive reasoning. Unjustified beliefs include those based on wishful thinking, hasty generalization, fallacious reasoning, or ungrounded hunches. We all have some intuitive understanding of what it means to say that a belief is justified, which we can use as a starting point in building a theory of epistemic justification. At the same time, we can use more abstract theoretical considerations to sharpen and refine our intuitive grasp on the nature of epistemic justification.
It is sometimes said that ‘justified belief’ is a philosopher’s idiom, but a quick search of the internet reveals multiple uses in the news, including the sports pages. Here is a representative example from the soccer fan website Pain in the Arsenal:
Alexis Sanchez is clearly unhappy with life at the Emirates and could well be edging towards a summer exit. Repeated strops and tantrums . . . have led many to conclude that Sanchez is nearing the end of his Arsenal tenure. And that is a justified belief.21
Admittedly, beliefs are more commonly described as ‘reasonable’ or ‘rational’ than ‘justified’, but these are all perfectly good uses of the English language, and they are all standardly used with the same meaning. To say that a belief is justified is to say that it is rational or reasonable—in other words, it is based on good reasons. Some epistemologists insist on drawing distinctions between justification, rationality, and reasonableness. I am not opposed to drawing such distinctions; indeed, I’ll draw a distinction between ideal and nonideal senses of these epistemic terms. My claim here is just that these are theoretical distinctions, which are not obviously reflected in our ordinary use of the terms.
‘Epistemic justification’ is a philosopher’s idiom, but this technical terminology is designed to mark an intuitive distinction. An epistemically justified belief is a belief that is justified by evidence—that is, by epistemic reasons, rather than practical reasons. If you believe that God exists because you want to avoid the threat of eternal damnation, then you believe for practical reasons, rather than epistemic reasons. You have no epistemic reason to believe that God exists because your desire gives you no evidence that God exists. Arguably, beliefs cannot be justified by practical reasons at all, since these are the “wrong kinds of reasons” to justify belief. In any case, it seems clear that you cannot know anything on the basis of practical reasons. Epistemic justification is the kind of justification that is necessary for knowledge. All knowledge is justified by evidence.
We can say that all knowledge is justified by evidence without prejudging questions about the nature of evidence. Given the framework of evidentialism, we can define your evidence in terms of its epistemic role in determining which propositions you have epistemic justification to believe. Different theories of evidence disagree about which facts play this epistemic role. This book argues for a phenomenal conception of evidence, according to which your evidence is exhausted by the facts about your current phenomenally individuated mental states. Given that epistemic justification is determined by evidence, this yields the central organizing thesis of the book:
Phenomenal Mentalism: Necessarily, which propositions you have epistemic justification to believe at any given time is determined solely by your phenomenally individuated mental states at that time.
These “phenomenally individuated” mental states include not only your experiences, which are individuated by their phenomenal character, but also your standing beliefs, desires, and intentions, which are individuated by their dispositions to cause certain kinds of phenomenal experiences under phenomenal conditions. At the same time, this criterion excludes your “subdoxastic” mental states, which are individuated by their role in unconscious computational processes, and all your mental states that are externally individuated by their relations to the external world.
The book as a whole provides an extended argument for phenomenal mentalism. The book is divided into two parts, which converge on phenomenal mentalism from opposite directions. Part I argues “from below” by using intuitions about cases to build a more general argument for phenomenal mentalism. Part II, in contrast, argues “from above” by using general epistemic principles (such as the JJ principle) to argue for phenomenal mentalism. These two argumentative strategies are mutually reinforcing. The judgments about cases provide intuitive support for the general principles, while the general principles provide theoretical support for the judgments about cases. The result is a theory of epistemic justification that achieves stable reflective equilibrium between intuitions about cases and general principles.
Here, in summary form, are some intuitive considerations that are adduced in support of phenomenal mentalism in the first part of the book:
• It explains why perceptual experience provides epistemic justification for beliefs about the external world.
• It explains why unconscious perceptual information in blindsight doesn’t provide epistemic justification for beliefs about the external world.
• It explains why you and your phenomenal duplicates in skeptical scenarios have epistemic justification to believe the same propositions to the same degree.
• It explains why your beliefs, as well as your perceptual experiences, can affect which propositions you have epistemic justification to believe.
Much of the first part of the book is devoted to motivating these claims, defending them against objections, and explaining how they support phenomenal mentalism.
The second part of the book is designed to address an explanatory challenge for phenomenal mentalism. Why are only phenomenally individuated mental states capable of determining epistemic justification? Phenomenal mentalism is intuitively compelling, but the challenge is to provide some kind of theoretical understanding of why it should be true. The second part of the book develops a form of accessibilism about epistemic justification that is designed to explain why phenomenal mentalism is true. Phenomenal accessibilism is the view that results from combining phenomenal mentalism with accessibilism in the manner that I’ll explain.
My answer to the explanatory challenge appeals to a threefold connection between epistemic justification, phenomenal consciousness, and introspection. What’s special about phenomenally individuated mental states is that they are “luminous” in the sense that you’re always in a position to know by introspection whether or not you’re in those mental states. Moreover, accessibilism makes it plausible that only luminous mental states provide epistemic justification for belief. This explains why only phenomenally individuated mental states provide epistemic justification for belief. Here is the argument in outline:
(1) Only introspectively luminous mental states can provide epistemic justification for belief.
(2) Only phenomenally individuated mental states are introspectively luminous.
(3) Therefore, only phenomenally individuated mental states can provide epistemic justification for belief.
I’ll now briefly comment on each of these premises.
The second premise articulates an epistemic connection between introspection and phenomenal consciousness. Although I’ll motivate this premise and defend it against objections in chapter 5, I won’t attempt to derive it from more fundamental assumptions, since I very much doubt that this can be done. Some philosophers argue that we can explain the epistemic connection between introspection and phenomenal consciousness in terms of metaphysical claims about the nature of phenomenal consciousness. For example, one influential view says that phenomenal consciousness is introspectively luminous because it (and it alone) is “self-presenting” in the sense that it constitutes a primitive form of awareness of itself. In my view, however, there is no good motivation for this claim about the nature of phenomenal consciousness. Indeed, I suspect that the epistemic connection between introspection and phenomenal consciousness says much more about the nature of introspection than it does about the nature of phenomenal consciousness. My strategy is to use this epistemic connection as my starting point in explaining a more general connection between epistemic justification and phenomenal consciousness. Of course, this still leaves me with the burden of motivating the first premise.
The first premise says that only introspectively luminous mental states can provide epistemic justification for belief. I motivate this premise by arguing for a form of accessibilism about epistemic justification:
Accessibilism: Epistemic justification is luminously accessible in the sense that, necessarily, you’re always in a position to know which propositions you have epistemic justification to believe at any given time.
If accessibilism is true, then it stands in need of explanation. What explains how you’re always in a position to know which propositions you have epistemic justification to believe? The best explanation, or so I will argue, is that epistemic justification is determined by introspectively luminous facts about your phenomenally individuated mental states. Whenever you’re in some phenomenally individuated mental state M that gives you epistemic justification to believe that p, you’re thereby in a position to know the following:
(1) I’m in M [by introspection].
(2) If I’m in M, then I have justification to believe that p [by a priori reasoning].
(3) Therefore, I have justification to believe that p [by deduction from (1) and (2)].
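This three-step inference has the form of a single modus ponens, which can be rendered as a Lean sketch; `InM` (“I’m in mental state M”) and `JustifiedP` (“I have justification to believe that p”) are placeholder propositions of my own invention:

```lean
-- The accessibilist inference schema.
theorem position_to_know (InM JustifiedP : Prop)
    (h1 : InM)                -- (1) known by introspection
    (h2 : InM → JustifiedP)   -- (2) known by a priori reasoning
    : JustifiedP :=           -- (3) by deduction from (1) and (2)
  h2 h1
```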
The upshot is that phenomenal mentalism is an essential part of the best explanation of accessibilism. If accessibilism can be motivated and defended on independent grounds, then phenomenal mentalism is supported by inference to the best explanation.
In the second part of the book, I give three distinct arguments for accessibilism. First, it explains and vindicates the intuitions about cases that I use to motivate phenomenal mentalism in the first part of the book. Second, it is needed for explaining the irrationality of epistemic akrasia—that is, roughly, believing things you believe you shouldn’t believe. And third, it follows from the plausible thesis that epistemic justification is what gives a belief the potential to survive an ideally rational process of critical reflection.
I also defend accessibilism against a series of influential objections. These include Timothy Williamson’s anti-luminosity argument, Ernest Sosa’s version of the problem of the speckled hen, David Christensen’s arguments from misleading higher-order evidence, Hilary Kornblith’s arguments against the connection between epistemic justification and reflection, and Eric Schwitzgebel’s arguments for the unreliability of introspection. A central theme in my responses to all these objections is that we need to respect a distinction between ideal and nonideal standards of epistemic rationality. I argue that this is not just an ad hoc move designed to avoid objections, but is independently well motivated: in effect, everyone needs some version of this distinction. What is distinctive about my account is not the appeal to ideal rationality itself, but rather my specific account of what ideal rationality consists in.
Any version of evidentialism says that ideal rationality is a matter of proportioning your beliefs to the evidence. But different versions of evidentialism disagree about which propositions are supported by your evidence. According to accessibilism, you’re always in a position to know what your evidence is and what it supports. Otherwise, your evidence can make it rational to be epistemically akratic—that is, to believe things that you believe you shouldn’t believe. Plausibly, however, epistemic akrasia is never permitted by ideal standards of rationality. The best explanation of accessibilism is that your evidence is exhausted by introspectively luminous facts about your mental states. But only phenomenally individuated facts about your mental states are introspectively luminous in the requisite way. Hence, phenomenal mentalism emerges as an inevitable consequence of an independently motivated account of the nature of ideal rationality.
1.6. Chapter Summaries
This book is divided into two parts. The first part is more closely engaged with issues in the philosophy of mind, including debates about the role of phenomenal consciousness in theories of mental representation, perception, cognition, and introspection. The second part is more exclusively concerned with issues in epistemology, including the debate between internalism and externalism about the nature of epistemic justification. This division is somewhat artificial, however, since one of the main aims of the book is to highlight interconnections between epistemology and philosophy of mind. Throughout this book, epistemology informs and is informed by philosophy of mind.
I hope you will read this book from beginning to end. If you’re looking to read more selectively, however, here is some advice about how to proceed. While the book as a whole builds a cumulative argument for the epistemic role of consciousness, each individual chapter is relatively self-standing, and each of the two parts can be read on its own. If you’re primarily interested in issues in the philosophy of mind, including debates about the nature and epistemic role of phenomenal consciousness, then it makes sense to start with chapters 1–7. If you’re primarily interested in issues in epistemology, including the debate between internalism and externalism about epistemic justification, then you could just as well start with chapters 6–12. Either way, chapters 6 and 7 are essential to understanding the overall contours of the view. And if you want to know how it all fits together, then you should probably read the whole thing.
Chapter 2: Representation
Chapter 2 explores the relationship between consciousness and mental representation. Section 2.1 argues for a version of representationalism, the thesis that consciousness is a kind of mental representation. Section 2.2 argues against the representational grounding thesis, which says that all unconscious mental representation is grounded in consciousness. Section 2.3 argues that the representational grounding thesis is not supported by the failure of the program of naturalizing mental representation. Section 2.4 examines the conceptual grounding thesis, which says that all conceptual representation is grounded in consciousness. The role of consciousness in thought is best explained as a consequence of the epistemic role of consciousness together with epistemic constraints on conceptual thought. Section 2.5 presents the epistemic grounding thesis, which says that all mental representation that provides epistemic justification for belief is grounded in consciousness. This thesis sets the agenda for the rest of the book.
Chapter 3: Perception
Chapter 3 explores the epistemic role of consciousness in perception. Section 3.1 argues that unconscious perceptual representation in blindsight cannot justify beliefs about the external world. Section 3.2 argues that this is because phenomenal consciousness, rather than access consciousness or metacognitive consciousness, is necessary for perceptual representation to justify belief. Section 3.3 argues that perceptual experience has a distinctive kind of phenomenal character—namely, presentational force—that is not only necessary but also sufficient for perception to justify belief. Section 3.4 uses a version of the new evil demon problem to argue that the justifying role of perceptual experience supervenes on its phenomenal character alone. Section 3.5 defends this supervenience thesis against the objection that phenomenal duplicates who perceive distinct objects thereby have justification to believe different de re propositions.
Chapter 4: Cognition
Chapter 4 explores the epistemic role of consciousness in cognition. Section 4.1 argues that all beliefs provide epistemic justification for other beliefs. Section 4.2 contrasts beliefs with subdoxastic states, which provide no epistemic justification for belief. Section 4.3 argues that this epistemic distinction between beliefs and subdoxastic states cannot be explained in terms of the functional criterion of inferential integration. Section 4.4 argues that this epistemic distinction must be explained in terms of the phenomenal criterion of conscious accessibility: the contents of beliefs are accessible to consciousness as the contents of conscious judgments. Section 4.5 argues that conscious judgments have phenomenal contents that supervene on their phenomenal character. Section 4.6 concludes with some proleptic remarks to explain why beliefs can provide epistemic justification for other beliefs only if their contents are accessible to consciousness.
Chapter 5: Introspection
Chapter 5 explores the epistemic role of consciousness in introspection. Section 5.1 presents a simple theory of introspection, which says that some mental states provide introspective justification that puts you in a position to know with certainty that you’re in those mental states. Section 5.2 defends the simple theory against Eric Schwitzgebel’s arguments for the unreliability of introspection. Section 5.3 motivates the simple theory on the grounds that it explains a plausible connection between epistemic rationality and introspective self-knowledge. Section 5.4 argues that all and only phenomenally individuated mental states fall within the scope of the simple theory of introspection. Section 5.5 explores the role of consciousness in explaining our introspective knowledge of what we believe. Section 5.6 concludes with some pessimism about the prospects for explaining the connection between consciousness and introspection in more basic terms.
Chapter 6: Mentalism
Chapter 6 develops a theory of epistemic justification designed to capture the epistemic role of phenomenal consciousness: namely, phenomenal mentalism. Section 6.1 defines epistemic justification within the framework of evidentialism. Section 6.2 defines mentalism about epistemic justification and explores its connection with evidentialism. Section 6.3 argues for phenomenal mentalism, the thesis that epistemic justification is determined solely by your phenomenally individuated mental states, by appealing to intuitions about clairvoyance, super-blindsight, and the new evil demon problem. Section 6.4 argues for a phenomenal conception of evidence, which says that your evidence is exhausted by facts about your current phenomenally individuated mental states, and defends it against Timothy Williamson’s arguments for the E = K thesis. Finally, section 6.5 outlines an explanatory challenge for phenomenal mentalism, which sets the agenda for the second part of the book.
Chapter 7: Accessibilism
Chapter 7 answers the explanatory challenge by combining phenomenal mentalism with accessibilism to yield phenomenal accessibilism. Section 7.1 defines accessibilism as the thesis that epistemic justification is luminous in the sense that you’re always in a position to know which propositions you have epistemic justification to believe. Section 7.2 argues that phenomenal mentalism is part of the best explanation of accessibilism: if accessibilism can be motivated on independent grounds, then phenomenal mentalism is supported by inference to the best explanation. Sections 7.3 and 7.4 use accessibilism to motivate the intuitions about cases that support phenomenal mentalism—namely, clairvoyance, super-blindsight, and the new evil demon problem. Finally, section 7.5 answers the explanatory challenge for phenomenal mentalism: epistemic justification is determined by your current phenomenally individuated mental states because they are luminous by introspection.
Chapter 8: Reflection
Chapter 8 motivates accessibilism by appealing to William Alston’s hypothesis that the value of epistemic justification is tied to reflection, an activity that is the distinctive mark of persons who can be held responsible for their beliefs and actions. Section 8.1 argues that epistemic justification is what makes our beliefs stable under an idealized process of reflection. Section 8.2 uses this proposal in arguing for the JJ principle, which says that you have justification to believe a proposition if and only if you have justification to believe that you have justification to believe it. Sections 8.3–8.6 defend this proposal against a series of objections raised by Hilary Kornblith: the overintellectualization problem, the regress problem, the empirical problem, and the value problem. Section 8.7 concludes with some reflections on the debate between internalism and externalism about epistemic justification.
Chapter 9: Epistemic Akrasia
Chapter 9 argues that accessibilism is needed to explain the epistemic irrationality of epistemic akrasia—roughly, believing things you believe you shouldn’t believe. Section 9.1 defines epistemic akrasia and separates questions about its possibility and its rational permissibility. Section 9.2 argues from the premise that epistemic akrasia is never rationally permissible to the conclusion that the JJ principle is true. The remaining sections motivate the premise that epistemic akrasia is never rationally permissible: section 9.3 appeals to an epistemic version of Moore’s paradox, section 9.4 to the slogan that knowledge is the aim of belief, and section 9.5 to the connection between epistemic justification and reflection.
Chapter 10: Higher-Order Evidence
Chapter 10 explores a puzzle about epistemic akrasia: if you can have misleading higher-order evidence about what your evidence supports, then your total evidence can make it rationally permissible to be epistemically akratic. Section 10.1 presents the puzzle and three options for solving it: Level Splitting, Downward Push, and Upward Push. Section 10.2 argues that we should opt for Upward Push: you cannot have misleading higher-order evidence about what your evidence is or what it supports. Sections 10.3 and 10.4 defend Upward Push against David Christensen’s objection that it licenses irrational forms of dogmatism in ideal and nonideal agents alike. Section 10.5 responds to his argument that misleading higher-order evidence generates rational dilemmas in which you’re guaranteed to violate one of the ideals of epistemic rationality. Section 10.6 concludes with some general reflections on the nature of epistemic rationality and the role of epistemic idealization.
Chapter 11: Luminosity
Chapter 11 defends the thesis that some phenomenal and epistemic conditions are luminous in the sense that you’re always in a position to know whether or not they obtain. Section 11.1 draws a distinction between epistemic and doxastic senses of luminosity and argues that some conditions are epistemically luminous even if none are doxastically luminous. Section 11.2 uses this distinction in solving Ernest Sosa’s version of the problem of the speckled hen. The same distinction is applied to Timothy Williamson’s anti-luminosity argument in section 11.3, his argument against epistemic iteration principles in section 11.4, and his argument for improbable knowing in section 11.5. Section 11.6 concludes by explaining why this defense of luminosity is not merely a pointless compromise.
Chapter 12: Seemings
Chapter 12 concludes the book by contrasting phenomenal accessibilism with Michael Huemer’s phenomenal conservatism. Section 12.1 defines phenomenal conservatism as the global principle that you have epistemic justification to believe a proposition just when it seems strongly enough on balance to be true. Section 12.2 explains the concept of a seeming and outlines an argument that there are no nonperceptual seemings. Section 12.3 argues that phenomenal conservatism imposes implausible restrictions on evidence: all seemings are evidence, but not all evidence is seemings. Section 12.4 argues that phenomenal conservatism gives an overly simplistic account of the evidential support relation: it cannot explain why epistemic rationality requires not only perceptual coherence, but also introspective coherence, logical coherence, and metacoherence. Section 12.5 argues that phenomenal accessibilism is needed to explain these essential characteristics of epistemically rational thinkers. Section 12.6 concludes by summarizing why phenomenal accessibilism is superior to phenomenal conservatism.
(4) Milner and Goodale (2006: 221–228) give examples of unconscious perception in the ventral stream, which are hard to square with Dretske’s proposal. They propose instead that the function of consciousness is to serve as an input to working memory.
(6) In other words, we should understand the zombie challenge in terms of epistemic possibility, rather than metaphysical possibility. Chalmers (2002a) defines an epistemic possibility as a hypothesis about the actual world that is ideally conceivable in the sense that it cannot be ruled out conclusively by any ideal process of a priori reasoning.
(7) I’m assuming what Block (2002: 392) and Chalmers (2003: 221) call phenomenal realism, the thesis that the phenomenal concept of consciousness cannot be defined a priori in purely physical or functional terms. This is compatible with a posteriori (but not a priori) versions of physicalism and functionalism about phenomenal consciousness.
(8) As David Lewis (1972) explains, we start by using our mental terms to state the causal connections between mental states, environmental inputs, and behavioral outputs. Next, we generate the “Ramsey sentence” for the theory by systematically replacing each mental term with a variable bound by an existential quantifier. The result of this technique is a reductive analysis of our mental terms in the form of a complex definite description that specifies the causal role of our mental states in nonmental terms.
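Lewis’s construction can be illustrated schematically; the one-term theory and the symbols below are invented for illustration only:

```latex
% Toy illustration of Ramsification for a single mental term, "pain".
% Step 1: the folk theory, with the mental term occurring in it: T(pain).
% Step 2: replace the term with an existentially bound variable.
% Step 3: define the term as the occupant of the causal role, using a
%         definite description (the iota operator).
\[
  T(\mathit{pain})
  \;\Rightarrow\;
  \exists x\, T(x)
  \;\Rightarrow\;
  \mathit{pain} = \iota x\, T(x)
\]
```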
(9) Chalmers (1996) is officially agnostic on this question. Although some of his remarks are friendly toward bifurcationism, he argues that some aspects of cognition are conceptually tied to phenomenal consciousness, including the contents of perceptual and phenomenal belief. This is a central theme in his later work, including Chalmers (2003) and (2004).
(10) Psychological theories of consciousness are proposed by Armstrong (1968), Dretske (1995), Tye (1995), Lycan (1996), Rosenthal (1997), and Carruthers (2000), although many of these theories are advanced as empirical conjectures, rather than conceptual analyses.
(11) Those who defend causal theories of mental representation include Dretske (1981), Millikan (1984), Stalnaker (1984), and Fodor (1987). See Stich and Warfield (1994) for a volume of essays on the topic and Loewer (1997) for a critical survey.
(12) For first-order representational theories, see Harman (1990), Tye (1995), Dretske (1995), and Jackson (2003). For higher-order representational theories, see Armstrong (1968), Rosenthal (1997), and Carruthers (2000). For self-representational theories, see Kriegel (2009) and the essays in Kriegel and Williford (2006).
(13) Michael Tye (1995) combines a causal theory of mental representation with a representational theory of phenomenal consciousness, but he endorses a version of the phenomenal concept strategy to block the inference from the premise that zombies are conceivable to the conclusion that zombies are possible. This is, in effect, to abandon the project of solving the hard problem by closing the explanatory gap.
(14) McGinn (1989: 235) calls this the “converse Brentano thesis,” but he doesn’t go so far as to endorse it. Proponents include Searle (1990), Strawson (2008), Kriegel (2011), Horgan and Graham (2012), and Mendelovici (2018).
(15) Those who defend some essential connection between consciousness and thought include McDowell (1994), Davies (1995), Brewer (1999), Campbell (2002), Chalmers (2003), Pautz (2013), and Dickie (2015).
(17) Reliabilism was originally proposed as a theory of knowledge by Armstrong (1973), Goldman (1976), Dretske (1981), and Nozick (1981). Goldman (1979) was the first to extend reliabilism from knowledge to epistemic justification.
(19) Zangwill (2005) defines “normative functionalism” as the thesis that beliefs and desires have normative essences, but he distinguishes the strong thesis that their whole essence is normative from the weaker thesis that their normative essence is a consequence of some more basic essence. My view is consistent with a weak form of normative functionalism on which the normative essence of belief and desire is consequential upon a more basic essence that is defined in terms of phenomenal dispositions.
(20) Lee (2013, 2019) argues that if reductive physicalism is true, then consciousness has no unique normative significance. In my view, however, his argument relies on a dubious premise about how evaluatively significant distinctions must be grounded in fundamental physical reality. I hope to discuss this argument in future work.