There Are Two Hard Problems of Consciousness, Not One

David Chalmers’ essay on the hard problem of consciousness has sparked many analyses, arguments, and counterclaims. Here I explain why we should think about the hard problem as two different problems, rather than one.

One problem is the "ontological problem" of how it might be possible to engineer the felt experience of being. The other is the "epistemological problem" of directly knowing another's primary experience.

Before diving into these two hard problems, let’s start by being clear about the difference between the “easy” and “hard” problems. The easy problems are the “neuro-cognitive” problems that provide a functional account for how we overtly behave the way we do.

Consider what we can surmise is happening if you are reading this post and thinking about it: (1) light patterns are coming off the screen and (2) flowing into your retina, where they are (3) translated into the “language” of neurobiological information. This incoming information is (4) routed back to the occipital lobe, where it is sorted further, integrated with higher-order processes, and (5) connected to your semantic-linguistic processing system. We can track all of this via the activity of your nervous system. And as psychologists, we can assess your "functional awareness and response" by asking questions about how well you processed the information. We can even monitor your affective system and see whether you were positively or negatively inclined toward what you are reading. All of this can be done from a “third person” perspective that adopts a cognitive-neuroscience functionalist view of human mental processes.
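To make this third-person, functionalist framing concrete, here is a minimal sketch in Python. Everything in it is a hypothetical simplification (the stage names, the `NeuralSignal` structure, and the `assess_functional_awareness` check are illustrative inventions, not a real model of the visual system); the point is only that every step can be described and checked entirely from the outside:

```python
from dataclasses import dataclass

@dataclass
class NeuralSignal:
    content: str   # what information the signal carries
    region: str    # where in the pipeline it currently "is"

def transduce(light_pattern: str) -> NeuralSignal:
    """Stages (1)-(3): light hits the retina and is translated
    into the 'language' of neurobiological information."""
    return NeuralSignal(content=light_pattern, region="retina")

def route_and_integrate(signal: NeuralSignal) -> NeuralSignal:
    """Stage (4): routed to the occipital lobe and integrated
    with higher-order processes."""
    return NeuralSignal(content=signal.content, region="occipital_lobe")

def semantic_processing(signal: NeuralSignal) -> str:
    """Stage (5): connected to the semantic-linguistic system,
    yielding a reportable gloss of the input."""
    return f"parsed meaning of: {signal.content}"

def assess_functional_awareness(report: str, probe: str) -> bool:
    """Third-person check: did the reader demonstrably process
    the probe content? (A stand-in for asking questions.)"""
    return probe in report

# A complete third-person, functional trace of "reading this post":
report = semantic_processing(route_and_integrate(transduce("text about consciousness")))
print(report)                                                # parsed meaning of: text about consciousness
print(assess_functional_awareness(report, "consciousness"))  # True

# Note: every step above is observable "from the outside"; nothing in
# this trace says whether any felt experience accompanies it.
```

The final comment is the crux: a trace like this is exactly what the "easy" problems deliver, and it is silent on whether anything is experienced along the way.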

The hard problem of consciousness refers to the fact that we can learn all of this and still not know for certain that you are not a "philosophical zombie." A philosophical zombie is a thought-experiment construct: an individual who looks like us and talks like us but has no inner life. This can be hard to wrap your head around, so some examples might help: think of robots like the Stepford Wives, or maybe Commander Data and the Borg from Star Trek. What is it like to be a Borg, from the inside? Maybe nothing. That said, surely the Borg would have information processing systems, such as working memory and central processing units. In other words, we could functionally analyze the Borg's behavior in terms of information processing, awareness, and response.

As I note in the post "10 Problems With Consciousness," there are a number of complicated issues swirling around. But here I am homing in on the hard problem and the key point that it really is two different problems. I note this in the 10 problems post, but since writing it, it has become clearer to me that we should definitely separate the hard problem into two distinct issues.

We can call the first problem the “ontological” problem. This is the problem of explaining how the brain actually produces the first-person experience of being. The common term for this problem is the “neuro-binding” problem. Consider that much of the “neuro-information processing” that goes on in your brain is nonconscious. So we can ask: What is the magic ingredient that turns the light of experience on? This can also be framed as an engineering problem: How do we build something that actually feels things?

Correlational research has yielded significant insights into how the brain might produce conscious experience. For example, I have found Dehaene’s work on the global neuronal workspace and the P3 ignition wave to be fascinating. In my opinion, it clearly advances our scientific knowledge of conscious perception. But even this work is only indirectly related to the ontological problem. It showed that certain waveforms were correlated with human conscious experience and access, but it did not really touch the problem of why and how those kinds of waves function to produce conscious experience. As far as I am aware, no one has a clue about the specific neuro-informational mechanics that produce our first-person experience of being. This is the ontological problem: a theory of what causes us to experience redness or hunger.

The second hard problem is the "epistemological" problem. This pertains to the fundamental difference between "first person" and "third person" viewpoints on knowing, as described by thinkers like Ken Wilber in his quadrants model. It works as follows: A third-person viewpoint is a view that can be taken by an external observer. An easy way to picture a third-person view is that it is anything that can be captured by a video camera. In contrast, the first-person view is the view from behind your eyes. This is fully "contained" within the individual and of course cannot be filmed by a camera.

This containment results in two important epistemological difficulties, which are mirror images of each other. I call this the “epistemological gap” because it pertains to how we can know what we know. The first is the problem of directly knowing another’s subjective experience; the problem is that it cannot be done. This is the problem of, “How do I know that you see red the way I see red?” It also relates to our knowledge of consciousness in other animals, which we can only know indirectly. This is the point Nagel makes in his famous essay “What Is It Like to Be a Bat?” The second issue is the inversion of the first: as individuals, we are in some ways trapped in our subjective perceptual experience of the world. That is, the only way I can know about the world is through my subjective theater of experience.

I am keen on this issue because it carries some important implications for both the nature of scientific knowledge and its limits. Scientific knowledge is framed by an “objective” third-person view of the world, whereby the data are publicly available to observers. The goal of science is to build models/theories of reality that are then tested via measurement and experiment. This means that, framed in “cognitive functional” terms, consciousness can be readily analyzed via science. Here is how Dehaene frames the issue in terms of his own research, in which findings are anchored to measured behaviors:

In that sense, the behaviorists were right: as a method [for a pure, truth revealing procedure], introspection provides a shaky ground for a science of psychology, because no amount of introspection will tell us how the mind works. However, as a measure, introspection still constitutes… the only platform on which to build a science of consciousness, because it supplies a crucial half of the equation—namely, how subjects feel about some experience (however wrong they are about the ground truth). To attain a scientific understanding of consciousness, we cognitive neuroscientists "just" have to determine the other half of the equation: Which objective neurobiological events systematically underlie a person's subjective experience?

Despite these advances in the science of consciousness, science still can’t bridge directly into the specific, unique experience of consciousness from the first-person perspective. Put in personal terms, I have much more direct knowledge of my subjective phenomenology than science could ever have. In saying this, I need to note that this does not mean I can explain why I do what I do better than science; that is a different issue.

But what it does mean is that the unique experience of being-in-the-world for each of us as particular individuals is in some important ways an “extra-scientific” domain. That is, our idiographic experience of being resides outside the purview of scientific knowledge. It is worth noting that there are other “extra-scientific” domains, such as questions of ethics and morality. Science tells us what likely “is” from a third-person general point of view; that is, it builds models of the behavior of the universe across different dimensions and levels of analysis. But science does not tell us what ought to be. Nor does it give us a definitive theory of the unique, idiographic experience of being-in-the-world from a first-person perspective. Indeed, science struggles to do this both ontologically and epistemologically.

Currently, when I talk about this unique, particular first-person domain, I use the language of the soul and spirit. (The educational philosopher and "metapsychologist" Zak Stein uses similar terminology.) In using these terms, I don’t mean them in the supernatural sense of an entity that will leave my body after death. Rather, I use them to talk about our unique selves from the first-person point of view. In this language system, my soul is my unique lifeworld and its everyday trials and triumphs, whereas my spirit refers more to transcendental ethical concerns and how I might connect my life quest to them. My point here is that the soul/spirit, defined this way, plays by a different set of rules than the language game of science. I believe the differences between the language games or domains of science/behavior, soul/spirit, and morality/ethics are crucial for us to keep in mind as we hunt for a more consilient scientific humanistic philosophy that can guide humanity in the 21st century.