Psychology and neuroscience have made staggering progress over the last century, and will doubtless continue to discover fascinating things about the mind. But there’s a common feeling that however much we might unearth about the workings of the brain, the processing of information, the production of behaviour and so on, that still won’t touch the ‘hard problem’: what philosophers sometimes call ‘phenomenal consciousness’, ‘qualia’, ‘what it’s like’ or ‘the way it feels’.
Some then think that this means consciousness must be non-physical; others accept that it may be physical yet utterly mysterious. The basic idea is that it seems coherent that there could be a universe physically identical to our own but without phenomenal consciousness. This is known as the ‘zombie hypothesis’, in which zombies are beings physically just like us – so they have all the perceptual, cognitive and behavioural abilities we do – but they wholly lack any phenomenally conscious experiences. There is nothing it’s like to be a zombie.
David Chalmers, who coined the phrase ‘hard problem’, takes it as a “conceptual point that the explanation of functions does not suffice for the explanation of experience”. Well, this tells us about a certain conception of consciousness - and I think this is really a misconception.
Concepts can cohere for either of two reasons: because they are accurate representations of what is, or could be, true; or because they are open-ended and amorphous, too vague to clash with one another. So what one finds ‘conceptually coherent’ is largely a matter of the concepts one has, and it is not enough to say that something seems possible; one has to explain how it could be actual.
By trying to flesh out the notion of zombies, we can start to see that any physical (and therefore perceptual, cognitive and behavioural) replicas of us would be every bit as conscious as we are.
A zombie has several different sense modalities, by which it receives information about the world. It then processes this information in terms of the regularities it has learned exist among the objects in its environment, and categorises the stimuli as trees, chairs, televisions, parents, books, rainbows, broken legs and so forth. It will then act on its sensory judgements to fulfil its desires, perhaps after weighing up a number of alternatives and balancing immediate satisfaction against long-term plans. It has fluent use of language, and can access its own informational states with great speed and accuracy, making them subject to further cognitive processes. But it is not phenomenally conscious.
A zombie may talk about love, religion, the meaning of life, consciousness, and the possibility of zombies. It may pay close attention to great art, and then discuss it intently. When it finds its house has been broken into, its blood pressure will rise, it will clench its fists and teeth, it will shout words of outrage and anger that it does not feel, and it will ruminate on the event for a long time to come. It will go to wine-tasting evenings, paying hard-earned money to swill liquid around its mouth and then spit it out, on the grounds that it likes the taste - not that it can taste anything. On hearing a good joke, it will laugh so much that it becomes aware of a cramped state of its abdomen eliciting mild aversion. It will sincerely claim that the hilarity causes pain, although it can feel neither. After the death of its spouse, it will cry uncontrollably, lose motivation to get on with life, have little appetite for food, obsessively access memories of the deceased, and complain that it sees no point in living because it cannot stand the anguish that it does not in fact feel. Because, of course, it is not phenomenally conscious.
A zombie would pass the Turing test with flying colours, drawing on its capacity for self-monitoring and higher-order awareness of its own cognitive and perceptual states. Daniel Dennett argues:
[It] would (unconsciously) believe that it was in various mental states - precisely the mental states it is in a position to report about should we ask it questions. It would think it was conscious, even if it wasn’t! Any entity that could pass the Turing test would operate under the (mis?)apprehension that it was conscious.
The conclusion Dennett is pushing us towards is that the zombie example shows that the supposed ‘absent qualia’ have absolutely no effect on whether we claim or believe (beliefs are functional states and hence present in zombies) that we are conscious, or have such concepts as ‘consciousness’, ‘qualia’, and ‘phenomenology’. There is no reason, then, to think that these ‘zombies’ lack anything that we possess, or that they are any less conscious than us.
Chalmers picks up this argument, which he calls “the paradox of phenomenal judgment”, and runs with it. If a nonconscious entity such as a zombie or an incredibly sophisticated robot were asked how it knew that there was, say, a red tricycle in front of it, it would reply “I know there is a red tricycle because I see it there”. When asked how it knows it is seeing it, it would say “I just see it”.
When we ask how it knows that the tricycle is red, it would say the same sort of thing that we do: ‘It just looks red.’ If such a system were reflective, it might start wondering about how it is that things look red, and about why it is that red just is a particular way, and blue another. From the system’s point of view [this] is just a brute fact... Of course from our vantage point we know that this is just because red throws the system into one state, and blue throws it into another; but from the machine’s point of view this does not help.
As it reflected, it might start to wonder about the very fact that it seems to have some access to what it is thinking, and that it has a sense of self. ...[It] might very soon start wondering about the mysteries of consciousness... ‘Why is it that heat feels this way?’; ‘Why am I me, and not someone else?’; ‘I know my processes are just electronic circuits, but how does this explain my experience of thought and perception?’
But Chalmers does not accept that we should attribute phenomenal consciousness to such a system on the grounds that it thinks of itself as conscious - has phenomenal beliefs - just as we do. As a property dualist, he takes consciousness to be an explanandum above and beyond this: “It therefore does not matter if it turns out that consciousness is not required to do any work in explaining other phenomena. Our evidence for consciousness never lay with these other phenomena in the first place”. On his view, our phenomenal beliefs are not caused by our conscious experiences - and are thus present (but false) in zombies - yet in us they are justified. I know I am not a zombie “because of my direct first-personal acquaintance with my experiences”. Acquaintance, he says, “is to stand in a relationship to [the experience] more primitive than belief: it provides evidence for our beliefs, but it does not in itself constitute belief”.
I find Chalmers’ response to the paradox of phenomenal judgement far less convincing than his statement of it. ‘Acquaintance’ is defined as a relation between a conscious subject and an experience, not as an intrinsic property. But relational properties are inessential to the entities that bear them: the experience could exist without standing in the acquaintance relation. It is therefore possible for there to be an event or state which is one of one’s conscious experiences, but of which one is wholly unaware. I take this to be a reductio ad absurdum of the notion.
Returning to the example of the entity that perceives and introspects ‘nonconsciously’, Chalmers argues that when it distinguishes between colours:
All that [its] central processes have access to is the color information itself, which is merely a location in a three-dimensional information space...
Indeed, as far as central processing is concerned, it simply finds itself in a location in this space. The system is able to make distinctions, and it knows it is... but it has no idea how it does it...
It is natural to suppose that [such] a system... will simply label the states as brutely and primitively different, differing in their ‘quality.’ Certainly, we should expect these differences to strike the system in an ‘immediate’ way: it is thrown into these states which in turn are immediately available for the direction of later processing; there is nothing inferential, for example, about its knowledge of which state it is in. And we should expect these states to be quite ‘ineffable’: the system lacks access to any further relevant information, so there is nothing it can say about the states beyond pointing to their similarities and differences with each other, and to the various associations they might have.
But rather than conclude that this is all that is going on in our own case, Chalmers repeatedly insists that consciousness is something more than phenomenal judgement, and that ‘acquaintance’ is our infallible evidence for it.
He seems to be driven by a conviction that to say that consciousness is nothing more than these phenomenal judgements is to say that consciousness is nothing, full stop. Given such a conviction, his manoeuvres are perhaps understandable. But he has no non-question-begging arguments for this conviction (indeed it seems clearly false), and I believe that phenomenal judgements are quite adequate as a conception of conscious experience.
For example, the last major passage cited from Chalmers showed that the purely physical entity with nothing more to its mind than cognitive processes (including phenomenal judgements) does not know how it finds itself in perceptual states - but neither do we. Nor can it describe these states other than in terms of the ways they are similar to and different from each other - as is the case with us. Paul Churchland develops this take on ineffability (accepting it, whereas Chalmers rejects it) by introducing the notion of discriminational simples:
These are the features of the world where one is unable to say how it is that one discriminates one such feature from another; one simply can…
Such features must exist, if only to prevent an infinite regress of features discriminated by constituting subfeatures discriminated by constituting sub-subfeatures, and so on. ...Given any person at any time, there must be some set of features whose spontaneous or noninferential discrimination is currently basic for that person... In short, there must be something that counts, for that person, as a set of inarticulable qualia.
Such basic discriminatory states are to be expected in any physical organism which perceives; there is nothing special, nonphysical, or inexplicable about them. The ‘ineffable qualia’ which so seem to evade scientific explanation or objective description are simply a necessary consequence of the degree of access introspection can have to perceptual abilities.
Physically instantiated cognitive functioning (of a certain degree of sophistication) will lead to having concepts of consciousness, phenomenal character, and so forth, and to applying these concepts sincerely to the internal representational states that one becomes aware of via the operation of attention. Anything functioning like this will genuinely believe that it is conscious, and I think that these phenomenal judgements are the only phenomena of consciousness that we have reason to believe in.
Furthermore, this hypothesis can resolve the issue of how to relate experiences to the experiencer – an issue that can lead, via notions like ‘acquaintance’, to the infinite regress of the homunculus fallacy.
Representational states, arising as a result of perception, exist in the brain. When the deployment of attention to the stimulus - or even to the state itself - brings such a state into awareness, the information that it carries can spread to higher functional areas of the brain, where it is conceptualised (so any conscious state has conceptual content). Part of this process, at least, will involve the use of phenomenal concepts. The state, when conceived of phenomenally, is phenomenal. So rather than having ‘acquaintance’ with an ‘experience’, one has higher-order awareness of a sensory representational state - and this just is an experience. This is to deny the view of some (such as Ned Block and Michael Tye) that one can have an experience without awareness; but it is not to say that an experience must be experienced in order to exist (which creates the intrinsic/relational difficulty and also threatens a regress). It is to say that a representation must be attentionally singled out in order for a conscious experience to exist.
[No doubt this could be better written, and better argued, but by undergraduate standards I still think it’s not bad…]