Psychology and neuroscience have made staggering progress over the last century, and will doubtless continue to discover fascinating things about the mind. But there’s a common feeling that however much we unearth about the workings of the brain, the processing of information, the production of behaviour and so on, it still won’t touch the ‘hard problem’: what philosophers variously call ‘phenomenal consciousness’, ‘qualia’, ‘what it’s like’ or ‘the way it feels’.
Some then think that this means consciousness must be non-physical; others accept that it may be physical yet utterly mysterious. The basic idea is that it seems coherent that there could be a universe physically identical to our own but without phenomenal consciousness. This is the ‘zombie hypothesis’: zombies are beings physically just like us – so they have all the perceptual, cognitive and behavioural abilities we do – but they wholly lack any phenomenally conscious experiences. There is nothing it’s like to be a zombie.
David Chalmers, who coined the phrase ‘hard problem’, takes it as a “conceptual point that the explanation of functions does not suffice for the explanation of experience”. Well, this tells us about a certain conception of consciousness - and I think this is really a misconception.
Concepts can cohere for either of two reasons: because they accurately represent what is, or could be, true; or because they are open-ended and amorphous, too vague to clash with one another. What one finds ‘conceptually coherent’ is therefore largely a matter of the concepts one has. It is not enough to say that something seems possible; one has to explain how it could be actual.
By trying to flesh out the notion of zombies, we can start to see that any physical (and therefore perceptual, cognitive and behavioural) replicas of us would be every bit as conscious as we are.
A zombie has several different sense modalities, by which it receives information about the world. It then processes this information in terms of the regularities it has learned exist among the objects in its environment, and categorises the stimuli as trees, chairs, televisions, parents, books, rainbows, broken legs and so forth. It will then act on its sensory judgements to fulfil its desires, perhaps after weighing up a number of alternatives and balancing immediate satisfaction against long-term plans. It has fluent use of language, and can access its own informational states with great speed and accuracy, making them subject to further cognitive processes. But it is not phenomenally conscious.
A zombie may talk about love, religion, the meaning of life, consciousness, and the possibility of zombies. It may pay close attention to great art, and then discuss it intently. When it finds its house has been broken into, its blood pressure will rise, it will clench its fists and teeth, it will shout words of outrage and anger that it does not feel, and ruminate on the event for a long time to come. It will go to wine-tasting evenings, paying hard-earned money to swill liquid around the mouth and then spit it out, on the grounds that it likes the taste - not that it can taste anything. On hearing a good joke, it will laugh so much that it becomes aware of a cramped state of its abdomen eliciting mild aversion. It will sincerely claim that the hilarity causes pain, although it cannot feel either. After the death of its spouse, it will cry uncontrollably, lose motivation to get on with life, have little appetite for food, obsessively access memories of the deceased, and complain that it sees no point in living because it cannot stand the anguish that it does not in fact feel. Because, of course, it is not phenomenally conscious.
A zombie would pass the Turing test with flying colours, exploiting its capacity for self-monitoring and higher-order awareness of its own cognitive and perceptual states. Daniel Dennett argues:
[It] would (unconsciously) believe that it was in various mental states - precisely the mental states it is in a position to report about should we ask it questions. It would think it was conscious, even if it wasn’t! Any entity that could pass the Turing test would operate under the (mis?)apprehension that it was conscious.
The conclusion Dennett is pushing us towards is that the zombie example shows that the supposed ‘absent qualia’ have absolutely no effect on whether we claim or believe that we are conscious (beliefs being functional states, and hence present in zombies), or on whether we have such concepts as ‘consciousness’, ‘qualia’ and ‘phenomenology’. There is no reason, then, to think that these ‘zombies’ lack anything that we possess, or that they are any less conscious than us.
Chalmers picks up this argument, which he calls “the paradox of phenomenal judgment”, and runs with it. If a nonconscious entity such as a zombie or an incredibly sophisticated robot were asked how it knew that there was, say, a red tricycle in front of it, it would reply “I know there is a red tricycle because I see it there”. When asked how it knows it is seeing it, it would say “I just see it”.
When we ask how it knows that the tricycle is red, it would say the same sort of thing that we do: ‘It just looks red.’ If such a system were reflective, it might start wondering about how it is that things look red, and about why it is that red just is a particular way, and blue another. From the system’s point of view [this] is just a brute fact... Of course from our vantage point we know that this is just because red throws the system into one state, and blue throws it into another; but from the machine’s point of view this does not help.
As it reflected, it might start to wonder about the very fact that it seems to have some access to what it is thinking, and that it has a sense of self. ...[It] might very soon start wondering about the mysteries of consciousness... ‘Why is it that heat feels this way?’; ‘Why am I me, and not someone else?’; ‘I know my processes are just electronic circuits, but how does this explain my experience of thought and perception?’
But Chalmers does not accept that we should attribute phenomenal consciousness to such a system on the grounds that it thinks of itself as conscious - has phenomenal beliefs - just as we do. As a property dualist, he takes consciousness to be an explanandum above and beyond this: “It therefore does not matter if it turns out that consciousness is not required to do any work in explaining other phenomena. Our evidence for consciousness never lay with these other phenomena in the first place”. On his view, our phenomenal beliefs are not caused by our conscious experiences - which is why zombies have them too, albeit falsely - yet in us they are justified. I know I am not a zombie “because of my direct first-personal acquaintance with my experiences”. Acquaintance, he says, “is to stand in a relationship to [the experience] more primitive than belief: it provides evidence for our beliefs, but it does not in itself constitute belief”.
I find Chalmers’ response to the paradox of phenomenal judgement far less convincing than his statement of it. ‘Acquaintance’ is defined as a relation between a conscious subject and an experience, not as an intrinsic property of either. But relational properties are inessential to the entities that bear them. It would therefore be possible for there to be an event or state that is a conscious experience of one’s own, yet of which one is wholly unaware. I take this to be a reductio ad absurdum of the notion.
Returning to the example of the entity that perceives and introspects ‘nonconsciously’, Chalmers argues that when it distinguishes between colours:
All that [its] central processes have access to is the color information itself, which is merely a location in a three-dimensional information space...
Indeed, as far as central processing is concerned, it simply finds itself in a location in this space. The system is able to make distinctions, and it knows it is... but it has no idea how it does it...
It is natural to suppose that [such] a system... will simply label the states as brutely and primitively different, differing in their ‘quality.’ Certainly, we should expect these differences to strike the system in an ‘immediate’ way: it is thrown into these states which in turn are immediately available for the direction of later processing; there is nothing inferential, for example, about its knowledge of which state it is in. And we should expect these states to be quite ‘ineffable’: the system lacks access to any further relevant information, so there is nothing it can say about the states beyond pointing to their similarities and differences with each other, and to the various associations they might have.
But rather than conclude that this is all that is going on in our own case, Chalmers repeatedly insists that consciousness is something more than phenomenal judgement, and that ‘acquaintance’ is our infallible evidence for it.
He seems to be driven by a conviction that to say that consciousness is nothing more than these phenomenal judgements is to say that consciousness is nothing, full stop. Given such a conviction, his manoeuvres are perhaps understandable. But he has no non-question-begging arguments for this conviction (indeed it seems clearly false), and I believe that phenomenal judgements are quite adequate as a conception of conscious experience.
For example, the last major passage cited from Chalmers showed that the purely physical entity with nothing more to its mind than cognitive processes (including phenomenal judgements) does not know how it finds itself in perceptual states - but neither do we. Nor can it describe these other than in terms of the ways they are similar to and different from each other - as is the case with us. Paul Churchland develops this take on ineffability (accepting it, whereas Chalmers rejects it) by introducing the notion of discriminational simples:
These are the features of the world where one is unable to say how it is that one discriminates one such feature from another; one simply can…
Such features must exist, if only to prevent an infinite regress of features discriminated by constituting subfeatures discriminated by constituting sub-subfeatures, and so on. ...Given any person at any time, there must be some set of features whose spontaneous or noninferential discrimination is currently basic for that person... In short, there must be something that counts, for that person, as a set of inarticulable qualia.
Such basic discriminatory states are to be expected in any physical organism that perceives; there is nothing special, nonphysical or inexplicable about them. The ‘ineffable qualia’ that seem so stubbornly to evade scientific explanation and objective description are simply a necessary consequence of the limited degree of access introspection can have to our perceptual abilities.
Physically instantiated cognitive functioning (of a certain degree of sophistication) will lead to having concepts of consciousness, phenomenal character, and so forth, and to applying these concepts sincerely to the internal representational states that one becomes aware of via the operation of attention. Anything functioning like this will genuinely believe that it is conscious, and I think that these phenomenal judgements are the only phenomena of consciousness that we have reason to believe in.
Furthermore, this hypothesis can resolve the issue of how to relate experiences to the experiencer – an issue that can lead, via notions like ‘acquaintance’, to the infinite regress of the homunculus fallacy.
Representational states, arising as a result of perception, exist in the brain. When the deployment of attention to the stimulus - or even to the state itself - brings such a state into awareness, the information it carries can spread to higher functional areas of the brain, where it is conceptualised (so any conscious state has conceptual content). At least part of this process will involve the use of phenomenal concepts. The state, when conceived of phenomenally, is phenomenal. So rather than having ‘acquaintance’ with an ‘experience’, one has higher-order awareness of a sensory representational state - and this just is an experience. This is to deny the view of some (such as Ned Block and Michael Tye) that one can have an experience without awareness; but it is not to say that an experience must be experienced in order to exist (which creates the intrinsic/relational difficulty and also threatens a regress). It is to say that a representation must be attentionally singled out in order for a conscious experience to exist.
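To make the shape of this proposal vivid, here is a toy sketch in Python. It is purely illustrative: every name in it is invented for the example, and nothing hangs on the implementation details.

```python
from dataclasses import dataclass

@dataclass
class RepresentationalState:
    """A first-order sensory representation arising from perception."""
    stimulus: str             # the worldly cause of the state
    perceptible_content: str  # how the stimulus is represented

@dataclass
class Experience:
    """On the view sketched above, an experience just is an attended
    first-order state plus the phenomenal concept applied to it."""
    first_order: RepresentationalState
    phenomenal_concept: str   # e.g. a bare demonstrative: 'like that'

def attend(states, salience):
    """Attention singles out the most salient representation."""
    return max(states, key=lambda s: salience.get(s.stimulus, 0.0))

def conceptualise(state):
    """Higher-order awareness: a phenomenal concept is applied to the
    attended state; the resulting pair constitutes the experience."""
    return Experience(first_order=state, phenomenal_concept="like that")

# Perception yields many states; only the attended one becomes an experience.
states = [RepresentationalState("tree", "green, leafy"),
          RepresentationalState("siren", "loud, rising pitch")]
experience = conceptualise(attend(states, {"siren": 0.9, "tree": 0.2}))
```

The only point of the sketch is that ‘experience’ names the attended-and-conceptualised state itself, not a further ingredient over and above it.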
[No doubt this could be better written, and better argued, but by undergraduate standards I still think it’s not bad…]
6 comments:
Crick pointed out that although the zombie analogy is weak, there is nevertheless a certain level of zombieness in the brains of all conscious beings, in the form of all the number crunching that, for reasons not yet clear, is filtered out of our final conscious perception.
He and Koch try to explore the manner in which certain processes reach consciousness while others do not, using the visual system as a model: .pdf
Also online here.
But what happened to the horse with the long face....
Good work, Tom! The crux of the post seems to be:
The state, when conceived of phenomenally, is phenomenal. So rather than having ‘acquaintance’ with an ‘experience’, one has higher-order awareness of a sensory representational state - and this just is an experience.
Your view seems to go something like this. Presently, I am having a conscious experience of a computer. The idea that my conscious experience of a computer is composed of some phenomenal content -- images, as it were -- and my acquaintance with that phenomenal content is incorrect. Instead, as a result of the proper functioning of my perceptual apparatus, I have an awareness of this computer, an external object. My conscious experience of this computer just is an awareness of my awareness of this computer.
However, I worry that this gets the content of my experience wrong. If my experience is an awareness of one of my awarenesses, then the object of my experience is not an external object -- it is an awareness of mine!
In case you haven't seen it, I took a stab at the Zombie Argument in a post of mine.
Timmo, ta for the link, I'll take a look at yours (plus the Crick & Koch - thanks Rev. Dr.).
I think that's a pretty decent summary. But:
I worry that this gets the content of my experience wrong. If my experience is an awareness of one of my awarenesses, then the object of my experience is not an external object -- it is an awareness of mine!
Not quite. Say there's a perceptual awareness, P, of which the object is a stimulus, S. Then there's a cognitive conceptual awareness, C, of which the object is P. The combination of C and P in this relationship is what constitutes the phenomenal experience, E.
The content of P is certain perceptible aspects of S. The content of C is certain conceivable aspects of S.
Now as to the content and object of E, I don't think I have any problem with saying that S is also the object of E even though the P-C pair is arguably its content (although I'm not completely sure about putting it that way).
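To make the bookkeeping concrete, here is the same structure as a toy sketch (purely illustrative; the names are nothing but labels):

```python
# Toy bookkeeping for the S/P/C/E structure described above
S = "red tricycle"                                           # the stimulus
P = {"object": S, "content": f"perceptible aspects of {S}"}  # perceptual awareness
C = {"object": P, "content": f"conceivable aspects of {S}"}  # conceptual awareness, of P
E = {"content": (P, C), "object": S}  # the experience: constituted by the P-C pair,
                                      # yet its object is still the stimulus S
```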
How does that sound? To be honest, my head is already hurting - 22-year-old me was much better at this than...
Hi,
Interesting material, and I've always wanted to believe something like this, but I still need some help with the 'conceptualization' that you see as essential to phenomenal awareness. I don't feel like I'm doing any conceptualizing as I enjoy the view of the forest from my back deck. And would you deny consciousness/phenomenal awareness to beings that don't conceptualize (bats, dogs, babies)? Or do you see them as 'conceptualizing' in some way?
Thanks,
Rick
Hi Rick, thanks for the comment.
When I talk about conceptualising here, I certainly don’t mean anything deliberate. When we hear somebody talking for instance, we don’t first hear a series of sounds and then interpret it as meaningful language. We experience it instantly (or near enough) as already meaningful language – in other words, a whole lot of cognitive processing takes place extremely quickly and automatically, and there isn’t a prior ‘conscious experience’ of the sound on which that is then built.
(I appreciate that ‘cognitive processing’ is a bit hand-wavy!)
I’m pretty agnostic as to whether animals have rudimentary ways of conceiving of some of their mental states (chimps, probably; ants, surely not). Babies can be pretty quick learners, but absolute newborns possibly can’t yet do this.
The following occurs, completely off the top of my head, so there’s a fair chance it’s rubbish and/or contradicts stuff I’ve previously said:
William James wrote about the “blooming, buzzing confusion” of a newborn’s mental life, where there isn’t so much a stream of consciousness as a rapid and overlapping series of splashes. I doubt that very young babies have anything beyond a few simple innate shortcuts in the way of understanding of the appearance/reality (mind/world) distinction. And my argument was written with adults in mind, who do have that understanding.
But maybe one doesn’t need an understanding of what a mental state is to have some sort of phenomenal consciousness. Say the sensory inputs young babies get aren’t distinguished in terms of stimulus vs appearance - there could still be a crude ‘what it’s like’ or ‘how things are’. One thing I went into in another part of my dissertation was that a lot of phenomenal concepts are demonstrative (and possibly fleeting) – if you look at a tree with many slightly differently coloured leaves, you can see these differences without having named and committed to memory a concept of each of the many different shades of green. They’re just indexed briefly as ‘like that’ (not that those words run through your head) while you’re looking at them, and perhaps a little afterwards.
So if something looms into a baby’s visual field and captures its attention, it’s perceptibly different from the visual background and will be, briefly and crudely, singled out as different. In terms of my earlier comment, the baby has no idea about the difference between S and P, but once it focuses on [S/P], there’s room for a C to emerge, bringing with it some sort of E. So perhaps, when we’re talking about rudimentary demonstrative phenomenal concepts, perception can be ‘like something’ without the perceiver even knowing what perception is?