Over the last few days I’ve found myself once again carefully reading a paper by philosopher and cognitive scientist David Chalmers. As I said last time, I find myself more aligned with Chalmers than not, although those three posts turned on a point of disagreement.
This time, with his paper Facing Up to the Problem of Consciousness (1995), I’m especially aligned with him, because the paper is about the phenomenal aspects of consciousness and doesn’t touch on computationalism at all. My only point of real disagreement is with his dual-aspect view of information, which he admits is “extremely speculative” and “also underdetermined.”
This post is my reactions and responses to his paper.
I’m including his response to his critics, Moving Forward on the Problem of Consciousness (1997), which explores the topics in further detail.
I find it intriguing that we’re still chewing on the same issues over twenty years later. The gap between the major camps remains as wide as ever.
I also find it fascinating that so many people study something when they don’t agree on what it is. So many discussions start by noting how the term “consciousness” is too ambiguous to be useful.
Strange to study something so intently without really knowing what it is!
Many doubted the reality of black holes, but everyone agreed on what one was.
Chalmers starts by explaining the “hard problem of consciousness” — a term of art that has (A) caught on in a big way and (B) remained a bone of contention.
The hard problem, simply put, is the question: Why should there be “something it is like” to process information in the brain?
He distinguishes the hard problem from easy problems, which are problems that neuroscience can investigate, because they involve how the brain functions.
Note: By “hard” and “easy” Chalmers obviously means relative to each other — many easy problems are actually very hard.
The question is why these functions, for us, also have a phenomenal aspect. There is nothing in physics that even hints at such a thing.
Chalmers notes the ambiguity of the term “consciousness” and offers:
Sometimes terms such as “phenomenal consciousness” and “qualia” are also used here, but I find it more natural to speak of “conscious experience” or simply “experience”. Another useful way to avoid confusion (used by e.g. Newell 1990, Chalmers 1996) is to reserve the term “consciousness” for the phenomena of experience, using the less loaded term “awareness” for the more straightforward phenomena described earlier.
I find myself aligned with him on this.
“Consciousness” — when it comes to the study of its nature — is comfortably aligned with the basic intuition we all have linking it with phenomenal experience. The hard problem lives in this space.
“Awareness” then serves to identify non-phenomenal reaction to input. Even a photocell can be “aware” of how much light falls on it. Awareness is an “easy” problem.
The big point Chalmers makes is that, while difficult problems exist throughout science, some of them very challenging, the general perception is that solving them is a matter of understanding their function and structure.
He sees the “hard problem” as being on another level, as not being a matter of understanding function or structure. (On the grounds that nothing in physics suggests phenomenal experience.)
What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience — perceptual discrimination, categorization, internal access, verbal report — there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?
Reductive methods are successful in most domains because what needs explaining in those domains are structures and functions, and these are the kind of thing that a physical account can entail. When it comes to a problem over and above the explanation of structures and functions, these methods are impotent.
(Emphasis his in all quotes.)
The general response to this is that function and structure will fully explain phenomenal experience once we understand the brain well enough.
A common view is that experience is what information processing feels like on the inside — a view Chalmers turns out to have some sympathy for.
Something that I think might get a little lost in the debate is a point on which Chalmers and I are strongly aligned. It’s something I’ve said repeatedly.
The claims of “type A materialists” may turn out to be correct, but both Chalmers and I find those claims to be extraordinary. As such they require strong and robust support — arguments that don’t beg the question:
Often, a proponent will simply assert that functions are all that need explaining, or will argue in a way that subtly assumes this position at some point. But that is clearly unsatisfactory. Prima facie, there is very good reason to believe that the phenomena a theory of consciousness must account for include not just discrimination, integration, report, and such functions, but also experience, and prima facie, there is good reason to believe that the question of explaining experience is distinct from the questions about explaining the various functions. Such prima facie intuitions can be overturned, but to do so requires very solid and substantial argument.
There seems to be, on the part of materialists, a presumption that experience has a material explanation — because it must have, right?
But this begs the question. It assumes (type A) materialism.
A key plank in many materialist arguments involves intuition — specifically, that it’s often wrong.
Which it surely is on a human level, and even science has a history of proving intuition wrong.
Chalmers points out, firstly, that the intuitions materialists cite as having turned out false all involved questions of structure and function. For example, he mentions how “Dennett imagines a vitalist arguing about the hard problem of ‘life’, or a neuroscientist arguing about the hard problem of ‘perception’.”
With regard to vitalism, Chalmers points out: “When it comes to the problem of life, for example, it is just obvious that what needs explaining is structure and function: How does a living system self-organize?”
More to the point, “their driving question was always ‘How could a mere physical system perform these complex functions?’, not ‘Why are these functions accompanied by life?’”
According to Chalmers, the question being asked about phenomenal experience falls into a different class.
Obviously materialists disagree. I do think Chalmers makes some pretty strong arguments against “type A” materialism. (Naturally I agree with those arguments. 😉 )
And, again, the main point being made isn’t so much to refute materialism as it is to point out that (as I’ve long said) it’s a big ask. Contrary to what materialists want to believe, there is no precedent for experience in physics.
Skepticism is very much warranted.
Besides making what I see as a strong statement supporting the hard problem and making a strong argument against materialism, Chalmers offers his idea on how to resolve the issue:
…That is, we can find the same abstract information space embedded in physical processing and in conscious experience.
This leads to a natural hypothesis: information (or at least some information) has two basic aspects, a physical aspect and a phenomenal aspect. This has the status of a basic principle that might underlie and explain the emergence of experience from the physical.
This approaches both the territory of panpsychism and the territory of “type B” materialism — which Chalmers freely admits and is fine with.
This is where Chalmers and I part ways, although he does admit this idea is speculative and very possibly wrong (he mainly puts it forth as a starting point to move things forward, which I respect greatly).
I think the error is in noting that two different systems can be described by the same abstraction and thereby drawing conclusions about their mutual identity.
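A toy sketch of the point (my illustration, not Chalmers’s — the class names are invented for the example): two entirely different systems can satisfy the same abstract description, here “a counter that goes 0, 1, 2, …”, while their underlying states have nothing in common. Sharing the abstraction doesn’t make them the same kind of thing.

```python
# Two unrelated systems described by the same abstraction: a counter.
# (Illustrative only; names are hypothetical.)

class TallyMarks:
    """Counts by accumulating tokens (a growing list)."""
    def __init__(self):
        self._marks = []
    def tick(self):
        self._marks.append("|")
    def value(self):
        return len(self._marks)

class Register:
    """Counts by overwriting a single stored integer."""
    def __init__(self):
        self._n = 0
    def tick(self):
        self._n += 1
    def value(self):
        return self._n

a, b = TallyMarks(), Register()
for _ in range(3):
    a.tick()
    b.tick()

# Identical abstract behavior...
assert a.value() == b.value() == 3
# ...but the underlying states are nothing alike.
assert a._marks == ["|", "|", "|"] and b._n == 3
```

The same abstract information space (the count) is “embedded” in both systems, yet concluding anything about their deeper identity from that shared description would be a mistake.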
Chalmers takes this idea much further in A Computational Foundation for the Study of Cognition, the paper I discussed last month.
[FWIW: I agree entirely with his first principle, of structural coherence, and in terms of brains and things that physically resemble brains, I agree with his second principle, of organizational invariance. (I disagree that it applies to numerical simulations.)]
I plan to explore the computational aspects further, but they don’t apply here, so I’ll focus just on the dual-aspect nature of information.
I think a further error might be in granting information a higher ontological status than it deserves. Information is an aspect of any physical system, but I think great care needs to be taken when treating it as a thing in itself.
(As computationalists often point out, all information must be reified (made physical) to have any value.)
As Chalmers says, “An obvious question is whether all information has a phenomenal aspect.”
One answer is yes, that all information does. Otherwise we’re confronted with the question of why some forms of information have a phenomenal aspect but not other forms. Chalmers has sympathy for this approach.
(I don’t. I’m definitely from Missouri on that one.)
Another answer is no, information doesn’t ever have a phenomenal aspect. Given that information is abstract, it’s hard to see how it could have any additional aspect, phenomenal or otherwise.
(I vote for this one.)
So, then, what do I think might account for phenomenal experience, if not materialism, if not panpsychism, if not IIT (or any information-based theory)?
I’ll have to get back to you on that.
I think it involves the entire brain doing what it does. I think physical effects may play a key role. I think timing and synchronization may play a role. I think a massively connected physical network plays a role.
Ultimately I side with Chalmers in thinking new physical principles must be involved — principles we haven’t discovered yet. I do agree with his view of the hard problem — that it isn’t the same as other physical questions.
There’s an answer somewhere in there.
Stay confused, my friends!