This is what I imagined as my final post discussing A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous two posts.
This post’s title is a bit gratuitous because the post isn’t actually about intentional states. It’s about system states (and states of the system). Intention exists in all design, certainly in software design, but it doesn’t otherwise factor in. I just really like the title and have been wanting to use it. (I can’t believe no one has made a book or movie with the name).
What I want to do here is look closely at the CSA states from Chalmers’ paper.
[Update: As it turned out, that close look made me realize there is something wrong with the whole idea. I’ll butt in again after this first part, which is still on point.]
The idea is that a system of interest (a brain, in this case) can be divided into its functional parts (neurons, for instance) and the behavior of the system can be characterized as an ordered series of “snapshots” and transitions from one snapshot to the next.
The snapshots show the state of the functional parts changing over time as the system does its thing. In each snapshot, each part is in some state.
The possible states a part can have are typically denoted with numbers, so in each snapshot, the state of each part is specified with the number representing the state that part is in. The “snapshot,” therefore, is a list of numbers (rather than, say, a list of pixels for an image).
This is called a system state, as the combined states of all the parts at that moment comprise the state of the system at that moment.
Exactly as in a movie, the ordered series of snapshots describes the behavior of the system over time. (But with lists of numbers rather than pixels.)
Very importantly, as with an animated movie, given the right algorithm we can also predict or simulate the behavior of the system over time. System states can therefore describe a real physical system in the future, or an entirely abstract system.
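To make that concrete, here's a minimal sketch (a toy example of my own, not anything from the paper) of snapshots as lists of numbers, plus a made-up transition rule that generates new snapshots rather than merely recording them:

```python
# A toy "system" of three parts, each of which can be in state 0, 1, or 2.
# One snapshot is just the tuple of part states at that moment;
# a run of the system is an ordered series of snapshots.
trace = [
    (0, 0, 0),   # snapshot at t=0: every part in state 0
    (1, 0, 2),   # t=1
    (2, 1, 2),   # t=2
    (2, 2, 0),   # t=3
]

# With a transition rule we can also generate future snapshots rather than
# merely record them. This rule is arbitrary, purely for illustration.
def next_state(state):
    return tuple((s + i + 1) % 3 for i, s in enumerate(state))

state = trace[-1]
for _ in range(3):           # simulate three more "ticks"
    state = next_state(state)
    trace.append(state)

print(trace)                 # the "movie," now extended past what was recorded
```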
A crucial point, however, is that these snapshots themselves only describe the system.
I think the movie analogy helps point out two important things:
Firstly, movie frames don’t capture what’s between frames. In filming real-world things, ordinary movie cameras miss a great deal. It requires high-speed cameras to begin to capture what’s going on.
Likewise, simulations need to have an appropriate time resolution if they’re to capture all the dynamics of the system. How often do we need to take snapshots of the brain to capture what it does?
There is also the issue of determining what the states even are in an asynchronous real-time object like a brain. It doesn't march in lock-step; there's no obvious moment at which to take the snapshots.
Secondly, it makes clear the descriptive nature of the snapshots (the movie).
A movie of real life isn’t real life. An animated movie also isn’t of real life. Both just describe something (as a series of still images).
We should distinguish between the system states and the states of the system.
The former is a description of the system itself, including how one state leads to the next, so it actually reflects the system's causality. The latter merely lists the states the system passes through over some period of time. Most importantly, states of the system contain no causality.
We can, in principle, capture the states of the system from someone’s brain. That gives us a description of their cognition that could, in principle, somehow be “played back.”
Some believe that merely capturing these states results in a recording that somehow has mental content. Others hold that it needs to be played back in some fashion (which certainly raises the question: “How?”).
Crucially, in this case, no algorithm computes the states during playback; they are simply read out in the order they were recorded.
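Here's what that playback amounts to, in a toy sketch of my own (the states are made up):

```python
# Playback of recorded states: the list is simply read out in order.
# No transition rule is consulted; nothing computes a next state.
recorded_states = [(0, 0, 0), (1, 0, 2), (2, 1, 2), (2, 2, 0)]

for t, state in enumerate(recorded_states):
    print(f"t={t}: {state}")   # display only
```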
Chalmers requires a causal topology, which he believes is found in the execution of such an algorithm. He requires that the next state be causally generated.
For otherwise he's stuck with Searle's Wall and Putnam's Pixies, which he's able to discard on the grounds that they have no causality. (I reject them on the grounds of the complexity of the interpretation needed to extract them, which I find a more robust objection.)
But the recording of a real brain's states has this problem: the causality of the brain is not captured. We just record a series of snapshots of it in action. Only in the ordering of the snapshots is there any apparent causal behavior.
Just as the frozen frames of a movie don’t contain causality, but show us an illusion of it when played back in sequence.
In any event, playback of states of the system can, at best, only show what happened. It can’t show what will happen.
That requires a simulated system (of something physically real or something imagined) to create new states of the system.
And that requires knowing the system well enough to determine its system states and transitions. Here we must have the transitions; they're the whole point, since they tell us how to move from state to state.
Chalmers starts with the obvious low-level parts, neurons, which gives us our state vector (for specificity, it has 50 billion components, one per neuron, each with enough bits to model that neuron's state).
The hard part is determining, given some configuration of those 50 billion neuron states, what the next state is. For now, we have to imagine some program P that tells us how to compute it.
Note that what Chalmers describes here is not quite the same thing as simulating the neurons in a neural network. The neuron states don't really reflect the behavior of the neurons; they certainly don't model it.
All the CSA does is say: Given state SN, the next state is SN+1.
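To illustrate the contrast (another toy sketch of my own, with made-up states and a made-up placeholder rule): a neuron-level model computes a part's next state from its causes, whereas the whole-state view amounts to a lookup table.

```python
# A neuron-level simulation would compute a part's next state from causes:
# its current state and its (simulated) inputs. Placeholder rule only.
def neuron_next(current, inputs):
    return 1 if sum(inputs) > 1 else 0

# The whole-state reading of the CSA, by contrast, amounts to a lookup:
# "if the system is in state SN, the next state is SN+1."
csa_table = {
    (0, 1, 0): (1, 1, 0),   # S0 -> S1, because the table says so
    (1, 1, 0): (0, 0, 1),   # S1 -> S2
    (0, 0, 1): (0, 1, 0),   # S2 -> S0
}

state = (0, 1, 0)
for _ in range(4):
    state = csa_table[state]   # no "why," just "what comes next"
print(state)
```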
[This is where, for me, the lightbulb went on. The rest of this is current with regard to what I realized.]
But there’s a problem with this and it involves the whole point of the CSA — which is to combine components into a single state.
In reducing the topology to SN » SN+1, we are saying that the next state for all 50 billion neurons depends simply on stepping from one brain state to the next.
Specifically, there is no reason given for the next state. It isn't based on any theory of why the system changes as it does.
It’s not due to some model that says, “Okay, if a (simulated) neuron is in this (simulated) state (due to these simulated inputs), then its next (simulated) state is as follows (because simulated biology).”
It’s due to a model that only says, “Okay, if the whole system is in state SN, then the next state is SN+1 (because N+1 follows N).”
That’s all it says; that’s all a CSA can say.
Why would there be any causality in the sole fact of the whole brain being in one state and then being in the next state?
Of course there is an underlying causality responsible for the change. But treating the system as a CSA reduces it to "N+1 comes after N," which isn't causality at all (any more than 5 causes 6).
In section 3.3, Chalmers writes, “Any system that implements this CSA will share the causal topology of the original system.”
Beyond that, he doesn’t say much about such an implementation. Per my previous post, we’re talking about program P (executing on engine E).
At this point, it’s reasonable to ask if program P is even possible (E is, essentially, any CPU).
Considerations of Kolmogorov complexity might force P to be, essentially, a list of states and transitions: if the sequence of states has no simpler underlying structure, the shortest program that generates it is little more than the list itself. In fact, all the more complicated state engine apps I've written were exactly that: a state table along with a simple lookup engine.
[See: State Engines, part 1, and the two parts that follow, for an implementation of a simple state engine.]
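For a sense of what such a state table plus lookup engine looks like, here's a minimal sketch in Python (not the actual code from those posts):

```python
# A minimal lookup-driven state engine: the "program" is almost entirely
# the table itself; the engine just does lookups.
TABLE = {
    ("idle",    "start"): "running",
    ("running", "pause"): "paused",
    ("paused",  "start"): "running",
    ("running", "stop"):  "idle",
}

def run(events, state="idle"):
    for event in events:
        state = TABLE.get((state, event), state)  # unknown events leave the state as-is
        print(f"{event!r:>9} -> {state}")
    return state

run(["start", "pause", "start", "stop"])
```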
Being a state engine makes the system a playback mechanism.
(Chalmers also doesn’t say what drives the system from state to state, but given the description, it seems to be a matter of “system ticks” — which raises the question of time signatures.)
Ultimately, the causal topology just says this state follows that state because it does. It’s no more than the movie.
§ § §
For me, this blows the CSA idea out of the water (or sinks the ship, fielder’s choice).
The idea that causality is preserved by playing back a set of states seems questionable. At the very least, it needs an account of how it is that one state follows another. That's the real causality.
As support for computationalism, I think the idea of a CSA fails.
And that’s on top of the problem I have with simulation versus reality.
Which, I suppose, means another post explaining that, but I think I can move on from this paper. I wanted to talk a little about sections 2.2 and 3.4, just my reactions, but three posts is enough, I think.
(OTOH, Mr. Chalmers, I’m totally down with Absent Qualia, Fading Qualia, Dancing Qualia (1995). Completely agree.)
§ § §
Stay stateful, my friends!