Last month I wrote three posts about a proposition by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I had a long debate with a reader about it, and I’ve been pondering it ever since. I’m not going to return to the Chalmers paper so much as focus on the CSA idea itself.
I think I’ve found a way to express why I see a problem with the idea. I’m going to have another go at explaining it. The short version turns on how the brain transitions from one mental state to the next versus how a computational system must handle those transitions (even in the idealized Turing Machine sense — this is not about what is practical but about what is possible).
“Once more unto the breach, dear friends, once more…”
There are some side dishes, but here’s the main meal:
In the brain, one mental state follows another without intermediate mental states. Putatively, the brain’s function is fully described by these states. But in the computer, “mental states” are nothing like this.
Therefore a claim of organizational invariance is on very shaky ground (perhaps falsified).
Per Chalmers’ definition, the mental states in question must be at the granularity necessary to describe cognition. This applies to both the physical resolution and the time resolution.
Physical resolution has a range from quantum up to neurons and perhaps even higher. We currently don’t really know what parts of the brain are necessary factors in a simulation — some believe quantum effects may play a role.
Chalmers picks the neuron level, so that is what I’ll discuss here (I suspect a truly accurate simulation requires finer granularity — Chalmers seems to feel a coarser granularity might work — we meet in the middle).
The time resolution is a little tricky since neurons operate asynchronously. One approach is to require a new state whenever any neuron changes. This ensures all neuron changes are accounted for.
But, since neurons don’t march in lockstep, that means states won’t always have the same time span between them. They’d need time codes to account for variable timing.
Alternatively, we can take states on clock ticks, making those ticks quick enough that we’re guaranteed never to miss a state change. The downside here is ticks during which nothing changes, which produce duplicate states.
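To make the clock-tick approach concrete, here’s a minimal Python sketch. The neuron events and tick interval are invented for illustration, not any kind of brain model; the point is just that sampling asynchronous events on a fixed clock yields a duplicate state whenever nothing fires within a tick:

```python
# Sketch: sampling asynchronous "neuron" events on a fixed clock.
# Events and tick interval are illustrative placeholders.

events = [(0.3, "n1"), (0.7, "n2"), (2.1, "n3")]  # (time, neuron that fired)
tick = 0.5                                         # clock interval

state = frozenset()        # set of neurons that have fired so far
samples = []
t = 0.0
i = 0
while t <= 2.5:
    # Fold in any events that occurred up to this tick.
    while i < len(events) and events[i][0] <= t:
        state = state | {events[i][1]}
        i += 1
    samples.append(state)  # one sample per tick, even if nothing changed
    t += tick

duplicates = sum(1 for a, b in zip(samples, samples[1:]) if a == b)
print("duplicate states:", duplicates)
```

With these toy events, the ticks at 1.5 and 2.0 capture nothing new, so two of the six sampled states are duplicates of their predecessors.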
For now, let’s just assume the system handles this appropriately, that states are calculated and made available at the right time.
There is also the fact that, in many views, the timing doesn’t matter. States could be made available at any speed, with any delay between them, and it shouldn’t matter to the cognition (any more than it matters in most software).
The point here is that a state-based system replicating a real-time physical system is decidedly a non-trivial proposition. State-based systems have many advantages as computational approaches to solving problems, but they also have the biggest footprint and are, in some ways, the hardest to pull off.
That’s because the function that calculates the next state makes the timing issues just mentioned look trivial. Depending on how the state system is designed, that function may be a serious sticking point.
The alternative is a trivial function that just rolls out existing states from a table of states. I discussed this extensively in previous posts.
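That trivial alternative is easy to sketch in Python. The table contents here are placeholders standing in for full neuron-level state vectors; the “function” does no computation at all, it just plays back the next row:

```python
# Sketch: the trivial "next-state function" -- a lookup in a
# precomputed table of states. No calculation, just playback.

state_table = [
    (0, 0, 0),    # placeholder states; a real table would hold
    (1, 0, 1),    # full neuron-level state vectors
    (1, 1, 0),
]

def next_state(index):
    # Return the row after the current one (wrapping for the demo).
    return state_table[(index + 1) % len(state_table)]

print(next_state(0))   # (1, 0, 1)
```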
But let us set aside all such details and assume we have the following:
- FSA:Brain — essentially, the dynamic living brain itself.
- CSA:Brain — a program; a state-driven brain simulation.
- FSA:Computer — a machine that can run CSA programs.
A little unpacking is necessary to explain the terms.
By FSA (Finite-State Automaton) I mean (the abstraction of) any physical system operating according to physical laws. The idea is the materialist one that any physical process has some Turing Machine that describes it completely.
Yes: For the moment, I’m assuming computationalism is true.
I’m assuming FSA:Brain is such a complete description of the brain as to essentially be the brain. Likewise, FSA:Computer is the fully detailed abstraction of how the computer works.
CSA:Brain is a real program that implements FSA:Brain. (It is what Chalmers describes as a CSA.)
Note that we currently have no idea what FSA:Brain is, let alone what a CSA:Brain implementing it would look like. We assume FSA:Brain exists because, under computationalism, there is some TM that describes the brain.
And if FSA:Brain exists, we ought to be able to derive CSA:Brain easily enough.
The presumption then is that:
FSA:Brain ≡ FSA:Computer(CSA:Brain)
That is, the brain is identical to a computer running CSA:Brain. If Chalmers is right, both should experience identical mental states and cognition.
(Note that Chalmers only claims cognition here, not phenomenal experience.)
This turns on what Chalmers calls organizational invariance preserving the causal topology of the system.
I’m not sure it does.
It also turns on the idea that two systems that can be said to share a common abstraction have a meaningful identity — one strong enough to say both experience the same thing.
I’m not sure that’s true.
In the brain, a state change involves all neurons at once. (The CSA is defined in terms of vectors comprising all neurons.) Note also that a neuron state is a physical, chemical, complex condition.
In the computer, the hardware-software combination changes the numeric values of an array of numbers one-by-one. Note here that a “neuron state” is just a bit pattern that stands for the physical, chemical, complex actual neuron state.
The crucial point is that a single-thread algorithm (which is what we presume in an idealized case such as this) cannot change all neuron states at once. (I’ll consider the alternative to this below.)
The upshot is that a given state changes to a new state slowly, one “neuron” (memory location) at a time.
Which means there are as many intermediate states as there are neurons.
Imagine the system is in state N. After changing one vector value to its new value for state N+1, the vector is in an illegal state: one component is in the next state while all the rest remain in the current state.
This change ripples down the vector until, after all components have been changed, the vector is in the new legal state, N+1.
So, firstly, the state vector goes through billions of illegal transitory states between each legal state.
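The one-at-a-time update can be sketched in a few lines of Python. A toy three-“neuron” vector stands in for the 86 billion, and the state values are invented; the point is that every step before the last leaves the vector in a mixed, illegal state:

```python
# Sketch: a single-threaded update from state N to state N+1.
# Every intermediate vector mixes old and new components -- an
# "illegal" state corresponding to no legal CSA state.

state_n  = [0, 0, 0]     # toy stand-in for ~86 billion neuron values
state_n1 = [1, 1, 1]     # the next legal state

vector = list(state_n)
intermediates = []
for i in range(len(vector)):
    vector[i] = state_n1[i]            # change one "neuron" at a time
    if vector != state_n1:             # not yet the legal next state
        intermediates.append(list(vector))

print(intermediates)   # [[1, 0, 0], [1, 1, 0]]
```

A vector of n components passes through n−1 of these mixed states on the way to each legal state, which is the ripple described above.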
This raises the question of whether those states can matter. If only one in every 86 billion states those memory locations pass through is a legal CSA state, how can the values of the state vector possibly mean anything?
What makes that one legal state “the one” while billions of others aren’t?
Secondly, the computer goes through dozens or hundreds of computer states to accomplish even a single transitory vector state, let alone the many billions of computer states involved in accomplishing one legal vector state change.
This is the physical causality computationalists often point to as substantiating the idea that physicality is necessary. But it turns out to be entirely at the wrong level to have value. (I’ll get to that in a later post.)
Thirdly, the “neuron state” is a number that stands for a physical state. That simply isn’t the same thing as a physical neuron in a chemical state.
Neurons have a spatial orientation to each other as well as a topological wiring (the network). Memory locations have no connection with each other. Even the individual bits of a single location have no relation to each other.
Bottom line, it’s really hard to see where any causal topology is preserved given that the organization of these two systems is vastly different.
Perhaps there are adjustments we can make to improve things.
Can we eliminate the first, worst, problem of all those illegal transitory states? Is there a way to change all 86 billion vector components at once?
Yes, of course, but it requires that each component be separate. They can’t even be housed in the same RAM chips, and each requires its own logic circuits to accomplish the change en masse.
Which pretty much puts us back to a Positronic Brain, a physical emulation.
I can see no way out of it computationally, and it suggests that the state vector doesn’t do much towards preserving causal topology.
Even a split-buffer technique has problems.
The idea is to use two memory arrays as state vectors, which allows modifying one while the other is considered “active” (whatever that means in this context). Once the changes are complete, the system is switched to considering the other buffer the active one. (The technique was often used in video buffering.)
The problem is: what does this say about the state vector? It seems to make it even more meaningless.
What can it even mean to say that one buffer, or the other, is the legal mental state? Programmatically, it’s just an address of one or the other.
While calculating one change, input from other cells matters, but it has to be input from the current state. If the system has already updated those cells, those inputs are no longer valid.
Split-buffer allows taking inputs from the active buffer while calculating changes into the alternate buffer. This way the inputs stay stable.
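The split-buffer scheme described above can be sketched in Python. The update rule here is a made-up toy (each cell becomes the sum of itself and its left neighbor), not a neuron model; what matters is that all reads come from the stable active buffer while all writes go to the alternate one, and a swap makes the new state live:

```python
# Sketch: double buffering. Reads come from the "active" buffer while
# writes go to the alternate buffer; swapping makes the new state live.

def step(active, alternate, update):
    # Compute each new value from the *stable* active buffer only.
    for i in range(len(active)):
        alternate[i] = update(active, i)
    return alternate, active     # swap: alternate becomes active

# Toy update rule, purely illustrative.
def update(buf, i):
    return buf[i] + buf[i - 1]

buf_a = [1, 2, 3, 4]
buf_b = [0, 0, 0, 0]

buf_a, buf_b = step(buf_a, buf_b, update)
print(buf_a)   # [5, 3, 5, 7]
```

Had the update been done in place, cell 1 would have read the already-updated cell 0 and produced a different (invalid) result — which is exactly the staleness problem the split buffer avoids.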
So if the state vectors don’t seem to mean much, and don’t seem to be the source of the causal topology, then that topology, if it exists, must reside in the function used to calculate states.
I think this is a good stopping point. Next time I’ll take a closer look at what kind of function could accomplish our goal.
You won’t be surprised to learn there are some serious issues there, too.
Stay stately, my friends!