This is a continuation of an exploration of an idea by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I’m trying to better express ideas I first wrote about in these three posts.
The previous post explored the state vector part of a CSA intended to emulate human cognition. There I described how illegal transitory states seem to violate any isomorphism between mental states in the brain and the binary numbers in RAM locations that represent them. I’ll return to that in the next post.
In this post I want to explore the function that generates the states.
As a quick review, I introduced three systems:
- FSA:Brain — essentially, the dynamic living brain itself.
- CSA:Brain — a program; a state-driven brain simulation.
- FSA:Computer — a machine that can run CSA programs.
The third item is a computer that we’ll stipulate as up to the task of running a CSA program. There is nothing particularly remarkable about it.
Here I want to consider the second item, CSA:Brain. We also stipulate it is up to the task — specifically, that it is software that successfully calculates successive brain state numbers in a memory array.
The question I want to consider is how CSA:Brain must work, given computational capabilities as we know them today.
Last time I considered the result of CSA:Brain: the changing values in the CSA state vector. These are meant to represent physical brain states.
The problem is the many billions of illegal transitory states that necessarily occur in the state vector. If the state vector is meant to represent a mental state, then the system is generating many billions of invalid mental states for every valid one.
This seems to undermine the claim that the state vector represents the causal behavior of the system (since state N does not immediately lead to state N+1, as the CSA requires).
What makes the CSA system different from a Dancing Pixies situation of appropriate — but not causally connected — states is the causality behind each state change.
That causality has to reside in the function that generates new states. To be more precise, it must reside in the combined system of FSA:Computer running CSA:Brain. I’ll look at the system next time; here I focus on the function that calculates states.
Let’s call it fBrain.
This function is the core (and possibly bulk) of CSA:Brain. It takes a state (the current one) and generates a new state (the next one). We could notate that like this:
fBrain(N) ⇒ N+1
The idea is that, given some current “mental state” (set of numbers in the state vector) it can generate the next “mental state” (a different set of numbers in the state vector).
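Just to pin down the shape of the thing, here’s a minimal sketch in Python (the names are mine, and the empty body is exactly the open question):

```python
from typing import List

# Purely illustrative: treat the state vector as a plain list of numbers,
# one entry per substate.
StateVector = List[int]

def f_brain(current: StateVector) -> StateVector:
    """Given the current "mental state" vector, return the next one.

    How this body works is the whole question of this post: simulate
    neurons? Look the answer up? Something in between?
    """
    raise NotImplementedError
```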
How can it do that?
One approach is to simulate the behavior of the brain.
So, for each “neuron” the function finds all the neurons that connect to it and uses their states to figure out what this neuron should do.
This means fBrain has to have a map of neuron connections. The state vector is just a list — it has nothing to say about which neuron connects to another.
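A rough sketch of that approach might look something like this (the connection map and the update rule are toy stand-ins I made up; real neural dynamics are vastly more complicated):

```python
from typing import Dict, List

StateVector = List[int]

# Hypothetical wiring: for each neuron index, the indices of the neurons
# that feed into it. The state vector itself knows nothing about this;
# fBrain has to carry the map separately.
CONNECTIONS: Dict[int, List[int]] = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1],
}

def f_brain_simulated(current: StateVector) -> StateVector:
    """Compute each neuron's next state from the states of its inputs."""
    next_state = list(current)
    for neuron, inputs in CONNECTIONS.items():
        # Toy update rule: fire (1) if enough of the inputs are firing.
        total = sum(current[i] for i in inputs)
        next_state[neuron] = 1 if total >= 2 else 0
    return next_state
```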
This approach highlights how the state vector is just memory used by fBrain to do its work, which seems to undercut the idea of a CSA.
In a CSA, the state vector as a whole determines the state, which implies that fBrain is supposed to use the entire vector as a “key” to “unlock” the next state. Otherwise why talk about Combinatorial-State Automata at all?
The approach I’ve just described is a brain simulation. It potentially generates successive “state vectors” representing neuron changes, but there is no real sense of a finite-state automaton in how it behaves.
This really isn’t a CSA.
So let’s try to get closer to an actual CSA.
Let’s swing the pendulum all the way. In this approach, fBrain takes the current state vector as a key (with 86 billion parts!) and uses it to find the next state in a database.
A very, very large database.
But in this case fBrain isn’t doing any kind of calculation we might see as causal — it’s just using a given key to look something up. (In some regards, it’s Searle’s Chinese Room at a very fine granularity.)
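In code, this approach is nothing more than a dictionary lookup. Here’s a sketch with a laughably small stand-in for the database (the real thing would need an entry for every reachable combination of those 86 billion parts):

```python
from typing import Dict, Tuple

# Tuples rather than lists, so state vectors can serve as dictionary keys.
StateVector = Tuple[int, ...]

# A tiny stand-in for the database: every reachable state vector maps to
# exactly one successor.
NEXT_STATE: Dict[StateVector, StateVector] = {
    (0, 0, 1): (0, 1, 1),
    (0, 1, 1): (1, 1, 0),
    (1, 1, 0): (0, 0, 1),
}

def f_brain_lookup(current: StateVector) -> StateVector:
    """No calculation at all: the entire current vector is the key."""
    return NEXT_STATE[current]
```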
So this approach also seems to lack the causal topology of the brain. It’s more like running a movie. The only reason one frame follows another is because one frame follows another — the causality is imposed by simple sequence.
The problem is the database only has one reply to any given key.
There’s something I haven’t touched on yet: What drives the system from state to state?
On the one hand, our mind kinda runs on its own doing its thing. We can imagine someone in sensory deprivation having thoughts with very little input. On the other hand, our world is filled with external inputs, and even our own thoughts are a kind of input to further thoughts.
In a state-based system, something has to initiate going from one state to the next.
In the last post I mentioned the idea of slicing states based on clock ticks. If we did that, then clock ticks would drive the system from state to state.
In the state-based systems I’ve designed, it’s typically inputs that drive the system. The content of that input determines which of the possible next states the system moves to.
As a simple example, if an email address parser is in the state of receiving characters from the name part, an at-sign (“@”) in the input steers the system into a state of receiving host name characters.
An important point here is that, from the name-collecting state, the input selects among three possible transitions: (1) another name character keeps the system in the same state; (2) the “@” symbol moves it to the host-name state; (3) an illegal character or an unexpected end of string moves it to an error state.
So a state system is a directed graph of states and transitions. There always has to be some event that moves the system from one state to another.
Another thing, again based on systems I’ve designed: state-based systems, by necessity, know all their states in advance.
They do not calculate the next state; they use input to determine which of the possible legal next states the system moves to (as the simple email parser example shows).
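To make that concrete, here’s roughly what the email parser looks like in code: a small set of known states, with the input character selecting which legal transition to take (simplified, of course; a real parser is stricter and has more states):

```python
# All the states are known in advance; nothing is ever calculated.
NAME, HOST, ERROR = "name", "host", "error"

def next_state(state: str, ch: str) -> str:
    """Pick the next state based on the current state and one input character."""
    if state == NAME:
        if ch == "@":
            return HOST                    # the at-sign steers us to the host part
        if ch.isalnum() or ch in "._-":
            return NAME                    # another name character: stay put
        return ERROR                       # anything else is illegal
    if state == HOST:
        if ch.isalnum() or ch in ".-":
            return HOST
        return ERROR
    return state                           # the error state absorbs everything

def parse(address: str) -> str:
    state = NAME
    for ch in address:                     # the input drives the system from state to state
        state = next_state(state, ch)
    return state

# parse("alice@example.com") ends in "host"; parse("bad address!") ends in "error".
```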
So the CSA system Chalmers describes doesn’t actually fit this model. His version seems to assume new states are calculated. Despite framing it as a CSA, what he describes comes off more like a brain simulation than a finite-state automaton.
Or, if not, then all we’re doing is rolling off existing states depending on some input (which might just be clock ticks).
Again, I have a hard time seeing the organizational invariance here. These are radically different systems. (But the claim involves the abstractions of these systems; I’ll get to that next post.)
Fundamentally, we want something that, given the current mental state, reacts to some kind of input, either internal or external (or timing).
We need a notion that, in this state, given this input, go to that state.
We can do that by simulating the neurons, or we can do that by assuming we know all the states and looking up the next one (as state-based systems usually do).
What we’re really going for here is a stream of states in the state vector representing mental states such that, if we hooked them up to the right interpreters, they could generate muscle outputs (just like the brain).
Simulating the neurons is certainly a viable approach. (One I have doubts about, but I’ve covered that extensively.) But that’s not what Chalmers seems to suggest here.
A tricky aspect here is the distinction between what I’ve called system states and states of the system.
What Chalmers seems to be reaching for is a system that generates (new) states of the system — that is, the states of a dynamic system in operation.
What state-based systems usually do is move among predetermined system states (the FSA of the system) to generate output states of the system.
There is a subtle difference here, because both sets contain the same states. But a given system state occurs only once in the state table (system states always have a state table or diagram). Contrariwise, the states of the system can repeat, because the operation of the system can take it to the same state repeatedly.
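The toy parser above illustrates this: its table has exactly one entry per system state, but any particular run produces a trace of states of the system in which the same state shows up again and again. For instance:

```python
# System states: each appears exactly once in the table or diagram.
SYSTEM_STATES = {"name", "host", "error"}

# States of the system: the trace of one particular run, in which the
# same system state can recur many times.
trace = []
state = "name"
for ch in "ab@cd":
    if state == "name":
        state = "host" if ch == "@" else "name"
    elif state == "host":
        state = "host"
    trace.append(state)

print(trace)   # ['name', 'name', 'host', 'host', 'host']
```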
So I think there might be some confusion (perhaps on my part) as to exactly what the putative CSA:Brain is supposed to be.
On the one hand, it could be a list of states of the system that could be played back to recreate that same line of cognition.
On the other hand, it could be a finite set of system states and transitions that define the system. This is what Chalmers seems to mean.
But that requires fBrain to move the system from state to state, and it’s very hard for me to conceive of what fBrain might be (even in principle).
But let’s set that all aside and assume fBrain is possible.
Let’s assume there is a middle ground between looking up states and generating them based on simulating brain physiology. Let’s assume there is some function that analytically determines the next vector state.
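I have no idea what such a function would actually be, but in form it would be something like this: a fixed rule that computes each element of the next vector from the current vector as a whole, with neither a neuron map nor a giant lookup table (the update rule here is pure invention, just to show the form):

```python
from typing import List

StateVector = List[int]

def f_brain_analytic(current: StateVector) -> StateVector:
    """Hypothetical middle ground: each new substate is derived from the
    whole current vector by a fixed rule (this one is entirely made up)."""
    total = sum(current)
    return [(value + total) % 10 for value in current]
```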
Can it then be true that:
FSA:Brain ≡ FSA:Computer(CSA:Brain)
This turns on the claim that organizational invariance preserves the causal topology. And it involves the abstractions of those two systems being seen as matching.
I’ll pick things up there next time.
Stay invariant, my friends!