I left off last time talking about intermediate, or transitory, states of a system. The question is, if we only look at the system at certain key points that we think matter, do any intermediate states make a difference?
In a standard digital computer, the answer is a definite no. Even in many kinds of analog computers, transitory states exist for the same reason they do in digital computers (signals flowing through different paths and arriving at the key points at different times). In both cases they are ignored. Only the stable final state matters.
So in the brain, what are the key points? What states matter?
Once again (for the last time, most likely), I’ll refer you to my posts about the full-adder code simulations. In particular today:
- Full Adder Simulation V1.0,
- Full Adder Simulation V2.0, and
- Full Adder Redux (code for discussion below)
The first is a simulation of the common full-adder logic gate circuit:
The second is a simulation of a NOR-gate-only circuit:
The second circuit is more of a low-level simulation. The level below this involves simulating transistors, and this circuit is something of a bridge in that direction.
The key point about these circuits is that we only care about their inputs and outputs.
As the examples have shown, how we accomplish the correct outputs for given inputs varies and doesn’t matter in the context of a full-adder.
Its only goal is to add bits and generate a correct output.
Mechanism and platform don’t matter (other than the inputs and outputs needing to interface with any larger system).
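To make that concrete, here’s a small sketch (mine, not the posts’ actual code) of two full adders: one purely behavioral, one built from NOR gates. The NOR wiring below is one standard 9-gate arrangement and may differ from the circuit in the posts, but at the checkpoints — the inputs and outputs — the two are indistinguishable:

```python
def full_adder_behavioral(a, b, ci):
    """Add three bits arithmetically; return (sum, carry-out)."""
    total = a + b + ci
    return total % 2, total // 2

def full_adder_nor(a, b, ci):
    """The same function from nothing but NOR gates (one common wiring)."""
    nor = lambda x, y: 1 - (x | y)
    g1 = nor(a, b)
    g2 = nor(a, g1)
    g3 = nor(b, g1)
    g4 = nor(g2, g3)                 # XNOR(a, b)
    g5 = nor(g4, ci)
    g6 = nor(g4, g5)
    g7 = nor(ci, g5)
    return nor(g6, g7), nor(g5, g1)  # (sum, carry-out)

# Only the inputs and outputs are compared -- the mechanism is invisible.
for n in range(8):
    bits = (n >> 2 & 1, n >> 1 & 1, n & 1)
    assert full_adder_behavioral(*bits) == full_adder_nor(*bits)
```

Both pass the same eight-row truth table, which is all a larger system ever sees.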
In the circuits above, there are intermediate states between applying a given input and valid changes appearing on the output. In some cases, the intermediate states can cause invalid outputs to momentarily appear.
To deal with this, digital computers latch inputs and outputs with the system clock.
Essentially, during the clock “tick,” the inputs are latched and stable so changes percolate through to outputs. On the clock “tock,” the system latches the output states on the assumption the outputs have “settled” down by then.
Because of this latching, digital computers march with all system states in lockstep with the system clock.
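Here’s a toy illustration (invented for this point, not from the posts) of why that latching matters. The circuit y = a AND (NOT a) is logically always 0, but with a one-tick delay through the inverter, y glitches to 1 when a rises:

```python
def run(a_values):
    """Tick-by-tick simulation of y = a AND (NOT a) with a one-tick
    delay through the inverter."""
    not_a = 1                     # inverter output, one tick behind
    trace = []
    for a in a_values:
        trace.append(a & not_a)   # AND sees the inverter's *old* output
        not_a = 1 - a
    return trace

trace = run([0, 0, 1, 1, 1])
print(trace)                      # [0, 0, 1, 0, 0] -- the 1 is the glitch
```

A clocked system samples y only after the signals have settled, so that momentary 1 never escapes to the rest of the machine.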
With analog computers a similar principle can be in play. A good metaphor is to imagine the technician taking a voltage reading.
The tech applies the test leads and then waits for the meter to settle down and provide a stable reading. (Not unlike when we step on the scale to weigh ourselves.)
Analog computers often do the same thing. Various signals need a chance to flow through the system and stabilize.
In either case, ignoring intermediate states is just a matter of waiting for a stable output before taking a reading.
In digital computers, the intermediate states are noise to be ignored (that must be ignored if the computer is to work correctly).
Background workings to be ignored might be a better way to put it. The “noise” is actually the sound of the system working.
Our own consciousness also isn’t aware of the “noise” of our brains working. In many regards, it’s not aware of any of its workings except at the highest level!
The question is whether those intermediate states matter. And if so, which intermediate states might matter? Just how low-level might we have to go to capture the system?
In particular, does a simulation have to go below the level of neurons and synapses? Are there things happening at the cellular level that matter? Atomic? Quantum??
But states that matter by definition aren’t “intermediate” from the point of view of a complete capture of necessary system states.
So the real question is: What brain states matter for consciousness?
If a model of neurons is too simple, if it doesn’t take those properties into account, the model might not work.
- Maybe the virtual brain is seen to function biologically, but is otherwise inert. Neurons operate, as such, but the system acts like a brain in a deep coma.
- Maybe it will show signs of consciousness, speech, awareness, but it will be raving, incoherent, or otherwise clearly insane. Or just a gibberish mind.
- Maybe, given the subtlety of the human mind and all the ways it breaks, a consciousness exists, but it’s not quite sane. (Or its sanity decreases over time as computational errors accumulate.)
The point is, the failure modes are myriad, and the “sweet spot” of consciousness may require specific conditions to occur.
(Like laser light does.)
Another huge question for computationalism is: Is consciousness in the outputs or in the process?
I’ll return to that topic when I talk more about computationalism, but the question involves exactly what a simulation’s numeric output means.
If consciousness lies in the operation of the machine (as laser light does), then no simulation will ever work.
Exactly as no simulation can produce coherent laser light photons.
Computers, marching in lockstep, use a series of checkpoints.
The general idea is that what happens between checkpoints doesn’t matter (and could be anything). Only the state of the system at the checkpoint matters.
The inputs and outputs to the full-adder are such checkpoints. The logical values there have to be correct. That is, they have to be stable and correct when the system “looks” at them.
But in a real-world physical system, intermediate states would be a problem.
Imagine if your car engine or drive train did unexpected things in between cranking out revolutions.
The way to look at it is that all states in real-world systems matter. In a sense, the concept of intermediate states doesn’t apply.
Here’s an example of intermediate states (see Full Adder Redux for the Python simulation code):
1: (A) 0 -> 1
2: G1: 1 -> 0 [1, 0]
3: G3: 0 -> 1 [0, 0]
4: G4: 1 -> 0 [0, 1]
4: (Co) 0 -> 1
5: G5: 0 -> 1 [0, 0]
5: G6: 0 -> 1 [0, 0]
6: G6: 1 -> 0 [0, 1]
6: G7: 1 -> 0 [1, 0]
6: G9: 1 -> 0 [0, 1]
7: G8: 0 -> 1 [0, 0]
7: (Co) 1 -> 0
8: (S) 0 -> 1
For the all-NOR circuit above, assuming it is in a stable state with all inputs set to 0, the list above shows the changes that occur when the a input is changed to 1.
In the list, each line starts with a clock tick number followed by a name and a transition. Input and output names are in parentheses; the G# names are the nine NOR gates. Some clock ticks have multiple events.
For the gates, the two numbers in square brackets are the input values.
The bottom line (literally) is that it takes eight clock ticks for a change of input a to trickle through the circuit to the S output.
Note how the Co output gets set to 1 in tick #4 but is reset to 0 in tick #7. That’s an output that’s briefly in a false state.
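That briefly-false output is easy to reproduce. Here’s a unit-delay sketch using a standard 9-gate NOR full adder — the wiring and gate numbering almost certainly differ from Full Adder Redux, but the same phenomenon appears: the carry-out goes momentarily true before settling back to 0.

```python
def nor(x, y):
    return 1 - (x | y)

def step(s, a, b, ci):
    """One clock tick: every gate recomputes from last tick's values."""
    g1, g2, g3, g4, g5, g6, g7, g8, g9 = s
    return (nor(a, b), nor(a, g1), nor(b, g1), nor(g2, g3),
            nor(g4, ci), nor(g4, g5), nor(ci, g5),
            nor(g6, g7), nor(g5, g1))

# g1..g9 settled for inputs (0, 0, 0); in this wiring g8 is S, g9 is Co.
state = (1, 0, 0, 1, 0, 0, 1, 0, 0)
co_trace = []
for tick in range(10):              # now flip input a to 1 and watch Co
    state = step(state, 1, 0, 0)
    co_trace.append(state[8])

print(co_trace)   # [0, 1, 1, 1, 0, 0, 0, 0, 0, 0] -- a transient false 1
```

The final state is correct (S=1, Co=0 for inputs 1+0+0), but Co spends three ticks lying.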
Here’s another way to see the changes passing through the network:
0: [1,0,0, 0,0,0,0,0,0,0,0,0, 0,0]
1: [0,0,0, 1,0,0,0,0,0,0,0,0, 0,0]
2: [0,0,0, 0,0,1,0,0,0,0,0,1, 0,0]
3: [0,0,0, 0,0,0,1,0,0,0,0,0, 0,1]
4: [0,0,0, 0,0,0,0,1,1,0,0,0, 0,0]
5: [0,0,0, 0,0,0,0,0,1,1,0,1, 0,0]
6: [0,0,0, 0,0,0,0,0,0,0,1,0, 0,1]
7: [0,0,0, 0,0,0,0,0,0,0,0,0, 1,0]
8: [0,0,0, 0,0,0,0,0,0,0,0,0, 0,0]
Each line shows a list (vector) of bits where each bit represents whether the respective gate changed during this tick (numbers along the left).
The simulation mechanism cycles the clock ticks until this vector is all zeros (as at the bottom). That’s how it knows all changes have percolated through.
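That cycle-until-no-changes mechanism can be sketched like this. The gate wiring here is a hypothetical 9-gate NOR full adder (with G8 as S and G9 as Co), not necessarily the exact circuit from the posts:

```python
def settle(gates, state, limit=100):
    """gates maps each NOR gate's name to its two input names;
    state maps every name (inputs and gates) to its current bit.
    Cycle until the change vector is all zeros."""
    for t in range(limit):
        new = dict(state)
        for g, (x, y) in gates.items():
            new[g] = 1 - (state[x] | state[y])      # NOR
        changes = [int(new[g] != state[g]) for g in gates]
        state = new
        if not any(changes):                        # all zeros: settled
            return state, t
    raise RuntimeError("circuit never settled (oscillating?)")

gates = {
    "G1": ("a", "b"),   "G2": ("a", "G1"),  "G3": ("b", "G1"),
    "G4": ("G2", "G3"), "G5": ("G4", "Ci"), "G6": ("G4", "G5"),
    "G7": ("Ci", "G5"), "G8": ("G6", "G7"), "G9": ("G5", "G1"),
}
state = {"a": 1, "b": 0, "Ci": 0}
state.update({g: 0 for g in gates})    # all gates start False
state, ticks = settle(gates, state)
print(state["G8"], state["G9"])        # S = 1, Co = 0 for inputs 1, 0, 0
```

The stopping condition — an all-zero change vector — is the software’s version of waiting for the meter needle to stop moving.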
Another vector lets the software report on the gate states at each tick (literally capturing the states of the system):
0: [0,0,0, 1,0,0,1,0,0,1,0,0, 0,0] -- start states
1: [1,0,0, 1,0,0,1,0,0,1,0,0, 0,0]
2: [1,0,0, 0,0,0,1,0,0,1,0,0, 0,0]
3: [1,0,0, 0,0,1,1,0,0,1,0,1, 0,0]
4: [1,0,0, 0,0,1,0,0,0,1,0,1, 0,1]
5: [1,0,0, 0,0,1,0,1,1,1,0,1, 0,1]
6: [1,0,0, 0,0,1,0,1,0,0,0,0, 0,1]
7: [1,0,0, 0,0,1,0,1,0,0,1,0, 0,0]
8: [1,0,0, 0,0,1,0,1,0,0,1,0, 1,0] -- end states
In both cases, the three input bits, a, b, and Ci, are on the far left, and the two output bits, S and Co, are on the far right. The nine gates are in between.
The two lists, especially the first, show how changes flow through the system given a single input change.
The events for the other two inputs changing look similar.
The image at the top of the post shows a graphical representation of the NOR gates and outputs.
The chart shows the “power on” intermediate states:
All gates begin set False. But NOR gates invert, so on the first tick every gate sees all-zero inputs and switches to True. You can see that first big spike.
On further ticks the logic trickles through until, seven clock ticks later, all changes have propagated. The changes vector looks a little different for this case:
1: [0,0,0, 1,1,1,1,1,1,1,1,1, 0,0]
2: [0,0,0, 0,1,1,1,1,1,1,1,1, 1,1]
3: [0,0,0, 0,0,0,1,1,1,1,1,0, 1,1]
4: [0,0,0, 0,0,0,0,1,1,1,1,0, 1,0]
5: [0,0,0, 0,0,0,0,0,0,1,1,0, 1,0]
6: [0,0,0, 0,0,0,0,0,0,0,1,0, 1,0]
7: [0,0,0, 0,0,0,0,0,0,0,0,0, 1,0]
8: [0,0,0, 0,0,0,0,0,0,0,0,0, 0,0]
As upstream gates settle, fewer and fewer changes propagate. It looks more like a flood draining away than the single signal trickling through in the earlier example.
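The power-on flood is easy to reproduce in the same unit-delay style. Again, the 9-NOR wiring below is a standard arrangement I’m assuming, not necessarily the posts’ exact circuit, so the per-tick counts differ from the trace above, but the shape is the same: a big first spike, then a thinning out:

```python
def nor(x, y):
    return 1 - (x | y)

def step(s):
    """One tick of a 9-gate NOR full adder with all inputs held at 0."""
    g1, g2, g3, g4, g5, g6, g7, g8, g9 = s
    return (nor(0, 0), nor(0, g1), nor(0, g1), nor(g2, g3),
            nor(g4, 0), nor(g4, g5), nor(0, g5),
            nor(g6, g7), nor(g5, g1))

state = (0,) * 9          # power on: every gate starts False
counts = []
while True:
    new = step(state)
    counts.append(sum(a != b for a, b in zip(new, state)))
    state = new
    if counts[-1] == 0:   # the change vector went all zeros
        break

print(counts)             # [9, 8, 5, 4, 2, 1, 0] with this wiring
```

All nine gates flip on the first tick (the spike), and the changes dwindle monotonically until the circuit is stable.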
The bottom line is that a state-based simulation needs to pay attention to the level of system state that matters.
A full-adder doesn’t care about intermediate states, because only the outputs matter. Crucially, the intermediate states are not reflected in the full-adder abstractions (the truth table, the logical expression, or the FSM).
They do appear in the addition model (as overflow and carry), and they are implicit in the simulation.
In terms of outputs, these states don’t matter. In terms of implementations, those states are vital parts of the operation.
The question for consciousness is, which parts matter?
Stay intermediate, my friends!
 That clock can be as fast or slow as desired, within the physical limits of the circuitry! (At very high speeds, RF coupling and excessive current flow become an issue.)
 Computers also have several levels of ignorance. The logic can’t see the wiring. The operating system can’t see the logic gates. It’s not uncommon for large parts of high-level software to be “blind” to parts of itself.
 At the quantum level, it does! Virtual particles are forming and canceling all the time, everywhere; your whole car does weird things down in the interstices of reality in between being your car.
Fortunately, reality has its own checkpoints where things have to look normal.