Intentional States

I imagined this as my final post discussing A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous two posts.

This post’s title is a bit gratuitous because the post isn’t actually about intentional states. It’s about system states (and states of the system). Intention exists in all design, certainly in software design, but it doesn’t otherwise factor in. I just really like the title and have been wanting to use it. (I can’t believe no one has made a book or movie with the name).

What I want to do here is look closely at the CSA states from Chalmers’ paper.

[Update: As it turned out, that close look made me realize there is something wrong with the whole idea. I’ll butt in again after this first part, which is still on point.]

The idea is that a system of interest (a brain, in this case) can be divided into its functional parts (neurons, for instance) and the behavior of the system can be characterized as an ordered series of “snapshots” and transitions from one snapshot to the next.

The snapshots show the state of the functional parts changing over time as the system does its thing. In each snapshot, each part is in some state.

The possible states a part can have are typically denoted with numbers, so in each snapshot, the state of each part is specified with the number representing the state that part is in. The “snapshot,” therefore, is a list of numbers (rather than, say, a list of pixels for an image).

This is called a system state, as the combined states of all the parts at that moment comprise the state of the system at that moment.

Exactly as in a movie, the ordered series of snapshots describes the behavior of the system over time. (But with lists of numbers rather than pixels.)

Very importantly, as in an animated movie, with the right algorithm, we can also predict or simulate the behavior of the system over time. System states therefore can describe a real physical system in the future or an entirely abstract system.
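
To make the difference concrete, here’s a toy sketch in JavaScript; the parts, the states, and the update rule are all invented for illustration:

    // A "snapshot" (system state): one number per functional part.
    // A real brain would have billions of entries; this toy has four.
    const snapshots = [
        [0, 1, 1, 0],   // state of the system at t=0
        [1, 1, 0, 0],   // t=1
        [0, 0, 1, 1],   // t=2
    ];

    // Playback only describes what the system already did.
    for (const state of snapshots) {
        console.log(state.join(" "));
    }

    // Simulation: with the right rule, we can also generate states that
    // were never recorded. This particular rule is purely hypothetical.
    function simulateNext(state) {
        return state.map((s, i) => (s + state[(i + 1) % state.length]) % 2);
    }
    console.log(simulateNext(snapshots[snapshots.length - 1]));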

A crucial point, however, is that these snapshots themselves only describe the system.

§

I think the movie analogy helps point out two important things:

Firstly, movie frames don’t capture what’s between frames. In filming real-world things, ordinary movie cameras miss a great deal. It requires high-speed cameras to begin to capture what’s going on.

Likewise, simulations need to have an appropriate time resolution if they’re to capture all the dynamics of the system. How often do we need to take snapshots of the brain to capture what it does?

There is also the issue of determining what the brain states even are in an asynchronous, real-time system like the brain. It doesn’t march in step; there’s no obvious moment to take a snapshot.

Secondly, it makes clear the descriptive nature of the snapshots (the movie).

A movie of real life isn’t real life. An animated movie also isn’t of real life. Both just describe something (as a series of still images).

§

We should distinguish between the system states and the states of the system.

The former is a description of the system itself; it actually reflects the system’s causality. The latter lists the states of the system over some time period. Most importantly, states of the system contain no causality.

We can, in principle, capture the states of the system from someone’s brain. That gives us a description of their cognition that could, in principle, somehow be “played back.”

Some believe that merely capturing these states results in a recording that somehow has mental content. Others hold that it needs to be played back in some fashion (which certainly raises the question: “How?”).

Crucially, in this case, no algorithm computes the states being played back.

Chalmers requires a causal topology, which he believes is found in the execution of such an algorithm. He requires that the next state be causally generated.

Otherwise, he’s stuck with Searle’s Wall and Putnam’s Pixies, which he’s able to discard on the grounds that they have no causality. (I reject them on the grounds of the complexity of the interpretation necessary to extract them, which I find a more robust objection.)

But the recording of a real brain’s states has this problem: the causality of the brain is not captured. We just record a series of snapshots of it in action. Only in the ordering of the snapshots is there any apparent causal behavior.

Just as the frozen frames of a movie don’t contain causality, but show us an illusion of it when played back in sequence.

§ §

In any event, playback of states of the system can, at best, only show what happened. It can’t show what will happen.

That requires a simulated system (of something physically real or something imagined) to create new states of the system.

And that requires knowing the system well enough to determine its system states and transitions. Here we must have the transitions; they’re the whole point, telling us how to move from state to state.

Chalmers starts with the obvious low-level parts, neurons, which gives us our state vector (for specificity, it has 50-billion neuron components, each with enough bits to model a neuron state).

The hard part is determining, given some configuration of those 50-billion neuron states, what the next state is. For now we have to imagine some program P that tells us how to compute the next state.

Note that what Chalmers describes here is not quite the same thing as simulating the neurons in a neural network. The neuron states don’t really reflect the behavior of the neurons; they certainly don’t model it.

All the CSA does is say: Given state S_N, the next state is S_(N+1).
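
In code terms, that amounts to nothing more than indexing into an ordered table. A minimal sketch (the table contents are made up):

    // The entire "transition rule" is a lookup into an ordered table.
    // STATE_TABLE[n] is the full state vector (one entry per neuron) at step n.
    const STATE_TABLE = [
        [3, 0, 7, 1],   // S_0 (a real one would have 50-billion entries)
        [2, 1, 7, 0],   // S_1
        [2, 1, 6, 0],   // S_2
    ];

    // Given state S_N, the "next state" is just S_(N+1).
    function next(n) {
        return STATE_TABLE[n + 1];
    }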

§

[This is where, for me, the lightbulb went on. The rest of this is current with regard to what I realized.]

But there’s a problem with this and it involves the whole point of the CSA — which is to combine components into a single state.

In reducing the topology to S_N » S_(N+1), we are saying that the next state for all 50-billion neurons depends simply on stepping from one brain state to the next.

Specifically, there is no reason for the next state. It’s not based on any theory.

It’s not due to some model that says, “Okay, if a (simulated) neuron is in this (simulated) state (due to these simulated inputs), then its next (simulated) state is as follows (because simulated biology).”

It’s due to a model that only says, “Okay, if the whole system is in state S_N, then the next state is S_(N+1) (because N+1 follows N).”

That’s all it says; that’s all a CSA can say.

Why would there be any causality in the sole fact of the whole brain being in one state and then being in the next state?

Of course there is an underlying causality responsible for the change. But treating the system as a CSA reduces it to “N+1 comes after N,” which isn’t causality at all (any more than 5 causes 6).

§ §

In section 3.3, Chalmers writes, “Any system that implements this CSA will share the causal topology of the original system.”

Beyond that, he doesn’t say much about such an implementation. Per my previous post, we’re talking about program P (executing on engine E).

At this point, it’s reasonable to ask if program P is even possible (E is, essentially, any CPU).

Kolmogorov complexity considerations might force P to be, essentially, a list of states and transitions. In fact, all the more complicated state engine apps I’ve written were exactly that: the state table along with a simple lookup engine.

[See: State Engines, part 1, and the two parts that follow, for an implementation of a simple state engine.]
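
For flavor, here’s a minimal sketch of what I mean by a state table plus a simple lookup engine (the table contents are invented):

    // Table-driven state engine: the "program" is the table;
    // the engine just looks things up.
    const TRANSITIONS = {
        idle:    { start: "running" },
        running: { pause: "paused", stop: "idle" },
        paused:  { start: "running", stop: "idle" },
    };

    function step(state, input) {
        const next = TRANSITIONS[state] && TRANSITIONS[state][input];
        return next || state;   // unrecognized input: stay put
    }

    // step("idle", "start") === "running"

Note that if the only “input” driving the table is a clock tick, each state has exactly one successor, and the engine just plays the table back in order.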

Being a state engine makes the system a playback mechanism.

(Chalmers also doesn’t say what drives the system from state to state, but given the description, it seems to be a matter of “system ticks” — which raises the question of time signatures.)

Ultimately, the causal topology just says this state follows that state because it does. It’s no more than the movie.

§ § §

For me, this blows the CSA idea out of the water (or sinks the ship, fielder’s choice).

The idea that causality is preserved by playing back a set of states seems questionable. At the very least, it needs an account of how it is that one state follows another. That’s the real causality.

As support for computationalism, I think the idea of a CSA fails.

§

And that’s on top of the problem I have with simulation versus reality.

Which, I suppose, means another post explaining that, but I think I can move on from this paper. I wanted to talk a little about sections 2.2 and 3.4, just my reactions, but three posts is enough, I think.

(OTOH, Mr. Chalmers, I’m totally down with Absent Qualia, Fading Qualia, Dancing Qualia (1995). Completely agree.)

§ § §

Stay stateful, my friends!


62 responses to “Intentional States”

  • Wyrd Smythe

    Maybe I can spare myself (and you) a fourth post with this summary:

    On The One Hand: A physical, physically connected network of 86-billion nodes. Each node’s behavior is in virtue of its internal operation (whatever that amounts to). Each neuron responds to the direct (physically causal) input from, on average, 7,000 other neurons.

    On The Other Hand: An array (vector) of 86-billion memory locations, each an array of separate bits (with enough bits to represent neuron states). A separate Program, P, executed by an Engine, E, sets the bit patterns to represent different states.

    [Ever notice that “On The One Hand” and “On The Other Hand” both have the same abbreviation?]

    Already those seem pretty different to me, despite the claim that the latter is executing the “causal topology” of the former.

    But in more detail:

    A neuron has a physical state that it’s in. It’s a complex machine in its own right, so it has its own CSA, so to speak. What matters, of course, is the output, the result, of the internal states.

    Specifically, it matters to us whether the neuron is firing or not.

    That’s determined by the (on average) 7,000 other inputs and the complex internal workings of the neuron. From the outside, we can make a causal map (that ignores the internal workings) and determine that given inputs cause a given output.

    A simulated neuron is a set of instructions along with the necessary memory for neuron state and inputs. There is logic that says, given these numbers as inputs, here’s how to set the numbers representing state.

    Everything done amounts to manipulating numbers with logic. Which is to say: Everything done amounts to doing math.

    So I see those as very different.

    Another way to look at it is that the physical forces involved swamp the signal information. Even in an electrical circuit, the voltages involved far exceed the information content of a single bit.

    In contrast, an abstraction is pure information. The CSA, of itself, requires no energy whatsoever. The computer uses physical signals to be a computer, but the abstraction the computer is executing is pure information.

    Meanwhile, back in the brain, the abstraction (from which the CSA is created) is reified in the physical operation of the brain. The causality that backs the abstraction is based on the physics of the brain.

    Which brings up another crucial difference:

    In the brain (in any machine), the CSA representing it is reified in the physics of the brain in a direct, one-to-one way. The CSA comes from those physics.

    A computer also has a CSA that the physical machine reifies. That CSA, of course, bears no relation to the brain CSA.

    The operation of the computer plays back or calculates the brain CSA, which is an abstraction.

    So, bottom line:

    • Physical neuron state versus a number in memory.
    • Physical forces versus abstract information.
    • Physical mechanism versus numeric abstraction.

    All I’m saying is that these differences seem significant enough to call computationalism an extraordinary claim.

  • JamesOfSeattle

    “Ultimately, the causal topology just says this state follows that state because it does. It’s no more than the movie.”

    Actually it would be more accurate to say the causal topology says this state follows that state because the laws of physics require it. That’s why the order of states matters and is different from a movie. In a movie you could change the order of frames and it would still play. In a causal topology, changing the order of the states would require changing the laws of physics.

    *

    • Wyrd Smythe

      “Actually it would be more accurate to say the causal topology says this state follows that state because the laws of physics require it.”

      In the actual physical system, yes, absolutely. The states occur due to the physics of the physical system.

      But once you abstract the CSA and play it back on a computer, that physics-based causality is lost. You can play the CSA states back in any order, just as you can re-order the frames of a movie.

      The situations are the same. A movie of a real-life scene records snapshots of physical causality in action. But those snapshots are disconnected, non-causal, and re-orderable.

      Likewise, the CSA is a set of snapshots of a physical system in action. And those snapshots are likewise disconnected, non-causal, and re-orderable.

  • JamesOfSeattle

    If the computer implements the CSA, then the states follow in order because of the laws of physics. If the states are following some other order, that computer is not implementing the CSA. If the computer is just playing back the states in order, but each state is not leading to the next state because of the laws of physics, that computer is not implementing the CSA in question, it’s implementing some other CSA.

    *

    • Wyrd Smythe

      “If the computer implements the CSA, then the states follow in order because of the laws of physics.”

      ?? What physics?

      CSA state #50001 is a list of 50-billion numbers.

      CSA state #50002 is a list of 50-billion numbers.

      What possible physics connects two lists of numbers?

      The only reason, as far as the computer is concerned, that one follows the other is that one is labeled #50001 and one is labeled #50002.

      All CSA states are just lists of numbers. What physics connects lists of numbers?

      • JamesOfSeattle

        If the computer implements the CSA, some physical state of the machine represents state #50001, and some physical process pushes electrons around until the machine is in a state that represents #50002. If that is not how it works, the machine is not implementing the CSA.

        *

      • JamesOfSeattle

        Let me clarify that last sentence. If that is not how it works, the machine is not implementing the CSA in a way that preserves the causal topology.

        *

      • Wyrd Smythe

        Correct. No computer can.

        The causality you speak of, the electrons pushing things around, as I explained earlier, is the computer’s causality, not the program’s.

        Consider that I program the computer to run the states, but skip every other one. Or I program it to run them backwards.

        What causality is responsible for the next state?

      • JamesOfSeattle

        If you program the computer to skip states, the computer is executing a different CSA with a different causal topology.

        Whatever the computer is running, the current state is responsible for the following state. The order of those states is the causal topology. If those states in that order match a CSA, then that computer is implementing that CSA with that causal topology. The states are physical, they are not the numbers. The numbers in the table represent the states. They are not identical to the states.

        *

      • Wyrd Smythe

        “If you program the computer to skip states, the computer is executing a different CSA with a different causal topology.”

        Obviously, but do you understand it’s completely arbitrary to the computer?

        “Whatever the computer is running, the current state is responsible for the following state.”

        The current state of the computer, not the program. The program is arbitrary.

        Again, those same computer states occur during the execution of any program.

        The only difference is the bit values involved in those states.

        “The states are physical, they are not the numbers.”

        Yes, but they represent arbitrary numbers. The states in the physical system represent genuine physical states of the system, not numbers.

        “The numbers in the table represent the states. They are not identical to the states.”

        Exactly. So what is the difference between having those numbers in a table and cycling them through memory one after the other?

        The only causality is in the ordering of the table of numbers.

        Chalmers’ claim is that running those numbers through memory in the right order preserves the causal topology of the physical system. I argue that it can’t.

      • Wyrd Smythe

        Let’s try something. Here is a causal hierarchy:

        Brain ⟩⟩ Neurons ⟩⟩ Biology ⟩⟩ Chemistry ⟩⟩ Physics.

        Here’s another:

        CPU ⟩⟩ Gates ⟩⟩ Circuits ⟩⟩ Electronics ⟩⟩ Physics.

        What I mean by causal hierarchy is that the causal behavior of an item on the left is fully explained by the item to the right. So brains are fully explained in terms of neurons, and a CPU is fully explained in terms of gates. These are layers of physical causality.

        Ultimately what this is saying is that both a brain and a CPU are fully explained by physics.

        Do you agree?

      • JamesOfSeattle

        Maybe that’s the disconnect. The computer is not the CPU. The computer is the cpu plus all of the memory plus all of the input/output hardware (and maybe some other stuff). The contents of memory are part of the physical state of the computer. If a cosmic ray flips a bit in memory it has changed the state of the computer. If by flipping that bit the cosmic ray changes how the computer functions, presumably by changing what state will follow the current state, the cosmic ray has changed the causal topology.

        *

      • Wyrd Smythe

        “The computer is the cpu plus all of the memory plus all of the input/output hardware (and maybe some other stuff).”

        Absolutely, no question. I used “CPU” as a stand-in for all that.

        So, given that elaboration, do you agree with that causal hierarchy?

        “…presumably by changing what state will follow the current state, the cosmic ray has changed the causal topology.”

        Hold that thought, because we’ll come back to it.

        For now, do you agree with the causal hierarchy?

  • JamesOfSeattle

    I should add, the program in memory, i.e., the program currently running on the computer, is part of the physical state of the computer. (The program is not the state, but how the program is stored on the computer specifies part of the state). The computer running a different program is in a different state.

    Also, you could have a computer running the same program under different operating systems. In that case the physical states would be different, but the causal topology could be the same if you can map each state under one operating system to the equivalent state under the other operating system.

    *

    • Wyrd Smythe

      “I should add, the program in memory, i.e., the program currently running on the computer, is part of the physical state of the computer.”

      We’re not yet talking about any physical states anywhere.

      The causal hierarchy I’m speaking of, which in the computer depends on the gates or circuits or electronics (whichever level you care to look at), is there regardless of the states of memory.

      Before you even turn it on, is it a physical causal object? Does it have a physical causality that controls its behavior?

      The instant you turn it on, before memory is loaded, is it a physical causal object?

      This is the level of physical causality we’re discussing at the moment, and it’s the only thing we’re discussing at the moment.

      We’ll get to states, trust me.

      • JamesOfSeattle

        Before you turn it on, the computer is a physical causal object. After you turn it on the computer is a different physical causal object.

        Go on.

      • Wyrd Smythe

        Certainly, but at this point we’re just talking about the physical causality of the computer itself.

        Crucially, that physical causality is the same regardless of whether the computer is on or off, and therefore obviously the same regardless of what software it loads.

        We’re talking about the physical capabilities of the computer itself.

  • JamesOfSeattle

    I will agree with the causal hierarchy, although it’s usually referred to as levels of abstraction.

    Go.

    *

    • Wyrd Smythe

      You should have just said yes.

      Because, no, brains, neurons, gates, and circuits are physical objects, and electronics, chemistry, and physics are all, well, physics — physical causality. None of these items are abstractions. That’s the point. We’re talking about physical reality, physical causality, at this point. We’re saying higher layers of that physical system are explained by the lower levels of that system (what’s called reduction). From a reductionist point of view, both the brain and the computer are fully explained by physics.

      Okay, we agree on the causal hierarchy. Let’s proceed (and get to the abstract part).

      1. We can specify the behavior of the brain with a CSA, which we will name CSA:Brain.
      2. We can specify the behavior of the computer with a CSA, which we will name CSA:Computer.

      That much should be obvious. We’re just applying Chalmers’ notion of a CSA to the two physical systems, exactly as he specified.

      The crucial point here is this: CSA:Brain ≠ CSA:Computer (because they specify different physical systems with different physical behavior).

      Agree?

      • Wyrd Smythe

        In case it wasn’t clear, CSA:Brain is, per Chalmers, based on state vectors of neurons. Equivalently, CSA:Computer is based on vectors of logic gates. (Because, in the causal hierarchy, neurons and gates are equivalent.)

      • JamesOfSeattle

        What you say is fine in that there is no contradiction, but it is incomplete. The problem I see coming up stems from the fact that the computer can implement more than one CSA. I will say the computer can implement CSA:brain if it is programmed appropriately. The CSA:computer that you reference is there, but irrelevant.

      • Wyrd Smythe

        “What you say is fine in that there is no contradiction, but it is incomplete.”

        It is incomplete on the computer side, exactly so.

        “The problem I see coming up stems from the fact that the computer can implement more than one CSA.”

        Ha! Yes, indeed (although it remains to be seen if you and I are seeing the same problem).

        “I will say the computer can implement CSA:brain if it is programmed appropriately.”

        Yep, you’re three-for-three.

        “The CSA:computer that you reference is there, but irrelevant.”

        But I’m not sure if we’re on the same page here.

        In the context of computationalism, absolutely the platform is irrelevant. (Church-Turing, yes?)

        But we are, at the moment, discussing only how two physical systems work in and of themselves. The only abstractions of interest at the moment are CSA:Brain and CSA:Computer.

        We agree, if grudgingly on your part, those are entirely different causal topologies, yes?

        Now please bear with me for a slight detour before I get to the computer running the brain CSA. You have programming experience and I noticed C on your list of languages, so I think this should make sense:

        You’ve used a compiler. Any program is an FSA, which means any program is a CSA. Therefore, for any given C compiler, there is a CSA:Compiler that specifies how that compiler works. (In fact, most software starts as some sort of abstract specification, such as an FSA.)

        This is, again, just applying Chalmers’ concept of a CSA to the operation of a compiler. The state vector in this case is likely the RAM space the compiler uses plus the CPU registers.

        The key point is that CSA:Compiler specifies how the compiler behaves just as CSA:Brain and CSA:Computer specify the behavior of their respective systems.

        Still with me on all counts? (I’m almost there.)

      • Wyrd Smythe

        Excellent, thank you for bearing with me.

        You might legitimately note that CSA:Compiler is incomplete because it makes no reference to the inputs or outputs. Adding those brings up a key point:

        Let’s call FSA:Compiler the finite state system that defines how the compiler works. It needs no reference to specific inputs or outputs other than to define their range and meaning. This is true of just about every program; it handles a range of inputs. (As I mentioned previously, most software starts as an abstraction, one possibility of which is a finite state system. The IPO architecture is another starting point abstraction for software design.)

        The range of valid inputs for FSA:Compiler is all possible valid C programs. The range of outputs is object code generated by the compiler. Obviously, there is a mapping from inputs to outputs defined by FSA:Compiler.

        Now consider two scenarios:

        1. FSA:Compiler(Program.1)
        2. FSA:Compiler(Program.2)

        That is, we feed two different (valid C) programs to the compiler. Each causes the compiler to go through a set of similar, but different, states (due to two different inputs).

        If we record those states as a CSA (using memory as the state vector), we end up with two different (but probably fairly similar) CSAs:

        1. CSA#1:Compiler
        2. CSA#2:Compiler

        All good so far?

      • Wyrd Smythe

        (This is similar to what you said before about the computer going through different states when it executes different programs.)

      • Wyrd Smythe

        Great! Ballgame and the Dem Debate. Back in a couple hours.

      • Wyrd Smythe

        Fairly interesting debate. I wanted to see our own Amy Klobuchar; she did well. See below for continuation of discussion.

  • Wyrd Smythe

    The compiler detour helps me explain this next part, because the same thing happens with regard to the computer running software.

    And now the following propositions should make sense and hopefully be agreeable to you. As with the compiler…

    1. There is an FSA that defines the operation of the brain. Call it FSA:Brain.
    2. There is a (different) FSA that defines the operation of the computer. Call it FSA:Computer.
    3. CSA:Brain is, as previously defined, a list of state vectors describing the brain’s operation — presumably as a consequence of FSA:Brain doing its thing.
    4. We define FSA:Computer(program) to mean the computer executes program.
    5. As a consequence of that execution, the computer goes through a set of states that comprises a CSA, per Chalmers’ definition. Assume for the state vectors, we use the computer’s memory locations (including any registers).
    6. Therefore: FSA:Computer(program) results in CSA:program.

    In particular: FSA:Computer(CSA:Brain) results in CSA:Brain.

    How are we doing? Can you agree with everything in the numbered list? How about the last line?

    • Wyrd Smythe

      (It kind of results in CSA:CSA:Brain, if you see what I mean, but since a CSA is an abstraction, a CSA of a CSA is just a CSA.)

      • JamesOfSeattle

        As long as the CSA of the CSA is a different CSA, I think I’m with you

      • Wyrd Smythe

        “As long as the CSA of the CSA is a different CSA, I think I’m with you”

        Yes, absolutely! CSA:CSA:Brain ≠ CSA:Brain.

        The former is the list of state vectors of computer gates while the computer is running CSA:Brain.

        Now comes what might be the sticking point (but it shouldn’t be; it follows from what I’ve said):

        1. The causality of the brain is defined in FSA:Brain, whereas CSA:Brain is a recording of brain states during some time period.
        2. The causality of the computer is defined in FSA:Computer, whereas any given CSA:Computer is a recording of computer states during some time period.
        3. Of specific interest is CSA:program, which is a recording of computer states during the execution of some program.

        The main point being that the causality of the system itself is encoded in the FSM, whereas the CSA is a recording of the states of that system in action.

        A distinguishing characteristic between them is that the FSM is an abstraction describing the causality of the system. The FSM provides the “because” of the system: the system goes from this state to that state because of some input or condition.

        OTOH, the CSA is an abstraction describing the behavior of the system. It’s a recording, like a movie. When we play it back, the system repeats that behavior. This state follows that state because it comes next (which is a tautology).
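
        Here’s a toy way to see the contrast I’m drawing (everything in it is invented for illustration):

        // FSM-style: the abstraction carries the "because". Given a state
        // and an input or condition, it says which state comes next and why.
        function fsmNext(state, input) {
            if (state === "waiting" && input === "signal") return "firing";
            if (state === "firing") return "resetting";
            return "waiting";
        }

        // CSA-as-recording: the abstraction is just the ordered list.
        // The only "rule" is that entry N+1 follows entry N.
        const recording = ["waiting", "firing", "resetting", "waiting"];
        function playbackNext(n) {
            return recording[n + 1];
        }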

        So,… still on the ship or did you jump here?

  • JamesOfSeattle

    Nope, ya lost me there. FSA:Brain and CSA:Brain are essentially the same kind of thing but at different levels of description. They’re both automata. As Chalmers says, “combinatorial-state automata (CSAs) […] differ from FSAs only in that an internal state is specified not by a monadic label S, but by a vector [S^1, S^2, S^3, …].” I don’t see how that makes a CSA a recording instead of a causal topology.

    *

    • Wyrd Smythe

      “FSA:Brain and CSA:Brain are essentially the same kind of thing but at different levels of description. They’re both automata.”

      Agreed on both counts. Very different levels of description!

      The key here is what they are automata of.

      This is why I took a detour through FSA:compiler(program) resulting in a CSA that reflects the compiler processing program. The CSA is a list of the states of the compiler during that time.

      Likewise, FSA:computer(program) results in CSA:program, which is a list of states while the computer executes program.

      And CSA:Brain is a list of neuron states during some time period. (Presumably, it’s due to some kind of internal FSA:Brain executing, essentially, itself.)

      So an FSA, as I’m using it here, means an automaton that defines a process. For instance, FSA:Compiler defines how the compiler works.

      As Chalmers defines a CSA, it describes system behavior over time. Exactly as you quote, “a vector [S^1, S^2, S^3, …].” Those are the entries of the list.

      Let me ask you: Have you any coding experience with FSAs? I should think comparing one with the CSA Chalmers defines would be helpful in seeing the difference between the types of finite-state systems involved.

      [Here’s one example of what I’m talking about. Or just see the image of the FSA in the first post in this series.]

      • Wyrd Smythe

        Another difference between the two: An FSA is a graph with abstract nodes (states) and edges (transitions). A CSA is a list of actual neuron or memory states. An FSA has loops, a CSA is linear.

      • Wyrd Smythe

        One might call CSA:Brain our minds — it’s the brain’s states over time, our flow of cognition, as a result of FSA:Brain executing.

        It’s worth noting that having FSA:Brain amounts to having a Theory of Consciousness. Or, per Chalmers, at least a Theory of Cognition.

        FSA:Brain is the Holy Grail of AGI.

        CSA:Brain is possible in principle if we could record the state of each neuron fast enough.

  • Wyrd Smythe

    If you’re still with me, I can now connect all the dots.

    To start, I’ve been cheating a little with Chalmers’ notion of a CSA. I wanted to draw a clear distinction between the base idea of an FSA, which can define a system (what I’ve called system states), and a list of states over time that describes a system (states of the system).

    Think of it as a kind of edge case for a CSA.

    And it forms an important example for why I think simulations are questionable, so I needed it to be clear what I meant by a list of states.

    The final step might get you back on the ship, at least temporarily. 🙂

    So, in the context I’ve set, what Chalmers needs is a “CSA” (in the sense I’ve defined it as a list) to be more FSA-like. That is, he needs that “because” to move from state to state.

    In the paper [section 2.1] he writes: “State-transition rules are determined by specifying, for each element of the state-vector, a function by which its new state depends on the old overall state-vector and input-vector, and the same for each element of the output-vector.”

    Which distributes the causality of the inputs across all “neurons” in the state vector. All the neurons march to the next state because of some input.

    Crucially, there must be in the system the possibility that a given state leads to multiple possible other states depending on the input.

    (The complication is treating the entire system as a (combined) state and how that requires distributing input causality across all neurons, but that’s a separate discussion for now.)

    One issue is that the more FSA-like the CSA becomes, the bigger it becomes. The Holy Grail would be for CSA:Brain to be FSA:Brain, which means that CSA:Brain must account for every possible state the brain can be in (as FSA:Brain does).

    The notion of CSA:Brain as a list of states is at least foreseeable — it just involves recording and playing back 86-billion values per state. Very large, but not as big as FSA:Brain would be.

    The problem is that state tables tend to be much larger than the problem they describe.

    Anyway, if this connects all the dots and makes sense, we can move on to why I think it won’t work.

    • JamesOfSeattle

      The problem seems to me to be that you are mis-characterizing CSAs as just a list of states. Let me go find specific examples:

      “And CSA:Brain is a list of neuron states during some time period. (Presumably, it’s due to some kind of internal FSA:Brain executing, essentially, itself.)”

      It’s due to the laws of physics, just like the list of states for FSA:Brain is due to the laws of physics.

      “As Chalmers defines a CSA, it describes system behavior over time.”

      A CSA describes system behavior over time exactly the way an FSA describes behavior of a system over time.

      “An FSA is a graph with abstract nodes (states) and edges (transitions). A CSA is a list of actual neuron or memory states.”

      No, a CSA is also a graph with abstract nodes (states) and edges (transitions). It’s just that the CSA specifies values for specific substates, whereas the equivalent FSA lumps all those substates into one state.

      “One might call CSA:Brain our minds — it’s the brain’s states over time, our flow of cognition, as a result of FSA:Brain executing.”

      “FSA:Brain executing” is the same thing as CSA:Brain executing. The sentence would be just as accurate: one might call FSA:Brain executing our minds — […] as a result of CSA:Brain executing.

      “An FSA has loops, a CSA is linear.”

      This is simply false. A CSA can have loops.

      “CSA:Brain is possible in principle if we could record the state of each neuron fast enough.”

      The CSA:Brain is also possible in principle if we can understand the causal relations between the neurons.

      Bottom line, CSA:Brain is a specification of how the brain will respond to input, just like FSA:Brain is such a specification.

      *

    • JamesOfSeattle

      I’m trying to see your point, but you
      1. spread it over many posts,
      2. make some comments that appear incorrect on their face (“An FSA has loops, a CSA is linear”), and
      3. make lots of comments like

      “The problem is that state tables tend to be much larger than the problem they describe.”

      which suggest that you are going to argue that it won’t work because it’s really hard (tables get really big?), as opposed to arguing it’s logically impossible.

      Please move on to why you think it won’t work. At your leisure.

      *

      • Wyrd Smythe

        “The problem seems to me to be that you are mis-characterizing CSA’s as just a list of states.”

        As I said, I’ve been cheating a little to create a context. The state-transition function Chalmers mentions [section 2.1] I’ve been visualizing as something like this:

        function nextState (state_id, neuron_id) {
            // STATES(state_id) returns the whole state vector for that tick;
            // we just pick out this neuron's entry.
            return STATES(state_id)[neuron_id];
        }

        Where STATES is a function that returns the entire state vector given the id of that state. The idea is that, in this case, the state_id is a timer tick, so the CSA rolls off the states one after the other.
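
        And a hypothetical driver for it, just to show what I mean by rolling off the states (NEURON_COUNT, memory, and TICK_MS are stand-ins, not anything from the paper):

        // NEURON_COUNT, memory, and TICK_MS are hypothetical globals.
        // On each tick, copy the next recorded state vector into memory.
        let tick = 0;
        setInterval(() => {
            for (let n = 0; n < NEURON_COUNT; n++) {
                memory[n] = nextState(tick, n);
            }
            tick += 1;
        }, TICK_MS);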

        I suppose my error was in trying to use Chalmers’ notion of a CSA that way, but you didn’t like it when I introduced new terminology, so I saw this as a way to represent what I meant.

        That said, yes, the CSA Chalmers ultimately needs is an FSA. Part of what I’m trying to illustrate is that there are multiple FSAs for any process — multiple ways to abstract it.

        “I’m trying to see your point,”

        It doesn’t feel like it.

        The way it’s usually done, in my experience, is that the other person feeds back their understanding of what you said. You mostly tell me I don’t know what I’m talking about. That’s not making an effort to understand me. Certainly I’m not perfect at expressing myself, are you?

        “make lots of comments like”

        A lot? Really?

        No, I wasn’t going to base any argument on table size. It was an aside.

        “Please move on to why you think it won’t work.”

        I was just about to. Stand by.

  • Wyrd Smythe

    Okay, here’s the point:

    I needed to define a notion I think of as states of the system, which I’ve explained before. There is a CSA that accomplishes what I want, but instead I’ll call it “LIST”. That can replace “CSA” in much of what I wrote.

    So FSA:Compiler, given Program.1 or Program.2 results in:

    FSA:Compiler(Program.1) → LIST:Compiler.1

    FSA:Compiler(Program.2) → LIST:Compiler.2

    Note that LIST is the list of states (based on memory states) the compiler goes through, not its output. The output would be the object code for the two programs.

    As an aside, I used a compiler here because it takes a program as input, but treats that program, and the output that results from processing it, as data.

    Importantly, the program (and the output) encodes the causality of the program, as put there by the programmer, but the compiler is unaware of it.

    Likewise, giving two programs, CSA.1 and CSA.2, to the computer:

    FSA:Computer(CSA.1) → LIST:Computer.1

    FSA:Computer(CSA.2) → LIST:Computer.2

    Again, LIST is the states (based on logic gates here) the computer goes through. Similarly, with the brain:

    FSA:Brain(environment) → LIST:Brain

    (LIST based on neurons here.) And in the above, FSA==CSA, for all intents and purposes.

    Now, given that we can record LIST:Brain and create the simple “list” CSA I’ve described previously, it can be true that:

    CSA.1 == Wrapper(LIST:Brain)

    Such that when the computer executes CSA.1, it is just rolling off states one after the other based on nothing more than what comes next. To ensure the computer does lots of processing, let’s encrypt and compress LIST, so Wrapper() involves decompression and decryption.
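
    Roughly what I have in mind for Wrapper(), sketched with Node’s built-in zlib (decryption would just be more of the same busywork; loadIntoMemory is a stand-in):

    const zlib = require("zlib");

    // Plenty of genuine computation happens here (decompression), but the
    // brain states themselves just roll off in the order they were recorded.
    function playBack(compressedList) {
        const json = zlib.gunzipSync(compressedList).toString("utf8");
        const listBrain = JSON.parse(json);   // the recorded state vectors
        for (const stateVector of listBrain) {
            loadIntoMemory(stateVector);      // stand-in: write the numbers out
        }
    }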

    Of course, obviously, there is no causal topology of the brain represented here. It’s just a recording of brain states. (And not even that, it’s a list of numbers representing brain states.)

    So, presumably, since there is no causal topology in CSA.1, it cannot accomplish the same thing as CSA.2, which is exactly the CSA Chalmers describes. CSA.2 does encode, on some level, the causality of the brain.

    And, clearly LIST:Computer.1 and LIST:Computer.2 have to be quite different.

    The former represents the computer decompressing and decrypting data, while the latter represents its execution of the CSA.2 logic that calculates the next state.

    But here’s the punch line. As far as the states generated in memory go (the “output” of the two programs), they’re the same in both cases. The state vectors go through the same set of changes in both cases.

    But one is just a list, the other involves calculation.

    Chalmers’ claim is that the calculation amounts to recapitulating the causal topology of the brain, but I don’t see how that’s possible.

    Mainly because:

    LIST:Brain ≠ LIST:Computer.2

    My bottom line: The two systems are most definitely not going through the same set of states. The only claim they are involves a very high-level abstraction, and I have a hard time seeing how that counts.

    • JamesOfSeattle

      Quick question: is FSA:Computer(CSA.1) the loading of program CSA.1, or is it the running of CSA.1 given some input?

      *

    • JamesOfSeattle

      LIST:Brain ≠ LIST:Computer.2, but
      LIST:Brain correlates with LIST:Computer.2, for any input.

      That is, given LIST:Computer2 for any given input, we can determine LIST:Brain for the equivalent input.

      Agreed?

      *

      • Wyrd Smythe

        Yes, with one caveat:

        The correlation is there, but trying to derive LIST:Brain from LIST:Computer.2 may not always be possible. A rough analogy is that, given the value 42, it’s not possible to determine if it results from 21+21 or 6×7. Or, given the result of (X+Y) mod 12, it’s impossible to determine exactly what X or Y is.

      • Wyrd Smythe

        A better analogy might be a hash function. Ignoring the small possibility of collisions, a hash function always creates a unique hash for any given input. But it’s generally not possible to determine what that input was. All one can say is that, given Hash-1 and Hash-2, which are different, the two inputs had to be different.
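
        For instance, using Node’s built-in crypto (the inputs are arbitrary):

        const crypto = require("crypto");
        const hash = (s) => crypto.createHash("sha256").update(s).digest("hex");

        // Different inputs give different hashes, but neither hash lets
        // you work backwards to recover its input.
        console.log(hash("21+21"));
        console.log(hash("6x7"));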

      • Wyrd Smythe

        Bandwidth issues again, or did you jump ship?

      • JamesOfSeattle

        Mostly got tired, which is a bandwidth issue. But picking up …

        States -> Behavior

        Equivalent states -> equivalent behavior

        Mentality = behavior

        Equivalent states -> equivalent mentality

        Yes? No?

        *

      • Wyrd Smythe

        “States -> Behavior”

        Within the context of their system, agreed. Brain states describe brain behavior, computer states describe computer behavior.

        “Equivalent states -> equivalent behavior”

        Depending on what’s meant by “equivalent,” agreed. And only within the context of their respective systems.

        “Mentality = behavior”

        I’m not entirely sure about that. I don’t know that some of our behaviors aren’t rooted in something deeper than our mentality (the knee-jerk reflex, for instance). But generally speaking, agreed.

        “Equivalent states -> equivalent mentality”

        In two brains, equivalent brain states probably result in equivalent mentality, agreed.

        But remember my point is that the states of the computer are nothing like the states of the brain. Saying the computer goes through the same states as the brain is a high-level abstraction at best and, IMO, not at all true.

    • JamesOfSeattle

      I’m saying if Brain1 and Computer2 have equivalent states, they have equivalent mentality. How do you test the mentality of something? You test it. You check its answers. I.e., behavior. The behavior of Brain1 and Computer2 is equivalent. I would say identical, but it cannot be identical, just like your behavior cannot be identical to mine. If you channel the output into a common medium, like text messaging, the behavior would be indiscernible.

      *

      • Wyrd Smythe

        “I’m saying if Brain1 and Computer2 have equivalent states, they have equivalent mentality.”

        I know. That’s what Chalmers is saying, too. I’m challenging it.

        “How do you test the mentality of something? You test it.”

        Absolutely! But we’re a long way away from having a system we can test. Until then it’s all speculation based on what one believes.

        “The behavior of Brain1 and Computer2 is equivalent.”

        We’ve already established that LIST:Brain and LIST:Computer.2 are quite different, which means that FSA:Brain and FSA:Computer are quite different.

        Any claim to equivalency is based on comparing the abstractions of FSA:Brain and FSA:Computer(CSA:Brain). Further, the abstraction that FSA:Computer(CSA:Brain) can be said to be recapitulating the states of FSA:Brain is based on an interpretation of what the computer is doing that is very different from the interpretation of the brain that results in CSA:Brain.

        So I’m saying any claim to equivalency between these two cases is weak or non-existent.

  • JamesOfSeattle

    “Any claim to equivalency is based on comparing the abstractions of FSA:Brain and FSA:Computer(CSA:Brain).”

    True, but mentality is an abstraction, so comparing mentality of two systems would require comparing abstractions.

    You say “But we’re a long way away from having a system we can test”, which is another way of saying it’s hard. So what? We’re talking about logical possibility. You continue “Until then it’s all speculation based on what one believes”, which is just giving up and saying “who knows?”

    But like I said, logically, if you ask FSA:Brain and FSA:Computer(CSA:Brain) the same questions, you will get the same answers. Right? Or are you saying “who knows?”

    *

    • Wyrd Smythe

      “True, but mentality is an abstraction,”

      How do you figure? It’s a product of a physical system operating in the real world. Our description of it is an abstraction, but mentality is a real physical thing.

      (Unless you’re making a dualist or spiritual claim, which I don’t believe you are.)

      “You say ‘But we’re a long way away from having a system we can test’, which is another way of saying it’s hard. So what?”

      Being a little disingenuous here, aren’t you?

      What I said was in direct reply to your assertion: “How do you test the mentality of something? You test it.”

      I agreed and commented: “Absolutely! But we’re a long way away from having a system we can test. Until then it’s all speculation based on what one believes.”

      You are, once again, taking my asides much too seriously.

      “But like I said, logically, if you ask FSA:Brain and FSA:Computer(CSA:Brain) the same questions, you will get the same answers. Right?”

      Depends on the question, doesn’t it?

      As I have said before, a good enough simulation of a brain is likely to answer questions about a biological brain. Similar to how a simulation of a heart can answer questions about how a heart works.

      Whether such a system can answer questions about consciousness is what I’m not at all sure about.

      This depends entirely on whether two completely different systems doing two completely different things, but which can be interpreted to be using the same high-level abstraction, can accomplish all the same results.

      I think it’s a stretch to imagine they can.

      Have you considered, for one example, that the CSA Chalmers describes makes no reference to the network connectivity of the brain? Presumably it’s encoded in the function that calculates the next state, but it’s not contained in the CSA states themselves.

      • JamesOfSeattle

        [I don’t take your asides too seriously. I simply fail to recognize them as asides. I’ll try harder.]

        “But like I said, logically, if you ask FSA:Brain and FSA:Computer(CSA:Brain) the same questions, you will get the same answers. Right?”
        “Depends on the question, doesn’t it?”

        Um, no. I’m talking about inputs and outputs. If you input a question like “how do you feel about Warren’s chances in the primary?”, you should be getting the same response from each, right?

        *

      • Wyrd Smythe

        I’m saying probably not, because the brain simulation would be of a comatose brain (my best guess) or a seriously impaired brain if it had “thoughts” at all.

      • Wyrd Smythe

        [I’ll try to cut down on the asides. Way my brain works, lots of things connect with other things, and there are so many nuances to things…]

        Let me put it this way: I fully understand the claim of computationalism, which (unless I missed something) is basically what you are arguing. If we do the right thing in the computer, it will calculate a mind.

        (FWIW, Chalmers makes the weaker claim that it will calculate cognition.)

        I disagree, as I said before, largely on the grounds that LIST:Brain and LIST:Computer.2 are so different (and hence what those two systems are doing is so different).

        There is also my argument that LIST:Computer.x (for any x) looks a lot like any LIST:Computer.y (again, for any y). For example, at least half of any computer LIST involves the CPU fetching and decoding instructions, which hardly varies across the process itself let alone across different processes. Fetching and decoding an ADD instruction is the same every time in every context (see the sketch at the end of this comment).

        Thirdly, there is my argument that the abstraction CSA:Brain has a strong direct mapping to the physical brain itself. That causality is determined by the physical characteristics of the thing itself. But the abstraction created by FSA:Computer(CSA:Brain) only exists in the programmer’s mind — it has no connection to the machine.

        I think all those things matter.
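
        To make the fetch-and-decode point concrete, here’s a toy instruction loop (the instruction set is invented):

        // The fetch and decode steps are identical every cycle, no matter
        // what program is in memory; only the execute step differs.
        function run(memory) {
            let pc = 0;
            let acc = 0;
            while (pc < memory.length) {
                const instruction = memory[pc];     // fetch (same every time)
                const [op, operand] = instruction;  // decode (same every time)
                if (op === "ADD") acc += operand;   // execute
                else if (op === "HALT") break;
                pc += 1;
            }
            return acc;
        }

        // run([["ADD", 2], ["ADD", 3], ["HALT"]]) === 5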

      • Wyrd Smythe

        There is also a more subtle point that, in the brain, there is a one-to-one correspondence between FSA:Brain states and CSA:Brain states. The mapping is one-to-one and onto.

        In FSA:Computer(CSA:Brain), there are many states necessary to simulate each state of CSA:Brain. So the underlying mechanism has a one-to-many correspondence.

        Put another way, each state in LIST:Brain is a brain state, whereas it requires many, many states in LIST:Computer.2 to simulate that same single brain state.

        Again, a difference I find significant.

      • JamesOfSeattle

        Problem is there are lots of correspondences, so it can get confusing to keep track.

        So for each state in LIST:Brain there should be exactly one corresponding state in FSA:Brain. And as you say, there is exactly one corresponding state in CSA:Brain, and there should be exactly one corresponding state in FSA:Computer(CSA:Brain), and there should be exactly one corresponding state in LIST:Computer.2. Let’s call it the special correspondence. (See next paragraph for how we get there.)

        Given that there is a one-to-many correspondence from LIST:Brain to LIST:Computer2, we can consider a different, one-to-one, mapping where the state in LIST:Brain corresponds to the final corresponding state in LIST:Computer2. That state is the special, “exactly one corresponding state in LIST:Computer2” referenced in the preceding paragraph. For example, if we have
        LIST:Brain(state1) —> LIST:Computer2(stateA,stateB,stateC),
        then the special correspondence referenced above is
        LIST:Brain(state1) —> LIST:Computer2(stateC)

        Now let’s assume the input for this scenario was the written question “Do you believe in God?”. At some point in LIST:Brain there will be a state (state-n) for which the decision to answer “no” has been made, and the next state is the beginning of generating the typed response “no”. That state will have a corresponding state in LIST:Computer, the special correspondence as just described.

        Are you saying LIST:Brain can have a one-to-one (special) correspondence with LIST:Computer2 but also result in a different answer?

        *

      • Wyrd Smythe

        “And as you say, there is exactly one corresponding state in CSA:Brain, and there should be exactly one corresponding state in FSA:Computer(CSA:Brain), and there should be exactly one corresponding state in LIST:Computer.2.”

        I think you’ve got it, but just to be clear:

        FSA:Brain(state-###) → LIST:Brain(state-###) which is the result of CSA:Brain(state-###). In all three, state-### is a single state.

        FSA:Computer(CSA:Brain(state-###)) → LIST:Computer.2(many states) because it requires many computer states to accomplish CSA:Brain(state-###).

        “LIST:Brain(state1) —> LIST:Computer2(stateC)”

        You’re going to just hand-wave away all the intervening computer states? Doesn’t that demonstrate how abstract the whole thing is?

        “That state will have a corresponding state in LIST:Computer, the special correspondence as just described.”

        But you’re assuming FSA:Computer(CSA:Brain) works in the first place, which is what’s being questioned. (Nearly all computationalist arguments do beg the question. They have to, since no working systems exist, nor do we know how to build one.)

        IF it works, what you say is true. But my contention is it won’t work like you think.

  • JamesOfSeattle

    Fair enough. I’m gonna leave it at that.

    • Wyrd Smythe

      Ultimately it isn’t a resolvable question at this time, so debating it is pretty much just an intellectual exercise. One’s position, as I mentioned earlier, depends a lot on one’s axioms.

      FWIW, I started off (back in the 1970s) just assuming computationalism was right. It’s only in the last decade or so, after lots of conversations like ours and lots of reading, that I’ve found myself questioning my axioms. And it isn’t that I’m anti-computationalist. I’ve just gotten skeptical.
