Algorithmic Causality

This continues my discussion of A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous post.

I left off talking about the differences between the causality of the (human) brain versus having that “causal topology” abstractly encoded in an algorithm implementing a Mind CSA (Combinatorial-State Automaton). The contention is that executing this abstract causal topology has the same result as the physical system’s causal topology.

As always, it boils down to whether process matters.

Given that the argument rests, in part, on causality, it seems contradictory that a numerical computation, which has an entirely different physical causality from a brain (or any physical process), could be seen as equivalent.

I really do think there is a category error right off the bat when it comes to computationalism. A physical system and a numerically-based abstraction of that system fall into disjoint categories.

For one thing, the number of alternate physical systems is limited. We can, for instance, replace brain neurons with silicon neurons or with something else, but we still need to preserve the structure and behavior of those neurons if we really intend to preserve the causal topology of the physical brain. There are only so many ways we can do that.

But the number of ways we can model something numerically is endless. Entirely different models can accomplish the same thing in entirely different ways. Even a specific model has endless implementations, because we can freely change how its numbers map to reality.

All of which is to say that these abstractions — as is true of all abstractions — aren’t real!

§

So, in addition to what I see as a category error, there is the challenge I’ve offered my computationalist friends:

Name just one physical system for which a numerical simulation of that system (i.e. an algorithm) produces the exact same results.

This is an impossible task, because simulated rain isn’t wet, simulated lasers don’t emit photons, and simulated earthquakes don’t knock down buildings.

Simulations aren’t real!

Some might suggest that a calculator could be simulated, but I’d point out the display of a simulated calculator doesn’t emit any photons and the battery doesn’t heat up if short-circuited.

The fact is, computationalism has always presumed The Algorithm — that the brain, unlike any other physical system we know, is just a meat reification of a mathematical abstraction.

Most computationalists don’t realize they are dualists of a sort. They believe in The Algorithm as distinct from the physical brain. The only way an algorithm produces the same results is if it describes an abstraction to begin with (per the Church-Turing thesis).

That’s why an algorithm can simulate the computation a (physical) calculator does. That computation is abstract and reified by the physical calculator. As such the algorithm can produce the same results the calculator’s computation does.

That is to say: Doing math on numbers results in new numbers.

§ §

Turning back to the Chalmers paper, as I said last time, for me the paper jumps the shark (in section 3.3) when it moves from physical reality to numerical simulation.

This has always been the bone of contention. Framed per Chalmers’ idea of causal topology, does executing an algorithm truly preserve causal topology?

I think it may not, except in the most abstract way, and I’m not sure that’s enough.

For one thing, the algorithm (P) only makes sense in reference, not just to an execution engine (E), but also to external factors: the design and intent behind P and E (which are both structurally complex).

§

For example, consider this fragment of machine-level code:

0001    LOAD    R1, [R8]
0002    LOAD    R2, [R9]
0003    ADD     R1, R2
0004    JZ      0006
0005    INCR    R5
0006    ...

For the uninitiated, the first two lines load registers 1 and 2 with whatever is in the memory locations pointed to by registers 8 and 9, respectively. The third line adds those two loaded values. Line four jumps past line five if the add result was zero. Line five, only performed if the add was non-zero, increments (adds one to) the contents of register 5.

So what is the causal topology here? All we can say looking at it is what I just described.

But is the ADD really an add? Are we really checking to see if a sum is zero? Maybe it’s an OR operation, and we’re counting how many times the OR fails. Potentially, it could be an inverted-logic AND. Or something else entirely (register 5 might not be intended as a counter).

Only in reference to the programmer’s intent can we see any higher causal topology.

§

It’s true that looking at a tiny fragment of code is like looking at a tiny fragment of anything: it doesn’t tell us much. We shouldn’t expect to see the higher purpose the fragment is part of.

As a (retired) career programmer, I can say I’ve spent a fair bit of time trying to figure out what someone else’s much larger section of code was doing. Which is why code comments and documentation are so important — they communicate the intent the code cannot.

In fact, that’s an important point: Code comments are crucial because the code itself cannot contain all the semantics of the process. That says something about the lack of semantic content of code.

Small as it is, the fragment still illustrates the ambiguity of code. With that in mind, we can take another look at the {P, E, S, M} system (from last time).

Suppose the register for S is a huge light display in a sports stadium. The execution of P causes lights to go on or off. Further, P is executed slowly enough for people to enjoy the display — let’s say two or three state changes per second.

The organizational invariance principle, along with the claim that P executed by E (let’s call that P+E) preserves the causal topology of a brain, requires that our light display — very slowly — has mental cognition.

[Again I’ll note that Chalmers does not make the stronger claim that this results in phenomenal experience. Many computationalists do. I am more sympathetic to this weaker claim.]

§

An algorithm that generates a sequence of states (and we need to talk more about that) can be engineered to be capable of generating those states backwards or otherwise not in order.

As an aside, there is the question of whether these putative brain states can be generated randomly or if they can only be generated in reference to previous brain states. For example, is it possible to generate brain state #8126 without first generating the 8,125 states that led to it?

If, somehow, the system is given #8125, can it then calculate #8126? Or does that final state still require all the prior states?
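The question can be made concrete with a toy sketch. The transition rule below is invented purely for illustration (nothing about real brain states is claimed): if the putative algorithm is a deterministic function of the current state alone, then being handed #8125 is enough to compute #8126 without replaying the history.

```python
def next_state(state):
    # Hypothetical deterministic transition rule -- a toy stand-in for one
    # step of a CSA. The next state is a pure function of the current one.
    return (state * 31 + 7) % 1_000_003

def run_from(start, steps):
    # Iterate the transition rule `steps` times from `start`.
    s = start
    for _ in range(steps):
        s = next_state(s)
    return s

# If the rule depends only on the current state, then given state #8125
# we can compute #8126 directly -- the 8,124 states before it aren't needed:
s8125 = run_from(1, 8125)
assert next_state(s8125) == run_from(1, 8126)
```

Of course, if the next state depends on anything beyond the current state (earlier states, or physical timing), the shortcut fails — which is exactly what’s in question.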

Alternately, what happens if the algorithm skips every 50th state, or loops over some, or just sometimes “jitters” back and forth a bit between two states?

What if the Mind CSA algorithm just inserts a random state every once in a while? Or repeats a state?

The question I’m really asking here is: How correct and exact does a computation have to be to preserve causal topology?

§

In an algorithm, numbers that represent something satisfy certain numerical conditions. When tested, those conditions steer the flow of the algorithm, which then changes the numbers in a way that maps appropriately onto the abstract model.

For example, when the numbers representing the position and direction of the Pong “puck” satisfy the numerical condition of “intersecting with the right wall” (the horizontal location number matches the location number of the wall), the game code changes the number representing the direction of the puck.

Note that very small code changes can have the puck bounce back slightly before it reaches the wall, or have it “sink” into the wall (any desired amount) before bouncing back. It can even have the puck ignore the wall and fly right through.

In fact, absent corrective code, if the puck moved fast enough between “clock ticks” it might have moved past the wall when the system checks its location. In that case, the system would have no reason to turn it around, and it would effectively have flown through the wall.
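A minimal sketch of that wall test, using the exact-match condition described above (the names and numbers here are invented for illustration; real Pong code differs):

```python
WALL_X = 100  # the wall's location number

def step(x, vx):
    # One "clock tick": move the puck, then apply the naive wall test.
    # The bounce fires only when the puck's x exactly matches the wall's x.
    x += vx
    if x == WALL_X:
        vx = -vx  # reverse horizontal direction
    return x, vx

# Slow puck (speed 1): it lands exactly on the wall and bounces.
x, vx = 98, 1
for _ in range(3):
    x, vx = step(x, vx)
# x is back inside the court, moving left (x == 99, vx == -1).

# Fast puck (speed 3, starting at 99): it jumps from 99 to 102, the
# equality test never fires, and the puck sails right through the wall.
x, vx = 99, 3
x, vx = step(x, vx)
assert x > WALL_X  # tunneled past the wall
```

The “corrective code” mentioned above would test `x >= WALL_X` instead of exact equality, but the point stands: the wall only stops the puck if the programmer remembered to make it do so.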

That is correctly recognized as a flaw in the system, but it points out how unreal the simulated world is and how easy it is for a simulation to be unreal (people who develop virtual 3D realities have lots of funny stories).

The question, again, is: How correct and exact does a computation have to be to preserve causal topology?

§

Assuming it even does in the first place.

Given the ambiguity of code and the lack of meaning in numbers, it’s hard to see how. Given the ability for Pong pucks to fly through walls, it’s hard to see how.

In the brain (or any physical system) the causal topology is well-defined and obvious. Physical effect A directly causes physical effect B.

Real Pong pucks cannot fly through walls.

In an algorithm, certain numbers imply other numbers based on a map of numbers to reality. For instance, the Pong puck’s numbers imply new numbers based on a map describing the Pong reality (how the puck moves, where the walls are).

The causality, such as it is, is entirely abstract and very high-level. We can all but say it exists only in the programmer’s mind.

(And, hopefully, the code comments!)

§

Let’s go back to the program P and the states Sn it generates.

I’ve said I don’t think the states themselves can be part of the causal topology. At best, they’re a scratch pad for the system to remember the current state as it calculates the next state. As discussed above, there are questions involving how the next state is created and to what extent previous states are necessary.

A question to consider is whether the algorithm is possible without the register for S. Could the components of the new S be generated by P, but not stored? That would require that P not need reference to previous states.

Given I’m not sure P is possible, it’s hard to say how much it would depend on previous states. I am inclined to think it would, at least, need the current state to generate a new state.

In general it raises an interesting question: What if we store the pattern of states and play it back?

Let’s go back to the giant light display and imagine it’s driven by a legitimate P — the pattern of activity of the lights really does reflect the system’s cognition.

What if each light records a time signal and on/off status for each state? That would allow the lights, given a “Go!” command, to individually play back the states on their own.

All together the display would again show the (putative) cognition of the system.
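The distinction can be sketched in a few lines of Python. The transition rule below is an arbitrary stand-in (nothing about real cognition is claimed); the point is only that a computed sequence and a recorded playback can be pixel-for-pixel identical:

```python
def next_state(lights):
    # Hypothetical rule: each light's next value depends on its neighbors.
    # This plays the role of the "legitimate P" driving the display.
    n = len(lights)
    return tuple(lights[(i - 1) % n] ^ lights[(i + 1) % n] for i in range(n))

# Run P live, recording each state as it's computed.
state = (1, 0, 0, 1, 0, 1, 1, 0)
recording = [state]
for _ in range(10):
    state = next_state(state)
    recording.append(state)

# Playback: each light independently replays its own on/off history against
# a shared clock. No transition rule is consulted at all.
playback = [tuple(recording[t][i] for i in range(8)) for t in range(11)]

# The two displays are identical...
assert playback == recording
# ...yet only one was generated by the (putatively causal) algorithm.
```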

But would the cognition actually occur? Remember that it’s a playback of cognition that supposedly already happened. But there’s no algorithm driving it this time. The states are not being calculated by a supposedly causal algorithm, so there is no causal topology.

So what happens?

§

I’m going to call that enough for today. I may dip into this yet again.

I had wanted to say some things about section 2.2 and section 3.4. And it feels like there’s more to say about whether causal topology is preserved by an abstraction.

There’s also the thing about system states versus states of the system. I think that’s a meaningful distinction that just about everyone ignores.

So see ya later, maybe.

Stay physically causal, my friends!

About Wyrd Smythe

The canonical fool on the hill watching the sunset and the rotation of the planet and thinking what he imagines are large thoughts.

52 responses to “Algorithmic Causality”

  • Wyrd Smythe

    With regard to code comments, a phrase that popped into my head was, “Code can’t say why.”

    That led to the memory of so many beginning programming students who had a common question: “Why am I doing this?”

    They didn’t ask that in an existential sense, but in the practical sense of why did you have me type:

    int counter = 0;

    What am I doing? Why am I doing that?

    I learned to see that as something of a divider between those destined to grasp programming well and those who would forever flounder with it. (The other great divider is the Data Structures class — that one really separates the coders from the wannabes.)

    The reason, in both cases, is that these are tests of a person’s ability to grasp abstract concepts. The abstraction of a variable, named “counter”, being mapped to a real-world concept is harder for some to grasp than for others. Some never fully wrap their minds around it.

    All-in-all, it speaks to the unreality and artificiality of code. This is, again, why I claim that nothing in nature (including our brains) uses algorithms.

  • JamesOfSeattle

    Wyrd, with all due respect, I think you have misunderstood the concept of causal topology.

    For example, you say “I left off talking about the differences between the causality of the (human) brain versus having that “causal topology” abstractly encoded in an algorithm implementing a Mind CSA (Combinatorial-State Automata).”

    Causality is not the same as Causal Topology. If we say causality is the causal chain at the lowest level we understand, a Causal Topology is a coarse-grained version where we lump some parts of the chain together into units we can call mechanisms. So a higher level topology will have fewer mechanisms. Then each mechanism can have its own inner causality, but for any given topology, we don’t care about the causalities inside the mechanisms.

    So let’s say we have this causality chain:

    System1: 1->[A]->2->[B]->3->[C]->4->[D]->5

    where inputs and outputs are numbers and mechanisms are in brackets. Here is a higher level topology:

    Topology1: 1->[X]->3->[Y]->5

    We can see the System1 maps to this topology if we combine [A]->2->[B] to give [X], and similarly for [Y].

    Here’s another system that maps to Topology1:

    System2: 1->[q]->6->[r]->7->[s]->3->[t]->5

    In this case [q]->6->[r]->7->[s] maps to [X] and [t] maps to [Y].

    Here’s a system that does NOT map to Topology 1:

    SystemQ: 1->[A]->2->[F]->4->[D]->5

    Note this is System1 except there is no “3” in SystemQ.

    So the point of all this is to show that two systems (System1 and System2) can have very different causality, but still have the EXACT SAME (causal invariant) causal topology. So a brain and a computer can have the exact same causal topology. If the causal topology is at the level of the neuron, i.e., each neuron is a mechanism, then we don’t care whether the mechanism is performed by an actual neuron or a computer, as long as the outputs map appropriately.
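    Concretely, the mapping above can be sketched in code, with mechanisms as functions and the coarser topology as their composition (the arithmetic is an arbitrary stand-in for whatever the mechanisms actually do):

```python
# System1: 1->[A]->2->[B]->3->[C]->4->[D]->5
def A(x): return x + 1   # 1 -> 2
def B(x): return x + 1   # 2 -> 3
def C(x): return x + 1   # 3 -> 4
def D(x): return x + 1   # 4 -> 5

# Topology1 lumps [A]->2->[B] into [X] and [C]->4->[D] into [Y].
def X1(x): return B(A(x))   # 1 -> 3
def Y1(x): return D(C(x))   # 3 -> 5

# System2: 1->[q]->6->[r]->7->[s]->3->[t]->5 -- very different causality,
# but [q]->6->[r]->7->[s] maps to [X] and [t] maps to [Y].
def q(x): return x + 5   # 1 -> 6
def r(x): return x + 1   # 6 -> 7
def s(x): return x - 4   # 7 -> 3
def t(x): return x + 2   # 3 -> 5

def X2(x): return s(r(q(x)))   # 1 -> 3
def Y2(x): return t(x)         # 3 -> 5

# Both systems realize Topology1: same intermediate "3", same output "5".
assert X1(1) == X2(1) == 3
assert Y1(X1(1)) == Y2(X2(1)) == 5
```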

    So do you think the above is incorrect, or do you think it is correct but doesn’t apply to your argument?

    *
    [or have I changed your mind? ]
    [It could happen]

    • Wyrd Smythe

      “Wyrd, with all due respect, I think you have misunderstood the concept of causal topology.”

      I think the shoe is on the other foot, Amigo. For one thing, Chalmers explicitly states that the CSA formalism is the formalism for causal topology, so the concept is pretty well defined.

      “If we say causality is the causal chain at the lowest level we understand, a Causal Topology is a course grain version where we lump some parts of the chain together into units we can call mechanisms.”

      No, that’s not correct. Again, causal topology = CSA.

      All four of your examples feature different causal topologies at the level of the nodes involved. All four examples are identical if the cause we care about is that (1) causes (5).

      The causal topology depends entirely on what we care about, and Chalmers is explicit that we care about the neurons (or possibly some higher-level organization, he’s not sure, but he’s sure neurons are probably far enough down).

      So the causal topology in question is that of the physical neural network.

      Chalmers’ claim is that a numerical simulation of that physical system preserves the causal topology. As you say:

      “So a brain and a computer can have the exact same causal topology.”

      Or so Chalmers claims. I disagree. For reasons detailed in these two posts.

      • JamesOfSeattle

        I base my understanding of causal topology on these sentences from Chalmers’ paper:

        The causal topology represents the abstract causal organization of the system: that is, the pattern of interaction among parts of the system, abstracted away from the make-up of individual parts and from the way the causal connections are implemented. […]

        Call a property P an organizational invariant if it is invariant with respect to causal topology: that is, if any change to the system that preserves the causal topology preserves P. The sort of changes in question include: […] (c) replacing sufficiently small parts of the system with parts that perform the same local function (e.g. replacing a neuron with a silicon chip with the same I/O properties); (d) replacing the causal links between parts of a system with other links that preserve the same pattern of dependencies (e.g., we might replace a mechanical link in a telephone exchange with an electrical link);

        In my way of looking at it (input->[mechanism]->output), the “individual parts” are the mechanisms, and the “causal connections” or “causal links” are the inputs/outputs of the mechanisms.

        Re CSA’s: Chalmers did not say causal topology = CSA, he said for any causal topology there is a CSA. To wit: “Given [causal interaction between parts of the system] we can straightforwardly abstract it into a CSA description”. [emphasis added]. This also holds for my variation of causal topology, and any CSA can also be mapped into my version.

        As you say, Chalmers focuses on the level of neurons, and the invariant property he focuses on is behavior. His point is that if you can identify one part in the causal topology (one neuron) and replace that part with a different mechanism that performs the same function, and if the behavior is preserved, then that is an example of causal invariance.

        You say you disagree with Chalmers’ conclusion that a brain and a computer can have the same causal topology, but I have not seen the convincing argument. Can you summarize?

        *

      • Wyrd Smythe

        “I base my understanding of causal topology on these sentences from Chalmers’ paper…”

        The last sentence of the first graph you quoted is: “The notion of causal topology is necessarily informal for now; I will discuss its formalization below.”

        BTW, I quoted that same bit in the first post — did you read the first post? There I also quoted the formalization he mentions: “In fact, it turns out that the CSA formalism provides a perfect formalization of the notion of causal topology.”

        “The sort of changes in question include:”

        Yes. And do you notice how he says: “replacing […] parts of the system with parts that perform the same local function” (note he’s speaking of one-to-one replacement here) and also: “replacing the causal links between parts of a system with other links.”

        In both cases he’s speaking of a one-to-one replacement at the level of granularity of interest. (In this case, neurons.)

        “Chalmers did not say causal topology = CSA, he said for any causal topology there is a CSA.”

        Again: “the CSA formalism provides a perfect formalization of the notion of causal topology”

        So yes he did. They are both abstract notions. A CSA is how a causal topology is specified.

        “His point is that if you can identify one part in the causal topology (one neuron) and replace that part with a different mechanism that performs the same function, and if the behavior is preserved, then that is an example of causal invariance.”

        I agree. I said repeatedly that a Positronic Brain ought to work.

        The question is whether a numerical simulation does preserve causal topology. I’ve argued here (and in the previous post) that it doesn’t.

        “Can you summarize?”

        It took two posts, and I’m not sure I’m done, so no, I don’t think so. Read the two posts and show me where my arguments are wrong, is all I can say.

        Hopefully I’ve at least shown you I know perfectly well what Chalmers means by causal topology.

      • Wyrd Smythe

        BTW, I’ve mentioned before that you use an IPO formalism, which is quite distinct from the FSA or CSA formalism Chalmers uses in the paper and as the formal definition of causal topology. The IPO architecture doesn’t map directly to finite-state system concepts, but of course there is a (non-isomorphic) map between them.

        IPO is generally used to break down and analyze the functional parts of a system — for instance, identifying and characterizing the neural nodes themselves, the parts of the system. A finite-state analysis deals with the behavior of an entire system. In particular, this is why Chalmers introduces a CSA, because he sees the states of the entire system as needing many components (all the neurons) to characterize.

  • JamesOfSeattle

    Okay, I will accept a description of a CSA as “the” formalization of a causal topology. That makes the changing of a “part” more difficult to flesh out, but so be it.

    My question is then what do you mean by a numerical simulation in this context? Can you replace a neuron with a numerical simulation? Could you replace each neuron with a different numerical simulation, respectively? I’m going to assume you would say yes, because that is how you get the positronic brain. Except that if you only replace the neurons, you haven’t replaced the links between the neurons, i.e., the neurotransmitters. So let’s say you replace the neurotransmitters with electrical links. This gives you the full blown positronic brain, which I think you are okay with.

    Now let’s replace one positronic neuron (let’s call it a pneuron) with a robot (an rneuron) that wirelessly calls a central computer, gets the feedback, and puts out a signal to the next pneuron. Now let’s replace that pneuron with an rneuron that calls the same central computer. I will assume you are okay with saying we’re still in the same causal topology.

    Now let’s change the link between [rneuron1] and [rneuron2], which we will call link1. Instead of [rneuron1] generating a neurotransmitter, and instead of a voltage on a connecting wire, [rneuron1] sends a signal to the central computer to put a value in a register which we will call register1. [rneuron2] now periodically sends a request for the contents of register1 and responds appropriately.

    I think Chalmers would say we still have the same causal topology, yes?

    Now let’s replace [rneuron1] with [sneuron1]. sneuron1 is a robot that calls the central computer which does the appropriate calculation and instead of sending a response back to the robot simply leaves the result in the register1 just described above.

    Now let’s replace the link between rneuron2 and rneuron3 and make it a register on the central computer just like we did with link1.

    Now let’s replace [rneuron2] with [sneuron2]. [sneuron2] is located on the central computer. It watches register1 and responds appropriately by putting a value in register2. I believe you would say [sneuron2] is a numerical simulation and therefore somehow changes the causal topology. Chalmers and I say the topology is preserved. So why do you see a difference, and why does that make a difference?

    *

    • Wyrd Smythe

      “My question is then what do you mean by a numerical simulation in this context?”

      A running computer program.

      “Can you replace a neuron with a numerical simulation?”

      As the question is phrased, no, that’s my point, but what I think you mean is can a neuron be replaced by a unit that uses a computer program to function as a neuron. To that, yes, absolutely.

      “Except that if you only replace the neurons, you haven’t replaced the links between the neurons…”

      As you go on to discover, it’s all replaced with technology of some kind. Asimov never really said how Positronic Brains work other than they used pathways and connections. I think relays are also mentioned. The implication was that a Positronic Brain is essentially a human brain but made of wire and plastic and ceramic and whatever. Just not made of meat.

      Lt. Cmndr. Data, in Star Trek, was said on the show to have a Positronic Brain in homage to Asimov. Again, to the extent it was ever explored, it appears to be a physical replication of the structure of a human brain. (It was said to be so complicated only one scientist ever figured out how to do it, and he’s dead.)

      In any event, the strong implication is that a Positronic Brain is a technological replica of the structure of a human brain. Specifically, it consists of a large, massively interconnected network operating fully in parallel. The nodes of said network are presumed to be trainable as ours are.

      And, really, the whole point is the network structure (rather than the nodes — neurons are generally seen as a sort of sophisticated summing logic gate).

      “Now let’s replace one positronic neuron (let’s call it a pneuron) with a robot (an rneuron)…”

      Going from pneurons to rneurons probably works in theory although you’re talking about 50-100 billion radio channels and the radio link time delay might prevent this from working (I think there’s good odds on that, since I think timing is important to cognition).

      Those (huge) caveats aside, in principle, because the physical causal structure is preserved, it seems like it might work.

      “Now let’s change the link between [rneuron1] and [rneuron2], […] I think Chalmers would say we still have the same causal topology, yes?”

      I’m sure Chalmers would agree, yes. I think it’s very possible you’ve broken the causal chain in an important way, though.

      You’ve definitely broken the physical causal chain at this point. rneuron1 no longer directly affects rneuron2. That latter neuron depends now solely on the contents of a register, a completely separate mechanism — separated in space, time, and behavior.

      To me, your further examples become more and more disconnected from that physical causal chain. Yes, an abstract causal topology can be said to exist, but it’s abstract and reified in information, not physical causality.

      I think that matters.

  • JamesOfSeattle

    “I think it’s very possible you’ve broken the causal chain in an important way, though.”

    So are you saying that the change would change the behavior of the system?

    *

    • Wyrd Smythe

      I’m saying it could, yes.

      (But keep in mind the whole exercise is a fantasy that can’t be implemented in any practical way. Whether it can possibly work in principle may not matter. (But it would be nice to know the principles involved.))

      • JamesOfSeattle

        In what way? Because by definition, if the new part does not have the same functionality as the old part, it’s not a proper change. That includes the time to perform the function, by the way.

        So again, how can changing a part, which change does not affect the functioning of that part, change the behavior of the system?

        *

      • Wyrd Smythe

        “In what way?”

        As I have said in the past, I suspect the numerical output of numerical brain simulations will describe a biologically functioning brain, but that brain will be “comatose” as far as conscious thought.

        Other possibilities include the appearance of thought, but the data is random gibberish, or there is some mental content, but it’s incoherent or diminished in some way.

        Or it might work as computationalists hope. I see that as a “sweet spot” that might be hard or impossible to achieve (on many grounds).

        “Because by definition, if the new part does not have the same functionality as the old part, it’s not a proper change.”

        Kinda depends on what you mean by “functionality,” doesn’t it? On one level, an electric car doesn’t have the same functionality as a car with a gas engine. Considered at another level, they both function equally to transport people.

        My argument is that I see gas-powered cars as different enough from electrically powered cars to question how equal they really are, despite their equal transportation abilities.

        “That includes the time to perform the function, by the way.”

        Not per Chalmers or the usual view of computationalism.

        A calculation is a calculation. It doesn’t matter how long it takes, the end result is the same. Therefore, if computationalism is true, cycle time is as irrelevant as platform.

        If you intuit that time does matter, then perhaps your gut is telling you physical causality really does matter. That would be the only reason for time to matter.

        “So again, how can changing a part, which change does not affect the functioning of that part, change the behavior of the system?”

        Firstly, as we are just discussing, there is a question of what it really means to be functioning the same. Certainly there is a clear case for it with a Positronic Brain.

        Secondly, when you switch from a signaling environment to a polling environment, that’s a huge change in architecture. It’s very hard to argue the parts function the same in that case. They don’t!

      • JamesOfSeattle

        The parts function the same if the same input produces the same output. The internal architecture of a given part is irrelevant. And we are not yet addressing simulating all the neurons. We’re talking about simulating one neuron. How does simulating one neuron change the behavior of the system?

        *

      • Wyrd Smythe

        Now we’re repeating ourselves. As already agreed, one neuron can easily be replaced so long as the signaling architecture is preserved. Do you understand what I mean by a signaling architecture versus a polling one? I’m not sure why we’re back to square one.

      • JamesOfSeattle

        The signaling architecture is not relevant because it is internal to the “rneuron”. As described, the physical parts of [sneuron1] include both the robot replacement and part of the central computer. The physical parts of sneuron2 are entirely within the central computer. The physical parts of rneuron3 include part of the central computer and the robot.

        Does this make sense?

        *

      • Wyrd Smythe

        “The signaling architecture is not relevant…”

        If you believe that, you’re not understanding the point.

      • JamesOfSeattle

        Well, I think the point is that the internal architecture of any part is not relevant. What’s relevant is whether the internal architecture produces the output in response to the input, no matter how it does it.

        *

      • Wyrd Smythe

        If by “any part” you mean the neurons and how they physically connect and interact, then I agree, as I’ve said.

        You didn’t answer my question: Do you understand what I mean by a signaling architecture versus a polling one?

        Do you see why they are different? Signals are immediate and direct. Polling is delayed and indirect.

      • JamesOfSeattle

        I see that signals are a different architecture, but I don’t see that signals are more immediate or direct. A signal is not necessarily immediate. A neurotransmitter takes a certain amount of time to cross the gap, but even after it binds to its receptor, that does not mean the next part of the chain is immediate. The next part of the chain might be waiting for a soluble messenger to float by, recognize a receptor which has bound a neurotransmitter, and then do what it does. This architecture is actually a kind of polling. A soluble messenger periodically checks the receptor to see if it has bound something. It may be that this molecular polling happens 10,000 times a second, but then it’s possible that the robotic polling happens 100,000 times a second. Which one is more direct or immediate?

        *

      • Wyrd Smythe

        FTR: I don’t at all require that people agree with me, but I really do wish they made some effort to understand me.

        “This architecture is actually a kind of polling.”

        Yeah, maybe, kinda, sorta, if you wanna interpret it that way. But it’s one tiny piece in an undeniable physical causal chain that relies predominantly on signaling.

        No, signals are not immediate in the literal sense. Einstein put an upper limit on signal propagation, and biology is much slower.

        Try it this way: There is the matter of causal necessity. Neurotransmitters aren’t fast, but there is a physical necessity behind their behavior.

        That necessity is abstract (essentially imaginary) in the information realm. As I point out, the walls in the Pong game don’t really exist and the puck is free to ignore the simulated physics.

        That simply can’t happen in the real world due to the causal necessity. Walls can’t be ignored.

      • JamesOfSeattle

        I see how polling is different from a signal, but I don’t see how that is a difference that makes a difference. It doesn’t change the behavior of the system, or if it does, then the new architecture is not a valid substitute for the function in question. So let’s change the architecture of rneuron3 such that it works as a signal and there is no polling. Does that fix things?

        *

      • Wyrd Smythe

        “I don’t see how that is a difference that makes a difference. It doesn’t change the behavior of the system,…”

        It does, most crucially by crossing the divide from physical system to information system.

        The register has no intrinsic meaning. You yourself pointed out the neurotransmitter does.

        “…or if it does, then the new architecture is not a valid substitute for the function in question.”

        “By George, he’s got it!” 🙂

        “So let’s change the architecture of rneuron3 such that it works as a signal and there is no polling.”

        At this point I have no idea what you’re suggesting. As I keep saying, so long as the physical causal architecture is preserved, no problem. When you transition to an information architecture, problem.

    • Wyrd Smythe

      It just occurred to me that, even in strictly computational terms, moving from having neuron-1 directly signaling neuron-2 to having neuron-1 and neuron-2 sharing a register seriously changes the nature of the game.

      In computational terms, the first scenario involves “asynchronous signaling” whereas the second scenario involves “polling” (without signaling).

      The “asynchronous” part means neuron-1 can signal neuron-2 (in whatever fashion) and then go on about its business of being neuron-1. That is, it doesn’t have to wait for a response from neuron-2. (This matches how a pre-synaptic neuron signals a post-synaptic neuron; it’s a one-way process.)

      (If neuron-1 had to wait for a response, even just an “Okay, heard ya,” then it’s called “synchronous signaling.” Either way, there is a direct causal connection from neuron-1 to neuron-2, whether it be biochemical, electronic, or even photonic.)

      In contrast, polling introduces a third party that mediates between the neurons, and the neurons are now disconnected in any physical or direct causal sense.

      The disconnect in causality is clear when considering the time between neuron-1 updating the register and neuron-2 polling it.

      During that time, neuron-1 clearly hasn’t caused anything in neuron-2. Since the time period depends strictly on the polling time, an arbitrary property, the causality disconnect should be pretty obvious. Neuron-2 might never poll (due to a bad link, say), and then no causality occurs.
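      The contrast is easy to sketch in code (a toy example of my own; all the names are invented for illustration):

```javascript
// Asynchronous signaling: neuron-1 invokes neuron-2 directly and moves
// on. The causal connection is the call itself.
function neuron2_receive(value) { /* react to the signal */ }
function neuron1_fire() {
    neuron2_receive(1);   // direct: the call *is* the causal link
    // ...neuron-1 goes about its business; no reply expected
}

// Polling: neuron-1 only writes a register; neuron-2 checks it later,
// on its own arbitrary schedule. Nothing connects the two directly.
var register = 0;
function rneuron1_fire() { register = 1; }   // just sets a value
function rneuron2_poll() {                   // runs every N ms (arbitrary)
    if (register === 1) { register = 0; /* react */ }
}
```

      In the first version, removing neuron-2 breaks the call. In the second, rneuron1 happily writes the register forever whether or not anything ever polls it.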

      So, bottom line, it’s hard to say there really is a causal topology here. There is, but it’s very abstract and contingent. So much so I’m dubious we can really consider it as real.

      • JamesOfSeattle

        I just read this one more closely, and again, I don’t see a difference that makes a difference. In version 1 two neurons share a synapse space. Neuron1 dumps a neurotransmitter into the space, and then starts removing it from the space. Whether there is a neuron2 at all does not change what neuron 1 does. The same happens in version 2. rneuron1 leaves a value in a register and never knows if that value is read by rneuron2. We could even specify that after a certain time rneuron1 resets the value in the register to 0 whether or not the value has been read by rneuron2. There is no disconnect in causality. If there is a broken link and rneuron 2 never polls, then neuron2 is non-functional and is not a valid substitution.

        *

      • Wyrd Smythe

        “Whether there is a neuron2 at all does not change what neuron 1 does.”

        Are you sure about that? Does half a synapse function? No matter, for sake of argument, assume the synapse is damaged. Neuron-1 still acts per usual (it probably has 1000s of other neurons to signal).

        “The same happens in version 2. rneuron1 leaves a value in a register and never knows if that value is read by rneuron2.”

        No. The point isn’t that the first guy doesn’t know. The point is the disconnect. The difference between mailing a card and hand-delivering it.

        In the physical system, neuron-1 touches neuron-2, so potentially does know something’s wrong. Like I’d know if you didn’t take a hand-delivered card. In the polling system the lack of connection means it can’t know. If I mail the card, I have no idea what happens.

        The greater point is that whatever value is put in a register has no intrinsic meaning. You’ve said a neurotransmitter does (and I agree in the sense you meant it).

        To illustrate this, changes to neurotransmitters have noticeable effects on mentation. One can change what value is put in the register with no problem. (It’s bound to be a system constant everything refers to, so just change that constant.)

        “If there is a broken link and rneuron 2 never polls, then neuron2 is non-functional and is not a valid substitution.”

        Fair enough. (Keep in mind, the polling/signaling thing was an aside. Don’t get too lost in it. It just illustrates the difference in the architecture is all.)

        And there is still the issue that polling time is arbitrary (another system constant), which illustrates the disconnect with the physical system’s causality.

      • JamesOfSeattle

        First, here’s a diagram of a synapse. There’s no feedback. The neurons don’t touch. But the point is, as long as the functional behavior is conserved, including any feedback behavior, it doesn’t matter.

        Second,

        The greater point is that whatever value is put in a register has no intrinsic meaning. You’ve said a neurotransmitter does (and I agree in the sense you meant it).

        There is no difference between the meaning in/of the value in the register and the meaning in/of the neurotransmitter. The value in the register has meaning in exactly the same sense that the neurotransmitter has meaning. But I won’t be arguing this point further. We’ve been through that.

        *

      • Wyrd Smythe

        You’re ignoring the point that the register value can be changed without altering system behavior at all. Neurotransmitters can’t.

        Secondly, the distance between neurons in synapses is trivial, and the connection is still functionally direct. The distance between your putative radio neurons and the register is huge. The processes involved are completely different.

        More importantly, there is the difference between a physical system and an information system. That’s the key here. I’m not entirely clear you appreciate the difference. You certainly don’t seem to acknowledge the difference (granting you do appreciate it).

        If you appreciate it and think it doesn’t matter, fine, you don’t think it matters.

        But you can’t keep insisting these things are the same when I’ve pointed out how many ways they are different. The issue is whether the differences matter.

      • JamesOfSeattle

        How do you change the register value without changing system behavior? What the second neuron does depends on the value of the register.

        *

      • Wyrd Smythe

        I’m guessing you’ve never written a line of code, but the way such systems are put together, there is a global list of definitions (globally defined constants) that the system uses.

        How does neuron-1 know what to put in the register? It gets that from the global constants. After all, the value has no meaning intrinsically, it’s just a number the programmer picked to represent the output of a neuron.

        Likewise, how does neuron-2 know what the register values mean? Again, because of the list of global constants. If we change the actual numerical value of the constant, both neurons use that new value and see no difference in operation, because the actual value has no meaning.

        (In programming, one of the earliest pieces of advice is that the only literal values that should ever appear in code are zero, one, and the empty string, and those should be viewed with suspicion to ensure they are being used as the primitive concepts (nothing (0), one thing (1), and empty text (''), respectively) rather than as values, per se.)
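        To make this concrete, here is a minimal sketch (all names invented for illustration):

```javascript
// Shared protocol: both neurons refer to this one global constant.
var SIGNAL_VALUE = 1;   // could be 1, 42, 0xFF -- the number is arbitrary

var register = 0;

function neuron1_fire() {
    register = SIGNAL_VALUE;            // writes whatever the constant says
}
function neuron2_poll() {
    return register === SIGNAL_VALUE;   // reads by the same constant
}

// Change SIGNAL_VALUE to 42 and restart: both sides shift together,
// and the system's behavior is completely unchanged.
```

        Because both sides refer to the same definition, editing that one value changes nothing about how the system behaves, which is the sense in which the value itself is meaningless.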

      • JamesOfSeattle

        The distance between your putative radio neurons and the register is huge

        This is another point you are missing. Part of the central computer is within the architecture of the robot neurons. Whatever part of the computer that puts the value in the register is considered inside the robot neuron. It’s part of the robot neuron. The link from the robot to the computer is an internal link. It’s part of the internal architecture of the neuron.

        *

      • Wyrd Smythe

        “Part of the central computer is within the architecture of the robot neurons.”

        Exactly. “Part of.”

        “Whatever part of the computer that puts the value in the register is considered inside the robot neuron.”

        Maybe “considered” by you, but the reality is that it’s in a completely different system connected with a radio link.

        You’re taking your robot neurons far beyond what Chalmers proposes. His only point is that individual neurons can be replaced with technology-based neurons and that seems like it should work. I’ve agreed with that point from the start.

        As I was falling asleep last night I realized something about your scenario. We start with real brain neurons:

        N1 {signal} N2 {signal} N3

        Your idea is that N1 sets Register 1 (R1) and N2 reads R1 to know that it has been signaled. A similar process exists between N2 and N3 using register 2 (R2):

        N1 {radio push}{R1}{radio poll} N2 {radio push}{R2}{radio poll} N3

        Then, given that, N2 can be replaced by a calculation that reacts to R1 and sets R2 giving us:

        N1 {radio push}{R1}{calculation}{R2}{radio poll} N3

        Demonstrating, you say, that N2 can be completely replaced with a computation.

        Have I stated that fairly?

        The problem is that this assumes N2 only receives input from N1 and only signals N3. If that were true, it would effectively mean N2 had no real value. Even in the brain it could be replaced by connecting N1 directly to N3.

        But that’s not how the brain works.

        In reality, N1 is connected to thousands of other neurons (on average, 7,000). So are N2 and N3. And all three receive input from thousands of neurons.

        So N2 cannot be replaced by a calculation. There must be a node in the network to receive the thousands of incoming signals from other neurons and to provide the thousands of outputs to other neurons.

        The physicality of the network requires the node be there.

      • JamesOfSeattle

        [High school: Fortran, COBOL, RPG
        Self-taught: BASIC, Visual Basic, HyperTalk, Perl, JavaScript, HTML
        Community college: C, SQL
        Work experience: Visual Basic, Perl, SQL, JavaScript, HTML

        so, not an engineer, but a coder]

        Somewhere in the physical memory is a register. If you change that value without changing anything else (i.e., without changing any value in the list of definitions), you change the behavior. If you want to say the value has no intrinsic meaning as long as both the sender and receiver agree on the meaning, then you can say the same thing about a neurotransmitter. As long as the second neuron has receptors for the same neurotransmitter as produced by the first neuron, it doesn’t make any difference what that neurotransmitter is.

        *

      • Wyrd Smythe

        “Somewhere in the physical memory is a register. If you change that value without changing anything else (i.e., without changing any value in the list of definitions), you change the behavior.”

        That’s not what I said, and given what you said about your coding experience you should know better.

        I referred explicitly, and in detail, to changing the global constant. Which wouldn’t alter the behavior of the system one bit.

        “If you want to say the value has no intrinsic meaning as long as both the sender and receiver agree on the meaning, then you can say the same thing about a neurotransmitter.”

        And how do you propose to do that in a physical system? Go back in time and rewrite evolution?

        Again, that’s the point. Physical systems have physical causality. Changing them requires changing a great deal (if it’s even possible). Changing an information system only requires one edit to a text file.

        “As long as the second neuron has receptors for the same neurotransmitter as produced by the first neuron, it doesn’t make any difference what that neurotransmitter is.”

        Sure, in principle, but (again) how do you propose to implement such a change?

      • JamesOfSeattle

        I see now. When you said “the register value can be changed without altering system behavior at all” I assumed you meant changing the actual register value, but you actually meant changing a value somewhere else, which change actually affects both neurons. I don’t expect you will get this, but that global constant you refer to is actually, physically, part of the architecture of both neurons, just like the central computer CPU is part of the architecture of both neurons. We could split it out and have two central computers, computer1 being part of neuron1 and computer2 being part of neuron2. They could both share access to the signal register. Then if you change the global constant on computer1 you would have to change it on computer2. Alternatively, you could have two programs running on computer1, both accessing the signal register. Again, if you change a constant on program1 you would have to change the constant on program2.

        I didn’t understand that you had the same constant being used by both processes. My bad.

        From a previous comment regarding 3 neurons, the middle neuron being entirely simulated, you suggest that you could connect neuron1 to neuron3 directly. That’s true, but then you would not be preserving the causal topology. And then you could say the wall behind Seattle is performing the same function. The reason for talking about a causal topology is to differentiate

        *
        [How do I propose to change which neurotransmitter is being used between two neurons? Genetic engineering, aka CRISPR]

      • Wyrd Smythe

        “I assumed you meant changing the actual register value, but you actually meant changing a value somewhere else…”

        This is why I’m a little frustrated. Look back at the comment where I first mentioned this:

        “One can change what value is put in the register with no problem. (It’s bound to be a system constant everything refers to, so just change that constant.)”

        “…which change actually affects both neurons.”

        Well, of course. It has to or they can’t communicate.

        “I don’t expect you will get this,…”

        James. I was a software designer for 30 years. I’ve been coding for over 40. I’ve taught computer science classes. I’ve designed languages; I’ve designed computer architecture. I’ve created low-level network drivers and high-level web-based data systems. (I wrote the first web-based app in The Company back in the 1990s.) I’ve written well over a million lines of code in dozens of languages.

        I think I can keep up. 😉

        “…but that global constant you refer to is actually, physically, part of the architecture of both neurons, just like the central computer CPU is part of the architecture of both neurons.”

        Um,… duh. And?

        It’s self-evident that if two systems, of any kind, are going to communicate they need to do so using an agreed-upon protocol. I don’t see why you find that significant (or even interesting).

        “From a previous comment regarding 3 neurons, the middle neuron being entirely simulated, you suggest that you could connect neuron1 to neuron3 directly.”

        Heh. What I wrote (I assume you’re referring to) is:

        As I was falling asleep last night I realized something about your scenario. We start with real brain neurons:

        Since I said “we” I guess I can see how you might have thought it was my idea.

        But you forgot it was yours. Read the last three paragraphs of your neuron scenarios in the linked comment. In the penultimate paragraph, you suddenly introduced rneuron3, but I saw what you meant. In the last paragraph you introduce sneuron2, which is a numerical simulation.

        You didn’t spell it out, but unless I misunderstood you, that’s the scenario you were creating.

        But it can’t work (at least, not as described), because the network of the brain requires all three neurons. (As you now agree?)

        “And then you could say the wall behind Seattle is performing the same function.”

        😀 I don’t usually comment on obvious typos. But when they’re most excellent, they’re worth pointing out. Took me a while to figure out what the wall in Seattle might be! 😉

        If there are any walls in Seattle (and I’m sure there are), Searle probably thinks they’re all computing Wordstar (a program I remember fondly — used the keyboard commands in other editors).

        As far as, in the earlier scenario, removing N2 and connecting N1 to N3, I agree it changes the causal topology. I objected to the scenarios on those grounds in my first reply to your comment and then we discussed it a bunch more. 🙂

        “Genetic engineering, aka CRISPR”

        LOL! Do you really think you can edit the human genome to make such a change without a bunch of other changes happening? That’d be pretty amazing. You’d win a Nobel for sure.

        For the sake of argument, say you do. Consider the amount of research you’d need to do first to (a) identify the gene in question and develop a CRISPR program, (b) identify a new neurotransmitter that can work identically. Then you need to apply the gene mod to systems one by one.

        The point is: Physical system, lots of energy required to alter, and alterations are difficult.

        For the computer system: just open the relevant text file, make a one-line edit, save the file, and restart the system. The change is global and, because it’s just information, can easily be distributed to other systems.

        The point is: Information system, easy to alter because not tied to physical causality.

        In a physical system, things are what they are. In an information system, things are what I say they are. That seems a huge difference to me.

      • JamesOfSeattle

        Wyrd, I knew you were referring to my 3-neuron scenario and you characterized it perfectly, so kudos for that.

        But then you say “But it can’t work (at least, not as described), because the network of the brain requires all three neurons. ”

        This is our disconnect, so one of us is not getting “it”. It seems to me it would work perfectly as described. Why would it not work? The network requires three neurons, but they don’t all need to be in the brain.

        *

      • Wyrd Smythe

        “It seems to me it would work perfectly as described. Why would it not work?”

        Well, there are two answers to that. One I think we’re now past: what I was saying in this comment about how all three neurons have (on average) 7,000 connections each. The scenario “as described” doesn’t account for that, and it would be problematic.

        The other answer to that is what I’ve written about in these two posts and what I’ve written about quite a lot on this blog: The difference between a physical system and an information system.

        This business about replacing neurons (which I’ve mentioned goes beyond what Chalmers does in his papers) is a fantasy thought scenario that can’t be taken too seriously. We can’t make judgements on a fantasy.

        The business about physical systems versus information systems is real and worth discussing to me (hence all the blog posts).

      • JamesOfSeattle

        But can you explain why the 3 neuron scenario, as described, cannot work?

      • Wyrd Smythe

        I’ve repeatedly said that, so long as the physical structure of the brain is preserved, specifically the interconnected network of neurons acting in parallel, I would expect it to work.

        My point is that a numerical simulation of that doesn’t seem likely to work the same way.

  • JamesOfSeattle

    [Starting a new thread so I don’t have to scroll all the way down to find the reply button]

    For my benefit, you need to expand that last sentence. When you say “doesn’t seem likely to work the same way”, the same way as the non-simulation? Because it does work in exactly the same way relative to the causal topology. And the causal topology is all we care about (for this discussion). If the causal topology is the same, the behavior is the same.

    The expectation of almost everyone (Hameroff and Penrose being exceptions) is that brains do what they do by virtue of the organization of neurons and signals between them (with influence by other cells and molecules floating around), and not by virtue of what happens inside the neurons. A system behaves by virtue of its causal topology.

    Refute away.

    *

    • Wyrd Smythe

      “For my benefit, you need to expand that last sentence.”

      I’ve written extensively on the topic, starting with this series of 17 posts, and again more recently in a series of 14 posts starting with this one, and those are more detailed than I can be in a comment.

      “When you say ‘doesn’t seem likely to work the same way’, the same way as the non-simulation?”

      Yes. Because information systems and physical systems are very different.

      I’ve come to realize that they are so different the shoe should be on the other foot. Computationalists should be tasked with providing the extraordinary proof necessary to support the extraordinary claim that a bunch of numbers is anything like physical reality.

      Because, as I’ll never tire of reminding them, simulated rainstorms aren’t wet, simulated lasers don’t emit photons, simulated earthquakes don’t knock down buildings. So the idea that a simulated mind would be conscious is, indeed, an extraordinary claim. Simulations are descriptions, not reality.

      “Because it does work in exactly the same way relative to the causal topology.”

      Firstly, that’s just not true (despite all the claims it is), as I’ve explained and demonstrated in these two posts and in this conversation.

      Secondly, more importantly here, I was working on a third post about Chalmers’ paper and realized the misstep is larger than I realized. The CSA Chalmers describes has zero causal topology at all!

      It describes the causal behavior of the system, but the description contains no causality as Chalmers explains it.

      For what he describes is nothing more than a series of snapshots of the system (brain) in action, very much like a movie is still images. Even on the claim that the resolution is fine-grained enough that nothing is missed between “frames” there is still no causality between frames.

      Other than the assertion that frame (state) #500001 is followed by #500002.

      See, Chalmers makes no reference to why #500002 follows #500001, and I initially assumed, as I think he does, that the algorithm would calculate the next state based on some logic.

      But he makes no reference to that or how it could be achieved. He seems to posit the simulation as a playback of states already determined. But if the states are already determined, then the system isn’t calculating them, it’s playing them back.

      When you watch a movie, the causality you see in the movie is an illusion, not real. In the case of a film of real life, the still frames capture physical causality in action, but these are just snapshots. There is no causality in the snapshots!

      Imagine having photos instead of frames of film. Imagine laying out those photos in any order, or just throwing them willy nilly on the floor. No causality between frames, see?

      Likewise the CSA. There is no causality between frames, no reason for neurons to change state, other than the next state says so.

      So, unless Chalmers can make more of a specification about how states are calculated, I’m seeing the whole idea as a miss.

      Which, as always, leaves the topic of computationalism undecided, but (I think) an extraordinary claim.

      James, to continue this discussion with you, I need to know you fully understand what I just said. I have no requirement that you agree, you’re free to think I’m completely wrong. But to continue I do need some feedback showing you fully understand what I’m pointing at.

      The short form is: The CSA Chalmers describes is nothing more than a playback mechanism for neuron states. It has no more causality than a movie does. (Which is to say none.)

  • JamesOfSeattle

    Wyrd, I claim to understand what you said, and I think some of your analysis is mistaken. Some difficulty comes when you introduce new vocabulary, like frames. I hope it is okay to go back to our sample system.

    Here is a description of a causal topology:
    Neuron1–>signal1–>neuron2–>signal2–>neuron3

    Another way to say this is the following (read as a sentence):
    [neuron1 causes]—>[signal1, which causes]—>[neuron2 to cause]—>[signal2 to cause]—>[neuron3 to do whatever it does]

    Now it seems to me that you are saying that if signal1 and neuron2 and signal2 are all part of a simulation on a computer, then they are not “causing” the things they are said to cause in the statement. Is that right?

    *

    • Wyrd Smythe

      “Some difficulty comes when you introduce new vocabulary, like frames.”

      “Frames” aren’t really new terminology, they’re part of the metaphor comparing a CSA to a movie. Metaphorically, frames = states. Same thing.

      “I hope it is okay to go back to our sample system.”

      If you like.

      You start with the IPO topology, which reflects the actual physical topology and causality of the original network (as I believe we agree?). Then you restate it in terms of that physical causality.

      I agree fully with both, and the summary in the final paragraph is essentially what I’m saying. Great!

      Yes, that’s my overall objection to computationalism: I don’t think an information system (a numerical simulation) can do everything a physical system can. (I think that’s self-evident, but that view clearly isn’t shared by everyone. Searle, I believe, would be one to agree.)

      The new problem, which I think might wreck Chalmers’ program here, is that the CSA he describes really doesn’t have actual causality from state to state.

      In reference to the physical chain of causal events in the physical neuron chain, consider what really happens in a simulation. You have coding experience, so I’ll speak in code:

      // Current state of each simulated neuron.
      var neuron1 = Number();
      var neuron2 = Number();
      var neuron3 = Number();
      
      // "Advance" to state s: just look up each component's value
      // in the table of precomputed states.
      function new_state_for_n1_n2_n3 (s) {
          neuron1 = STATES('neuron1', s);
          neuron2 = STATES('neuron2', s);
          neuron3 = STATES('neuron3', s);
      }

      And we’re done. That’s essentially all a state engine does. It plays back the states (usually from a table where the states have already been determined).

      [See the three posts starting here if you want to see a more realistic example of code.]

      I went through a phase as a designer where I fell in love with state engines, so any problem I could solve by writing a state engine, that’s the approach I used. Any causality they have is strictly by design of the code, as the above example illustrates.

      There’s no real causality to the lines above other than that one line of code follows another. There’s no real connection between the three neurons and two links.

      Chalmers’ claim seems to be that merely playing back CSA states, merely cycling the right numbers through (massively large) physical memory will cause cognition.

      But unless he can speak to how states are determined, how the program knows one follows another (other than because it’s the next entry in a table), there is no causality whatsoever in the execution of the reification of the CSA.

      Which seems like a deal-breaker to me.

      • Wyrd Smythe

        (11:12 AM: Edited the code section to make it more like what really happens.)

        STATES(name, state) is a function that takes the name of a component in the state vector and a state identifier, and returns the correct state for that component.
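        For concreteness, here is one way STATES might be implemented (my own sketch; the table values are invented):

```javascript
// Hypothetical table of precomputed state vectors, keyed by state
// identifier. In a real CSA this would be astronomically large.
var STATE_TABLE = {
    1: { neuron1: 0, neuron2: 1, neuron3: 0 },
    2: { neuron1: 1, neuron2: 1, neuron3: 0 }
};

// Pure playback: look up the component's value for state s.
// Nothing here *calculates* a state; it only retrieves one.
function STATES(name, s) {
    return STATE_TABLE[s][name];
}
```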

  • JamesOfSeattle

    I agree fully with both, and the summary in the final paragraph is essentially what I’m saying. Great!

    Not great. There is the disconnect. I do not agree.

    There is actual, physical causation in the computer. Something in the computer physically caused electrons to move around in the register for signal1 such that afterward, the register is in a physical state that we, but more importantly, neuron2, associate with the value of the signal. Again, within what we call neuron1 there is lots of physical pushing of molecules and electrons, including some pushing of electrons which generates photons, which then go and push electrons around in the computer, still within neuron1, until finally something pushes the electrons in the register into the proper conformation. Those electrons in the register then push on electrons that are part of the system we call neuron2, and the series of pushing continues.

    The key point of the causal topology is that we can physically isolate and identify each component. It doesn’t matter what happens inside any particular component as long as the functions required are performed, and that function absolutely must result in the appropriate physical change of the next component. Each component must cause a physical change in the next component, and that is exactly what happens in a computer simulating a neuron.

    Disagree?

    *

    • Wyrd Smythe

      “I do not agree.”

      Given the part you quoted, it looks like you disagree with my agreeing with you? At that point in my reply, all I had done is agree with what you’d said in the previous comment.

      Or do you mean you disagree with what I said after that?

      “There is actual, physical causation in the computer.”

      On that we agree completely. The computer is a physical system with physical causality.

      But that physical causality is only involved with the computer being a computer. This is clear in that the physical causality driving a computer is identical regardless of what the computer is calculating.

      All that electron pushing you mentioned is the same if the computer is computing a video game, a spreadsheet, Wordstar™, or a putative mind.

      So if it’s the same, it can’t possibly be a source of causality for what’s being computed.

      Any computed causality is abstract and only in virtue of how the computed numbers are interpreted. Those numbers could be anything or nothing at all.
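To illustrate what I mean by the interpretation doing all the work, here’s a toy Python sketch. The names and both “readings” are mine, invented purely for illustration:

```python
# The same bit pattern under two different mappings. Nothing in the
# hardware distinguishes the readings; the semantics live in the
# mapping, which exists outside the machine.
register = 0b1  # the physical contents of a register

# Interpretation A: a neuron fired
neuron_fired = (register == 1)

# Interpretation B: the very same bits read as something else entirely
stock_went_up = (register == 1)
```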

      “The key point of the causal topology is that we can physically isolate and identify each component.”

      I also agree completely on that. Nothing I’m talking about has anything to do with what happens inside the neuron. As I’ve said many times, I’m fine with a Positronic Brain.

      The entire issue is what happens when you move from a physical system to an information system. (Again, yes, the computer is physical, but the information it’s processing is not.)

      The further problem for Chalmers’ notion of a CSA is that, given a state vector for a given state, the next state vector is essentially an entry from a table (all the states are).

      It’s hard to see that the original system’s causal topology is genuinely preserved in a table of state vectors.
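      Reduced to code, the point is vivid. This is a toy sketch with invented state vectors, not anything from Chalmers’ paper:

```python
# A toy CSA: the "next state" is nothing but a table lookup.
# The state vectors and transitions below are invented for illustration.
transition_table = {
    (0, 1): (1, 1),
    (1, 1): (1, 0),
    (1, 0): (0, 0),
    (0, 0): (0, 1),
}

def step(state):
    """Advance the CSA one step: pure table lookup, no mechanism inside."""
    return transition_table[state]

state = (0, 1)
trajectory = [state]
for _ in range(3):
    state = step(state)
    trajectory.append(state)
# trajectory is now [(0, 1), (1, 1), (1, 0), (0, 0)]
```

Wherever the original system’s causality lived, it isn’t in that table.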

      • JamesOfSeattle

        This is the statement which is incorrect:

        the physical causality driving a computer is identical regardless of what the computer is calculating.

        The statement you made says the following two statements represent identical causality:

        1. Neuron1 set the register to 0.
        2. Neuron1 set the register to 1.

        Do you agree?

      • Wyrd Smythe

        “This is the statement which is incorrect:”

        You mean the statement that you think is incorrect. As it turns out…

        “Do you agree? [about 1 and 2]”

        Of course. As we’ve already agreed, the actual value put in the register doesn’t matter so long as both sides agree on what those values mean.

        Of course, you mean within that context: that neuron2 treats the change as meaningful and reads the semantics behind the 0 and 1.

        Didn’t you recently argue that the semantics are external to the system, not within it, per se? I agreed completely.

        The thing is, firstly, the process of “neuron1” (by which we actually mean some bit of code) placing a value in a register and then having “neuron2” (another bit of code) read that value is (as I said) exactly the same process in any program that runs!

        Secondly, when you consider what’s really going on, the idea there is any causality there at all largely evaporates:

        The CPU is constantly fetching instructions from the program, decoding them, and executing them. That’s identical no matter what program is running.

        The CPU is, because of the instructions it fetched, constantly loading values from memory, storing values to memory, and manipulating values in registers. To the CPU, it’s nothing more than logic with address and data bits.

        This operation of the CPU is the same regardless of what program is running. If you hang a logic scope on the data and address buses of a PC and capture the bits, I defy you to tell from them which program is running.
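To make that concrete, consider a minimal fetch-decode-execute loop in Python. This is a hypothetical toy machine of my own invention, not any real CPU, but the shape is the same:

```python
# A minimal fetch-decode-execute loop. The loop itself is fixed;
# only the program data fed to it changes.
def run(program, memory):
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]              # fetch and decode
        if op == "LOAD":                    # execute
            memory[a] = b
        elif op == "ADD":
            memory[a] = memory[a] + memory[b]
        pc += 1                             # advance to next instruction
    return memory

# Two entirely different "programs" pass through the identical loop:
arithmetic = run([("LOAD", 0, 2), ("LOAD", 1, 3), ("ADD", 0, 1)], {})
neuron_sim = run([("LOAD", 0, 1)], {})      # "neuron1 sets the register to 1"
```

The physical causality is all in `run`; the “neuron” exists only in how we read the data.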

        The abstraction, the semantics, of a running program exists outside the computer, not inside it.

        On top of that, running a CSA, in particular, is causality-free.

  • JamesOfSeattle

    So the entire difference between us (I think) is our definition of causation and causality. Here’s mine:

    Causality refers to physical events that actually happen. If neuron1 sets a register to 1, neuron1 causes the register to hold a 1. It does NOT cause the register to hold a 0. At a later time it might cause the register to hold a 0, but that is a separate example of causation. It’s a separate event.

    When I describe a causal topology I am describing a series of individual events. For a system to incorporate a topology, the necessary and sufficient condition is that it must be capable of producing that series of actual events.

    And regardless of your definition, I claim that the above is Chalmers’ definition. If it is problematic that you have a different definition, then I suggest you go through Chalmers’ paper and replace “cause” wherever you find it with “*cause”, and likewise with “causation”, etc.

    And so Chalmers would say that the mentality of a system is determined by its *causal topology. And that there is no difference in the *causal topology of a brain and an appropriately programmed computer.


    • Wyrd Smythe

      I think we’re on the brink of an understanding. It feels like we’re getting close.

      “Causality refers to physical events that actually happen.”

      Agreed!

      “If neuron1 sets a register to 1, neuron1 causes the register to hold a 1.”

      Not exactly. It causes, as far as the physical causality you speak of, a value to be set in the register.

      What do you think the actual causal difference is between setting it one value versus another?

      Remember that “neuron1” is a bit of code that takes a value it calculated in some local register and copies it to the “link” register. Certainly at the point of copy, the bit pattern is irrelevant.

      The question is how relevant is the code for “neuron1” calculating that 1 (instead of 0) in the first place. Ultimately it’s just a logic operation with no intrinsic meaning, and the same logic would set the value to 1 or 0 depending on the input conditions.
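Put in code: a toy threshold unit (entirely my own invention, not a real neuron model) computes a 1 or a 0 with the identical logic; only the inputs differ:

```python
def neuron_output(inputs, threshold=2):
    # A hypothetical threshold unit: the same logic operation either way;
    # the output depends only on the input conditions.
    return 1 if sum(inputs) >= threshold else 0

neuron_output([1, 1, 0])  # returns 1
neuron_output([1, 0, 0])  # returns 0
```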

      Surely you agree the physical causality that runs the computer itself, that provides for register setting at all, is unrelated to the program the computer runs?

      There is a completely different abstract causality that a program defines (per the programmer), and executing that abstraction on an engine replicates the abstract states of the CSA in actual memory locations, agreed?

      Which brings us to:

      “When I describe a causal topology I am describing a series of individual events.”

      Per Chalmers’ explicit definition, replace “events” with “states” and that’s correct.

      The problem with “event” is that it implies a cause. Events are caused by {something}, yes? Chalmers does not speak to causes, just states.

      His causal topology is the ordered sequence of such states.

      “For a system to incorporate a topology, the necessary and sufficient condition is that it must be capable of producing that series of actual events.”

      s/events/states/

      Correct, but how literally do you mean that? If you mean it too literally, you end up agreeing with me.

      In a working brain, the neurons (for whatever reason) go through a series of states. We’ll call some time segment of those states BRAIN(CSA#1) — that is, CSA#1 is the ordered set of state vectors for that time period.

      Notice what CSA#1 is not. It is not a map of neuron connectivity. It is not a model of how neurons behave. It is only and precisely a list of lists (state vectors) of numbers.

      Chalmers’ claim is that any system implementing CSA#1 experiences the same cognition. So ENGINE(CSA#1), which plays back the states of CSA#1 into memory locations, should experience the same cognition.

      Notice, however, that all we’re doing is playing back the same states that the brain had. CSA#1 is just a recording.
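Here’s what ENGINE(CSA#1) amounts to, sketched in Python. The recording is a made-up stand-in for the brain’s state history:

```python
# A playback engine: write each recorded state vector into
# "memory locations" in order. csa_1 is an invented recording.
csa_1 = [
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
]

memory = [None, None, None]
history = []
for state_vector in csa_1:
    for i, value in enumerate(state_vector):
        memory[i] = value           # set each location to the recorded value
    history.append(list(memory))

# The engine reproduces the recorded states exactly, yet nothing in it
# makes one state follow from another; the order comes from the list.
```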

      Do you think playing back a recording of neuron states into memory locations is the same thing as those neurons having those states due to the physical causality of the physical brain?

      “And regardless of your definition,”

      Not mine, James. It’s due to what Chalmers wrote explicitly, as I’ve pointed out to you before.

      “I claim that the above is Chalmers’ definition.”

      Clearly not. Here’s an exercise. Pull up his paper and search for the word “event.” Your use of the word indicates you don’t fully apprehend his definition.

      That said, two points:

      Firstly, are you clear that this talk of neuron1 causing anything at all in neuron2, whether in the brain or in the computer, is a side trip from the Chalmers paper?

      In this paper, Chalmers makes no reference to neuron modeling or neuron connections. He is speaking strictly about an ordered list of state vectors, each comprising 50 billion numbers.

      Secondly, Chalmers’ assertion is that: “Any system that implements this CSA will share the causal topology of the original system.”

      I disagree. I’ve explained why in what is now three posts.

      The short form is that I don’t see how setting billions of numbers in memory locations is anything like neurons in the brain having actual states due to the physical causal nature of the brain.

    • Wyrd Smythe

      (If you want more elbow room, feel free to move this to the third post in the series. It’s directly on point.)

And what do you think?
