Tag Archives: computer program

Failed States (part 3)

This ends an arc of exploration of a Combinatorial-State Automaton (CSA), an idea by philosopher and cognitive scientist David Chalmers — who despite all these posts is someone whose thinking I regard very highly on multiple counts. (The only place my view diverges much from his is on computationalism, and even there I see some compatibility.)

In the first post I looked closely at the CSA state vector. In the second post I looked closely at the function that generates new states in that vector. Now I’ll consider the system as a whole, for it’s only at this level that we actually seek the causal topology Chalmers requires.

It all turns on the extent to which matching abstractions means matching systems.

Continue reading


Failed States (part 2)

This is a continuation of an exploration of an idea by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I’m trying to better express ideas I first wrote about in these three posts.

The previous post explored the state vector part of a CSA intended to emulate human cognition. There I described how illegal transitory states seem to violate any isomorphism between mental states in the brain and the binary numbers in RAM locations that represent them. I’ll return to that in the next post.

In this post I want to explore the function that generates the states.
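For readers new to the term, here is a minimal sketch of a CSA as I understand Chalmers’ definition: the state is a vector of substates, and a single formal transition function takes the whole old vector (plus any inputs) to a whole new vector. The particular substates and rule below are toy inventions of mine, just to have something concrete to run; nothing here comes from the paper.

```python
# A toy CSA: the state is a vector of substates, and one formal step
# replaces the entire vector at once. The specific rule is arbitrary.

State = list[int]   # e.g. [s1, s2, s3, s4], each substate a small integer

def transition(state: State, inp: int) -> State:
    """Generate the whole next vector from the whole current vector plus input."""
    n = len(state)
    return [(state[i] + state[(i + 1) % n] + inp) % 4 for i in range(n)]

vector = [0, 1, 2, 3]
for step in range(3):
    vector = transition(vector, inp=1)
    print(step, vector)
```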

Continue reading


Failed States (part 1)

Last month I wrote three posts about a proposition by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I had a long debate with a reader about it, and I’ve been pondering it ever since. I’m not going to return to the Chalmers paper so much as focus on the CSA idea itself.

I think I’ve found a way to express why I see a problem with the idea, so I’m going to have another go at explaining it. The short version turns on how mental states transition from one to the next versus how a computational system must handle those transitions (even in the idealized Turing Machine sense — this is not about what is practical but about what is possible).
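To make that contrast concrete, here is a toy illustration of the issue as I read it (my own example, not code from any of the posts): the CSA transition is a single formal step from old vector to new vector, but a sequential machine writing the new vector into RAM one word at a time necessarily passes through configurations that are neither the old state nor the new one.

```python
# Formally: old vector -> new vector, all at once. In practice a machine
# updates storage piecemeal, so between formal steps the memory holds
# "illegal" mixtures of old and new substates.

old_state = [0, 1, 2, 3]
new_state = [2, 0, 2, 0]      # whatever the transition function produced

ram = list(old_state)
print("legal:     ", ram)
for i, value in enumerate(new_state):
    ram[i] = value            # one write per memory word
    label = "legal:     " if ram == new_state else "transitory:"
    print(label, ram)
```

Whether those in-between configurations matter to the claimed isomorphism is exactly what these posts wrestle with.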

“Once more unto the breach, dear friends, once more…”

Continue reading


Intentional States

This is what I imagined as my final post discussing A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous two posts.

This post’s title is a bit gratuitous because the post isn’t actually about intentional states. It’s about system states (and states of the system). Intention exists in all design, certainly in software design, but it doesn’t otherwise factor in. I just really like the title and have been wanting to use it. (I can’t believe no one has made a book or movie with the name.)

What I want to do here is look closely at the CSA states from Chalmers’ paper.

Continue reading


Algorithmic Causality

This continues my discussion of A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous post.

I left off talking about the difference between the causality of the (human) brain and having that “causal topology” abstractly encoded in an algorithm implementing a Mind CSA (Combinatorial-State Automaton). The contention is that executing this abstract causal topology has the same result as the physical system’s causal topology.

As always, it boils down to whether process matters.

Continue reading


Causal Topology

I’ve always liked (philosopher and cognitive scientist) David Chalmers. Of those working on a Theory of Mind, I often find myself aligned with how he sees things. Even when I don’t, I still find his views rational and well-constructed. I also like how he conditions his views and acknowledges controversy without disdain. A guy I’d love to have a beer with!

Back during the May Mind Marathon, I followed someone’s link to a paper Chalmers wrote. I looked at it briefly, found it interesting, and shelved it for later. Recently it popped up again on my friend Mike’s blog, plus my name was mentioned in connection with it, so I took a closer look and thought about it…

Then I thought about it some more…

Continue reading


The Mighty Mandelbrot

[Image: The Mandelbrot Fractal (“Mandelbrot Antennae”)]

I realized that, if I’m going to do the Mandelbrot in May, I’d better get a move on. This ties to the main theme of Mind in May only in being about computation — but not about computationalism or consciousness. (Other than in the subjective appreciation of its sheer beauty.)

I’ve heard it called “the most complex” mathematical object, but that’s a hard title to earn, let alone hold. Its complexity does have attractive and fascinating aspects, though. For most, its visceral visual beauty puts it miles ahead of the cool intellectual poetry of Euler’s Identity (both beauties live on the same block, though).

For me, the cool thing about the Mandelbrot is that it’s a computation that can never be fully computed.
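For anyone who wants to see that claim in action, here is a minimal sketch of the standard escape-time iteration (my own throwaway code, not anything from the post): iterate z → z² + c and watch whether the point escapes. Points inside the set never escape, so any finite iteration cap only approximates the boundary, which I take to be at least part of what “never fully computed” means.

```python
# Standard Mandelbrot escape-time iteration. A point c is in the set if
# z -> z*z + c (starting from z = 0) never escapes; we can only ever
# check "never" up to some finite cap.

MAX_ITER = 100

def escape_count(c: complex) -> int:
    """How many iterations before |z| exceeds 2 (guaranteed divergence)."""
    z = 0j
    for n in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2:
            return n
    return MAX_ITER   # never escaped within the cap: in the set, as far as we checked

# Crude character-cell rendering of the familiar cardioid-and-bulbs shape.
for im in range(-12, 13):
    print("".join(
        "#" if escape_count(complex(re / 12, im / 12)) == MAX_ITER else " "
        for re in range(-24, 13)
    ))
```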

Continue reading


Final States

Over the last three posts I’ve been exploring the idea of system states and how they might connect with computational theories of mind. I’ve used a full-adder logic circuit as a simple stand-in for the brain — the analog flow and logical gating characteristics of the two are very similar.

In particular I’ve explored the idea that the output state of the system doesn’t reflect its inner workings, especially with regard to intermediate states of the system as it generates the desired output (and that output can fluctuate until it “settles” to a valid, correct value).
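As a concrete toy version of that settling (my own sketch, not the code from those posts), here is a full adder in which every gate has a one-tick delay. Ask it to add 1 + 1 right after it has settled on 1 + 0, and for a couple of ticks the outputs still read the old answer before settling on the new one.

```python
# Full adder with a one-tick delay per gate. The output bits (sum, cout)
# pass through stale/invalid readings before settling on the right answer.

def step(state, a, b, cin):
    """Next value of every gate output, given the current gate outputs."""
    return {
        "x1":   a ^ b,                       # first XOR
        "sum":  state["x1"] ^ cin,           # second XOR -> sum bit
        "a1":   a & b,                       # first AND
        "a2":   state["x1"] & cin,           # second AND
        "cout": state["a1"] | state["a2"],   # OR -> carry-out bit
    }

def settle(a, b, cin, state=None, max_ticks=10):
    """Tick the circuit until the gate outputs stop changing; return the history."""
    state = state or {k: 0 for k in ("x1", "sum", "a1", "a2", "cout")}
    history = [state]
    for _ in range(max_ticks):
        nxt = step(state, a, b, cin)
        history.append(nxt)
        if nxt == state:
            break
        state = nxt
    return history

settled_on_1 = settle(1, 0, 0)[-1]           # circuit at rest showing 1 + 0 = 1
for tick, s in enumerate(settle(1, 1, 0, state=settled_on_1)):
    print(tick, "sum =", s["sum"], "cout =", s["cout"])
# ticks 0 and 1 still read sum=1, cout=0 (the stale answer); then it settles on sum=0, cout=1
```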

Here I plan to wrap up and summarize the system states exploration.

Continue reading


Intermediate States

I left off last time talking about intermediate, or transitory, states of a system. The question is, if we only look at the system at certain key points that we think matter, do any intermediate states make a difference?

In a standard digital computer, the answer is a definite no. Even in many kinds of analog computers, transitory states exist for the same reason they do in digital computers (signals flowing through different paths and arriving at the key points at different times). In both cases they are ignored. Only the stable final state matters.
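Here is a tiny sketch of that “different paths, different arrival times” point (my own toy example, not from the post): one signal reaches an XOR gate along a short path and a long path. Logically x XOR x is always 0, but while the change is still in flight the gate briefly reads 1, a transitory state we ignore by only sampling after everything settles.

```python
# One signal, two path lengths, one XOR gate. The "true" output is always
# 0 (x XOR x), but the delay mismatch produces transient 1s.

def delayed(signal, ticks):
    """The same signal as seen at the gate after a propagation delay of `ticks`."""
    return [0] * ticks + signal[:len(signal) - ticks]

signal     = [0, 0, 1, 1, 1, 1]      # the input toggles from 0 to 1 at tick 2
short_path = delayed(signal, 1)
long_path  = delayed(signal, 3)

glitchy = [a ^ b for a, b in zip(short_path, long_path)]
print(glitchy)   # [0, 0, 0, 1, 1, 0] -- transient 1s until both paths catch up
```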

So in the brain, what are the key points? What states matter?

Continue reading


State of the System

[Image: State Diagram]

In the last post I talked about software models for a full-adder logic circuit. I broke them into two broad categories: models of an abstraction, and models of a physical instance. Because the post was long, I was able to mention the code implementations only in passing (but there are links).

I want to talk a little more about those two categories, especially the latter, and in particular an implementation that bridges between the categories. It’s here that ideas about simulating the brain or mind become important. Most approaches involve some kind of simulation.

One type of simulation involves the states of a system.
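As a rough sketch of the two categories as I read them (toy code of my own, not the implementations linked from the post): a model of the abstraction needs only the arithmetic the circuit stands for, while a model leaning toward a physical instance keeps an explicit snapshot of every internal signal, and it is that snapshot, the state of the system, that a simulation can step through.

```python
# Category 1: model of the abstraction. Only the mathematical behavior
# matters; how a real adder gets there is irrelevant.
def full_adder_abstract(a, b, cin):
    total = a + b + cin
    return total & 1, total >> 1                # (sum, carry-out)

# Category 2 (leaning toward a physical instance): an explicit snapshot of
# every gate output -- a "state of the system" rather than just the answer.
def full_adder_gates(a, b, cin):
    state = {}
    state["x1"]   = a ^ b                       # first XOR
    state["sum"]  = state["x1"] ^ cin           # second XOR -> sum bit
    state["a1"]   = a & b                       # first AND
    state["a2"]   = state["x1"] & cin           # second AND
    state["cout"] = state["a1"] | state["a2"]   # OR -> carry-out bit
    return state

print(full_adder_abstract(1, 1, 0))             # (0, 1): just the answer
print(full_adder_gates(1, 1, 0))                # every internal signal as well
```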

Continue reading