Tag Archives: brain mind problem

Failed States (part 3)

This ends an arc of exploration of a Combinatorial-State Automaton (CSA), an idea by philosopher and cognitive scientist David Chalmers — who, despite all these posts, is someone whose thinking I regard very highly on multiple counts. (The only place my view diverges much from his is on computationalism, and even there I see some compatibility.)

In the first post I looked closely at the CSA state vector. In the second post I looked closely at the function that generates new states in that vector. Now I’ll consider the system as a whole, for it’s only at this level that we can look for the causal topology Chalmers requires.

It all turns on the extent to which matching abstractions implies matching systems.

Continue reading


Failed States (part 2)

This is a continuation of an exploration of an idea by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I’m trying to better express ideas I first wrote about in these three posts.

The previous post explored the state vector part of a CSA intended to emulate human cognition. There I described how illegal transitory states seem to violate any isomorphism between mental states in the brain and the binary numbers in RAM locations that represent them. I’ll return to that in the next post.
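
To make that worry concrete, here’s a minimal Python sketch (a toy illustration of my own, not anything from Chalmers’ paper or those earlier posts). A CSA state is a vector of sub-states, but a conventional computer updates the memory locations holding that vector one write at a time, so between two legal states it passes through transitory combinations that correspond to no state in the machine table.

```python
# Toy illustration (not from the paper): a CSA state is a vector of
# sub-states, here a Python list standing in for separate RAM locations.

OLD_STATE = [1, 4, 7]   # hypothetical legal state S1
NEW_STATE = [2, 5, 9]   # hypothetical legal state S2

def transition(memory):
    """Update each 'RAM location' one at a time, as sequential hardware must."""
    for i, value in enumerate(NEW_STATE):
        memory[i] = value
        # Between individual writes the vector mixes old and new
        # sub-states -- a combination that is neither S1 nor S2.
        print("transitory:", memory)

memory = list(OLD_STATE)
transition(memory)
print("final:", memory)
# transitory: [2, 4, 7]   <- neither S1 nor S2
# transitory: [2, 5, 7]   <- neither S1 nor S2
# transitory: [2, 5, 9]
# final: [2, 5, 9]
```

A real machine does the same thing at a much finer grain (individual bits), which is why these transitory states seem hard to avoid.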

In this post I want to explore the function that generates the states.

Continue reading


Failed States (part 1)

Last month I wrote three posts about a proposition by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I had a long debate with a reader about it, and I’ve been pondering it ever since. I’m not going to return to the Chalmers paper so much as focus on the CSA idea itself.

I think I’ve found a way to express why I see a problem with the idea, so I’m going to have another go at explaining it. The short version turns on how mental states transition from state to state versus how a computational system must handle those transitions (even in the idealized Turing Machine sense — this is not about what is practical but about what is possible).

“Once more unto the breach, dear friends, once more…”

Continue reading


Intentional States

This is what I imagined as my final post discussing A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous two posts.

This post’s title is a bit gratuitous because the post isn’t actually about intentional states. It’s about system states (and states of the system). Intention exists in all design, certainly in software design, but it doesn’t otherwise factor in. I just really like the title and have been wanting to use it. (I can’t believe no one has made a book or movie with the name.)

What I want to do here is look closely at the CSA states from Chalmers’ paper.

Continue reading


Algorithmic Causality

This continues my discussion of A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous post.

I left off talking about the difference between the causality of the (human) brain and having that “causal topology” abstractly encoded in an algorithm implementing a Mind CSA (Combinatorial-State Automaton). The contention is that executing this abstract causal topology has the same result as the physical system’s causal topology.
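
For readers who haven’t met the formalism, here’s a minimal sketch (a toy example of my own, nowhere near the scale of a Mind CSA) of what “abstractly encoded” amounts to. The state is a vector of sub-states, and the causal topology lives entirely in a transition table mapping the current vector plus an input to the next vector.

```python
# Toy combinatorial-state automaton (my own example, not the Mind CSA):
# the "causal topology" is abstractly encoded as a transition table that
# says how the next state vector depends on the current vector and input.

TRANSITIONS = {
    (("A1", "B1"), "x"): ("A2", "B1"),
    (("A2", "B1"), "x"): ("A2", "B2"),
    (("A2", "B2"), "y"): ("A1", "B1"),
}

def step(state, symbol):
    """One CSA step: the whole vector updates as a function of vector + input."""
    return TRANSITIONS[(state, symbol)]

state = ("A1", "B1")
for symbol in ["x", "x", "y"]:
    state = step(state, symbol)
    print(symbol, "->", state)
# x -> ('A2', 'B1')
# x -> ('A2', 'B2')
# y -> ('A1', 'B1')
```

Nothing physical happens here beyond table lookups, which is exactly why the question of whether executing such an encoding reproduces what the brain’s physical causal topology does seems worth pressing.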

As always, it boils down to whether process matters.

Continue reading


Causal Topology

I’ve always liked (philosopher and cognitive scientist) David Chalmers. Of those working on a Theory of Mind, I often find myself aligned with how he sees things. Even when I don’t, I still find his views rational and well-constructed. I also like how he conditions his views and acknowledges controversy without disdain. A guy I’d love to have a beer with!

Back during the May Mind Marathon, I followed someone’s link to a paper Chalmers wrote. I looked at it briefly, found it interesting, and shelved it for later. Recently it popped up again on my friend Mike’s blog, plus my name was mentioned in connection with it, so I took a closer look and thought about it…

Then I thought about it some more…

Continue reading


Laser Light Shining Bright

Last Friday I ended the week with some ruminations about what (higher) consciousness looks like from the outside. I end this week — and this posting mini-marathon — with some rambling ruminations about how I think consciousness seems to work on the inside.

When I say “seems to work” I don’t have any functional explanation to offer. I mean that in a far more general sense (and, of course, it’s a complete wild-ass guess on my part). Mostly I want to expand on why a precise simulation of a physical system may not produce everything the physical system does.

For me, the obvious example is laser light.

Continue reading


Does Not Compute

I’ve been on a post-a-day marathon for two weeks now, and I’m seeing this as the penultimate post (for now). Over the course of these posts, I’ve written a lot about various low-level aspects of computing (truth tables and system state, for instance). And I’ve weighed in on what I think consciousness amounts to.

How we view, interpret, or define consciousness aside, a major point of debate involves whether machines can have the same “consciousness” properties we do. In particular, what is the role of subjective experience when it comes to us and to machines?

For me it boils down to a couple of key points.

Continue reading


Rise of the X-Zombies

Philosophical Zombies (of several kinds) are a favorite of consciousness philosophers. (Because who doesn’t like zombies? (Well, I don’t, but that’s another story.)) The basic idea involves beings who, by definition, [A] behave exactly as if they have higher consciousness (whatever that is) and [B] have no subjective experience.

They lie squarely at the heart of the “acts like a duck, is a duck” question about conscious behavior. And zombies of various types also pose questions about the role subjective experience plays in consciousness and why it should exist at all (the infamous “hard problem”).

So the Zombie Issue does seem central to ideas about consciousness.

Continue reading


The Grayscale Dungeon

In one of the more horrific examples of virtual personal enslavement in the service of philosophy, another classic conundrum of consciousness involves a woman confined for her entire life to a deep dungeon with no color and no windows to the outside. Everything is black, or white, or a shade of gray.

The enslaved misfortunate Mary has a single ray of monochromatic (artificial) light in her dreary existence: She has an electronic reader — with a black and white screen — that gives her access to all the world’s knowledge. In particular, she has studied and understands everything there is to know about color and how humans perceive it.

Then one day someone sends Mary a red rose.

Continue reading