Initially I thought that, for the first time in the Brain Bubbles series, I had a bubble actually related to the brain. When I went through the list, though, I saw that #17, Pointers!, was about the brain-mind problem, although the ideas expressed there were very speculative.
As is usually the case when talking about the mind and consciousness, considerable speculation is involved — there remain so many unknowns. A big one involves the notion of free will.
I just read an article that seems to support an idea I have about that.
In the nearly nine years of this blog I’ve written many posts about human consciousness with regard to computers. Human consciousness was a key topic from the beginning. So was the idea of conscious computers.
In the years since, there have been myriad posts and comment debates. It’s provided a nice opportunity to explore and test ideas (mine and others’), and my views have evolved over time. One idea I’ve grown increasingly skeptical of is computationalism, but it depends on which of two flavors of it we mean.
I find one flavor fascinating, but can see the other as only metaphor.
I cracked up when I saw the headline: Why your brain is not a computer. I kept on grinning while reading it because it makes some of the same points I’ve tried to make here. It’s nice to know other people see these things, too; it’s not just me.
Because, to quote an old gag line, “If you can keep your head when all about you are losing theirs,… perhaps you’ve misunderstood the situation.” The prevailing attitude seems to be that brains are just machines that we’ll figure out, no big deal. So it’s certainly (as ever) possible my skepticism represents my misunderstanding of the situation.
But if so I’m apparently not the only one…
In the last post I explored how algorithms are defined and what I think is — or is not — an algorithm. The dividing line for me has mainly to do with the requirement for an ordered list of instructions and an execution engine. Physical mechanisms, from what I can see, don’t have those.
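To make that dividing line concrete, here’s a minimal sketch of what I mean, with an invented instruction set and register machine: an explicit ordered list of instructions, plus a separate engine that fetches and executes them. Nothing here is from any particular architecture; it’s purely for illustration.

```python
# A minimal illustration of "ordered list of instructions + execution engine".
# The instruction set and registers are invented for this sketch.

program = [                  # the ordered list of instructions
    ("LOAD",  "a", 2),       # a = 2
    ("LOAD",  "b", 3),       # b = 3
    ("ADD",   "a", "b"),     # a = a + b
    ("PRINT", "a", None),    # output a
]

def run(program):
    """The execution engine: fetch, decode, execute, advance."""
    registers = {}
    pc = 0                   # program counter: which instruction is next
    while pc < len(program):
        op, x, y = program[pc]
        if op == "LOAD":
            registers[x] = y
        elif op == "ADD":
            registers[x] = registers[x] + registers[y]
        elif op == "PRINT":
            print(registers[x])
        pc += 1              # nothing happens unless the engine steps

run(program)  # prints 5
```

The point is the separation: the list does nothing on its own, and the engine does nothing without the list.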
For me, the behavior of machines is only metaphorically algorithmic. Living things are biological machines, so this applies to them, too. I would not be inclined to view my kidneys, liver, or heart as embodied algorithms (their behavior can be described by algorithms, though).
Of course, this also applies to the brain and, therefore, the mind.
Philosopher and cognitive scientist David Chalmers, who coined the term hard problem (of consciousness), also coined the term meta-problem, which asks why we think the hard problem is so hard. Ever since I was introduced to the term, I’ve been trying to figure out what to make of it.
While the hard problem addresses a real problem — how phenomenal experience arises from the physics of information processing — the meta-problem is about our opinions regarding that problem. What it tries to get at, I think, is why we’re so inclined to believe there’s some sort of “magic sauce” required for consciousness.
It’s an easy step when consciousness, so far, is quite mysterious.
This ends an arc of exploration of a Combinatorial-State Automaton (CSA), an idea by philosopher and cognitive scientist David Chalmers — who despite all these posts is someone whose thinking I regard very highly on multiple counts. (The only place my view diverges much from his is on computationalism, and even there I see some compatibility.)
In the first post I looked closely at the CSA state vector. In the second post I looked closely at the function that generates new states in that vector. Now I’ll consider the system as a whole, for it’s only at this level that we actually find the causal topology Chalmers requires.
It all turns on how much matching abstractions implies matching systems.
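For readers who haven’t followed the arc, a rough sketch of the core idea may help. A CSA’s total state is a vector of substates, and a single rule maps each vector to the next. The three substates and the arithmetic rule below are my own toy inventions; Chalmers’ formulation also includes inputs and outputs, which I omit here.

```python
# Toy Combinatorial-State Automaton: the total state is a vector of
# substates, and one global rule maps each state vector to the next.
# The substates and rule here are invented for illustration.

def step(state):
    """Transition function: the next vector depends on the whole current vector."""
    a, b, c = state
    return ((a + b) % 4, (b + c) % 4, (c + a) % 4)

state = (1, 0, 2)            # the combinatorial state vector
for _ in range(4):
    print(state)
    state = step(state)
```

The question the arc keeps circling is whether any system whose states map onto these vectors, with transitions mirroring the step function, thereby shares the relevant causal topology.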
This is a continuation of an exploration of an idea by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I’m trying to better express ideas I first wrote about in these three posts.
The previous post explored the state vector part of a CSA intended to emulate human cognition. There I described how illegal transitory states seem to violate any isomorphism between mental states in the brain and the binary numbers in RAM locations that represent them. I’ll return to that in the next post.
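Here is a toy version of that worry, reusing the invented three-substate CSA from the sketch above. Only the write-back pattern matters, not the particular numbers.

```python
# Sketch of the transitory-state worry: a real machine writes the new
# state vector back one memory cell at a time, so memory passes through
# mixed vectors that are neither the old legal state nor the new one.

def step(state):
    a, b, c = state
    return [(a + b) % 4, (b + c) % 4, (c + a) % 4]

ram = [2, 1, 3]              # current (legal) CSA state
new = step(ram)              # its legal successor: [3, 0, 1]

for i in range(len(ram)):    # write-back, one memory cell at a time
    ram[i] = new[i]
    print(ram)
# [3, 1, 3]   <- neither the old state nor the new one
# [3, 0, 3]   <- still an "illegal" transitory vector
# [3, 0, 1]   <- only now a legal CSA state again
```

Each intermediate printout is a vector the transition rule never produces, yet it sits in memory just as concretely as the legal states do.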
In this post I want to explore the function that generates the states.
Last month I wrote three posts about a proposition by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I had a long debate with a reader about it, and I’ve been pondering it ever since. I’m not going to return to the Chalmers paper so much as focus on the CSA idea itself.
I think I’ve found a way to express why I see a problem with the idea. I’m going to have another go at explaining it. The short version turns on how the mind transitions from mental state to mental state versus how a computational system must handle those transitions (even in the idealized Turing Machine sense — this is not about what is practical but about what is possible).
“Once more unto the breach, dear friends, once more…”
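Since I keep leaning on “the idealized Turing Machine sense,” a bare-bones sketch of that idealization may help: a discrete tape, a discrete head, and one rule consulted per step. The little bit-flipping machine below is invented purely for illustration.

```python
# Bare-bones Turing Machine, to pin down the idealization I mean: even
# unlimited tape and time don't change that the machine occupies exactly
# one discrete configuration per step. This toy machine just flips a
# string of bits and halts.

# (state, symbol) -> (new_symbol, move, new_state)
rules = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),   # blank: stop
}

tape = list("0110") + ["_"]
head, state = 0, "flip"

while state != "halt":
    symbol, move, next_state = rules[(state, tape[head])]
    tape[head] = symbol
    head += move
    state = next_state
    print("".join(tape), "state:", state)
```

One configuration per step, and between steps there is nothing. That discreteness is what the comparison with mental states turns on.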
This is what I imagined as my final post discussing A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous two posts.
This post’s title is a bit gratuitous because the post isn’t actually about intentional states. It’s about system states (and states of the system). Intention exists in all design, certainly in software design, but it doesn’t otherwise factor in. I just really like the title and have been wanting to use it. (I can’t believe no one has made a book or movie with the name.)
What I want to do here is look closely at the CSA states from Chalmers’ paper.
This continues my discussion of A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous post.
I left off talking about the differences between the causality of the (human) brain versus having that “causal topology” abstractly encoded in an algorithm implementing a Mind CSA (Combinatorial-State Automaton). The contention is that executing this abstract causal topology has the same result as the physical system’s causal topology.
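To make “abstractly encoded” concrete, here is roughly what I have in mind. This is my own toy, not anything from Chalmers’ paper: the causal topology reduced to a lookup table, and “execution” reduced to consulting it.

```python
# Toy of what "abstractly encoding a causal topology" amounts to: the
# pattern of which substate-configurations lead to which others is
# captured in a lookup table, and "executing" the topology is just
# consulting that table. Everything here is invented for illustration.

causal_topology = {
    ("low", "low"):   ("low", "high"),
    ("low", "high"):  ("high", "low"),
    ("high", "low"):  ("high", "high"),
    ("high", "high"): ("low", "low"),
}

state = ("low", "low")
for _ in range(5):
    print(state)
    state = causal_topology[state]    # no forces, no chemistry: a lookup
```

In a brain, the next state is produced by physics; here it is retrieved.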
As always, it boils down to whether process matters.