Tag Archives: human brain

Strong Computationalism

In the nearly nine years of this blog I’ve written many posts about human consciousness in relation to computers. Human consciousness was a key topic from the beginning. So was the idea of conscious computers.

In the years since, there have been myriad posts and comment debates. They’ve provided a nice opportunity to explore and test ideas (mine and others’), and my views have evolved over time. One idea I’ve grown increasingly skeptical of is computationalism, though it depends on which of two flavors of it we mean.

I find one flavor fascinating, but can see the other as only metaphor.

Continue reading


Monday Miscellany #3

Signs of the Times

While lots of my posts are filled with miscellany, it’s been a while (six years!) since I did a Monday Miscellany post. It was a brief idea for a regular series that didn’t turn into anything. (Ah, well, it happens.) The really cool stuff ends up in the Wednesday Wow posts now.

Sometimes I do a “Friday news dump” of stuff that’s caught my eye but which probably isn’t that interesting to most (especially geeky stuff or social commentary stuff). Today is more stuff of middling medium Monday interest.

Or something like that. Mostly trying to keep notes from accumulating.

Continue reading


Capable of Greatness

I’ve been slowly going through the NPR Tiny Desk Concerts. Most of the musicians and groups are unknown to me (it’s been decades since I even attempted to keep up with music). Truth is, most of the acts are interesting, but don’t really grab me. Maybe one in ten engages; none have made me a new fan.

Which is a whole other story. I mention it because many of these music makers are sweet, gentle, loving people who just want everyone else to be sweet, gentle, and loving. It’s a common sentiment. Banish the bad forever!

But balance is required. There is a Yin-Yang aspect to life.

Continue reading


Brains Are Not Computers

I cracked up when I saw the headline: Why your brain is not a computer. I kept on grinning while reading it because it makes some of the same points I’ve tried to make here. It’s nice to know other people see these things, too; it’s not just me.

Because, to quote an old gag line, “If you can keep your head when all about you are losing theirs… perhaps you’ve misunderstood the situation.” The prevailing attitude seems to be that brains are just machines that we’ll figure out, no big deal. So it’s certainly (and always) possible that my skepticism represents a misunderstanding of the situation.

But if so I’m apparently not the only one…

Continue reading


A Mind Algorithm?

In the last post I explored how algorithms are defined and what I think is — or is not — an algorithm. The dividing line for me has mainly to do with the requirement for an ordered list of instructions and an execution engine. Physical mechanisms, from what I can see, don’t have those.

For me, the behavior of machines is only metaphorically algorithmic. Living things are biological machines, so this applies to them, too. I would not be inclined to view my kidneys, liver, or heart as embodied algorithms (though their behavior can be described by algorithms).

Of course, this also applies to the brain and, therefore, the mind.
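
To make concrete what I mean by an ordered list of instructions plus an execution engine, here is a minimal sketch in Python (entirely my own illustration, with made-up instruction names):

```python
# The two ingredients: an ordered list of instructions (the program)
# and an execution engine that steps through them in order.

program = [
    ("set", "total", 0),     # total = 0
    ("add", "total", 2),     # total += 2
    ("add", "total", 3),     # total += 3
    ("show", "total", None),
]

def run(program):
    """A trivial execution engine: fetch and execute each instruction."""
    env = {}
    for op, name, arg in program:
        if op == "set":
            env[name] = arg
        elif op == "add":
            env[name] += arg
        elif op == "show":
            print(name, "=", env[name])  # prints: total = 5

run(program)
```

The point is the separation: the program is inert data until the engine executes it. Nothing in a kidney seems to divide up that way.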

Continue reading


Structure vs Function

As a result of lurking on various online discussions, I’ve been thinking about computationalism in the context of structure versus function. It’s another way to frame the Yin-Yang tension between a simulation of a system’s functionality and that system’s physical structure.

In the end, I think it does boil down to the two opposing propositions I discussed in my Real vs Simulated post: [1] an arbitrarily precise numerical simulation of a system’s function is possible; [2] simulated X isn’t Y.

It all depends on exactly what consciousness is. What can structure provide that could not be functionally simulated?

Continue reading


The Meta-Problem

Philosopher and cognitive scientist David Chalmers, who coined the term hard problem (of consciousness), also coined the term meta-problem, which asks why we think the hard problem is so hard. Ever since I was introduced to the term, I’ve been trying to figure out what to make of it.

While the hard problem addresses a real problem — how phenomenal experience arises from the physics of information processing — the meta-problem is about our opinions regarding that problem. What it tries to get at, I think, is why we’re so inclined to believe there’s some sort of “magic sauce” required for consciousness.

It’s an easy step when consciousness, so far, is quite mysterious.

Continue reading


Failed States (part 3)

This ends an arc of exploration of the Combinatorial-State Automaton (CSA), an idea by philosopher and cognitive scientist David Chalmers — who, despite all these posts, is someone whose thinking I regard very highly on multiple counts. (The only place my view diverges much from his is on computationalism, and even there I see some compatibility.)

In the first post I looked closely at the CSA state vector. In the second post I looked closely at the function that generates new states in that vector. Now I’ll consider the system as a whole, for it’s only at this level that we actually seek the causal topology Chalmers requires.

It all turns on the extent to which matching abstractions means matching systems.
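
For reference, here is a toy sketch (my own, not Chalmers’s formalism) of the pieces those posts examined: a state vector of substates and a function that maps each whole vector, given an input, to the next vector:

```python
# A toy CSA-like system: the state is a vector of substates, and a
# transition function maps the entire vector (plus an input) to the
# next vector. Illustrative only; not Chalmers's formal definition.

from typing import Tuple

State = Tuple[int, ...]  # a vector of substates

def step(state: State, inp: int) -> State:
    """Each new substate may depend on any of the old substates."""
    a, b, c = state
    return (b ^ inp, (a + c) % 4, a)

state: State = (0, 1, 2)
for symbol in (1, 0, 1):
    state = step(state, symbol)
    print(state)  # (0, 2, 0) then (2, 0, 0) then (1, 2, 2)
```

The question is whether implementing such an abstraction is enough to reproduce the causal topology of the system it describes.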

Continue reading


Failed States (part 2)

This is a continuation of an exploration of an idea by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I’m trying to better express ideas I first wrote about in these three posts.

The previous post explored the state vector part of a CSA intended to emulate human cognition. There I described how illegal transitory states seem to violate any isomorphism between mental states in the brain and the binary numbers in RAM locations that represent them. I’ll return to that in the next post.
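
To illustrate the worry with a simplified sketch of my own (not taken from the posts): changing a stored value from one legal state to another one bit at a time passes through bit patterns that correspond to no legal state at all:

```python
# Suppose only two 4-bit patterns count as legal system states.
# Flipping from one to the other a bit at a time necessarily visits
# intermediate patterns that are neither. Illustrative only.

LEGAL_STATES = {0b0101, 0b1010}

def flip_bits(start: int, end: int, width: int = 4):
    """Flip differing bits one at a time, yielding each intermediate."""
    value = start
    for bit in range(width):
        mask = 1 << bit
        if (value ^ end) & mask:
            value ^= mask
            yield value

for v in flip_bits(0b0101, 0b1010):
    status = "legal" if v in LEGAL_STATES else "ILLEGAL transitory"
    print(f"{v:04b}  {status}")
```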

In this post I want to explore the function that generates the states.

Continue reading


Failed States (part 1)

Last month I wrote three posts about a proposition by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I had a long debate with a reader about it, and I’ve been pondering it ever since. I’m not going to return to the Chalmers paper so much as focus on the CSA idea itself.

I think I’ve found a way to express why I see a problem with the idea, so I’m going to have another go at explaining it. The short version turns on how the mind transitions from mental state to mental state versus how a computational system must handle such transitions (even in the idealized Turing Machine sense — this is not about what is practical but about what is possible).
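
For concreteness, here is a minimal Turing Machine step loop (the machine table is my own toy example) showing the discrete, all-or-nothing way such a system moves between states:

```python
# A tiny Turing Machine: (state, symbol) -> (new state, write, move).
# Each step is a single atomic transition; there is no "in between".
# The table below is a made-up toy, purely for illustration.

TABLE = {
    ("A", 0): ("B", 1, +1),
    ("A", 1): ("A", 0, +1),
    ("B", 0): ("A", 1, -1),
    ("B", 1): ("halt", 1, 0),
}

tape = {0: 0, 1: 1}
state, head = "A", 0
while state != "halt":
    symbol = tape.get(head, 0)
    state, write, move = TABLE[(state, symbol)]
    tape[head] = write
    head += move

print(state, tape)  # halt {0: 1, 1: 1}
```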

“Once more unto the breach, dear friends, once more…”

Continue reading