Tag Archives: theory of mind

Knowing Other Minds

I’ve got stuff on my mind!

My post last month about Dr. Gregory Berns and his studies of animal minds ran long because I also discussed Thomas Nagel and his famous paper. Dr. Berns referenced an aspect of that paper many times. It seemed like a bone of contention, and I wanted to explore it, so I needed to include details about Nagel’s paper.

The point is, at the end of the post, there’s a segue from the “Sebald Gap” between humans and animals to the idea that we can never really understand another human (let alone an animal). My notes for the post included more discussion about that, but the post ran long, so I only mentioned it.

It’s taken a while to circle back to it, but better late than never?



Like Being a Dog

Back in 1974, Thomas Nagel published the now-famous paper What Is It Like to Be a Bat? It was an examination of the mind-body problem. Part of Nagel’s argument includes the notion that we can never really know what it’s like to be a bat. As W.G. Sebald said, “Men and animals regard each other across a gulf of mutual incomprehension.”

But in What It’s Like to Be a Dog: And Other Adventures in Animal Neuroscience (2017), neuroscientist Gregory Berns disagrees. In his opinion, Nagel got it wrong. The Sebald Gap closes from both ends: first, because animal minds aren’t really that different from ours; second, because we can extrapolate from our experiences to those of dogs, dolphins, or bats.

I think he has a point, but I also think he’s misreading Nagel a little.



BB #71: Brain Background

Initially I thought that, for the first time in the Brain Bubbles series, I had a bubble actually related to the brain. When I went through the list, though, I saw that #17, Pointers!, was about the brain-mind problem, although the ideas expressed there were very speculative.

As is usually the case when talking about the mind and consciousness, considerable speculation is involved — there remain so many unknowns. A big one involves the notion of free will.

I just read an article that seems to support an idea I have about that.



A Mind Algorithm?

In the last post I explored how algorithms are defined and what I think is — or is not — an algorithm. The dividing line for me has mainly to do with the requirement for an ordered list of instructions and an execution engine. Physical mechanisms, from what I can see, don’t have those.
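
To make that dividing line concrete, here’s a minimal sketch in Python (my own toy, not anything from that post) of the two ingredients: an ordered list of instructions and an engine that executes them. The instruction names and the tiny machine model are illustrative assumptions only.

```python
# The "algorithm" is just passive data: an ordered list of instructions.
program = [("add", 3), ("mul", 2)]

def run(program, state):
    """A trivial execution engine: step through the list in order."""
    pc = 0  # program counter; the engine tracks where it is in the list
    while pc < len(program):
        op, arg = program[pc]
        if op == "add":
            state += arg
        elif op == "mul":
            state *= arg
        elif op == "jump_if_zero" and state == 0:
            pc = arg  # control flow exists only because the engine provides it
            continue
        pc += 1
    return state

print(run(program, state=1))  # 8
```

A kidney, by contrast, has no separable instruction list and no interpreter stepping through one. An algorithm can describe its behavior from the outside, but nothing in the organ executes that description.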

For me, the behavior of machines is only metaphorically algorithmic. Living things are biological machines, so this applies to them, too. I would not be inclined to view my kidneys, liver, or heart as embodied algorithms (their behavior can be described by algorithms, though).

Of course, this also applies to the brain and, therefore, the mind.



Real vs Simulated

Indulging in another round of the old computationalism debate reminded me of a post I’ve been meaning to write since my Blog Anniversary this past July. The debate involves a central question: Can the human mind be numerically simulated? (A more subtle question asks: Is the human mind algorithmic?)
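
To make the terms concrete before getting to the argument, here’s a toy numeric simulation (my own illustration, not something from the debate): a few lines that simulate water cooling. The model and constants are arbitrary assumptions; the point is that the program produces numbers that describe water without any water being involved.

```python
# A toy "simulation of water": Euler integration of Newton's law of
# cooling, dT/dt = -k * (T - T_ambient). The output is a list of
# numbers describing cooling water; nothing here is wet.

def simulate_cooling(temp, ambient, k=0.1, dt=0.5, steps=10):
    history = [temp]
    for _ in range(steps):
        temp += -k * (temp - ambient) * dt  # update the abstraction
        history.append(round(temp, 2))
    return history

print(simulate_cooling(temp=90.0, ambient=20.0))
```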

An argument against is the assertion, “Simulated water isn’t wet,” which makes the point that numeric simulations are abstractions with no physical effects. A common counter is that simulations run on physical systems, so the argument is invalid.

Which makes no sense to me; here’s why…



Failed States (part 3)

This ends an arc of exploration of the Combinatorial-State Automaton (CSA), an idea from philosopher and cognitive scientist David Chalmers, who, despite all these posts, is someone whose thinking I regard very highly on multiple counts. (The only place my view diverges much from his is on computationalism, and even there I see some compatibility.)

In the first post I looked closely at the CSA state vector. In the second post I examined the function that generates new states in that vector. Now I’ll consider the system as a whole, for it’s only at this level that we can look for the causal topology Chalmers requires.
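
As a reminder of the pieces involved, here’s a toy rendering of a CSA in Python. This is my own sketch, not Chalmers’ formalism: the state is a vector of components, and a single transition function maps the whole vector, plus input, to its successor. The neighbor-sum rule is an arbitrary stand-in for whatever the real dynamics would be.

```python
# A toy combinatorial-state automaton: the global state is a vector,
# and one transition function produces the entire next vector at once.

def step(state, inputs):
    """Each new component depends on its neighbors and the input."""
    n = len(state)
    return [
        (state[i - 1] + state[i] + state[(i + 1) % n] + inputs[i]) % 2
        for i in range(n)
    ]

state = [0, 1, 1, 0]
for _ in range(3):
    state = step(state, inputs=[0, 0, 1, 0])
    print(state)
```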

It all turns on the extent to which matching abstractions implies matching systems.



Failed States (part 2)

This is a continuation of an exploration of an idea by philosopher and cognitive scientist David Chalmers: the Combinatorial-State Automaton (CSA). I’m trying to better express ideas I first wrote about in these three posts.

The previous post explored the state vector part of a CSA intended to emulate human cognition. There I described how illegal transitory states seem to violate any isomorphism between mental states in the brain and the binary numbers in RAM locations that represent them. I’ll return to that in the next post.
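
To illustrate the kind of transitory state I mean, here’s a toy of my own (not from the posts): if a multi-component state vector is updated in place one component at a time, the system passes through intermediate vectors that correspond to neither the old legal state nor the new one.

```python
# Updating a state vector one component at a time exposes
# intermediate vectors that match no legal state.

old_state = [0, 0, 0, 0]
new_state = [1, 1, 1, 1]

state = list(old_state)
for i in range(len(state)):
    state[i] = new_state[i]  # one component changes per step
    print(state)             # e.g. [1, 0, 0, 0]; neither old nor new
```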

In this post I want to explore the function that generates the states.



Failed States (part 1)

Last month I wrote three posts about a proposition by philosopher and cognitive scientist David Chalmers: the idea of a Combinatorial-State Automaton (CSA). I had a long debate with a reader about it, and I’ve been pondering it ever since. I’m not going to return to the Chalmers paper so much as focus on the CSA idea itself.

I think I’ve found a way to express why I see a problem with the idea, so I’m going to have another go at explaining it. The short version turns on how mental states transition versus how a computational system must handle those transitions (even in the idealized Turing Machine sense; this is not about what is practical but about what is possible).
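
For reference, here’s the discrete picture that “the idealized Turing Machine sense” points at. This is a standard textbook-style toy of my own, nothing from Chalmers: at every instant the machine occupies exactly one state, and transitions happen one whole step at a time, with nothing in between.

```python
# A minimal Turing-style machine that checks the parity of 1s on a
# tape. It only ever moves right, so it's deliberately simple, but it
# shows the point: the machine is always in exactly one discrete state.

rules = {
    # (state, symbol) -> (new_state, symbol_to_write, head_move)
    ("even", "0"): ("even", "0", 1),
    ("even", "1"): ("odd",  "1", 1),
    ("odd",  "0"): ("odd",  "0", 1),
    ("odd",  "1"): ("even", "1", 1),
}

def run(tape):
    state, pos, cells = "even", 0, list(tape)
    while pos < len(cells):
        state, cells[pos], move = rules[(state, cells[pos])]
        pos += move
    return state  # halts when it runs off the tape

print(run("10110"))  # "odd": the tape holds three 1s
```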

“Once more unto the breach, dear friends, once more…”



Chalmers Again

Over the last few days I’ve found myself once again carefully reading a paper by philosopher and cognitive scientist David Chalmers. As I said last time, I find myself more aligned with Chalmers than not, although those three posts turned on a point of disagreement.

This time, with his paper Facing Up to the Problem of Consciousness (1995), I’m especially aligned with him, because the paper is about the phenomenal aspects of consciousness and doesn’t touch on computationalism at all. My only point of real disagreement is with his dual-aspect view of information, which he admits is “extremely speculative” and “also underdetermined.”

This post is my reactions and responses to his paper.



Intentional States

This is what I imagined as my final post discussing A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous two posts.

This post’s title is a bit gratuitous because the post isn’t actually about intentional states. It’s about system states (and states of the system). Intention exists in all design, certainly in software design, but it doesn’t otherwise factor in. I just really like the title and have been wanting to use it. (I can’t believe no one has made a book or movie with the name.)

What I want to do here is look closely at the CSA states from Chalmers’ paper.
