This continues my discussion of A Computational Foundation for the Study of Cognition, a 1993 paper by philosopher and cognitive scientist David Chalmers (republished in 2012). The reader is assumed to have read the paper and the previous post.
I left off talking about the differences between the causality of the (human) brain versus having that “causal topology” abstractly encoded in an algorithm implementing a Mind CSA (combinatorial-state automaton). The contention is that executing this abstract causal topology produces the same results as the physical system’s concrete causal topology.
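To make the CSA idea a bit more concrete, here’s a minimal sketch (mine, not Chalmers’; the substates and rules are invented purely for illustration). A CSA state is a vector of substates, and on each step every substate updates as a function of the whole previous vector:

```python
# Toy sketch of a combinatorial-state automaton (CSA).
# A CSA state is a vector of substates; each step, every substate
# updates as a function of the entire previous vector (plus input).
# The substates and rules here are invented for illustration only.

def csa_step(state, inputs, rules):
    """Compute the next state vector from the current one.

    state:  tuple of substate values
    inputs: tuple of input values for this step
    rules:  function mapping (index, state, inputs) -> new substate
    """
    return tuple(rules(i, state, inputs) for i in range(len(state)))

# A trivial rule set: substate 0 toggles; substate 1 copies
# substate 0's previous value.
def toy_rules(i, state, inputs):
    if i == 0:
        return 1 - state[0]   # toggle
    return state[0]           # copy neighbor's previous value

state = (0, 0)
for step in range(4):
    print(step, state)
    state = csa_step(state, (), toy_rules)
```

The debate is over whether stepping such a table through its state vectors preserves everything that matters about the physical system whose causal topology it mirrors, or only an abstract shadow of it.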
As always, it boils down to whether process matters.
I’ve always liked (philosopher and cognitive scientist) David Chalmers. Of those working on a Theory of Mind, I often find myself aligned with how he sees things. Even when I don’t, I still find his views rational and well-constructed. I also like how he conditions his views and acknowledges controversy without disdain. A guy I’d love to have a beer with!
Back during the May Mind Marathon, I followed someone’s link to a paper Chalmers wrote. I looked at it briefly, found it interesting, and shelved it for later. Recently it popped up again on my friend Mike’s blog, plus my name was mentioned in connection with it, so I took a closer look and thought about it…
Then I thought about it some more…
Did someone say walkies?
I’m spending the weekend dog-sitting my pal, Bentley (who seems to have fully recovered from eating a cotton towel!), while her mom follows strict Minnesota tradition by “going up north for the weekend.” So I have a nice furry end to the two-week posting marathon. Time for lots of walkies!
As a footnote to that marathon, this post contains various odds and ends left over from the assembly. Extra bits of this and that. And I finally found a place to tell you about a metaphor I stumbled over long ago, one I’ve found quite illustrative and fun. (It’s in my metaphor toolkit along with “Doing a Boston” and “Star Trekking It.”)
It involves the idea of making a bad ROM call…
Last Friday I ended the week with some ruminations about what (higher) consciousness looks like from the outside. I end this week — and this posting mini-marathon — with some rambling ruminations about how I think consciousness seems to work on the inside.
When I say “seems to work” I don’t have any functional explanation to offer. I mean that in a far more general sense (and, of course, it’s a complete wild-ass guess on my part). Mostly I want to expand on why a precise simulation of a physical system may not produce everything the physical system does.
For me, the obvious example is laser light.
I’ve been on a post-a-day marathon for two weeks now, and I’m seeing this as the penultimate post (for now). Over the course of these, I’ve written a lot about various low-level aspects of computing, truth tables and system state, for instance. And I’ve weighed in on what I think consciousness amounts to.
How we view, interpret, or define consciousness aside, a major point of debate involves whether machines can have the same “consciousness” properties we do. In particular, what role does subjective experience play, both for us and for machines?
For me it boils down to a couple of key points.
Philosophical Zombies (of several kinds) are a favorite of consciousness philosophers. (Because who doesn’t like zombies? (Well, I don’t, but that’s another story.)) The basic idea involves beings who, by definition, [A] exhibit higher consciousness (whatever that is) and [B] have no subjective experience.
They lie squarely at the heart of the “acts like a duck, is a duck” question about conscious behavior. And zombies of various types also pose questions about the role subjective experience plays in consciousness and why it should exist at all (the infamous “hard problem”).
So the Zombie Issue does seem central to ideas about consciousness.
In one of the more horrific examples of virtual personal enslavement in the service of philosophy, another classic conundrum of consciousness involves a woman confined for her entire life to a deep dungeon with no color and no windows to the outside. Everything is black, or white, or a shade of gray.
The enslaved, unfortunate Mary has a single ray of monochromatic (artificial) light in her dreary existence: she has an electronic reader — with a black-and-white screen — that gives her access to all the world’s knowledge. In particular, she has studied and understands everything there is to know about color and how humans perceive it.
Then one day someone sends Mary a red rose.
After a weekend of transistorized baseball, it’s time to get back to pondering consciousness. I laid down a few cobblestones last week; time to add a few more to the road. Eventually I’ll have something on which I can drive an argument.
There are a number of classic, or at least well-known, arguments for and against computationalism. They variously involve Pixies, different kinds of Zombies, people trapped in different kinds of rooms, and rock walls that compute. (In fact, they compute rooms that trap Pixies. And everything else.)
Today I’m going to ruminate on the world’s most unfortunate file clerk.
When it comes to consciousness, one of the top challenges is defining what it is. (Some insist it doesn’t even exist, which makes defining it even more of a challenge.) Part of the problem is that there is no single correct definition. There never really has been.
There is also the distinction between sentience (essentially, the ability to feel pain as pain) and sapience (roughly: wisdom). Lots of animals are sentient, but sapience seems to be a property of human consciousness.
Which raises the question: Are humans just a point on a spectrum, or is there some sort of “band gap” between higher and lower forms?
Moving on from system states (and states of the system), today I’d like to fly over the landscape of different systems. In particular, systems that are — or are not — viewed as conscious.
Two views make this especially interesting. The first holds that everything is computing everything and — under computationalism — this includes conscious computations. The second (if I understand it) holds that anything that processes input data into some kind of output is conscious. (I’m not clear if the view also sees an input-output system as a computer.)
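The first view usually trades on a post hoc mapping. Here’s a toy sketch (my own illustration of the “rock wall” style argument, not anyone’s formal version): given any sequence of distinct physical states, we can always define a lookup that “implements” any computation of the same length:

```python
# Toy version of the mapping ("rock wall") argument: any sequence of
# distinct physical states can be mapped, after the fact, onto the
# state sequence of any computation of the same length.

wall_states = ['w0', 'w1', 'w2', 'w3']           # arbitrary physical states
comp_states = ['start', 'add', 'carry', 'halt']  # some computation's trace

# The "implementation" is just this post hoc dictionary.
mapping = dict(zip(wall_states, comp_states))

for w in wall_states:
    print(w, '->', mapping[w])
```

One standard reply is that such a mapping does no work: it exists only because we already have the computation’s trace in hand, so it supports no counterfactuals about what the wall would do given different inputs.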
So I want to explore what I see as major landmarks in the landscape of systems that… well, about the only thing we can probably all agree on is that they do something.