Philosophical Zombies (of several kinds) are a favorite of consciousness philosophers. (Because who doesn’t like zombies. (Well, I don’t, but that’s another story.)) The basic idea involves beings who, by definition, [A] have higher consciousness (whatever that is) and [B] have no subjective experience.
They lie squarely at the heart of the “acts like a duck, is a duck” question about conscious behavior. And zombies of various types also pose questions about the role subjective experience plays in consciousness and why it should exist at all (the infamous “hard problem”).
So the Zombie Issue does seem central to ideas about consciousness.
At the same time, philosophical zombies (p-zombies) are just as preposterous and fantastical as movie zombies. I’ll tell you now my bottom line: All zombies are ridiculous.
The basic idea of a p-zombie is that they look and act human — that is to say, conscious. To the extent humans are assumed conscious, p-zombies are assumed conscious.
The difference is, by definition, they have no subjective experience, no qualia. There is nothing it is like to be a p-zombie. And yet, also by definition, they behave as if they did.
Which, for many, myself included, is a bit of a deal-breaker. The idea of human-like beings doing all the human things without subjective experience does seem incoherent to me.
It’s essentially inventing the game of Quidditch and then asking me if I think the rules are fair. A framework that is logical only by fiat doesn’t carry a great deal of metaphysical weight in my eyes.
Their fantastical nature is a problem with all these fantasy scenarios.
The Giant File Room is essentially a well-programmed computer that, through conversation, appears to be a conscious being. By fiat, it can pass a Turing Test — which requires a system we can, so far, only fantasize about.
And I don’t even know what to say about all the by fiat for poor Mary. (That scenario is a little different in having nothing directly to do with computationalism. It’s more about what kind of knowledge Mary gets when she escapes.)
The connection in all cases is physicalism; all three — zombies, the GFR, Mary — involve the properties of non-human physical systems compared to the one example we have of higher consciousness (humans).
The tie-in with computationalism is that physicalism is an absolute requirement for computationalism. (But note that it does not in any way imply computationalism.)
IF physicalism isn’t the whole picture — as Mary seeing the color red, or zombies being impossible, suggest — then there are issues for computationalism.
Focusing on the zombies, there are at least two basic classes: Philosophical Zombies (p-zombies) and Behavioral Zombies (b-zombies).
The basic difference is that p-zombies are physically identical to us whereas b-zombies can have a different internal structure.
They have in common that, by fiat, they have no subjective experience. Food doesn’t taste, falling trees don’t make a sound, roses don’t smell fragrant or look vibrant, and silk doesn’t feel silky.
Which isn’t to say they can’t identify food flavors, air vibrations, volatile molecules in the air, photon frequencies, or surface properties.
The whole point is zombies act on properties they identify without actually experiencing those properties. (As we supposedly do.)
In fact, one way to make [pb]-zombies coherent is to see us as zombies — to say that we only have some kind of illusion of subjective experience.
Besides the two basic classes, there are variations on the theme that people seem to use, implicitly or explicitly:
- Evolved Zombie World: A duplicate reality of ours.
- Teleological Zombie World: A created reality.
- My Zombie Doppelgänger: A zombie twin of mine.
- A Random Zombie: At least one zombie of some kind.
- An Atheist Zombie: It just lacks a spiritual soul.
- A Conscious Machine: It just lacks a human form.
The first requires a universe that parallels ours in every regard. The only thing missing from this identical universe, by fiat, is subjective experience. The hard problem does not exist in that universe.
The second is also an entire universe populated with zombies (presumably our twins are there), and it, too, is identical except for subjective experience. The only difference between #1 and #2 is that we can assume #2 just sprang into being (by fiat).
I distinguish between #1 and #2 because many of these consciousness scenarios have no story or history to account for their existence. They merely exist, by fiat, because they were willed into existence. I’ll get to that below.
The next two, #3 and #4, distinguish between a zombie twin of yours or just some zombie.
The fifth one is mostly for religious arguments, and #6 (as mentioned) is for Giant File Rooms and other non-humanoid robotic forms.
For me there are two things:
Firstly, there is all this by fiat business in these scenarios in general. I’m not sure why I should take seriously, for application in the real world, a game of Quidditch.
Secondly,… zombies? Seriously?
You know, I sometimes think computationalists have read too much science fiction, and now we’re talking about zombies?
For me, the first thing about zombies is that they’re frauds.
If we ask them to report on their phenomenal experience, they will. They will report that:
- Food tastes good (or bad or savory).
- Music sounds catchy (or boring or annoying).
- Flowers look pretty (or ordinary).
- Flowers smell fragrant (or stinky).
- A counter feels slick (or wet).
But they’ll be lying about having those qualia. By fiat, they have none.
And it has to be that they can’t know that, since that would entail extra knowledge they have that we lack. So they believe they have phenomenal experience even though, by fiat, they don’t.
They also believe they have an experiential self, are able to report on that self, and seemingly act according to the dictates of that self.
That’s kind of the central conundrum. Or contradiction.
But it comes from the definition that such zombies are possible in the first place. The contradiction suggests they aren’t.
One way out is the one illusionists take, that, yes, zombies are lying about subjective experience, and so are we.
My other big issue with zombies (and many of these scenarios) lies in their ontology — specifically their teleology.
They are deliberate creations designed to serve a purpose. (Almost always, that purpose is to prove someone’s point of view.)
Zombies essentially have a god, a creator — who made them in His or Her image.
Said creator, The Designer, is conscious.
That means we can’t discuss their implications without discussing their creation. In particular, we are free to question their creation. We can freely deny their relevance.
That I can conceive of something, even agree the idea is essentially coherent, doesn’t mean I believe in its existence.
In particular, it need not have any explanatory weight, but it can illuminate an author’s point.
In a sense, these fantasy scenarios amount to being a kind of science fiction. We can judge them on their coherence, and we can be informed or challenged by their point of view. But they are still stories.
I’ve always found science fiction illuminating, but the “fiction” part has a definite impact on how I take the “science” part.
As far as what x-Zombies have to say about how higher consciousness functions, or what it means, I find them suggestive at best.
More to the point: fictional.
My interests lie more in taking a close look at the idea of computationalism (in the real world). I’m interested in what might really happen or whether it’s possible at all.
Bottom line, the stories are interesting and can get us talking about real things. There are several questions to consider.
That’s where I’ll pick it up tomorrow.
Watch out for zombies, my friends!
 Vampires are much more interesting!
 This makes the GFR fall into the b-zombie class — it acts like a duck, but it’s clearly not a duck internally. However, it’s also clearly not a duck externally, which disqualifies it from being an actual b-zombie.
 Great fun, no doubt, but it’s just that: fun.
 I tried to find things with a strong sense of qualia. It’s harder than it might seem. Physical phenomena can be detected and reported by sensors: color and intensity of light, pitch and volume of sound, surface qualities of many kinds, chemical compounds, temperature, pressure, etc.
That difficulty, that lack of clear example, is exactly why the debate rages. It really could go either way.
 Unlike with natural physical objects.
May 22nd, 2019 at 11:39 am
And on that note, I think I wash my hands of consciousness science fiction. Unless written by actual science fiction authors.
(Greg Egan has some great stuff that assumes computationalism and physicalism and even GR. Good diamond-hard SF!)
((I’ve also enjoyed Hannu Rajaniemi, who writes much more poetic “hard” SF. The Quantum Thief is really good!))
May 22nd, 2019 at 12:57 pm
“You know, I sometimes think computationalists have read too much science fiction, and now we’re talking about zombies?”
I frequently think the same thing about people who are convinced AI will eat or enslave us.
It’s worth noting that computationalists generally didn’t come up with things like philosophical zombies, the Chinese room, Mary’s room, computing walls and rocks, pixies, and all the rest. Anti-computationalists came up with those in their attempts to contest computationalism.
That’s not to say that computationalists don’t come up with their own silly notions at times.
“However, it’s also clearly not a duck externally, which disqualifies it from being an actual b-zombie.”
There’s no doubt that the outside form confers a huge advantage. People are often convinced that an awareness is present in vegetative patients who undergo sleep / wake cycles and have reflexive reactions, even when their behavior and brain scans show no sign of actual phenomenal awareness. Terri Schiavo was a particularly stark example.
Zombies overall strike me as rhetoric for dualism, although even in terms of dualism I think they’re problematic.
May 22nd, 2019 at 1:30 pm
“I frequently think the same thing about people who are convinced AI will eat or enslave us.”
ROFL! Yes, I know exactly what you mean.
True also that these thought scenarios (I can’t dignify them with the term “experiment”) come from those attacking computationalism.
It’s also true they tend to be intuitively obvious to those denying computationalism and equally disputed by those supporting it.
“Terri Schiavo was a particularly stark example.”
Oh, that was so tragic. Charles P. Pierce used that mess as an example in his book, Idiot America: How Stupidity Became a Virtue in the Land of the Free. His main thesis was that we’ve lost our ability to discern crackpots from the sane.
Given it was published in 2009, what’s happened since has only confirmed his thesis…
“Zombies overall strike me as rhetoric for dualism, although even in terms of dualism I think they’re problematic.”
To the extent they’re coherent at all, it does seem that way.
It’s a bit like the “immovable object meets unstoppable force” question, or the “if God can do anything, can God make a rock too heavy for Him to lift?” version of the same thing.
It’s a cartoon situation, and I can’t take it seriously.
They do serve as vehicles that explain a given position, and they have certainly fomented a great deal of discussion.
May 22nd, 2019 at 8:58 pm
I tend to rank vampires on the same interest scale as zombies. They can be useful metaphors, as long as they’re anchored to what they represent. That said, there are too many examples of zombies and vampires that are just plain awful.
At any rate, it seems like x-zombies are like most philosophical constructs. They might illustrate a problem, but are poor for exploring it in a practical fashion.
May 22nd, 2019 at 11:04 pm
That does seem the general consensus. Gives us something to do until the Big Breakthrough, I guess.
As for story zombies, yeah, a lot of crap. Sturgeon’s Law always applies!