Philosophical Zombies (of several kinds) are a favorite of consciousness philosophers. (Because who doesn’t like zombies? (Well, I don’t, but that’s another story.)) The basic idea involves beings who, by definition, [A] behave exactly as if they have higher consciousness (whatever that is) and [B] have no subjective experience.
They lie squarely at the heart of the “acts like a duck, is a duck” question about conscious behavior. And zombies of various types also pose questions about the role subjective experience plays in consciousness and why it should exist at all (the infamous “hard problem”).
So the Zombie Issue does seem central to ideas about consciousness.
In one of the more horrific examples of virtual personal enslavement in the service of philosophy, another classic conundrum of consciousness involves a woman confined for her entire life to a deep dungeon with no color and no windows to the outside. Everything is black, or white, or a shade of gray.
The enslaved, unfortunate Mary has a single ray of monochromatic (artificial) light in her dreary existence: she has an electronic reader — with a black-and-white screen — that gives her access to all the world’s knowledge. In particular, she has studied and understands everything there is to know about color and how humans perceive it.
Then one day someone sends Mary a red rose.
Last time I introduced four levels of possibility regarding how mind is related to brain. Behind Door #1 is a Holy Grail of AI research, a fully algorithmic implementation of a human mind. Behind Door #4 is an ineffable metaphysical mind no machine can duplicate.
The two doors between them lead to physical models that recapitulate the structure of the human brain. Behind Door #3 is the biology of the brain, a model we know creates mind. Behind Door #2 is the network of the brain, which we presume encodes the mind regardless of its physical construction.
This time we’ll look more closely at some distinguishing details.
Last week we took a look at a simple computer software model of a human brain. (We discovered that it was big, requiring dozens of petabytes!) One goal of such models is replicating consciousness — a human mind. That can involve creating a (potentially superior) new mind or uploading an existing human mind (a very different goal).
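A petabyte figure like that comes from back-of-the-envelope arithmetic along these lines. (The numbers below are rough, commonly cited estimates of my own choosing, not the exact figures from the earlier post.)

```python
# Rough sizing for a software model of a human brain.
# All numbers are order-of-magnitude assumptions, not measurements.

neurons = 86e9               # ~86 billion neurons (a common estimate)
synapses_per_neuron = 1_000  # conservative; estimates run toward ~10,000
bytes_per_synapse = 300      # assumed state: weight, timing, connectivity

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
petabytes = total_bytes / 1e15

print(f"{petabytes:.1f} petabytes")  # lands in the "dozens of petabytes" range
```

Push the synapse count or per-synapse state higher and the total climbs quickly, which is why even the cheapest versions of such a model are measured in petabytes.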
Now that we’ve explored the basics of calculation, code (software), computers, and (computer software) models, we’re ready to explore what’s involved in attempting to model a (human) mind.
I’m dividing the possibilities into four basic levels.
In a discussion a while back I mentioned in passing that humans sense wetness and time. That was challenged on the basis that we don’t sense time at all and — when it comes to wetness — sense only pressure and temperature. There is some truth to that. We don’t have an actual time sensor, nor do we have specific “wetness” sensors.
I’ve been thinking about this ever since (not constantly; you know, on and off). A key question is whether wetness can be reduced to pressure and temperature and remain wetness. And time is a topic all on its own!
For the record: Here is my final answer…