After a weekend of transistorized baseball, it’s time to get back to pondering consciousness. I laid down a few cobblestones last week; time to add a few more to the road. Eventually I’ll have something on which I can drive an argument.
There are a number of classic, or at least well-known, arguments for and against computationalism. They variously involve Pixies, different kinds of Zombies, people trapped in different kinds of rooms, and rock walls that compute. (In fact, they compute rooms that trap Pixies. And everything else.)
Today I’m going to ruminate on the world’s most unfortunate file clerk.
The argument (a dressed-up version of John Searle’s Chinese Room) involves a clerk in a very large file room.
The clerk’s job involves receiving request messages (on one of those old-fashioned air tube things), processing the request, and sending a reply on its way (again, through a tube).
Very standard bureaucratic task, but there’s a twist: The clerk can’t read the language of the requests or the replies! Everything is in a foreign language the clerk doesn’t know (for example’s sake, say Chinese).
This is where the giant file room comes in. The clerk, who is very fast, is able to match (or index) the symbols on the request to a record in the files. That record contains the reply.
All the clerk does is look up the record, copy the reply, and send it out.
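For the programmers in the audience, the clerk’s whole job fits in a few lines. Here’s a minimal sketch; the records, the questions, and the fallback reply are all made up for illustration:

```python
# A toy sketch of the clerk's job: pure lookup, no comprehension.
# Every record and reply here is hypothetical, purely for illustration.

FILES = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" / "Fine, thanks."
    "今天天气怎么样？": "晴朗。",        # "How's the weather?" / "Sunny."
}

FALLBACK = "什么？"  # "Say what?" (the reply when nothing matches)

def clerk(request: str) -> str:
    """Match the symbols on the request to a record and copy out the
    reply. The clerk never parses the symbols; it only compares them."""
    return FILES.get(request, FALLBACK)

print(clerk("你好吗？"))      # 我很好，谢谢。
print(clerk("宇宙有多大？"))  # 什么？ (no record for this question)
```

Note that the clerk function works identically whether the keys are Chinese, Navajo, or line noise. That’s the whole point.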
That’s from the inside. From the outside, it’s a different story.
All the requests the file clerk handles come from Chinese-speaking people making enquiries of the Giant File Room (GFR). As far as they can tell, a person is replying to them: the Grand Friendly Responder.
In particular, note that the replies all make sense relative to their enquiry.
This situation can be taken one of two ways:
¶ On the one hand, it’s meant to point out that the operator in the room doesn’t have any understanding of the request or the reply.
The operation is purely mechanical: a given set of symbols indexes (matches) a record somewhere in the file system. No understanding is required.
¶ On the other hand, it can also suggest the human mind is nothing but a lookup system. That all our responses are just indexed by perceived requests. That we are as mechanistic, as robotic, as the Chinese Room.
The problem is the question of where our sense of understanding the question and the answer comes from.
Here’s where illusionism (as I understand it) suggests, yes, that’s exactly what’s going on. What feels like understanding is just what that indexing “feels” like.
Many different arguments spring from this, particularly with regard to the intended point that a mechanistic system can appear “conscious” without having any phenomenal experience.
One of the bigger points involves seeing the room as a virtual or composite system that “understands” Chinese.
The idea is that the clerk plus the files plus the act of looking things up together compose that system. An analogy compares the clerk to a computer CPU and the files to software (as in an FSM).
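To make the CPU/software analogy concrete, here’s a toy sketch in which the “clerk” is a generic loop executing a finite-state transition table it doesn’t understand (the states and symbols are hypothetical placeholders):

```python
# A toy version of the CPU/software analogy. The "clerk" below is a
# generic interpreter; all the behavior lives in the transition table.
# States and symbols are invented for illustration.

# (current_state, input_symbol) -> (next_state, output or None)
TABLE = {
    ("idle",    "request"):  ("looking", None),
    ("looking", "match"):    ("idle",    "send reply"),
    ("looking", "no_match"): ("idle",    "send fallback"),
}

def clerk_run(symbols, state="idle"):
    """Step through the table. The loop has no idea what any symbol
    means; any 'understanding' would have to live in the table."""
    outputs = []
    for symbol in symbols:
        state, output = TABLE[(state, symbol)]
        if output is not None:
            outputs.append(output)
    return outputs

print(clerk_run(["request", "match", "request", "no_match"]))
# ['send reply', 'send fallback']
```

The same loop runs any table you hand it, which is exactly the sense in which the clerk is “hardware” and the files are “software.”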
Which, fair enough, but where’s the phenomenal experience?
Does any part of the room (or any computer) get annoyed at dumb questions or even just repeated questions? Does Google get impatient when you just can’t manage to phrase your query right?
The room is a behavioral zombie — something that acts human from the outside, but which has none of the human internals, including the autobiographical self.
What if one of the requests asks about phenomenal experience? Does the room lie? If it doesn’t, it gives away the game.
If it does, that contradiction makes it different from a “real” consciousness (it’s lying about something central to personal existence).
Of course, illusionism (again, as I understand it), says we’re lying!
I do think the systems approach has a point. We need to consider the room as a whole.
For me that includes consideration of how the room came to exist. Metaphors need to be grounded in some sort of reality if they’re to be taken seriously.
The system I see doesn’t just include the clerk and the file system. It also includes The Designer of the file system. Someone — or some process — had to figure out how to index all those replies.
To the extent the GFR reflects an understanding, it reflects the understanding of The Designer who created it.
It is essentially a very good knowledge-capture or expert system, one we imagine is good enough to answer any reasonable question correctly and naturally. Or say, “I don’t know.”
Note that one difference here is that the room can’t ever figure out the answer to a new question. It can only reply in ignorance.
So it’s a poor analogue for consciousness, is my point.
It’s exactly what it appears to be: a mindless lookup system capable of answering a limited set of inputs.
To make it a better analogue, runners would have to update the file system constantly to replicate learning. The room might even appear to “figure things out” when a future update answers a previously unanswerable question.
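In code terms, the runners reduce “learning” to nothing more than filing new records. A hypothetical sketch (all names invented):

```python
# A toy sketch of the "runners": learning reduced to filing new records.
# All names and records here are hypothetical.

FILES = {"question A": "answer A"}
FALLBACK = "Say what?"
misses = []  # requests the room couldn't answer

def clerk(request):
    reply = FILES.get(request)
    if reply is None:
        misses.append(request)  # leave a note for the runners
        return FALLBACK
    return reply

def runners_update(new_records):
    """File new records; report any old misses now answerable, which
    from outside would look like the room 'figured it out'."""
    FILES.update(new_records)
    return [q for q in misses if q in FILES]

print(clerk("question B"))                         # Say what?
print(runners_update({"question B": "answer B"}))  # ['question B']
print(clerk("question B"))                         # answer B
```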
But again, what’s the designing intent behind those runners and updates?
Something is shaping the room. Creating it and updating it both require intentional design and implementation for the room to work.
The GFR is just a reflection of the conscious intent of The Designer.
As an aside: At what point does a reply of “I don’t know” become a giveaway?
There is the obvious problem of saying it too much. Or otherwise pretending to be less than competent. That has been tried as a way to bluff the Turing Test. But if things are in earnest, then no fraud is permitted.
One giveaway might involve questions A, B, and C, where the system can answer the first two but not the third. Yet the third would be a synthesis of A and B.
As a suggestive example, if it knew (A) about sibling relationships and (B) about parent-child relationships, it wouldn’t necessarily know (C) that my parent’s sibling’s child is my cousin.
This particular omission would be glaring on the part of The Designer, but it provides a sense of the kind of thing I mean. There are lots of interlocking concepts in the real world. The file system needs to include them all.
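A toy sketch makes the kind of gap just described vivid. The room can answer the A-type and B-type questions it has records for, but only a system that composes relations can derive C (all names and facts below are made up):

```python
# A toy sketch of the A+B-but-not-C gap. Names and facts are invented.
# The room answers only what was pre-filed; a system that composes
# relations can derive "cousin" from "parent" plus "sibling".

parent_of  = {"me": "mom", "pat": "aunt"}  # child -> parent (B-type facts)
sibling_of = {"mom": "aunt"}               # person -> sibling (A-type facts)

FILED = {  # the room's records; no cousin record was ever filed
    "Who is my parent?": "mom",
    "Who is mom's sibling?": "aunt",
}

def room(question):
    return FILED.get(question, "Say what?")

def cousin_of(person):
    """Synthesis the room can't do: my parent's sibling's child."""
    aunt_or_uncle = sibling_of.get(parent_of.get(person))
    return next(
        (kid for kid, par in parent_of.items() if par == aunt_or_uncle),
        None,
    )

print(room("Who is my cousin?"))  # Say what? (a gap in the files)
print(cousin_of("me"))            # pat (derived, not looked up)
```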
It’s also possible weird gaps might show up over time.
(As far as the question of determining if some random single system is “conscious,” I think the only answer involves spending time with it.)
This still leaves us with the idea of a system, perhaps a more complicated one than that poor file clerk, that does a better job of emulating the Grand Friendly Responder.
What’s involved in such an emulation, and what does it mean in terms of that system being really conscious? (And, therefore, what exactly is this “consciousness” property we’re seeking?)
There is always the additional question of what it means to appear conscious versus actually being conscious. To what extent does that matter?
Given the prevalence of ‘thinking’ machines in our lives, younger generations seem more likely not to bother with the distinction. What acts like a duck is a duck.
Personally, I am interested in what it means to be (really) conscious.
Stay indexed, my friends!
 Not that we’d necessarily know that, but that wouldn’t matter from an objective point of view.
 The clerk has the fallback, when a request doesn’t find a match (because it’s impossible to pre-answer all questions), of a special reply that says, “Say what?”
 So when it replies about having phenomenal experience, it could answer (truthfully) on the part of the room, “No,” or (equally truthfully) on the part of The Designer, “Yes.”
 I had a manager once who was pretty good about answering questions… except when corporate secrets prevented it. It was often possible to discern the outline of what he couldn’t talk about by exploring its boundaries.
In contrast, another manager was more of a sneak. His answers weren’t as clear or concise, so it was harder to see the outline of what he was hiding.