Last Friday I ended the week with some ruminations about what (higher) consciousness looks like from the outside. I end this week — and this posting mini-marathon — with some rambling ruminations about how I think consciousness seems to work on the inside.
When I say “seems to work” I don’t have any functional explanation to offer. I mean that in a far more general sense (and, of course, it’s a complete wild-ass guess on my part). Mostly I want to expand on why a precise simulation of a physical system may not produce everything the physical system does.
For me, the obvious example is laser light.
[I can hear the groans. “Again with the laser light thing.”]
Yep, again with the laser light thing.
Because the thing is, it does seem a pretty good response to the central assertion of computationalism: that a sufficiently precise numeric model can produce everything produced by the system it models.
In my view it depends on where the system actually is: in the behavior or in the outputs. And that depends on understanding the system.
Ultimately, it depends on understanding what consciousness really is.
(And I don’t think we’re close.)
§ § §
Admittedly, the point I’m making isn’t obvious.
I’ll begin with a very simple example: a computer simulation of a vending machine.
The vending machine has a very small number of system states. Mostly, it’s waiting for the user to insert coins or push buttons.
However, over the course of its life, it has a very large number of states of the system. Each time someone interacts with the machine, it generates new states as it cycles through the appropriate system states for the interaction.
If we want to simulate the vending machine, we can either simulate the system states — model the machine’s nature — or we can simulate its operation over time — model the machine’s trajectory.
In either case, we start with some kind of model of the machine.
We try to identify its system states — essentially what the machine can do, but also how it does it. System states are tightly constrained by the machine’s intent. The state diagram (for the vending machine) is small and limited.
Once we have this model, we can “run” it by assuming various kinds of inputs and cycling through the system states. If we record those states as they happen, we end up with a states-of-the-system transcript.
We could replay that transcript to recreate the “life” of the machine during that time.
And note that replaying it doesn’t require the model, just the playback mechanism. By analogy, watching a play requires the actors be present. Watching a movie of that play does not.
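Just to make the distinction concrete, here’s a cartoon of the cartoon: a minimal Python sketch (all the names and details are my own invention, not any real machine’s logic) with a handful of system states, a run function that records a states-of-the-system transcript, and a replay function that needs only the recording, not the model.

```python
# Minimal sketch (hypothetical names throughout): a vending machine as a
# finite-state machine. The SYSTEM STATES are the handful of states the
# machine *can* be in; the STATES OF THE SYSTEM are the recorded sequence
# of states it actually passes through over its "life."

PRICE = 3  # coins required per item

# System states: the machine's nature, fixed and small.
IDLE, ACCEPTING, DISPENSING = "idle", "accepting", "dispensing"

def step(state, coins, event):
    """Advance the model one event; return (new_state, new_coin_count)."""
    if event == "coin":
        return ACCEPTING, coins + 1
    if event == "button" and coins >= PRICE:
        return DISPENSING, 0         # vend and reset the coin count
    if event == "button":
        return state, coins          # not enough money; ignore the press
    return IDLE, coins               # anything else returns to waiting

def run(events):
    """Run the model over inputs, recording a states-of-the-system transcript."""
    state, coins, transcript = IDLE, 0, []
    for event in events:
        state, coins = step(state, coins, event)
        transcript.append(state)
    return transcript

def replay(transcript):
    """Replay needs no model at all, just the recording."""
    for state in transcript:
        print(state)

# Three coins and a button press cycle through the system states:
replay(run(["coin", "coin", "coin", "button"]))
# accepting / accepting / accepting / dispensing
```

Note that `replay` is the movie of the play: the “actors” (the `step` logic) never appear in it.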
Note that this model is very simplified.
Some of the simplicity is for the purposes of illustrating a stateful system — for instance, which product is selected has no bearing on the example, but it would need to be part of a more realistic model.
There are also error conditions, such as not being able to make change, or being out of a product. These would be part of a real model.
The model also ignores such things as the machine heating up or cooling down, or wear and tear on the components, or vandalism. These might not be part of a model if they weren’t of concern in terms of desired results.
So clearly the level of detail necessary in any model depends on what the model should accomplish. My vending machine model only needs to accomplish a demonstration of a state system, so being a cartoon of a real vending machine is good enough.
Computationalism seeks a model sufficiently detailed to create what is, effectively, a person.
So it obviously needs to be a pretty good model.
It requires understanding the system being modeled and the ability to compute it. (Knowing the model doesn’t mean we can compute it.)
The central question is: Can a physical system have properties that a simulation cannot replicate?
On some level, the answer is, “Obviously, yes.” The laser analogy makes this clear. A simulation of a laser cannot emit photons. Only a certain kind of physical system behaving in a certain kind of way can emit photons.
The laser model simulates that behavior. The model generates outputs (lists of numbers) that we interpret as describing the behavior of the physical system.
The outputs represent the states of the system — the trajectory of system states over time. We can almost think of them as very detailed “Dear Diary” entries.
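To make “lists of numbers” concrete, here’s a toy sketch. It uses a standard simplified pair of laser rate equations (my choice for illustration, not any particular model, and all parameter values are made up), stepped forward in time. Everything the model produces is a trajectory of numbers we interpret as describing the laser’s behavior.

```python
# A toy "laser" model (textbook-style simplified rate equations; all
# parameter values are invented for illustration). Euler-stepping the
# populations produces a trajectory: lists of numbers that *describe*
# photon emission. No photons are produced anywhere in this loop.

dt = 1e-3
pump, decay_n, decay_s, gain = 5.0, 1.0, 2.0, 1.0

n, s = 0.0, 0.01           # population inversion, photon number (arbitrary units)
trajectory = []            # the "Dear Diary" entries

for tick in range(5000):
    dn = pump - decay_n * n - gain * n * s   # inversion: pumped up, drained by emission
    ds = gain * n * s - decay_s * s          # photons: stimulated emission minus losses
    n += dn * dt
    s += ds * dt
    trajectory.append((tick * dt, n, s))

# The model's entire output: numbers we *interpret* as laser behavior.
print(trajectory[-1])      # roughly the steady-state inversion and photon number
```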
If we faithfully replicate the physical aspects of a system, then it seems reasonable the copy should have the properties of the system it mirrors.
For instance, if we made a tiny vending machine, with all working parts, we’d expect it to work just like a vending machine, except tiny.
But if we model a system numerically, how can we know if our model contains the necessary properties? A numerical model can easily ignore something a physical model takes for granted.
For example, early 3D CAD-CAM programs had no awareness of different moving parts intersecting the same physical space at the same moment.
Simple 3D systems don’t mind “solid” objects overlapping even in a static model. You can put one object inside another object without the world blowing up.
In more complex models, calculations detect the overlap and react in some way. They may not act to prevent the overlap. They may just inform the user of what parts overlapped at what point of the cycle.
In the real world, when parts overlap, either the prototype makes crunching sounds or it just stops. Physical parts cannot overlap. Attempts to make them do so are generally catastrophic.
The point is that the behavior of overlapping physical objects must be built into the model. And the model doesn’t care if the parts overlap — it’s just a matter of one kind of numerical result versus another.
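A minimal sketch of that last point (a hypothetical example, not any real CAD system’s code): overlap detection reduces to an arithmetic comparison, and the model is free to merely report the result and carry on.

```python
# For the model, "two solid parts occupy the same space" is just an
# arithmetic predicate. Nothing crunches; the simulation happily
# continues either way.

def boxes_overlap(a, b):
    """Axis-aligned bounding boxes given as (xmin, ymin, xmax, ymax)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

piston = (0.0, 0.0, 2.0, 2.0)
crank  = (1.5, 1.5, 3.0, 3.0)   # intrudes into the piston's space

if boxes_overlap(piston, crank):
    # A physical prototype would jam or break here. The model just emits
    # one kind of numerical result instead of another.
    print("warning: parts overlap -- interference detected")
```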
§ § §
Do the outputs of the vending machine model faithfully describe the machine’s behavior?
For some definition of faithfully, yes, they do. They can be as faithful to the reality as the blueprint for the machine. In both cases, abstract information in a specific format represents the vending machine. The more detailed the representation, the more accurate the model.
But the simulation doesn’t dispense sodas or chips. It only tells us about dispensing them (using lists of numbers).
Likewise, the laser simulation doesn’t emit any photons; it can only tell us about them — perhaps in great detail.
And also likewise, a brain simulation tells us, with lists of numbers, what the brain is doing. (Given a good enough model.)
Can it tell us that the brain is having phenomenal experience? Can it tell us the brain is experiencing consciousness?
The bedeviling thing about phenomenal experience is that it is subjective.
We can’t tell if someone else is having a subjective experience; we can only rely on what they tell us. Which means there is always room for doubt when dealing with an unknown intelligent system.
A general proposition is: Any system that can act sufficiently conscious (however we choose to define it or measure it) is conscious (by fiat).
I’m fine with that. I think it’s currently science fiction, and I think no such system can be created, either in principle or in practice. But if one were? I’d have no problem working with Lt. Cmdr. Data, nor any doubts about his “humanity.”
I’m just skeptical it will ever happen. (For reasons I’ve explored over the last 14 days.) We certainly aren’t anywhere close.
In any event, the hope of computationalism is that, given a good enough model (which I think is a huge ask), it will describe a brain having conscious thoughts in response to inputs fed to the model.
The outputs will tell various devices to speak, write, even move.
But what is the model really simulating? What do the outputs really tell us?
A good enough model should definitely tell us about a biologically working brain. The basic functions of the body should be easy enough to simulate.
Models that describe the heart pumping, for example, would lack for little in what they tell us about the heart.
But does simulating the brain result in subjective consciousness? Does it reside in something a numerical simulation can capture?
The challenges are formidable and possibly computationally out of reach, even if possible in principle. My only point throughout is that I’m very dubious about “in principle.”
Stay skeptical, my friends!
And again with the footnotes!

[1] And, of course, I could be wrong. (The only thing I’m always right about is that I could be wrong.)

[2] The whole point of the Turing Test is not being able to tell the difference between a person and a machine.

[3] It would be adorable! I want a tiny working vending machine! And tiny coins to feed it. And tiny sodas and itty bitty bags of chips!

[4] These outputs could tell another machine to dispense sodas and chips. In fact, most vending machines today do have an electronic “brain” with a model of vending machine behavior, and it does tell the machine what to do.

[5] Illusionists doubt we’re having subjective experience!

[6] Belch, fart, spit, blow their “nose,” clear their “throat,” and excuse themselves for a much-needed bathroom break. (“My back diodes are swimming!”)