The underlying question all along is whether a software model of a brain — in contrast to a physical model — can be conscious. A related, but separate, question is whether some algorithm (aka Turing Machine) functionally reproduces human consciousness without regard to the brain’s physical structure.
Now we focus on why a software model isn’t what it models!
Walking through Door #2 (mind is just a physical network) is a requirement to walk through Door #3 (mind is a biological physical network). Door #4 (dualist theories) assumes the biology, which assumes the network.
The point here is that Door #1 stands alone from those physical theories. A software model does not have a physical correlation to the thing it models. That is the crux of these posts!
(The diagram also shows that Door #1 includes several approaches. We’ve touched only on the first two: modeling the brain’s physical network; creating consciousness functionally.)
We’ll start by considering a software model of a simple physical process: current flowing through a resistor.
To make the model easy, we’ll assume the battery has a constant voltage and zero output impedance. That means it can supply any amount of current (within reason). We model it just as a voltage source.
The resistor has a resistance and power dissipation rating. The latter is important because it reflects the part’s ability to shed heat. If it gets too hot, it fries.
The current through the resistor depends on its resistance in a straightforward way: I = E ÷ R. The current (I) is the voltage (E) divided by the resistance (R).
If our battery supplies 10 volts, and our resistor is 200 ohms, the current is 0.05 amps (10 ÷ 200 = 0.05).
Power (watts) is also straightforward (and tasty): P = I × E. Just multiply the current (I) and the voltage (E).
With a current of 0.05 amps and a voltage of 10 volts, our resistor dissipates 0.5 watts.
Our software model takes battery voltage and resistance as input parameters. We can vary them to see how they affect the current flow.
The two formulas we saw are the guts of the model. Change the input parameters and the model uses the formulas to adjust current — and therefore the power — accordingly.
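The two formulas can be captured in a few lines. Here's a minimal sketch of such a model (the function names and the survival check are illustrative, not from any real library):

```python
# Toy sketch of the resistor model: two formulas plus a comparison.

def resistor_model(voltage, resistance, power_rating):
    """Return current, power dissipated, and whether the part overloads."""
    current = voltage / resistance      # I = E / R (Ohm's law)
    power = current * voltage           # P = I * E
    overloaded = power > power_rating   # just a comparison of numbers
    return current, power, overloaded

# 10 volts across 200 ohms, with a half-watt part:
i, p, fried = resistor_model(10.0, 200.0, 0.5)
print(i, p, fried)   # 0.05 0.5 False
```

Note that the "heat" here is nothing but the variable `power`, a number waiting to be compared against another number.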
Here’s the punchline: Our model can tell us the resistor is dissipating half a watt, but that’s just a number that came out of doing math with other numbers.
Our model can even take another input parameter — the resistor’s power rating — and tell us, numerically speaking, how hot the resistor would be.
But, unlike the real circuit, nothing in the software model generates any heat (let alone burns your fingers).
All we get is a number that we interpret as telling us, whoa, that baby is hot!
Another simple model we might consider involves a dropped bowling ball and the foot it lands on. Our model will answer the question: How much force does the foot receive?
There’s a simple trick that makes this model easy. Because the ball begins and ends at a dead stop, there’s a simple formula we can use:
Fball × Dfall = E = Ffoot × Dstop
The force of gravity on the ball times the distance it falls gives us a number representing the ball’s energy (in foot-pounds) after falling that far.
Since the ball comes to a stop on the foot, that energy number is the same as the (very short) distance times the force of stopping. It’s that last value, the force of stopping, that interests us, so we can rearrange the formula like this:
Ffoot = (Dfall ÷ Dstop) × Fball
Our model takes the weight of the bowling ball, the distance dropped, and the stopping distance, as input parameters. The operation of the model tells us how much our toes hurt.
For example, a 13-pound bowling ball dropped 4 feet (48 inches) and hitting, say, a ¼” layer of combined shoe padding and top layer of foot gives us:
(48 ÷ 0.25) × 13 = 2,496 pounds
Which you would expect to hurt!
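The whole model fits in one line of arithmetic. A minimal sketch (the function name is mine, not the posts'):

```python
# Toy sketch of the bowling-ball model: one rearranged energy equation.

def impact_force(weight_lbs, drop_inches, stop_inches):
    """Ffoot = (Dfall / Dstop) * Fball -- force delivered to the foot."""
    return (drop_inches / stop_inches) * weight_lbs

# 13-pound ball, 48-inch drop, quarter-inch stopping distance:
print(impact_force(13, 48, 0.25))   # 2496.0
```

Run it, and 2,496 comes back as silently as any other float. Nothing yelps.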
But, again, in our model, the result is just a number that we interpret in a real world context to make sense of. Running the model does not make us say, “Ouch!”
And this really is the key point. A software model begins with numbers and creates new numbers with math. That’s all it does. That’s all it can do.
Remember: Computers are calculators. They crank out numbers.
Let’s consider a much more involved model. This time we’ll model a microwave oven — a really nice one with lots of user features.
Our model will include whatever knowledge necessary about creating and using microwaves to cook food.
The user features are simple — programmers have been writing user interfaces for many decades now (and, sadly, many of them still aren’t getting it right). Modeling the behavior of microwaves is complex, but engineers understand that behavior quite well.
We’ll break our model into two parts: the control system and the microwave system. To start off easily, we’ll imagine that the microwave system is literally a very simple microwave oven.
Then all we have to do is model the user interface (easy!) and let the magic happen inside the “black box” of the actual microwave oven. We end up with a model like this:
The UTM (Universal Turing Machine) is the algorithm that is our software model. The Oven is the black box we can think of as the I/O of the algorithm. The model creates numbers it can send as output. It can also read numbers as input.
Because the Oven is sophisticated, the outputs of the UTM amount to commands that replicate what a person might do standing at the oven. Set the time; set the power; press Start.
Most advanced features can be done by a human, so what we’re really doing here is modeling a human standing in front of a simple microwave.
If we think of that person as a virtual chef, we’re now modeling a microwave with advanced features.
But the heavy lifting is done by a black box.
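To make the division of labor concrete, here's an illustrative sketch of the split (the class, the command strings, and the defrost-by-weight rule are all invented for illustration). The algorithm's entire output is a stream of commands; the oven is opaque to it:

```python
# Illustrative sketch of the UTM-plus-black-box split: the algorithm
# emits only command strings; the "Oven" is a black box to it.

class BlackBoxOven:
    """Stand-in for the real hardware; it just records the commands it gets."""
    def __init__(self):
        self.log = []

    def send(self, command):
        # Real hardware would act on the command; this stand-in only records it.
        self.log.append(command)

def defrost_program(oven, weight_oz):
    """A made-up 'advanced feature': defrost by weight.

    Its output is nothing a person couldn't enter at the front
    panel of a simple oven.
    """
    oven.send("SET POWER 30")
    oven.send("SET TIME {} sec".format(weight_oz * 15))  # invented defrost rule
    oven.send("START")

oven = BlackBoxOven()
defrost_program(oven, 8)
print(oven.log)   # ['SET POWER 30', 'SET TIME 120 sec', 'START']
```

Replace `BlackBoxOven` with real hardware and food gets cooked; leave it as software and all you ever get is the log.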
Let’s dig deeper.
Let’s gut the microwave oven and weave our control lines into everything.
Any circuitry we can model with software, we rip out and discard. For instance, if there are timer circuits, we can toss those, because we can model a timer effectively in software.
In fact, we’ll throw away everything except the magnetron that actually generates the microwaves and any power circuitry associated with it.
Now our algorithm must account for the advanced user features and the operation of the parts of the oven. The software is now controlling the oven. The only black box now is the one that actually generates the microwaves.
For a final step, let’s open that box as well. No more special hardware!
All that remains now is the UTM, the algorithm that describes the behavior of our microwave oven. The model accounts for how microwaves are generated and how they behave.
The model is now strictly software — the entire system is modeled by the algorithm. And since an algorithm is a mathematical object, so is our model.
We’ve found a mathematical object that describes the behavior of our microwave oven.
The punchline should be obvious: The model cannot generate microwaves unless we connect it to physical hardware capable of generating microwaves.
The microwaves come from intrinsic properties of the hardware, not the software. They cannot, in fact, ever come from the software. The software is just numbers.
A similar example exists with laser light.
The physics of what makes materials lase is quite well-understood, and software models can very precisely describe what happens under those circumstances.
But software models do not, themselves, ever lase. They can’t. A flow of numbers cannot lase.
Laser light supervenes on specific physical properties. So does the generation of microwaves. Likewise the force of falling bowling balls or the heat of overloaded resistors.
All these are physical phenomena.
We can model them with numbers, but those models are not the thing they model. They are distinctly different — precisely as different as analog sound and its digital model.
So the bottom line is this: What if the self-aware part of our mind supervenes on physical properties of the brain?
What if consciousness is like microwaves or laser light and can only arise under the correct physical conditions — physical conditions our brains happen to meet?
If our brains meet those physical conditions, it seems likely other physical objects with the same properties would also meet those conditions. That means we can step through Door #2.
But it would also mean Door #1 is locked forever. It cannot be opened, even in principle. We’ve encountered a physical limit of reality.
That doesn’t mean we might not someday model a brain, even model consciousness on some level. But those will be just flows of numbers that describe real world objects. They will not be consciousness.
Drop a (virtual) bowling ball on their (virtual) toe, and the resulting numbers, plus some clever programming, can make them send the OUCH.WAV digital sound file to the output speaker.
But the pain is virtual.
An obvious question is: What exactly is it that makes the physical brain necessary for consciousness? Here are some possible answers:
¶ The first one is big and obvious: the fact of the unimaginably huge network we all have between our ears. That network has roughly 500 trillion connections, which is awesome and unique.
Maybe it just boils down to needing that actual physical network.
¶ Computer transistors have an on state and an off state. Neurons are either firing or not firing. Their “on” state (firing) is a pulsed signal; the frequency encodes the intensity of the firing state.
That’s two physical properties: first, the time component inherent in a pulse train; second, the analog strength of a firing neuron. The latter can probably be modeled, but the time aspect could be significant.
Time is an important element in the physics of generating laser light and microwaves — specifically, these things depend on resonances. Perhaps consciousness also supervenes on a resonance.
¶ Brains are dense and compact, plus they’re encased inside hard skulls.
Perhaps the pattern of neural activity in that small space generates emergent patterns, standing waves, we feel as consciousness.
Douglas Hofstadter posits consciousness is a complex feedback loop. If that were so, the physical self-containment might be part of that.
Cavities have the property of resonance, so maybe it takes a compact network inside a round container for consciousness to emerge.
We need a laser to lase; perhaps we need a brain to think. (When put that way, it seems kind of obvious.)
There is also this: Church-Turing tells us that, if mind can be implemented as an algorithm, then mind is an algorithm. (Again, it’s obvious when stated that way.)
But that means there is a natural physical object that is an algorithm — which is an abstract mathematical object.
But as far as we know, physical objects are not — generally speaking — abstract mathematical objects (although they can reify them).
So if mind is a physical object, it seems not very likely to be an algorithm.
Next time we’ll explore one more possible physical limit that might hamper efforts to create software consciousness. That will complete the arc of this series where we started: mathematics!