Indulging in another round of the old computationalism debate reminded me of a post I’ve been meaning to write since my Blog Anniversary this past July. The debate involves a central question: Can the human mind be numerically simulated? (A more subtle question asks: Is the human mind algorithmic?)
An argument against is the assertion, “Simulated water isn’t wet,” which makes the point that numeric simulations are abstractions with no physical effects. A common counter is that simulations run on physical systems, so the argument is invalid.
Which makes no sense to me; here’s why…
The human brain is a physical system, and we can model physical systems in various ways.
We can make a physical copy, often with a different size, usually with different materials. The copy may be functional to any degree, from not at all to fully, depending on our ability to create such a physical model.
Alternatively, we can create a numerical model that describes the physical system.
Often such numerical models can be “run” over time to describe the dynamics of the simulated system. Computationalism is the idea that a numerical model of the brain (or possibly of the mind) will describe everything a brain does.
In particular, that the simulation would be conscious. (Which I’ll define here as being self-aware, cogently coherent, and capable of meta-thought.)
Effectively, that means we provide input numbers the running simulation recognizes as sense data in its virtual world. These numbers can reflect a real world or an imaginary one.
The simulation manipulates the numbers to produce new numbers, some of which the system interprets as outputs. Generally speaking, these are motor outputs that drive muscles.
If it works, the output numbers interpreted as speech and action should reflect an active conscious mind.
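As a sketch of that numbers-in, numbers-out picture (the function and its placeholder rule are invented purely for illustration, not any real brain model):

```python
# Numbers in (standing in for "sense data"), numbers out (standing in
# for "motor commands"): the simulation is just a function over numbers.
def simulated_mind(sense_data):
    # Placeholder for whatever enormous transformation a brain model
    # would perform; here, a trivial made-up rule.
    return [2 * x + 1 for x in sense_data]

motor_output = simulated_mind([1, 2, 3])
```

Whatever transformation goes in the middle, the interface to the simulated mind is numbers on both sides.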
The question is whether it will work. There are two opposing propositions:
❶ A numerical simulation can describe a physical system to a high degree of precision — in some cases greater precision than an analog copy can accomplish.
❷ Simulated X isn’t Y. (Simulated water isn’t wet. Simulated airplanes don’t fly. Simulated earthquakes don’t knock down buildings. Etc.)
Which proposition “wins” ultimately depends on what consciousness really is.
If consciousness is such that describing the system’s operation generates it, then a simulation should produce it. But if it supervenes on the physical properties of the system, it might not.
An example that illustrates this nicely is lasers.
We can (and do) simulate laser behavior with a high degree of precision. We understand the physics of the physical system, so our numerical models are very good.
But no simulation of a laser, no matter how accurate, can produce photons.
That requires specific physical materials in a specific physical configuration (and has some specific energy requirements also).
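To make the laser example concrete, here is a minimal sketch of such a simulation: toy rate equations with invented constants, not a real device model. However well it captures lasing behavior, its output is a pair of floating-point numbers, not photons.

```python
# Toy two-variable laser rate equations, integrated with Euler steps.
# All names and constants are arbitrary illustrative values.
def simulate_laser(pump_rate, steps=20000, dt=1e-3):
    n = 0.0   # excited-state population (arbitrary units)
    s = 1e-6  # photon-number seed (stands in for spontaneous emission)
    n_loss = 0.5  # excited-state decay rate
    s_loss = 1.0  # cavity loss rate
    for _ in range(steps):
        gain = n * s  # stimulated-emission term
        n += (pump_rate - n_loss * n - gain) * dt
        s += (gain - s_loss * s) * dt
    return n, s

# Pumped above threshold, the photon number grows; below it, it decays.
n_hi, s_hi = simulate_laser(pump_rate=2.0)
n_lo, s_lo = simulate_laser(pump_rate=0.2)
```

Either run describes light with numbers; neither run emits any.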
[There is a sub-topic about exactly what the simulation is of, the brain or the mind. The former models the physical organ, similar to how we might model a heart or kidney. The latter involves the more algorithmic approach of replicating mind functionality. I treat them as the same here.]
Computationalists, very understandably, find the second proposition uncomfortable (because it might be true). As I mentioned above, a common counter is the assertion that the simulation is running on a physical system.
Which, as I also mentioned, makes no sense to me.
I can read it in two ways:
Firstly, that a numerical simulation can, through inputs and outputs, interact with the physical world.
Secondly, that the numerical simulation is, itself, running on physical hardware and all the information involved has physical instances in RAM or voltages or whatever.
My perception is that people usually mean the second sense.
The first sense dodges the issue, because the issue is whether those inputs and outputs engage a conscious mind within the simulation. The second sense seems intended to answer the argument that physical systems are physical but information systems are abstract.
But of course these numerical systems are physical. That’s not what the second proposition is getting at.
The second proposition is saying that, no matter how accurately the numbers describe water, those numbers can never be wet. No matter how those numbers are physically reified.
It’s completely irrelevant that a numerical system is, itself, physical. (Because, of course it is.)
What’s relevant is the nature and content of the information in that system.
Which brings me to what I mentioned briefly in the Anniversary post and never got back to, the Mind Stacks:
These are intended to represent the structure hierarchies involved in two physical systems that might implement conscious minds (although the one on the right is currently an open question).
What I tried to do was find equivalencies between major organizational levels, starting with shared basic physics at the bottom of each stack.
I equate the biochemistry of humans with the electronics of machines as being a basic next level. Above that are cells (or biology in general) equated with electronic circuits — these levels organize the level below them into generally useful units.
So the bottom three levels apply to all living things (left) and all electron-using devices (right).
Neurons and logic gates (which are cells and circuits, respectively) are the basic building blocks of “thinking” systems — systems that make apparent choices or decisions based on inputs.
The next level up organizes neurons and gates into functioning machines, a brain and a computer, respectively. (And it’s really this very hierarchy that gives people the idea that computers can have minds in the first place.)
Finally, at the top, we know brains give rise to minds, but we’re not sure about computers just yet. Mainly because we don’t know exactly how brains give rise to minds.
The diagram also notes that the biological organization using chemistry is accomplished through evolution whereas the computational organization using electricity is designed.
What requires serious unpacking is the box labeled Computer. The whole computationalism debate centers on what’s going on inside that box.
What the diagram doesn’t show is that Computer is actually a combination of software (Program) and hardware (Engine).
What’s crucial to the main point (that a numeric system’s being physical doesn’t matter) is that all the physicality in the left stack is directly linked with generating Mind, whereas nearly all the physicality in the right stack is not.
To see this, recognize that the physical system enabling Computer is the same regardless of what software is run. Its behavior is identical in every case: logic gates act the same no matter what program runs.
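That program-independence of the gates can be sketched in ordinary Python (the gate functions here are software stand-ins for fixed hardware behavior):

```python
# One fixed primitive: a NAND gate's truth table. It behaves
# identically no matter what is built from it.
def nand(a, b):
    return 0 if (a and b) else 1

# Different "programs" are just different wirings of the same gate.
def and_gate(a, b):
    return nand(nand(a, b), nand(a, b))

def xor_gate(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))
```

The gate never changes; only the wiring above it determines what gets computed.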
The Engine that enables Computer is not associated with Mind.
It is, in fact, a basic tenet of Computer Science that software is agnostic about the engine that runs it.
So the entire physical part of the right stack is unrelated to the software, which means that the numeric system being physical is irrelevant.
In contrast, everything in the left stack directly participates in generating mind. There is no discontinuity as you work upwards in the stack.
But there is a major discontinuity in the Computer box in the right stack. What’s above it (Mind) isn’t directly associated with what’s below it.
In fact, there are a number of levels of discontinuous abstraction in that box.
The CPU itself is a self-contained system: a physical implementation of an abstract CPU design. On top of that is the physical architecture of the computer, another implementation of an abstraction.
Then there’s the operating system, which can involve several levels of abstraction all on its own (high-level services wrapped around an O/S core, with a BIOS level beneath that).
On top of that, software is usually written in a programming language, which is another level of abstraction.
In any event, the running application software is the top level abstraction, and it is only at this top level that the causality of the numeric model exists.
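A toy interpreter illustrates the point: the causal relationships between instructions exist only at this top level of abstraction, while everything underneath shuffles data the same way regardless of the program. (The mini instruction set is invented for illustration.)

```python
# A minimal stack-machine interpreter. "ADD happens because of the
# PUSHes" is a causal story that exists only in this top-level
# abstraction; the engine below runs the same loop for any program.
def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

result = run([("PUSH", 2), ("PUSH", 3), ("ADD",)])
```

Swap in a different program and the interpreter’s own physical behavior is unchanged; only the abstract causal story differs.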
Bottom line: Any causal topology implemented by the system is entirely disconnected from the physical causality of the system.
So the physicality of a numerical simulation is irrelevant to what it simulates.
After all, a numerical simulation can only give us numbers describing what the simulated system does.
Another fundamental tenet of computer science is that which physical system generates the numbers is irrelevant. Numbers in, numbers out; all that matters is the calculation.
So “The simulation is physical” isn’t a useful argument. It ignores the nature and content of the information the system processes. [For more about physical systems vs numeric simulations, see Magnitudes vs Numbers.]
It all boils down to those two propositions.
It boils down to what consciousness actually turns out to be.
Stay propositioned, my friends!