As a result of lurking on various online discussions, I’ve been thinking about computationalism in the context of structure versus function. It’s another way to frame the Yin-Yang tension between a simulation of a system’s functionality and that system’s physical structure.
In the end, I think it does boil down to the two opposing propositions I discussed in my Real vs Simulated post: that an arbitrarily precise numerical simulation of a system's function can stand in for that function; and that simulated X isn't Y.
It all depends on exactly what consciousness is. What can structure provide that could not be functionally simulated?
Let me start by exploring the two basic concepts:
Structure refers to a composite of parts and the relationships between those parts. Both are equally important.
Structures can be abstract or concrete. Software typically uses lots of abstract data structures, but even a shopping list is a simple abstract structure. Obviously anything physical has a concrete structure.
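To make that concrete, here's a minimal sketch (my own toy example, nothing more) of a shopping list as an abstract structure in code: the parts are the items, and the relationships are their order and their grouping.

```python
# A shopping list as a tiny abstract structure: the parts are the items,
# the relationships are their order and their grouping by aisle.
shopping_list = {
    "produce": ["apples", "spinach"],
    "dairy": ["milk", "butter"],
    "bakery": ["sourdough"],
}

for aisle, items in shopping_list.items():
    print(aisle, "->", ", ".join(items))
```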
Structure can exist at different levels. Atoms have a structure. Building materials (made from atoms) have their own structure, and the building made from them also has a structure. (In fact, it usually is a structure.)
Function is an abstraction that’s harder to define precisely. It refers to what something can do.
A mathematical function takes mathematical input(s) and returns a mathematical value. Computer software functions do the same (they are equivalent to mathematical functions).
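As a quick illustration (my own toy example), a pure software function behaves exactly like a mathematical one: same input, same output, nothing else going on.

```python
def f(x: float) -> float:
    """A pure function: the same input always yields the same output,
    with no side effects, just like the mathematical f(x) = x**2 + 1."""
    return x ** 2 + 1

print(f(3))  # 10
```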
A tool (concrete or abstract) has a functionality. Its function is to perform a task of some kind: drilling a hole; frying an egg; applying a filter to a photo; correcting your spelling.
Note this is a slightly different flavor of function(ality) compared to a mathematical function. It’s another case of physical function versus numeric processing.
Let me also differentiate two other terms:
To emulate is to copy or clone something. If you emulate someone, you try to copy their behavior.
A software emulation tries to do everything the thing it emulates does. For example, a Windows emulation running on macOS tries to, for all intents and purposes, be Windows to the apps it runs.
To simulate is to describe or represent something.
Any virtual reality is a simulation that represents a real or imagined scene. A weather simulation describes how the weather behaves.
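Here's a toy sketch of what I mean by "describes" (the numbers are made up): a few lines of code can model a temperature settling toward ambient, but running them produces only numbers, not heat.

```python
# A toy "weather" simulation: a temperature relaxing toward ambient each hour.
# It describes the behavior numerically; it produces no actual heat.
ambient, temp, rate = 20.0, 35.0, 0.3

for hour in range(6):
    temp += rate * (ambient - temp)  # move a fraction of the way toward ambient
    print(f"hour {hour + 1}: {temp:.1f} C")
```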
This seems clearer when we're not talking about brains and minds. Consciousness tends to complicate any discussion. (Ironically, one of the first requirements for any discussion is possessing consciousness.)
We can ask two questions: What does an emulation of consciousness entail? Or, failing an emulation, can a simulation describe consciousness effectively?
Consider traditional gas-engine cars versus electric cars.
To the extent their function is to provide short-range transportation, which is probably the primary application, there is no real difference.
But their functional differences become apparent the more we examine them, and the more contexts we place them in. In some venues, one may perform noticeably better than the other. Electric motors have many advantages over gas combustion engines, but gas engines have a few advantages of their own.
This isn’t a good example, because here we’re really talking about an emulation. An electric car emulates a gas car. (A video game simulates a car. No simulation can provide transportation.)
Metaphorically, an electric car is the Positronic Brain compared to a gas car’s biological one. It’s a physical isomorph, at least at the car level.
Which brings up an important point about functional analysis: The granularity matters — sometimes a lot.
Gas and electric cars function the same (in their common context) as cars, but many of their internals differ greatly. That’s kind of the whole point with electric cars — they don’t have gas engines.
So the level of “black box” functionality is important.
One issue with brains is that we don’t know what functional unit is important. Where can we replace a gas engine with an electric one?
For instance, is the neural network (including synaptic behavior) enough, or are things like myelin sheathing and glial cells also vitally important? (I think the question is more: how could they not matter?)
On the other end of the spectrum is a black box brain: something that isn't a structural isomorph at all, something with nothing in common with a brain, but which acts like one. A full functional replacement, but not a structural one.
That certainly isn’t going to work with cars due to their specific physical requirements. But the brain can be viewed as a black box that connects to the body via the nervous system.
Is an alternate structure that faithfully replicates the function at that level possible? Can a mind arise in something that isn’t a structural isomorph?
Specifically, can a numerical simulation calculate mind?
It’s important to ask that question in three contexts:
1. A physical simulation of the brain
2. A composite functional emulation of the brain
3. A unified “mind” algorithm
An interesting aspect of #1 and #2 is that they don't require a full understanding of how consciousness, or even just the brain, works.
This is especially true of #1. We just need to simulate the physical structure at some appropriate level. The neuron level almost certainly won't cut it, but something higher than quantum might (unless mind really does depend on quantum effects).
In Stephenson’s Fall, they used lots of powerful quantum computers to simulate reality at the quantum level. From their point of view, it was just a quantum system.
The middle option, #2, requires some understanding, but it also offers an investigative approach. By simulating selected brain functions, scientists can explore how well those might work.
Models using semantic vectors, for instance, can offer insights into how our minds work without necessarily duplicating the mechanism.
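As a rough sketch of the idea (the vectors below are invented toy values, not taken from any real model): words become points in a space, and similarity of meaning becomes geometric closeness.

```python
import math

# Toy 3-dimensional "semantic" vectors (invented values, for illustration only).
vectors = {
    "dog":  [0.9, 0.1, 0.3],
    "wolf": [0.8, 0.2, 0.4],
    "car":  [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means pointing the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(vectors["dog"], vectors["wolf"]))  # high: similar meanings
print(cosine(vectors["dog"], vectors["car"]))   # lower: different meanings
```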
But #3 (assuming it actually exists) does require a unified theory of mind, a full understanding of consciousness.
Both of the last two options are asking conventional computing to emulate decidedly unconventional computing. One way around the challenge is to assume the brain's computing is actually more conventional than it appears, that the brain really is a Turing Machine in disguise.
The third option absolutely assumes this. For a “mind algorithm” to exist, the brain/mind system has to already be a discrete symbol processing machine.
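To make "discrete symbol processing machine" concrete, here's a minimal Turing machine sketch (the particular machine, a unary incrementer, is just my illustration): a finite rule table driving a read/write head over a tape.

```python
# A minimal Turing machine: a finite rule table, a tape of symbols, a head.
# This machine appends a single '1' to a unary number; trivial, but it shows
# what discrete symbol processing means: states, symbols, and rules, nothing more.

# (state, symbol_read) -> (symbol_to_write, head_move, next_state)
rules = {
    ("scan", "1"): ("1", +1, "scan"),   # skip over existing 1s
    ("scan", "_"): ("1", +1, "halt"),   # write one more 1, then stop
}

tape = list("111___")   # unary 3, followed by blank cells
state, head = "scan", 0

while state != "halt":
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape))    # 1111__  (unary 4)
```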
So let's assume one of those options seems to work. We have a black box that, hooked up to inputs and outputs, appears conscious and responsive.
Is it conscious? How could we tell? What do we even mean by the question?
Alan Turing suggested we talk to it, and while that has problems, no one has really come up with anything better. (Part of what confounds this is lack of a clear definition of consciousness in the first place.)
I like the idea of a Rich Turing Test: It involves prolonged interaction over time; an exploration of likes and dislikes, opinions, ideas, humor, and curiosity. (That last one is huge to me.)
With a human (or similar) consciousness, I’d expect to explore tastes in music, movies, books, food, sports, hobbies, and friends.
So what about a system that reports or claims it has these qualities and seems to demonstrate them, especially over time?
Would we take its word that it was just as conscious as we are?
We already respond to robot dogs (and name our cars), so from a social point of view, chances are we’ll love’m like waffles. (Who doesn’t love waffles?)
At the very least we need to ask some serious questions about what consciousness actually is. (It bemuses me severely that all these people study consciousness without agreeing on a definition.)
I think the real question is whether such functionality is even possible.
For now, at least, it's premature to ask whether a black box that acts conscious really is conscious, because no such black boxes are in evidence, nor do we have clue one about how to make one.
John Searle made (or tried to make) the point that canned responses don't strike us as truly modeling consciousness, and further that a numerical simulation of consciousness is such a canned system.
I think that’s partially right, but it overlooks things like memory and process. Searle’s Giant File Room, as presented, is just a lookup system. Brains do more than look things up.
Therefore, any functionality we simulate must include self-reflection, multi-level meta-thinking, and self-modifying code.
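To sketch the difference (again, just a toy example of mine), compare a pure lookup table with a system, however trivial, that carries memory from one exchange to the next:

```python
# A pure lookup system: every input maps to a fixed, canned response.
canned = {"hello": "Hi there.", "how are you?": "Fine, thanks."}

def file_room(prompt):
    return canned.get(prompt, "I have no card for that.")

# A (still trivial) stateful system: its answer depends on what it has seen before.
class Rememberer:
    def __init__(self):
        self.history = []

    def respond(self, prompt):
        seen = prompt in self.history       # reflect on its own past
        self.history.append(prompt)
        return "You already asked that." if seen else f"Noted: {prompt}"

bot = Rememberer()
print(file_room("hello"))    # same answer every time
print(bot.respond("hello"))  # "Noted: hello"
print(bot.respond("hello"))  # "You already asked that."
```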
The bottom line for me regarding functionality is that we simply don’t know what might be missing or how it might be missing. There is a tension between those two propositions: simulated functional results (which are always numbers) versus physical structural results (which can be weather, laser light, bridge traffic, a growing tree, or whatever).
Will those numbers tweak those nerves in the right way?
What I’m saying is that we have a long road ahead to get to that black box. (If it can be gotten to at all. That remains to be seen.)
Maybe ultimately, in these technologically fluid (gender-fluid, even person-fluid) times, we just treat all black boxes as ducks.
If it walks, talks, quacks, and flies like a duck, then introduce it to some tasty orange sauce.
Stay structural, my friends!