In the nearly nine years of this blog I’ve written many posts about human consciousness with regard to computers. Human consciousness was a key topic from the beginning. So was the idea of conscious computers.
In the years since, there have been myriad posts and comment debates. They've provided a nice opportunity to explore and test ideas (mine and others'), and my views have evolved over time. One idea I've grown increasingly skeptical of is computationalism, though it depends on which of two flavors of it we mean.
I find one flavor fascinating, but can see the other as only metaphor.
I’ve come to divide computationalism into strong and weak flavors. The former is the view that the brain is a computer and mind is computation (for some value of is).
The latter, the weak view, is the idea that a (conventional) computer can simulate a brain or mind. This is the one that fascinates me.
What I've come to understand is that the strong view evaluates to either a tautology or a metaphor. Statements equating brains and computers don't refer to conventional computers or conventional computing, but to some broader type of computing, or to a general metaphor based on conventional computing.
Either way, the stated equivalence between brains and “computers” has no specific meaning with regard to actual computers as we know them.
As such, I just don’t find it interesting. Strong computationalism speaks of unicorns, not horses. Until someone comes up with an actual unicorn, to me it’s kind of a moot point.
(Perhaps the analogy comes from living in a computer-driven world steeped in science fiction and robots, combined with a deep desire for mind uploading to be real. We all have computers on the brain, so it's not surprising we see it the other way around.)
I am sympathetic to weak computationalism, in part because I don’t have a good answer to what else happens in an accurate brain simulation — if not consciousness.
A canonical counterargument is some form of "simulated water isn't wet," which has rhetorical force, but which doesn't answer the question.
My usual answer is that such a simulation might do nothing more than animate the meat — like a simulation of any other body organ. Blood would flow, cells would live, but there would be no coherent consciousness. It would be a mind in a deep coma.
Alternatively, anything from a comatose mind to a deranged one to mere empty static is possible. Given all the possibilities, "working exactly right" is just one of many, so the statistical odds are against it working as expected. (When has complex software ever worked as expected?)
But I don’t have any response that rules out a simulation working. It’s entirely possible weak computationalism is a correct view.
That said, I’ve written lots of posts arguing for skepticism. 😉
As I mentioned, the attraction of strong computationalism might be that it makes brain uploading pretty much a given. If mind really is a computation running on a computer, it shouldn’t be that hard to transfer it to some other computer running some other computation.
Unfortunately, the disclaimer that conventional computers and conventional computing aren't what's meant undercuts this.
If the mind is some unspecified kind of computation, it's not obvious how, or even if, it can be transferred to a different kind of system. That requires specifying exactly what kind of computing is happening.
If the brain is an unspecified kind of computer, then likewise, there’s nothing that can be said about how, or if, a different kind of system might work.
To the extent the brain can be said to be doing a computation, it is a distinctly analog system with myriad active influences. It is far more similar to the "computation" that occurs in a radio, where stray capacitance, inductance, and electromagnetic effects all contribute to the output signal.
But we don’t think of a radio as a “computer” even though it could be said that — on some level — a radio “computes” the sound it produces. This is a decidedly metaphoric usage, at least in the context of conventional computing.
[We might then also say that a watershed “computed” (and continues to “compute”) the streams and rivers that flow through it.]
Maybe part of the term's appeal is that humans can do sums.
In fact, long ago, the term “computer” referred to a human whose job was doing mathematical computations (for log tables, for code-breaking, for astronomy, etc).
The irony is that, generally speaking, humans aren’t that good at math (we’re famous for being dumb about probabilities). It requires special training to acquire the skills, and some feel they can never learn math.
(In fact, in some circles, being bad at math is seen as the mark of greater humanity, which is seriously ironic to the point of tragedy.)
One thing I’ve never quite understood is that strong computationalism explicitly equates “computing” — a fairly well-defined and well-studied term — with brain and mind.
Yet any challenge to a strong computationalist is met with a disclaimer that it isn’t conventional computing that’s meant. I’ve heard phrases such as “not a Turing machine” or “not a von Neumann machine.”
The former, taken faithfully, disclaims any correspondence between the brain and conventional computers. The latter disclaims only ordinary computers; there are computers that don't follow what is pedantically called the von Neumann architecture.
So if conventional computing isn't what's meant, what exactly does the phrase "the brain is a computer" actually say? What is its value? That's what I can't figure out.
One final point about strong computationalism: Algorithms are an aspect of conventional computing.
To the extent that "algorithm" is synonymous with "Turing machine," algorithms are what we mean by conventional computing.
If strong computationalism explicitly denies it refers to conventional computing, then it can make no claims about algorithms. In particular, the kind of “computing” we might grant a radio is not algorithmic.
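To make concrete what "algorithmic" means in the conventional sense, here is a minimal Turing machine sketch. Everything in it (the rule-table format, the bit-flipping example) is my own illustration, not something from the discussion above; it just shows the kind of step-by-step, rule-driven process that the word "algorithm" names.

```python
# A minimal Turing machine: the canonical model behind "algorithm"
# in the conventional-computing sense. This one flips every bit on
# the tape and then halts -- a trivial but genuine algorithm.
# (Illustrative sketch only; the names here are my own invention.)

def run_turing_machine(tape, rules, state="start", pos=0):
    """Run a table of (state, symbol) -> (new_symbol, move, new_state) rules."""
    tape = list(tape)
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"  # "_" is blank
        new_symbol, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = new_symbol
        else:
            tape.append(new_symbol)
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Rules: flip each bit while moving right; halt on a blank cell.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip_rules))  # -> 0100_
```

The point of the sketch is how narrow the model is: discrete symbols, a discrete rule table, one step at a time. Whatever a radio (or a brain) is doing, it isn't this.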
There is a sense in which everything in reality computes, and in that sense the brain (and everything) is a computer.
One can view quantum interactions as a form of computation in that they involve primitives and operations on them. We might view this as the "true" level of reality; all levels above it are (epistemologically) emergent.
But we can also view the atomic level as a kind of computation on much the same grounds as the quantum view. There are primitives and operations on them. (Atoms and things they do.) Chemistry then is above that with its own primitives (molecules) and operations.
By the time we get to biochemistry and cells as primitives, the operation space has become so vast it’s hard to think of cell interactions as a computation, but some do. (Obviously. Some see the entire brain as a computation.) In the sense of a system computing its next state, the view is valid (if a bit metaphorical).
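The "system computing its next state" sense can be sketched with an elementary cellular automaton, an analogy of my own choosing rather than an example from the post. Each cell's next value is a fixed function of its local neighborhood: primitives plus operations on them, exactly the pattern described above.

```python
# A system "computing its next state": each cell's next value is a
# fixed function of its three-cell neighborhood (here, Wolfram's
# rule 110, encoded in the bits of the integer 110).
# (An illustrative analogy of mine, not an example from the post.)

def next_state(cells, rule=110):
    """Compute the next row of 0/1 cells from the current one (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((rule >> index) & 1)              # look up that bit of the rule
    return out

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    print("".join(".#"[c] for c in row))
    row = next_state(row)
```

In this sense any physical system "computes" its next state from its current one, which is exactly why the usage is valid but starts to feel metaphorical.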
All these layers of emergence raise an interesting question: What if simulating a higher emergent layer misses something important?
As emergent layers, there is necessarily more going on “under the hood” which might matter in how the emergent layer behaves. Consciousness might not arise if an emergent layer is pushed through transcribed changes without being motivated by the underlying real layer.
After all, some of water's behavior appears to be quantum-driven, and we're finding quantum behavior at ever larger real-world scales. It wouldn't surprise me at all to learn the brain depends on quantum effects of some kind to produce mind.
I’m not saying mind is a quantum computation, but that the brain may be a system that leverages quantum (or electromagnetic) effects. The brain seems to operate in a kind of balance point, and that balance may, in part, be due to holistic systemic effects.
Given that strong computationalists tend to equate neurons with logic gates, the kind of “computing” I just described doesn’t seem to be what they have in mind.
Understandably not: at this level, "computing" a mind isn't much different from "computing" a tree.
I’ll leave you with a term I heard for strong computationalism, due (as far as I know) to Susan Schneider: information patternism. I like the term. It’s essentially the view that consciousness can be found in patterns of information.
I lean towards the idea that, as with Integrated Information Theory (IIT), complex patterns are definitely necessary. I’m not certain they are sufficient, although that raises the question of what’s missing.
Something we haven’t discovered or recognized yet, is my only answer.
Stay uncomputable, my friends!