Strong Computationalism

In the nearly nine years of this blog I’ve written many posts about human consciousness in relation to computers. Human consciousness was a key topic from the beginning. So was the idea of conscious computers.

In the years since, there have been myriad posts and comment debates. It’s provided a nice opportunity to explore and test ideas (mine and others’), and my views have evolved over time. One idea I’ve grown increasingly skeptical of is computationalism, though it depends on which of two flavors of it we mean.

I find one flavor fascinating, but can see the other as only metaphor.

I’ve come to divide computationalism into strong and weak flavors. The former is the view that the brain is a computer and mind is computation (for some value of is).

The latter, the weak view, is the idea that a (conventional) computer can simulate a brain or mind. This is the one that fascinates me.

What I’ve come to understand is that the strong view evaluates to either a tautology or a metaphor. Statements equating brains and computers don’t refer to conventional computers or conventional computing, but to some broader type of computing or to a general metaphor based on conventional computing.

Either way, the stated equivalence between brains and “computers” has no specific meaning with regard to actual computers as we know them.

As such, I just don’t find it interesting. Strong computationalism speaks of unicorns, not horses. Until someone comes up with an actual unicorn, to me it’s kind of a moot point.

(Perhaps the analogy comes from a combination of [1] living in a computer-driven world that [2] is steeped in science fiction and robots combined with [3] a deep desire for mind uploading to be real. We all have computers on the brain, so it’s not surprising we would see it the other way around.)

§

I am sympathetic to weak computationalism, in part because I don’t have a good answer to what else happens in an accurate brain simulation — if not consciousness.

A canonical argument is some form of “simulated water isn’t wet,” which has defensive power, but which doesn’t answer the question.

My usual answer is that such a simulation might do nothing more than animate the meat — like a simulation of any other body organ. Blood would flow, cells would live, but there would be no coherent consciousness. It would be a mind in a deep coma.

Alternatively, anything from a comatose mind to a crazy one to just an empty, static-filled one is possible. Given all the possibilities, “working exactly right” is just one outcome among many, so the statistical odds are against it working as expected. (When has complex software ever worked as expected?)

But I don’t have any response that rules out a simulation working. It’s entirely possible weak computationalism is a correct view.

That said, I’ve written lots of posts arguing for skepticism. 😉

§

As I mentioned, the attraction of strong computationalism might be that it makes brain uploading pretty much a given. If mind really is a computation running on a computer, it shouldn’t be that hard to transfer it to some other computer running some other computation.

Unfortunately, the contradiction of equating mind with computation while disclaiming conventional computers and conventional computing undercuts this.

If the mind is some unspecified kind of computation, it’s not obvious how — or even if — it can be transferred to a different kind of system. That requires specifying exactly what kind of computing is happening.

If the brain is an unspecified kind of computer, then likewise, there’s nothing that can be said about how, or if, a different kind of system might work.

To the extent the brain can be said to be doing a computation, it is a distinctly analog system with myriad active influences. It is far more similar to the “computation” that occurs in a radio, where stray capacitance, inductance, and electromagnetic effects all contribute to the output signal.

But we don’t think of a radio as a “computer” even though it could be said that — on some level — a radio “computes” the sound it produces. This is a decidedly metaphoric usage, at least in the context of conventional computing.

[We might then also say that a watershed “computed” (and continues to “compute”) the streams and rivers that flow through it.]

§

Maybe part of what calls to people is that humans can do sums.

In fact, long ago, the term “computer” referred to a human whose job was doing mathematical computations (for log tables, for code-breaking, for astronomy, etc).

The irony is that, generally speaking, humans aren’t that good at math (we’re famous for being dumb about probabilities). It requires special training to acquire the skills, and some feel they can never learn math.

(In fact, in some circles, being bad at math is seen as the mark of greater humanity, which is seriously ironic to the point of tragedy.)

§

One thing I’ve never quite understood is that strong computationalism explicitly equates “computing” — a fairly well-defined and well-studied term — with brain and mind.

Yet any challenge to a strong computationalist is met with a disclaimer that it isn’t conventional computing that’s meant. I’ve heard phrases such as “not a Turing machine” or “not a von Neumann machine.”

The former, if taken faithfully, disclaims any correspondence between the brain and any conventional computer. The latter disclaims only ordinary computers — there are conventional computers that do not follow what is properly (if pedantically) called the von Neumann architecture.

So if conventional computing isn’t what’s meant, what exactly does the phrase “the brain is a computer” actually say? What is its value? That’s what I can’t figure out.

§

One final point about strong computationalism: Algorithms are an aspect of conventional computing.

To the extent an algorithm is synonymous with a Turing machine, algorithms are what we mean by conventional computing.

If strong computationalism explicitly denies it refers to conventional computing, then it can make no claims about algorithms. In particular, the kind of “computing” we might grant a radio is not algorithmic.
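For concreteness, here’s a minimal sketch (purely illustrative; the machine and the names in it are mine) of what “conventional computing” means in the Turing-machine sense: a finite transition table driving a read/write head over a tape. The example machine just appends a 1 to a string of 1s, i.e. unary increment.

```python
# Minimal Turing machine sketch: a finite transition table driving a
# read/write head over a (sparse) tape. Purely illustrative.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, write_symbol, move)."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Unary increment: scan right past the 1s, write one more, halt.
INCREMENT = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt",  "1", "R"),
}

print(run_turing_machine(INCREMENT, "111"))  # -> 1111
```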

§ §

There is a sense in which everything in reality computes, and in that sense the brain (and everything) is a computer.

One can view quantum interactions as a form of computation in that they involve primitives and operations on them. We might view this as the “true” level of reality. All levels above that are (epistemologically) emergent.

But we can also view the atomic level as a kind of computation on much the same grounds as the quantum view. There are primitives and operations on them. (Atoms and things they do.) Chemistry then is above that with its own primitives (molecules) and operations.

By the time we get to biochemistry and cells as primitives, the operation space has become so vast it’s hard to think of cell interactions as a computation, but some do. (Obviously. Some see the entire brain as a computation.) In the sense of a system computing its next state, the view is valid (if a bit metaphorical).
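If it helps, here’s a purely illustrative sketch of what “a system computing its next state” amounts to in this loose sense: primitives (here, cells) plus an operation (an update rule). A one-dimensional cellular automaton stands in for the idea; nothing about real cells or brains is remotely this tidy.

```python
# Illustrative only: a 1-D cellular automaton (Wolfram's rule 110, chosen
# arbitrarily) "computing" its next state from primitives plus an update rule.

def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 31 + [1]                    # one live cell on a ring of 32
for _ in range(5):
    print("".join(".#"[c] for c in state))
    state = step(state)
```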

All these layers of emergence raise an interesting question: What if simulating a higher emergent layer misses something important?

Because these are emergent layers, there is necessarily more going on “under the hood,” which might matter in how the emergent layer behaves. Consciousness might not arise if an emergent layer is pushed through transcribed changes without being motivated by the underlying real layer.

After all, it looks like water’s behavior has quantum-driven properties. We’re finding quantum behavior at ever larger real-world scales. It wouldn’t surprise me at all to learn the brain depends on quantum effects of some kind in order to produce mind.

I’m not saying mind is a quantum computation, but that the brain may be a system that leverages quantum (or electromagnetic) effects. The brain seems to operate in a kind of balance point, and that balance may, in part, be due to holistic systemic effects.

Given that strong computationalists tend to equate neurons with logic gates, the kind of “computing” I just described doesn’t seem to be what they have in mind.

And reasonably so: at this level, “computing” a mind isn’t much different from “computing” a tree.

§

I’ll leave you with a term I heard for strong computationalism, due (as far as I know) to Susan Schneider: information patternism. I like the term. It’s essentially the view that consciousness can be found in patterns of information.

I lean towards the idea that, as with Integrated Information Theory (IIT), complex patterns are definitely necessary. I’m not certain they are sufficient, although that raises the question of what’s missing.

Something we haven’t discovered or recognized yet, is my only answer.

Stay uncomputable, my friends!


9 responses to “Strong Computationalism”

  • Wyrd Smythe

    FWIW: The intent of the post isn’t to start a debate about whether or not the brain is like a computer or a “computer.” I see that as entirely moot.

    What I wanted to establish here is the idea of strong computationalism versus weak computationalism, so if I use that terminology somewhere I can refer back to this post.

  • SelfAwarePatterns

    Wyrd, I thought of you this morning. My school is currently interviewing candidates for CIO. The one this morning mentioned that his earliest experience was on Air Force missile control systems, which he noted were analog systems. It was an online presentation, so I didn’t get a chance to ask him about those systems, but it was the first time I’d directly encountered someone who’d worked on them.

    I actually am a pancomputationalist. I think computation happens everywhere. But it’s not typically productive to work with most systems as computational ones. It only becomes productive, I think, when the ratio of information to energy dynamics is high, as it is for computers and certainly seems to be much higher (currently) for brains.

    In my view, the key difference to focus on isn’t about whether or not they’re computational, it’s that their computational sophistication and density leaves current commercial computers in the dust. I do think a neuron works on the same principles as a logic gate, but a typical neuron does the work of several thousand logic gates. In my mind, that’s the real barrier to reproducing mental functionality in current computer systems.
    Neuromorphic engineering has the potential to eventually close that gap, but it’s very early days and there’s a long slog ahead.

    • Wyrd Smythe

      Flight controls, navigation, and missile control all involve differential equations that are inherently analog. The real-time inputs, likewise, are analog in nature. Some airplanes still have all-analog instruments, and they compute readings from various inputs. (They turn out to be a lot more complex in function than I realized.)

      “I actually am a pancomputationalist.”

      Yes. I touched on that in the post. Computing reality at the quantum level is what Neal Stephenson does in Fall; or, Dodge in Hell. Quantum computing becomes a commodity, so they can simulate — at the quantum level — the large VR island and the bodies (including scanned brains) of the inhabitants.

      That said, I’m not sure Stephenson always distinguishes between simulating reality and also computing higher brain functions. The way some of his characters can manipulate the VR world suggests more than just simulating the physical reality.

      (I really hate typing that title, but “Fall” is just too short and vague. It must be his thing; the one before it is The Rise and Fall of D.O.D.O., which is also a PITA to type. Even typing “D.O.D.O.” is a workout.)

      “[I]t’s that [the brain’s] computational sophistication and density leaves current commercial computers in the dust.”

      Yes. For the same reason even simple digital flight instruments are more complex than their analog counterparts. As you well know, a numerical simulation of an analog process is (at the gross level) necessarily far more complex. Nature just gets to leverage all that quantum-atomic-chemical-molecular computational machinery. A numerical simulation has to do the math.

      (There are many analog systems that are still computationally out of reach. Something as simple as the orbits of planets, for instance. We have very good, very computationally intense, approximations.)
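      (If it helps make “doing the math” concrete, here’s a toy sketch, purely illustrative: numerically stepping a single planet around the Sun with a leapfrog integrator. Even this trivial two-body case grinds out the orbit one small time step at a time, and the result is still only an approximation; the full many-body solar system is where it gets truly expensive.)

      ```python
      # Toy sketch: leapfrog (kick-drift-kick) integration of a single planet
      # around the Sun. Nature gets the orbit "for free"; the simulation has
      # to do the math explicitly at every step.

      import math

      G  = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
      M  = 1.989e30       # solar mass (kg)
      DT = 3600.0         # time step: one hour (s)

      x, y   = 1.496e11, 0.0      # Earth at 1 AU (m)
      vx, vy = 0.0, 2.978e4       # roughly circular orbital speed (m/s)

      def accel(x, y):
          """Gravitational acceleration toward the Sun at the origin."""
          r = math.hypot(x, y)
          return -G * M * x / r**3, -G * M * y / r**3

      ax, ay = accel(x, y)
      for _ in range(int(365.25 * 24)):          # one year of one-hour steps
          vx += 0.5 * DT * ax;  vy += 0.5 * DT * ay
          x  += DT * vx;        y  += DT * vy
          ax, ay = accel(x, y)
          vx += 0.5 * DT * ax;  vy += 0.5 * DT * ay

      print(f"after one year: r = {math.hypot(x, y):.3e} m")   # stays near 1.496e11
      ```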

      “I do think a neuron works on the same principles as a logic gate, but a typical neuron does the work of several thousand logic gates.”

      I see that as a contradiction in terms. An assembly of thousands of logic gates is, itself, a computational unit. A very sophisticated one with highly complex analog behaviors, so I agree a neuron is composed of many “logical atoms” (that is, logic gates).

      The thing is, an actual logic gate also has “atoms” — transistors and whatnot. But as I’ve written about, very few. Basic logic gates require only one or two transistors (a few configurations don’t require any). Even a complex gate, like XOR, only needs a handful.

      Neurons, on the other hand, as you said, “thousands” — very complex computational nodes in an even more complex network.
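      (To make the composition point concrete, a quick illustrative sketch, nothing more: a single gate is a trivial function, yet even four of them compose into a new computational unit, the classic four-NAND XOR. Scale that to thousands, add analog and time-dependent behavior, and you’re in the neighborhood of a neuron.)

      ```python
      # Illustrative: each gate is trivial, but a handful compose into a new
      # computational unit (XOR built from four NANDs).

      def nand(a, b):
          return 0 if (a and b) else 1

      def xor(a, b):
          c = nand(a, b)
          return nand(nand(a, c), nand(b, c))

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, "->", xor(a, b))   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
      ```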

      As far as computing any of it, as we’ve said before, the computational requirements are formidable for the easily foreseeable future.

      “Neuromorphic engineering has the potential to eventually close that gap,”

      Yes, and they get closer to the analog Positronic Brain model, which you know I think stands a much better chance of working than a computational model.

      Certainly early days, and as far as I know, no serious “in principle” limits in sight. Our understanding is so crude at this point. There are many parallels, I think, to trying to compute the weather. (Although we have far better weather models than we do brain models so far.)

      • SelfAwarePatterns

        I can see those Stephenson titles being a pain to type. And every time I read the Fall one, I trip over the semicolon and wonder if I’m interpreting the title right. (Probably not, which is likely the effect he’s going for.)

        Generally, proper nouns are minimized in book titles, but with “D.O.D.O.” (OK, that is a pain to type) he seems to be going for a proper acronym. I suppose it’s meant to be enticing, but I find it mildly irritating.

      • Wyrd Smythe

        I’m pretty sure the semi-colon is to distinguish it from the comma. It’s a little weird using a semi-colon in a list with only two items, but that is what one does when the list items themselves have commas.

        When I posted about the book, I removed the periods, which he does in the text as well. It makes all the other acronyms funny. I liked it a lot more than Fall.

        Speaking of book titles, I just finished The Name of the Rose, by Umberto Eco, and he has a really interesting discussion at the end of the book about how he wrote it — but not about the book’s contents or meaning; he refuses to engage on that. He feels it’s entirely up to the reader to interpret. I forget now, but he either suggests himself, or quotes someone who suggests, that the ideal is that the author dies the moment the book is published.

        He means that metaphorically, of course, but it’s the Yang to the Yin of knowing the context, which can go as far as the author explaining the work. Eco feels any interpretation is valid to the reader, even if it surprises, even dismays, the author. There is no obligation to “correct the record” — Eco opposes the idea entirely.

        Along those lines, he does a really interesting riff on book titles. He points out the degree to which they prejudice the story for the reader, and suggests the best titles are proper nouns that neutrally identify the book’s main subject. Barry Lyndon; Tom Jones; Hamlet; Tom Sawyer; Oedipus Rex; and so forth. They tell the reader nothing.

        The Name of the Rose — you don’t really find out what the title means until the very end. It has a parallel (earlier) meaning to Shakespeare’s “A rose by any other name.” The whole book is about symbols on multiple levels, particularly the difference between the sign and what it signifies.

        Quite an interesting read. And holy cow, talk about semi-colon-delimited lists. Eco seems to be channeling the Bible a bit structurally. (The story takes place over seven days.) He will list things, types of heretics, for example, and the friggin’ list will go on for three pages. Very much like the genealogies and lists in the Bible. The plot is a murder mystery set in a monk’s abbey in 1327. The theme is semiotics. The context is the turmoil the Christian religious world went through in those centuries.

        (It’s on Amazon Prime. I just wish the movie was. It stars Sean Connery.)

      • SelfAwarePatterns

        I can see an author not wanting to engage with interpretations of their work. Although if the nature of that work leaves people craving explanation, I tend to think they’ve cheated the reader. It’s one thing if the story is satisfying on at least a superficial level, but if it can’t be understood without interpretation, and the author refuses to provide any clues, then my view isn’t inclined toward charity. (I’ve read too much stuff along those lines, and see it as authorial laziness, throwing all the work on the reader rather than doing the writer’s part to help scaffold their imagination.)

        On titles, I can see that. Most of the time, a title ends up just being a tag to reference the book anyway. So expecting it to do too much work is probably unrealistic. As you note, often a title only makes sense in retrospect anyway, once you’ve finished the book.

        I think I might have seen the beginning of that movie once. It looked pretty interesting, but I had to be somewhere and couldn’t keep watching. And pretty much forgot about it until now.

      • Wyrd Smythe

        “…but if it can’t be understood without interpretation, and the author refuses to provide any clues,…”

        That’s my view of David Lynch in a nutshell. Surrealist who famously refuses to discuss his films. It always makes me wonder about an Emperor’s New Clothes situation. Does Lynch essentially just randomly throw paint at the canvas? (In which case there’s nothing to explain.)

        I dated a surrealist artist once; she made etchings. Very strange etchings. (I have a couple of her works. You can see one here.)

        “As you note, often a title only makes sense in retrospect anyway, once you’ve finished the book.”

        Eco refers to the “honest dishonesty” of an opaque title. It’s honest about not telling you anything about the story. That was his goal in picking “The Name of the Rose.”

        It’s given me some interesting food for thought.

        “I think I might have seen the beginning of that movie once.”

        Sean Connery as an English monk who studied with Roger Bacon. Christian Slater as his assistant, a novice monk of another order. William (Connery) is a former Inquisitor who quit because he no longer believed in the Inquisition. He has a logically trained mind at odds with proper religious thinking.

        A lot of the fun of the book, which is told from Adso’s POV (Slater’s), writing decades after the events he describes, involves the mind-broadening he is subjected to by William’s logical pursuits.

  • Wyrd Smythe

    This is a very recent interview with Roger Penrose where he discusses how he decided against computationalism and went on to write a (very difficult to read; damn thing took me three years) book about it.

    It’s interesting hearing that history, but also seeing him at this age. (If the comments are correct, his teeth really hurt.) Of note, he mentions that Hameroff approached him after the book came out. Microtubules aren’t mentioned at all in The Emperor’s New Mind.

  • Wyrd Smythe

    As an aside, this is the 990th post I’ve published here (including Sidebands, Brain Bubbles, and the SR series; it’s the 822nd regular post).

    I’m kind of shooting for posting the 1000th post on July 4th, my Blog Anniversary.

    (A secondary goal is posting the 1000th regular post on January 1, 2021, but we’ll see how that goes.)
