Consciousness: Final Answer

On the one hand, a main theme here is theories of consciousness. On the other hand, it’s been almost eight years of blogging, and I’ve covered my views pretty well in numerous posts and comment threads. Our understanding of consciousness currently seems stuck pending new discoveries, either in answering hard questions, or in providing entirely new paths.

A while back I determined to step away from debates (even blogs) that center on topics with no resolution. Religion is a big one, but theories of mind is another. Your view depends on your axioms. Unless (or until) science provides objective answers, everyone is just guessing.

But it’s been three-and-a-half years, and, well… I have some notes…

Recently I was asked why I’m so confident about my, perhaps counter-mainstream, view on consciousness, and this post is my long answer. (I could go on at much greater length, but I’ll try to keep this succinct.)

I say my view may be counter-mainstream because I do not believe what I perceive to be a common belief: that human consciousness can be simulated in a computer.

In particular, I do not believe downloading a human mind is, or will ever be, possible. I do not believe a simulation of consciousness (which a download requires) is possible, even in principle.

(Actually, I’ll hedge on that last just a smidge, because I can’t rule out that it’s entirely impossible. Almost certainly practically impossible, and probably even effectively impossible. But absolutely, literally impossible? Not 100% on that.)


A key distinction is between a software simulation of a brain (or mind) and a hardware replication of a brain. I think the latter could give rise to a new mind.

A simulation is a model — computed with numbers — of some process (including, potentially, itself). There are models of weather, vehicles, population, sales, and even models of other models.

A crucial point is that, with one exception, models never have the same effect as what they model.

Weather models don’t produce rain or wind.

A model of a laser doesn’t produce light.

The one exception is when a model models another model. Since the result of a model is a list of numbers, another model can produce the same list of numbers, even if it functions in a completely different way.
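A toy illustration of that exception (the functions and values here are hypothetical, chosen only to make the point): two “models” built on entirely different mechanisms can still emit the identical list of numbers.

```python
# Two "models" that work in completely different ways but produce
# the same output numbers: the sum 1 + 2 + ... + n.

def sum_by_iteration(n):
    """Model A: accumulate the sum step by step."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    """Model B: Gauss's closed-form shortcut, n(n+1)/2."""
    return n * (n + 1) // 2

outputs_a = [sum_by_iteration(n) for n in range(1, 11)]
outputs_b = [sum_by_formula(n) for n in range(1, 11)]
print(outputs_a == outputs_b)  # True: same numbers, completely different process
```

Because the only “effect” of either model is the number list itself, either one can stand in for the other. Rain and wind don’t enjoy that luxury.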

The idea that the human brain/mind can be modeled successfully (that it will produce consciousness) depends on the brain/mind being a model in the first place.


This is an important point that I think a lot of believers hand-wave away (or just ignore).

Effectively, it posits a form of Platonic dualism, a belief that mind is not just something distinct from its (only known) mechanism, but is itself a mathematical object!

Which seems in conflict with the physicalist belief that mind emerges as ‘something the brain does.’

Even the idea that consciousness is ‘what information processing feels like’ depends on the idea of an emergent mind as a consequence of a physical process. How can a simulation of that process cause the same thing to emerge?


I do see it differently when it comes to hardware that replicates the structure and operation of the brain.

Something like the positronic brain that Isaac Asimov wrote about. (In Star Trek: The Next Generation, Lt. Cmdr. Data and his brother, Lore, both have positronic brains as an homage to Asimov’s classic robot stories.)

Specifically, a very large, fully parallel, massively interconnected network with analog properties. (At the least. There may be other spatial or physical constraints to making it work.)

Unless there is something special about biology, or something like souls are real, I see no reason hardware sufficiently like a brain wouldn’t work.

It might need to be carefully trained (like a bio-brain), but presumably hardware works much faster than bioware. And once it was trained, as hardware, it might be possible to clone copies easily.

Further, such a brain/mind, as hardware, potentially has access to digital information. The whole internet can be its memory.

Speed of thought (potentially) vastly faster, plus access to all that information. Perhaps in a durable, very strong body (with laser eyes).

These would be new “people” — a new kind of actor on the stage!

(And we might be able to clone armies of them…)


Getting back to the question at hand (why has my view settled where it has?), my logic goes something like this:

Fact #0: Cogito ergo sum. I perceive that I am conscious. In this context, I accept realism, including that other humans are equally conscious.

Fact #1: Human minds emerge from human brain function. Somehow.

Fact #2: There isn’t one. We don’t know how that happens!

So let’s speculate.

What else emerges from physical function? Lots of stuff. Emergence — that is to say, new behaviors emerging at higher levels of organization — seems a solid physical idea, a property of reality.

(This is not to argue that emergent behaviors can’t be fully explained by the component parts. For purposes of this discussion, I’m taking physicalism and reduction as axiomatic.)

Let us speculate that mind emerges in consequence of a large, complex, highly interconnected, parallel physical network, like a brain. As mentioned previously, this seems a solid assumption.

But what about a numerical model, a simulation, of a brain? Can that work?

Can a mind arise from crunching numbers?


Back in 2015 I wrote a long series of posts exploring the idea of simulations in general, including with regard to simulating the brain (let alone the mind). The final post summarizes the main points. (And has a list of links to the other 17 posts in the series!)

A key point I’ll just mention here is that speaking just computationally, I think a model of a human brain will fail in the same way weather models do: mathematical chaos makes the calculation diverge from reality.

This is an inescapable consequence of how digital computers work.

The computational requirement is along the lines of telling me precisely what the weather will be one year from today. As a physical system, that answer does exist right now, but it’s computationally inaccessible. (Practically and perhaps in principle.)
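A minimal sketch of why chaos bites digital simulation, using the textbook logistic map rather than an actual weather or brain model (the parameters here are illustrative, not anything from a real simulation). Two runs starting a hair’s breadth apart end up bearing no resemblance to each other:

```python
# Sensitive dependence on initial conditions via the logistic map,
# x -> r * x * (1 - x), a standard toy example of mathematical chaos.

def logistic_trajectory(x0, r=3.9, steps=80):
    """Iterate the map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturb by one part in a billion

# The tiny initial gap grows until the trajectories are unrelated.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(divergence)
```

Any rounding error in a simulation acts exactly like that initial perturbation, which is why a model of a chaotic system tracks the real thing only briefly before the two part ways.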


Bottom line here:

  1. Within the context of physicalism, I find it hard to accept the dualism of mind being distinct from brain such that it can be platform agnostic.
  2. I see the computational requirements of analog brain simulation potentially out of reach of conventional computing, and perhaps just plain out of reach (as with weather).
  3. I don’t see how a simulation can produce the same thing as what it models when its inputs and outputs are just numbers. Only physical processes produce physical effects.

I see almost no way around these conclusions.


A common assertion involves the brain, or our minds, being like a computer. That we can learn to do math is offered as supporting evidence.

Let me be clear: The brain is nothing like a computer. Nothing at all.

That neurons can be interpreted as “logic gates” doesn’t make any structure with “logic gates” necessarily a computer. A system of pipes and valves can act like it has logic gates, but it isn’t a computer.
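To make that concrete, here is a toy threshold unit (a deliberately crude “neuron,” not a claim about real neurons) that behaves exactly like a NAND gate. Having gate-like parts is cheap; it doesn’t make the larger structure a computer any more than it makes the plumbing one:

```python
# A toy threshold unit that acts like a NAND gate: it fires (1)
# unless both inputs are active. Gate-like behavior from a simple
# weighted sum and threshold.

def neuron(inputs, weights, threshold):
    """Fire iff the weighted input sum meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def nand(a, b):
    # Inhibitory weights: the unit fires unless both inputs suppress it.
    return neuron([a, b], weights=[-1, -1], threshold=-1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
```

NAND is famously universal for logic circuits, but universality of the parts says nothing about whether the whole system is organized as a computer.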

Except in the most metaphorical sense, and part of the problem here is the vagueness of words (with “consciousness” itself at the top of the list). What we mean by “calculate” and “compute” can vary, with some taking a broad view and others (yours truly) taking a narrow view.

(I explored exactly what “calculate” and “compute” (and therefore “computer”) mean to computer scientists in that long series I mentioned.)


The relationship most humans have with math seems, to me, pretty solid proof the mind is absolutely not, no way, no how, a computer.

Just consider all the bullet points:

  • Nearly everyone needs to be taught math.
  • Many hate it. Some claim they are incapable of it.
  • Lots of math is so hard that we need computers!
  • Our math and statistical intuition is notoriously bad.
  • Many people struggle to use logic (a form of math).

There are more subtle points, such as our inability to see patterns in lists of numbers (but those patterns jump out when we use a computer to visualize the data for us).

It’s funny to me, this idea that just because some of us can learn math, it demonstrates how computer-like our minds are. Really, when you look at it closely, quite the opposite is true!

Our brains, and our minds, are far from being anything computer-like.

Not in any von Neumann architecture sense, nor, for that matter, even any digital computation sense. I’d question whether even quantum computing adds anything to this mix.


My final question with regard to brain simulation involves where the consciousness resides.

With humans, it clearly resides in the physical operation of the living brain. If we simulate that numerically, where is the consciousness?

Inputs would be numbers taken from various sensors, memory, and data from whatever sources. The simulation would do math on these numbers resulting in output numbers.

Presumably, for example, one input number stream might be from a microphone that allows the system to “hear.” An output stream might go to a speaker that allows the system to “speak.”

So I ask this system, “Are you conscious?”

It converts the sound of my question to numbers, analyzes those numbers, looks up a bunch of other numbers, and produces numbers that drive the speaker to produce the sound of, “Huh? Say what?”

Or whatever numbers it produces.

Where is the consciousness? In the numbers? In the process? In the result?

I’m skeptical — very, very skeptical — it can reside anywhere at all.


It does raise an interesting question if the system replies, “Hell, yes!”

Does a simulation just think it’s conscious?

(Remember Janet pleading desperately to not be rebooted in The Good Place?)

How can we know if it really has an inner life even if its output numbers claim one?

Does that even matter? What is the difference between a system that acts conscious and one that really is?

(Perhaps where it matters is in what rights we might accord.)


So, no, I’m not at all onboard with simulated or uploaded minds.

I could be wrong, of course, and maybe our brain/mind system really is a biological Turing Machine evolved across eons. It does seem to be the most complex biological system we’ve encountered.

Science will keep chipping away at it, and, who knows, maybe it’ll crack that nut at long last.

On the other hand, science has found areas beyond itself; Turing, Gödel, Cantor, and Heisenberg all contributed to seeing the limits. Maybe some aspects of consciousness will remain forever ineffable.

Personally, I’d be fine with that.

Stay conscious, my friends!

About Wyrd Smythe

The canonical fool on the hill watching the sunset and the rotation of the planet and thinking what he imagines are large thoughts.

8 responses to “Consciousness: Final Answer”

  • SelfAwarePatterns

    You see Wyrd, you did have more to say on consciousness 🙂

    In some ways, I actually find that we’re not that far apart. I do think copying a mind is possible in principle, but I’m far from confident it would ever be practical with anything resembling current computer systems. The performance hit and capacity requirements may always put it out of reach.

    Although I think the situation might be more hopeful in massively parallel hardware specifically designed for the task, such as the technological brains you discuss. Although there’s nothing that says that such a brain couldn’t itself be part of a computer system.

    Ultimately, I think consciousness lies in the eye of the beholder. Whether such a copy is conscious amounts to whether we decide it’s conscious. The real question is whether such a system could convince us of its consciousness.

    Of course, some people would never be convinced, no matter how similar copied-grandma was to original-grandma. But if copied-grandma could duplicate original-grandma’s behavior to a close enough approximation across an extended time period, some portion of the population would come around.

    Not that I’d volunteer to be the first mind copied in any destructive copying process, at least unless I was at death’s door already.

    On computation and the brain, it doesn’t seem productive to me to look at how good or bad people are at math as any kind of evidence, one way or the other, for this question. I can’t use Tetris to program my computer or do math with it, but that doesn’t mean Tetris isn’t a program doing calculations. So human computers don’t prove computationalism, but people struggling with math don’t disprove it.

    “Stay conscious, my friends!”
    As long as I can! 😀

    • Wyrd Smythe

      “In some ways, I actually find that we’re not that far apart.”

      Yeah, it’s mainly in how we define “compute” and in how we regard a numerical model of brain/mind. We are on the same page with a physical replication of a brain. And lots of other things. 🙂

      I didn’t really mention (but the once) my laser analogy, but I think it’s the key point of difference.

      “Ultimately, I think consciousness lies in the eye of the beholder.”

      Certainly in practical ways, I’d agree. The caveat is that, if I’m studying consciousness with the goal of fully understanding it, then it does matter whether the system is “truly” conscious.

      As we’ve both touched on, that may be a tough nut all on its own.

      “So human computers don’t prove computationalism, but people struggling with math don’t disprove it.”

      Fair enough. It is true that an application’s abilities with math don’t reflect all the math going on under the hood.

      It does reflect the precision of that math, though. Tetris never forgets where it is or makes a mistake due to inattention. Everything it does reflects the precision of its mathematical algorithm (and the platform it runs on).

      Not seeing any of that precision in human behavior may not prove a lack of underlying computer (who can prove a lack?), but I think it’s suggestive.

      Put it this way: To the extent it says anything, it says “No!” to computers. 😀

      • SelfAwarePatterns

        “then it does matter whether the system is “truly” conscious.”

        Ah, I didn’t make myself clear. I actually don’t think there is any “truly” to it, at least unless we can agree on an objective definition of “consciousness”. But the only commonly accepted definitions are phenomenal ones such as “subjective experience” or “something it is like.”

        From an objective point of view, we can talk about cognitive capabilities such as perception, attention, memory, imagination, or introspection, capabilities which can be tested and measured. We can even talk about how similar or dissimilar those abilities are to ours.

        But whether those abilities amount to being conscious? I don’t think there’s any fact of the matter on that question. Indeed, I think regarding consciousness as an objective quality is a mistake. It’s like discussing whether a particular system is lovely.

        I think consciousness is like beauty. It only exists subjectively.

      • Wyrd Smythe

        “It’s like discussing whether a particular system is lovely.”

        Ah, well, we’ve found something else to disagree about. 🙂 I don’t think consciousness need be as purely subjective as beauty, although our current understanding of it certainly is.

        On Sabine Hossenfelder’s blog not too long ago there was a very long-running debate triggered by the idea of panpsychism. Much of it turned on Sabine’s dismissal of Chalmers’ “hard problem” as being hard at all.

        Her central point being that, assuming physicalism and reduction, if consciousness exists, then it is merely a property of brain function that science will ultimately decode. In her view, consciousness is simply what it “feels like” to be in a sophisticated sensory processing system.

        She absolutely sees consciousness as an objective property of the system, and while I disagree with her on quite a few points here, I, too, tend to see consciousness as an objective property.

        However, I can quite understand how an instrumentalist could view it as something to be accounted for only subjectively without needing to involve what may (or may not) be under the hood.

      • SelfAwarePatterns

        I did contribute one little comment in that panpsychism debate. It was right after Goff had posited that maybe quantum spin (or something like that) was consciousness. My point was that panpsychists use a definition of consciousness that doesn’t match the most common intuitions about it, one that most of us, once we unpack it, won’t find interesting.

        I personally think the real hard problem of consciousness is the psychological difficulty many people have in accepting that substance dualism isn’t true, or the full implications of that conclusion. I think once we have answers to Chalmers’ “easy problems”, we’ll have the only account we’ll ever have, but it will be enough for us to build a system that can convince many of us of its consciousness, although philosophers will argue endlessly over whether or not it really is.

        Actually, as an instrumentalist, I’m very interested in what happens under the hood. I’ve read dozens of books on neuroscience in pursuit of that interest. I just haven’t found any reason to conclude there is a single objective thing under the hood that matches most people’s conception of consciousness.

      • Wyrd Smythe

        I did notice your comment! IIRC, fairly early in the debate? (Which went to many hundreds of comments before it finally died down.)

        “I just haven’t found any reason to conclude there is a single objective thing under the hood that matches most people’s conception of consciousness.”

        Do we agree all humans do seem to have an objective something under the hood? That we all share in the subjective experience of consciousness? (There is something it is like to be human?)

        I agree we have difficulty defining it, perhaps because, as with most fundamental things, it is irreducible. It can be endlessly described, but not easily defined. (Love is another irreducible human experience with libraries of poems, songs, and stories describing it, but which is hard to pin down as to exactly what it is.)

        I don’t think consciousness is subjective like beauty (which truly is in the eye of the beholder). It may be forever ineffable, but I think it really exists as an objective, emergent property of reality.

        Hmmm… Is this a separate point of disagreement? We disagree about the algorithmic nature of consciousness. Do we also disagree consciousness is an objective property the right kind of system can possess? (That doesn’t sound right. Are we on a semantics issue here? Is it about what’s defined as “conscious”?)

      • SelfAwarePatterns

        I think it’s reasonable to assume that all humans have similar cognitive capabilities. But it seems like any single one of these capabilities can be diminished or absent without dispensing with our intuition that the person is conscious. Any small subset of those capabilities can be knocked out and we’ll still perceive them to be conscious.

        For example, someone’s prefrontal cortex can be destroyed, having dramatic effects on their ability to feel and plan, but we’ll still perceive the resulting person to be conscious. They can have damage to their superior parietal lobe, leading to dramatic gaps in perception, and we’ll still perceive them as conscious. However, if they have extensive damage to both the frontal and parietal lobes, there may not be much left to trigger our intuition that anyone’s still home.

        Which is to say that there appears to be no strict list of functionality that is necessary and sufficient to trigger our intuition of consciousness. That intuition apparently can be triggered by an amorphous and shifting subset of the capabilities of a healthy human, by in essence a quorum of those capabilities.

        Of course, you can define “consciousness” in such a way as to make it objective, but any such definition won’t match up with all our intuitions, because those intuitions aren’t consistent.

      • Wyrd Smythe

        Okay, I think we are on the same page here. I do agree that defining consciousness is nearly (if not exactly) impossible; it’s irreducible and multifaceted. Certainly in this sense, it is subjective.

        I’d only note that damage and other special conditions are in reference to the default condition, which is a human with “all their senses” (which I think is what we mean generally by consciousness). It is this consciousness property that a system can have (the only system we know of so far being the brain) that I’m saying is objective.

        At least in the abstract sense. It’s possible we may never get a handle on it and will forever remain stuck in subjective analysis of consciousness. But I do think (unlike beauty) it’s a real property. It’s possible (at least in principle) to recognize it in a system.
