Four Doors

Last time I introduced four levels of possibility regarding how mind is related to brain. Behind Door #1 is a Holy Grail of AI research, a fully algorithmic implementation of a human mind. Behind Door #4 is an ineffable metaphysical mind no machine can duplicate.

The two doors between them lead to physical models that recapitulate the structure of the human brain. Behind Door #3 is the biology of the brain, a model we know creates mind. Behind Door #2 is the network of the brain, which we presume encodes the mind regardless of its physical construction.

This time we’ll look more closely at some distinguishing details.

We’ll walk past Door #4 quickly again because it’s a dead-end as far as AI is concerned. Computers might be very powerful and useful, but they will always be mindless.¹

Behind Door #3 we find ourselves, our biological brain-mind, and the key point here is that humans experience qualia and a personal mind’s «I».

[Image: Vitruvian Man, captioned “This design works!”]

We assume that these define what it means to be conscious and self-aware.

Humans create biological minds routinely at the rate of about 350,000 per day.²

It seems possible this could be sped up and done artificially.

Biological minds have the disadvantage of taking time to train, but advances in our understanding of human memory might result in ways to quickly install knowledge (even experience, which is just memory of past events).

The biological similarity between artificial clones and humans raises some ethical questions. A cloned human would seem due the same rights as one made the old-fashioned way.

But Door #3 serves mainly as a reference point. It embodies what we know works (given an assumption of physicalism). If we never get Door #2 (let alone the more distant Door #1) open, then we’re stuck just making new “people.”

An interesting point involves somehow transferring an existing mind into a clone. Presumably a genetic clone could grow the same network.³

But the person is more than the network. They’re also — crucially — all the learning encoded within that network by the long-term potentiation (LTP) of its synapses.

[Image: brain network, captioned “A quintillion nodes!”]

Those need to be transferred from the existing brain to the clone or all you have is the blank slate of a new-born human.

A more sensible engineering goal is replicating the network, the connectome, along with the synapse weightings, in a non-biological manner. That abstract network — physically implemented — is what’s behind Door #2.

The network itself is the blank slate of a new-born human. All our instincts and basic drives are encoded in that complex, highly connected network. Some synapse weightings may even be part of our genetic heritage.

Everything we learn once born is encoded in the LTP of the synapses. But to the extent we grant a new-born human a mind (and we do), an unweighted network should likewise give rise to some sort of (infant) mind.
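
To make that distinction concrete, here’s a toy sketch in Python (all names and numbers are made up; real neural data looks nothing like this). The connectome says which nodes connect to which; the learning is the weight on each connection:

# Toy sketch (illustrative only): the network (connectome) versus the
# learning encoded in it (the synapse weights).

# The "genetic" part: which neurons connect to which.
connectome = {
    "n1": ["n2", "n3"],
    "n2": ["n3"],
    "n3": ["n1"],
}

# A new-born network: the structure is present, weights at some default.
blank_weights = {(pre, post): 0.5
                 for pre, posts in connectome.items()
                 for post in posts}

# A lifetime of LTP: the same structure, but the weights now encode
# everything learned. Transferring a mind means capturing both.
learned_weights = dict(blank_weights)
learned_weights[("n1", "n2")] = 0.93   # strengthened by experience
learned_weights[("n2", "n3")] = 0.12   # weakened by experience

print(blank_weights.keys() == learned_weights.keys())  # True: same connectome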

For example, infants experience qualia. Their «I» may not be well-developed, or even apparent, at first, but they experience the world. Presumably (through the appropriate I/O), a new-born network could do the same.

We should be able to see the activity of the network experiencing qualia (just as fMRI shows that activity in human brains).

[Image: fMRI, captioned “Cool & burning thoughts!”]

The key point here is the physical correlation between experience or thought and what we see in the brain. The study of neural correlates is the study of exactly that.

To open Door #2 all we require is that, in replicating the physical operation of the brain, we also end up replicating the (physical!) operation of the mind.

Interestingly, we don’t even really need to understand exactly how the network gives rise to a mind. We just need to replicate the circumstances.⁴

The goal of transfer becomes more feasible behind Door #2. Creating (I really like the term weaving) a mechanical network that precisely matches an existing one certainly seems doable.

That leaves two big challenges: first, the need to scan an existing human brain in enough detail (including the synapse strengths!); second, the need to transfer both the connectome and the synapse strengths to the new network.

As we saw in The Human Connectome, the amount of information involved is (at least) in the petabyte range. Even at extremely high data rates, it can take many hours to transfer that much data.
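
To put “many hours” in rough numbers, here’s a back-of-the-envelope calculation (the petabyte figure is the lower bound from that post; the 100 Gbit/s link speed is purely an assumption):

# Back-of-the-envelope: how long to move a connectome's worth of data?
data_bytes = 1e15            # 1 petabyte (the lower bound from the series)
rate_bits_per_sec = 100e9    # assumed rate: a very fast 100 Gbit/s link

seconds = (data_bytes * 8) / rate_bits_per_sec
print(f"{seconds / 3600:.1f} hours")   # about 22 hours, for just 1 PB

And that’s the floor; if the real figure is several petabytes, the same link needs days.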

Perhaps one solution is a very high-resolution fMRI scan capable of capturing the brain’s structure in sufficient detail. Images, especially holographic ones, can carry huge amounts of data. Such a scan might capture the brain as holographic images in mere minutes.

[Image: brain scan, captioned “An image is worth 10,000 bits (at the very least).”]

Analysis of that image data and extraction of the connectome might take much, much longer, but at least the subject isn’t forced to sit there for it.⁵

Then all that extracted data would allow (perhaps through some kind of 3D printing) the weaving of a machine network that matched.

If the printing process were good enough, even the synapse strengths might be printed. If not, we’d need another stage to implant the synapse LTPs. (But if 3D printing were sufficient to print the whole thing, creating dozens of copies of yourself wouldn’t be a problem! “This little mind-clone went shopping. This little mind-clone stayed home…”)

Of the two, scanning and transfer, the former seems the more formidable due to the need to obtain so much extremely high-resolution data. Even so, it seems primarily an engineering challenge.⁶

Note also that the formidable scanning challenge exists for all types of mind transfer from human to machine. Regardless of how that data is ultimately used, it first has to be obtained.⁷

Which brings us to Door #1, behind which is a purely software mind.

This is truly the land of unknowns. We have no real reason to think mind is (purely) algorithmic. Nothing else natural is.

There is some small reason to doubt a physical model will work (maybe some form of dualism is true). That doubt becomes greatly magnified when we consider a software model.

[Image: slide rule, captioned “This slide rule is set for 1/13, so variations on that fraction appear all along the scale’s length. For example, 2/26 (or 15.4/200).”]

Consider the difference between a slide rule and a calculator.

In a slide rule, printed scales physically recapitulate the reality of (base ten) logarithms. By sliding one scale against another, we create a ratio (a rational number), and that ratio holds along the entire length of the juxtaposed scales. We can read variations of the rational number (e.g. 1/2, 2/4, 3/6, etc.) at all points along the scales.
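
For the curious, here’s a little Python sketch of that trick (illustrative only): positions along the scales are base ten logarithms, so sliding one scale against another adds logs, and a single setting fixes the ratio at every point:

import math

# Positions on a slide rule's C/D scales are log10 of the printed numbers,
# so sliding one scale against the other *adds* logarithms.

def position(x):
    """Physical position along the scale (one decade = 1 unit)."""
    return math.log10(x)

# Set the slide for the ratio 1/13: offset the scales by log10(13).
offset = position(13) - position(1)

# Every pair of aligned readings now shows that same ratio:
for top in (1, 2, 3, 15.4):
    bottom = 10 ** (position(top) + offset)
    print(f"{top} / {bottom:.1f} = {top / bottom:.4f}")   # always ~0.0769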

[Image: calculator, captioned “Calculating…”]

In a calculator, a clocked flow of electrons through logic circuits results in a series of binary logic steps that ultimately delivers the specific number requested.

(And just the one, not a continuum of answers such as the slide rule provides.)

Both processes can deliver an answer to whatever precision we put into their construction. Both can achieve a similar result. But they use starkly different processes to get it.
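
And here’s a sketch of the calculator’s side of that contrast (shift-and-add binary multiplication, a simplified stand-in for what real hardware does): a clocked series of discrete logic steps that delivers exactly one number:

# A calculator's way: discrete binary steps, one specific answer.

def multiply(a, b):
    """Multiply non-negative integers via binary shift-and-add."""
    result = 0
    while b:
        if b & 1:          # low bit set: add the shifted multiplicand
            result += a
        a <<= 1            # shift the multiplicand left
        b >>= 1            # shift the multiplier right
    return result

print(multiply(13, 12))    # 156: exactly one number, no continuum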

The crucial question with regard to consciousness is whether it arises from the result or from the process.

If consciousness is the result of a process, it can be calculated by an algorithm, and we can open Door #1. It’s as simple as calculating the right result.

But if consciousness is the process itself, then it cannot, and we cannot. Consciousness lies, not in the result, but in the process, so all we can do is emulate that process.

Next time we’ll wrap up this series by looking specifically at systems with physical effects (such as lasers) that emerge only from inherent properties of the physical system.

Specifically, we’ll look at how a software model of such a system can very precisely emulate its behavior and describe what happens, but it cannot cause the same physical phenomena to arise.

That requires those inherent physical properties!

To put it simply: A software model of a laser does not emit laser light.

A software tree falling in a software forest only creates numbers representing the sound it makes. A virtual falling tree does not make a physical sound!


[1] It’s possible they could act like conscious beings, and it’s even possible the act could be so convincing it calls into question what we really mean by conscious or self-aware. This is a partly philosophical question beyond the scope of this series.

[2] The process involves some fun, a long wait, and then a large investment in training, but if it’s done right, it usually produces decently functioning units.

[3] But do environmental conditions during growth affect the connectome? I suspect they do, but let’s assume technology allows somehow growing an identical physical brain.

[4] Which, again, is done routinely about 350,000 times a day by unskilled labor, so — really — how hard can it be?

[Image: synapse photo, captioned “A synapse!”]

[5] One little gotcha is the need to capture the synapse strengths that encode the lifetime of learning and experience. That requires extremely high resolution and the ability to determine those strengths just from the physical structure of the synapse.

[6] The more serious issue is that it might be an intractable engineering challenge. Or maybe too expensive for common use.

[7] That may forever pose a problem for uploading our minds. It may be too difficult to extract the needed information from a human brain.

(What if it could be done at the expense of dissecting the existing brain? Would you sacrifice yourself for the chance to live in a computer?)

14 responses to “Four Doors”

  • SelfAwarePatterns

    “What if it could be done at the expense of dissecting the existing brain? Would you sacrifice yourself for the chance to live in a computer?”

    I tend to think any foreseeable scan that can get at the necessary information will have to be destructive. I’d do it if it had been successfully done a number of times already, or if I was about to die anyway. Subjectively, it should feel like losing consciousness prior to the procedure and then waking up in your new incarnation.

    If non-destructive scanning is possible, it could be problematic if there isn’t a mechanism to synch memories between the new you and the old you. If memories can be synched, it would allow the old you to remember the feeling of being the new you. Subjectively, it would feel like you sometimes wake up in your original body, and other times wake up in the new one. I think it would make the mortality of the old body much easier to bear. Of course, if non-destructive scanning isn’t possible, then this isn’t an issue.

    Even without memory synching, maintaining backups seems prudent.

    • Wyrd Smythe

      “If non-destructive scanning is possible, it could be problematic if there isn’t a mechanism to synch memories between the new you and the old you.”

      Problematic in that the copy would diverge? It definitely would. But that means yet another new technology: Putting memories back into a biological brain. That’s a whole other thing!

      • SelfAwarePatterns

        It is a new problem. And it would definitely require detailed understanding of how memory and the mind works.

        But by problematic, I meant that without it, the original person might well feel left behind. In that scenario, I’m pretty sure I wouldn’t want my new copy running until after I was gone.

        On the diverging issue, even with memory synching, the copies would diverge over time. They’d experience things in different orders, which would gradually fork their personalities further apart.

        They might even eventually decide that they’ve grown too far apart and break the memory link. It might be like a legal divorce, with lawyers involved to figure out who gets how much of the savings account and who gets the car.

      • Wyrd Smythe

        🙂 At the very least some good fodder for science fiction stories! There are interesting social and legal questions if creating new minds becomes possible. If we can clone existing minds? All the more so!

        (As I’ve said, I see creating new artificial minds in the foreseeable future, but uploading existing human minds at best in the distant future. So many issues to solve there.)

        Come to think of it, Greg Bear’s Eon series has AI people living in computer space (similar to Greg Egan’s Amalgam) and does get into some of the social and legal aspects.

        (BTW: Speaking of SF, I finished The Martian. It was okay. Good story. It’ll be interesting to see the film version.)

  • Steve Morris

    “We have no real reason to think mind is (purely) algorithmic. Nothing else natural is.”
    I don’t know what you mean by this. We run computer models of weather systems, galactic evolution, protein folding, and atomic structure, to name a few natural processes. Why not the brain?

    • Wyrd Smythe

      “I don’t know what you mean by this.”

      I mean that abstract mathematical objects (algorithms) do not appear as natural physical objects. They’re things we either invent or discover (depending on your basic philosophy about math).

      “We run computer models of weather systems, galactic evolution, protein folding, and atomic structure,…”

      Absolutely. Models that describe those things, but aren’t those things. (Monday’s post, already set to publish, gets into this in detail.)

      For now: The weather model doesn’t rain on you. The black hole in the galactic model doesn’t vanish matter from the universe. The protein models don’t nourish (or harm). The atomic models don’t emit alpha, beta, or gamma particles. (They only describe those things.)

      I think it boils down to an important distinction between the result of a calculation and the process of a calculation.

      All any algorithm can do is take some numbers and do math to make new numbers. The idea that an algorithm can generate consciousness generally assumes that those new numbers somehow amount to self-awareness, that consciousness is the result of a calculation. (IF consciousness is algorithmic, then this is true.)

      The alternative is that somehow the process of calculating those numbers gives rise to self-awareness, but that seems an extreme view. That process is hugely different than the physical processes of the brain.

      Why would two vastly different physical processes both give rise to the same self-awareness? It doesn’t work that way with any other physical model.

      • Steve Morris

        A brain is a physical object, but a mind is not – that is why people have historically believed in the idea of a soul, which is explicitly not a physical thing.

        A financial trading algorithm makes real profits and losses. It doesn’t model trading, it trades. An email app communicates real messages between real people in the physical world. It doesn’t model communication, it communicates.

        Consciousness isn’t something we can touch. We deduce that people or animals are conscious by the way they behave. We can deduce whether a machine is conscious by the way it behaves. Consciousness seems like a perfect example of something that could be simulated by a computer, if we knew how to program it.

        When I think, all I am aware of doing is processing thoughts. I am not conscious of anything else happening inside my head. It feels very much like an algorithm. Maybe there is something wrong with me? Do you experience consciousness differently?

      • Wyrd Smythe

        “Do you experience consciousness differently?”

        Since I don’t see how algorithms ‘experience’ anything, and since I do, I’d have to answer: Yes! 😀

        “…the idea of a soul…”

        Yeah, Door #4. (Just FTR, here I’m considering all forms of dualism out of scope here. Throughout I assume physicalism.)

        “A brain is a physical object, but a mind is not…”

        Perhaps. We don’t know what a mind really is. The key point here is that, whatever mind is, it supervenes on the physical brain (assuming physicalism). We know of no instance otherwise.

        “A financial trading algorithm… An email app…”

        Both absolutely are models. They use internal abstract representations of (particular parts of) the world to convert input numbers to output numbers. That’s all they do.

        They happen to be running in physical machines connected to the real world, and because of clever programming, the numbers the algorithms spit out make things happen in that world.

        Look at the assembly code of your email app and tell me that isn’t an abstract internal model. An email is an abstract object consisting of labeled strings. Just using the internet involves layers of abstract models.

        “We deduce that people or animals are conscious by the way they behave.”

        There’s a crucial first step, though. Cogito ergo sum. We know we’re human and conscious. We observe others who claim they are also human and conscious, and based on their similarity to us (in behavior and in what they report), we grant authority to their testimony.

        “We can deduce whether a machine is conscious by the way it behaves.”

        But since a machine is not like us, it lacks the authority of similarity. Its behavior may only be apparent, not real. If it really is self-aware, presumably it will eventually convince us; no reason it wouldn’t.

        “Consciousness seems like a perfect example of something that could be simulated by a computer, if we knew how to program it.”

        Why? (Just to be sure we’re on the same page, we’re talking about self-awareness. Something capable of saying with the same authority we do, “I think, therefore I am.”)

        “When I think, all I am aware of doing is processing thoughts. I am not conscious of anything else happening inside my head.”

        I’m not quite sure why that’s meaningful. ❓ It’s true that much goes on below the “conscious horizon” of our minds. There’s the Freudian sense of the unconscious, but also the lack of any sense of the mechanism at work.

        (If you’re comparing this to algorithms, some code languages are reflective (to varying degrees) and some are not at all. Python is, like, 100% reflective; BASIC dialects are usually on the 0% side. Assembly might be the ultimate in reflective code, but even so it can’t access the mechanisms that make it work.)
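
        For instance, here’s a minimal sketch of what “reflective” means (illustrative only):

        import inspect

        def greet(name):
            return f"Hello, {name}!"

        # Reflection: the running code examining itself.
        print(greet.__name__)               # 'greet'
        print(inspect.signature(greet))     # (name)
        print(inspect.getsource(greet))     # the function's own source text

        # The limit: the code can inspect itself, but not the CPU,
        # microcode, or physics that actually make it run.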

        “It feels very much like an algorithm. Maybe there is something wrong with me?”

        If your self-awareness really feels like an algorithm,… I’d have to think so, Steve! XD Is there a USB port anywhere on your body? (I have no idea where John Connor lives! No honestly! o_O )

        But seriously, is the love and joy you feel an algorithm? Or when you’re frustrated or sad? What happens when you drop a bowling ball on your foot? Algorithm branches to the “ouch!” branch? 🙂

      • Steve Morris

        The trading system and email app are connected to real-world systems, just like the consciousness of a human or animal is able to control the body. I see this as a perfect analogy. The brain produces electrical signals that travel along nerves. Computers produce electrical signals that travel along wires.

        I honestly don’t understand why you are doubting this. Here’s the algorithm I’m running:

        IF (Wyrd disputes reasonable case for no apparent reason) THEN (
            FEEL (frustrated);
            MAKE (reasoned reply);
            IF (Wyrd still disputes this and still no reason I can understand) THEN (
                FEEL (more frustrated);
                MAKE (nerd joke);
            ELSE
                FEEL (happier);
            )
        )

        Seriously, I understand that the algorithm my brain is running is not like the algorithm that runs WordPress. I get that. But it still feels like I’m running some kind of algorithm and that my brain is basically doing data processing, albeit with an enormous amount of data.

      • Wyrd Smythe

        “The trading system and email app are connected to real-world systems, just like the consciousness of a human or animal is able to control the body.”

        Absolutely!

        “The brain produces electrical signals that travel along nerves. Computers produce electrical signals that travel along wires.”

        Absolutely!

        Now look at the situation you’ve just painted. In both cases, we have a black box (a brain or a computer) that produces output. There is no disagreement whatsoever that the black boxes can produce outputs that can affect the world. I’ve talked about that from the beginning.

        But that’s not the point. The point is what’s happening inside the black boxes.

        “I honestly don’t understand why you are doubting this. Here’s the algorithm I’m running:”

        Are we still talking about computers affecting the outside world with signals? I don’t doubt it; I said it myself in the comment you’re replying to.

        Or are we switching topics from I/O and saying I doubt consciousness is an algorithm?

        If so, then, yes, I absolutely doubt it. That’s the point of all these posts!

        “IF (Wyrd disputes reasonable case for no apparent reason) THEN”

        Heh, cute! 🙂

        But other than the actual IF-THEN-ELSE, nothing in that “algorithm” is algorithmic.

        (If you dispute that, then go ahead and write the full algorithm in any language you choose. That we humans can use basic logic doesn’t mean that’s all we can use or all we are!)

        What, exactly, is the function “FEEL(frustrated)”? What object are you passing it? How are you defining “reasonable case” or “no apparent reason” or “reasoned reply”? That’s an awful lot of hand-waving.
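
        To make that concrete (all the names here are hypothetical), the control flow is trivially codable, but every substantive step is an undefined stub:

        def feel(emotion):
            raise NotImplementedError("What physical state is 'frustrated'?")

        def make(act):
            raise NotImplementedError("What process generates a 'reasoned reply'?")

        def disputes(person, case):
            raise NotImplementedError("How is 'no apparent reason' decided?")

        # The IF-THEN-ELSE is the easy part; the stubs are the entire problem.
        if disputes("Wyrd", "reasonable case"):
            feel("frustrated")
            make("reasoned reply")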

        “But it still feels like I’m running some kind of algorithm and that my brain is basically doing data processing,”

        It doesn’t feel like that to me, but, really, what would it feel like to be an algorithm? I’m not sure feelings are reliable on this. (Think of all those who say it “feels like” the universe can’t possibly have just happened, but just had to have been created. That’s an entirely sensible way to feel but there’s no hard science supporting the view.)

        More to the point, really? The love we feel for family? The joy of accomplishment? The fun of a great meal with friends? The intensity of a well-told story? The fulfillment of creating something?

        This feels like an algorithm? Imagination, creativity, passion, come from algorithms?

        It might be useful to distinguish between an algorithm and information processing. There are cases where they are not the same thing. The real world can (obviously) affect the real world. Gravity and glass can bend light. The processes of RNA and DNA involve a lot of information, but no algorithms as such.

        The problem is that “information processing” is a fairly empty term so it needs to be well-defined in use. A lot of the places it’s used (“The Mind is Information Processing!”) it’s more of a slogan with no definition.

        In any event, I want to be very clear that the argument I’m making is against strictly algorithmic minds, not artificial minds (or even necessarily our minds running in a machine). I think there’s a good chance that a physical system that replicates the physical system of our brain will work like a brain (give rise to mind).

        But I don’t believe consciousness is an algorithm, an abstract mathematical object.

        I’ll stack my “feeling” (belief) on the matter against yours (and call it a tie), but also three weeks of posts arguing the point rationally. 😀 (The final argument is the two posts coming out Monday and Tuesday.)

        Ultimately, in the absence of hard science, we believe what makes the most sense to us. If mind as an algorithm continues to make the most sense to you, fair enough. I’ve been chewing on this a long time, and for me it keeps coming up, “No way, Jose!” 🙂

  • Steve Morris

    So, if I understand you correctly, you are postulating that we might one day build an artificial brain, but that it would not be a computer.

    You are also arguing that we might be able to model a mind on a computer, but this would not actually be a mind.

    Is this a fair characterisation?

    • Wyrd Smythe

      “[Y]ou are postulating that we might one day build an artificial brain, but that it would not be a computer.”

      Correct — so long as “computer” is defined as a digital (processes discrete symbols) machine running a stored program (an algorithm). What we would build would necessarily replicate the structure and general operation of a brain.

      “You are also arguing that we might be able to model a mind on a computer, but this would not actually be a mind.”

      Correct, again! It would not experience qualia (or anything), and it would not be self-aware.

      “Is this a fair characterisation?”

      Sounds like! 😀

      • Steve Morris

        OK, so I understand what you are saying, but I don’t understand your reasoning at all. Your differentiation between a result and a process baffles me, but I will tune in next time to see if I can follow your argument. 🙂

      • Wyrd Smythe

        You may just have to conclude I must be an idiot.

        WRT process and result, consider the result of: Wyrd… is in the house! o_O

        To have accomplished that result I could: [1] bulldoze through the wall; [2] throw a rock through a window and follow the rock; [3] throw myself against the door (breaking it down); [4] open the door and walk in; [5] other.

        The result is the same, the processes are vastly different. More to the point, what happens during those processes is vastly different. That’s the key.
