Information Processing

Over the last few weeks I’ve written a series of posts leading up to the idea of human consciousness in a machine. In particular, I focused on the difference between a physical model and a software model, and especially on the requirements of the software model.

The series is over and I have nothing particularly new to add, but I’d like to summarize my points and provide an index to the posts in this series. It seems I may have given readers a bit of information overload: too much information to process.

Hopefully I can achieve better clarity and brevity here!

At the center of all of this is the “Holy Grail” of AI, replication of human self-aware consciousness, not by just a machine, but by software (which means some machine is required, but the exact kind of machine is irrelevant so long as it’s a Turing Complete machine).

The general description (more a reference, really) of such self-aware consciousness is the phrase “something it is like” (to be human).

This is related to the famous phrase due to Descartes: “Cogito ergo sum.”

We all have a real-time life-long first-person narration of our lives inside our heads.

A private inner movie that we experience.

The question is why?

What’s going on there?

I’ve tried to make three points.

§

#1. Perhaps mind supervenes on the physical brain.

Laser light, radio waves, pressure effects, sound vibration, heat from current or friction… these are physical effects that supervene on specific physical systems.

No software model of these systems does what the physical systems do. It can describe them, even very precisely, but it cannot replicate the physical effects of the thing it models.

Perhaps mind, likewise, supervenes on specific characteristics of the brain. I listed some possible sources of such dependence in the No Ouch! post, which focuses on this first point.

My unmet challenge has been this: Name one system for which the software model (without special hardware) gives rise to the same effects as the physical system it models.

There is only one system I can think of for which the software model accomplishes the same thing. And that is a software model itself.

The idea that a software model of mind can replicate mind, because of Church-Turing, requires that mind already be some kind of software.

But it doesn’t look like any software we know, nor is it clear to what extent the mind’s putative software can be separated from its hardware. It may be that, as with lasers and everything else, both are required.

§

#2. The digital and analog worlds are irreconcilably different.

It’s hard enough to calculate with the real numbers. A larger class of numbers, the transcendental numbers, present an additional problem in not being algebraic. The Transcendental Territory post focuses on this.

Chaos theory tells us that calculation with finite numbers (such as digital computers require) cannot model some analog systems with sufficient precision for any length of time.

The longer such systems run, the further they diverge.
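A toy illustration of that divergence (my own sketch, not from any of the posts): run the chaotic logistic map from the same starting value stored at two slightly different precisions.

```python
# The logistic map x' = 4x(1-x) is chaotic: a rounding difference in
# the 8th decimal place grows until the two runs bear no resemblance.
# (Illustrative sketch only; the starting value is arbitrary.)

def logistic(x):
    return 4.0 * x * (1.0 - x)

x_full = 0.123456789
x_trunc = round(x_full, 7)   # the "same" number, a little less precision

max_gap = 0.0
for _ in range(60):
    x_full = logistic(x_full)
    x_trunc = logistic(x_trunc)
    max_gap = max(max_gap, abs(x_full - x_trunc))

# By now the initial error of ~1e-8 has ballooned to order 1.
```

The point isn’t the specific map; it’s that any finite-precision calculation of a chaotic analog system eventually loses track of it.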

Assuming software minds work, one possibility is that such a mind diverges from an identical physical mind much as two human twins diverge as they experience different things. It would be a normal mind, just different.

Or, still assuming software minds work, another possibility is that such divergence results in complete failure to launch or in an insane (or otherwise useless) mind.

All of this assumes a digital system can even accomplish the necessary calculations with the necessary precision. Given that we have no clue what those calculations might be, no one can say.

§

#3. Calculation is limited.

I just mentioned how it’s limited in terms of numerical precision. It’s also limited in ways similar to Gödel’s Incompleteness theorems. There are things that cannot be calculated.

One example is the Turing Halting problem (discussed in Halt! (or not)). (Given that Gödel’s theorems are about mathematics, it’s possible they apply as well.)

Simply put, you can’t calculate with numbers if you can’t calculate those numbers in the first place.
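The Halting Problem argument can be caricatured in a few lines (my sketch, not the post’s wording): any claimed halting oracle can be fed a “contrarian” program that asks the oracle about itself and then does the opposite.

```python
# Toy rendering of Turing's diagonal argument. `halts` stands for a
# claimed halting oracle; "looping forever" is modelled by returning
# a label, since we can't actually run forever here.

def make_contrarian(halts):
    def contrarian():
        # do the opposite of whatever the oracle predicts about us
        return "loops forever" if halts(contrarian) else "halts"
    return contrarian

# Whatever a purported oracle answers, it's wrong about its contrarian:
for verdict in (True, False):
    oracle = lambda prog, v=verdict: v       # fake oracle with a fixed answer
    behaviour = make_contrarian(oracle)()
    predicted = "halts" if verdict else "loops forever"
    assert behaviour != predicted            # the oracle is contradicted
```

No matter how the oracle is implemented, this construction defeats it, which is why no general halting decider can exist.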

§

Fairly common in discussions is the assertion that mind is “information processing.”

The problem here is a lack of definition. The phrase “information processing” is too general to mean anything. All kinds of things can be said to process information.

Worse, all types of information processing we know show no indication of any aggregate or collective behavior that might give us reason to suspect more is going on than simple binary logic.

In fact, binary logic goes to great lengths to prevent that sort of thing!

The belief that consciousness emerges from such a system is just that: a belief. One with no supporting evidence. It’s extrapolation, at best.

It is clear that processing information is part of consciousness, just as the use of logic is, but it’s a stretch to argue those things are consciousness.

§

There is also the assertion “the brain is like a computer.”

I don’t see how. The brain’s architecture is nothing like any digital computer. There is little that matches between them.

Brains are analog; computers are discrete. Brains use bio-chemistry (in which electrons do play a role, of course); computers use only electron flows and voltages in metallic wires.

Low-level components in brains are extremely complex; low-level components in computers are dead simple. The interaction of those components in brains is also complex; in computers, again, dead simple.

Brains are a very large, hugely interconnected network of active nodes; computers have a very simple architecture. Brains are not von Neumann machines; computers are.

Computers are precise and have perfect memory and math skills; brains are imprecise, forgetful, and generally terrible at math.

The more accurate phrase seems to be: “Brains are nothing like computers!”

§

So far no one has explained to me how 1+1=2 gives rise to consciousness.

And it’s not even 1+1=2; it’s 1+1=10, because it’s binary.

As I showed you last time, it’s not even that. It’s just some simple logic gates.

So how do numbers create an illusion of consciousness? It’s claimed they do, but no one can say how.

And it’s not clear to me proponents of this view have a clear idea of exactly where the consciousness comes from.

Is it in the operation of the algorithm — a byproduct of running the code?

Or is it in the numbers themselves?

Either answer assumes facts not in evidence.

§

The matter of belief or disbelief in an algorithmic mind is one thing. Set that aside; we can agree or disagree. Either way it’s a belief.

My views are based on the facts we know to be true — the one actual data point we have regarding consciousness: us.

I do assume physicalism (such that a physical brain replica most likely would be conscious).

The belief in an algorithmic brain assumes facts not in evidence. It even assumes facts contrary to observed evidence (no conscious algorithms, limits of algorithms and math).

It’s a leap of faith.

It might be a correct one; that remains to be seen. But it’s still a leap, is all I’m saying (and that I, for one, am skeptical it’s possible to make the leap).

§

For reference, here’s a list of the posts in this series:

  1. Inevitable Math Foundations and universality of math.
  2. Beautiful Math A look at Euler’s Identity.
  3. Moar Math! As the name implies!
  4. Halt! (or not) A look at Turing’s Halting Problem.
  5. Calculated Math Introduces the idea of calculation.
  6. Coded Math Explores code and algorithms.
  7. Running Code More about how computers run code.
  8. System Code A look at operating systems and apps.
  9. Model Code Introduces software models.
  10. The Computer Connectome How to model a computer.
  11. The Human Connectome How to model a brain.
  12. Model Minds Different levels of mind modeling.
  13. Four Doors More about those levels.
  14. No Ouch! Comparing hardware and software models.
  15. Transcendental Territory Explores more math limits.
  16. Turing’s Machine A detailed look at algorithms.
  17. Logically Speaking A detailed look at computer logic.

§

Class dismissed!

[Image: full-adder circuit]

In Logically Speaking, I suggested you try to create a full-adder circuit given the truth table at the top of the post. Here’s one solution.
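The same solution can be written out as code (a sketch; this is one standard XOR/AND/OR wiring, not necessarily the exact circuit pictured):

```python
# Full adder from simple gates: two XORs form the sum bit, and the
# carry is the OR of the two ways a carry can arise.

def full_adder(a, b, cin):
    partial = a ^ b                       # XOR of the inputs
    total = partial ^ cin                 # XOR in the carry-in
    carry = (a & b) | (partial & cin)     # either carry case
    return total, carry

# Verify against the full-adder truth table:
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert s + 2 * c == a + b + cin
```

Chain eight of these, carry-out to carry-in, and you have an 8-bit adder: the whole of binary addition really is “just some simple logic gates.”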


26 responses to “Information Processing”

  • Steve Morris

    I can’t resist having one more poke at your thesis. Please forgive me, Wyrd.

    “Name one system for which the software model (without special hardware) gives rise to the same effects as the physical system it models.”
    Fair question. But can you name any physical effects of the brain that we are trying to reproduce with our AI computer? Apart from generating waste heat, which our CPU does rather well, and outputting electrical signals, which a computer can also do, I don’t know what you’re getting at.

    “The digital and analog worlds are irreconcilably different”
    Although transistors are analog systems that we choose to treat as if they were digital.

    “Calculation is limited”
    And so are the powers of the human brain, in all kinds of ways.

    • Wyrd Smythe

      “I can’t resist having one more poke at your thesis. Please forgive me, Wyrd.”

      Not at all! Not a problem. 🙂

      “Fair question.”

      An unanswered one, too, by anyone arguing hard AI. (I’m surprised the inability to come up with a single answer doesn’t weaken the faith in the certainty of hard AI. The power of faith, I guess.)

      It is a fair question, a telling question, and until it’s answered, I can’t see hard AI as anything but wishful thinking.

      “But can you name any physical effects of the brain that we are trying to reproduce with our AI computer?”

      Consciousness.

      We know it (whatever “it” is) arises from a biological human brain.

      The idea “it” arises from a physically similar brain requires mainly the assumption of physicalism. Physically similar systems should function similarly. Not a big stretch.

      The idea that “it” arises from a completely different physical process — one based on a putative abstract mathematical structure — requires lots of assumptions and is a view not supported by any facts in evidence.

      “Although transistors are analog systems that we choose to treat as if they were digital.”

      Not a response to the assertion: “The digital and analog worlds are irreconcilably different.”

      Do you dispute that assertion?

      (Do you understand why we use transistors as switches?)

      “And so are the powers of the human brain, in all kinds of ways.”

      Are you arguing that ‘things that are limited are similar’?

      (Since nothing is infinite, all things are limited.)

      • Steve Morris

        “Consciousness”
        But why is consciousness a physical thing? This seems to be the core of your argument and I just don’t see it.

        “Do you understand why we use transistors as switches?”
        Because they tend to be quite high or quite low. Not so much in the middle.

        “Are you arguing that ‘things that are limited are similar’?”
        Not at all. I am arguing that “calculation is limited” is an invalid argument for proving that the mind cannot be a result of calculation.

      • Wyrd Smythe

        “But why is consciousness a physical thing?”

        As opposed to what? In the context of physicalism, what else could it be?

        More to the point, whatever it is, consciousness clearly supervenes on the physical human brain.

        “Because they tend to be quite high or quite low. Not so much in the middle.”

        No, you were right the first time, they are analog devices (at least down to quantum level). Biased correctly, they’re fine in the middle. In computers they’re biased as switches because binary is easy to engineer. In this context, their analog properties are actually a problem.

        Binary machines eliminate as much ambiguity as possible. Determining if a light is on or off is trivial. Determining which of ten dimmer settings it’s set to isn’t.

        (Which is all a sidebar to the original point: Digital and analog worlds are irreconcilable.)

        “I am arguing that ‘calculation is limited’ is an invalid argument for proving that the mind cannot be a result of calculation.”

        Agreed. On its own, it’s not much of an argument. As a part of a larger argument, it fits in consistently.

        But, surely the burden of proof is on the extraordinary claim that calculation can result in consciousness.

        The claim it can’t is so far borne out by all observation. (Not unlike the claim that God doesn’t exist.)

      • Steve Morris

        “But why is consciousness a physical thing?”
        As opposed to what? In the context of physicalism, what else could it be?

        Physical things have properties that we can measure – mass, volume, electric charge, temperature, reflectivity and so on. The mind has none of these. The brain, yes.

      • Wyrd Smythe

        I’m pretty sure you’re not a dualist believing that mind is some magical substance. If mind is not part of the physical world, then what is it?

      • Steve Morris

        Clearly, it is what happens when a brain is in operation. The collective firing of interconnected neurons and electro-chemical activity of the brain is what we call the mind. It is an emergent pattern of information.

        In a computer, all we can see at the microscopic level is transistors and their voltages, currents and associated electrical activity. The program that the computer is running is not a physical aspect, and cannot be found with a microscope.

      • Wyrd Smythe

        “Clearly, [mind] is what happens when a brain is in operation.”

        Agreed. It emerges from that physical operation.

        “The collective firing of interconnected neurons and electro-chemical activity of the brain is what we call the mind.”

        All we can say for sure is that it’s the activity of the brain’s operation (the “neural correlates” of mind). Clearly they’re connected.

        “It is an emergent pattern of information.”

        Certainly something emerges from that physical operation. Calling it a “pattern of information” seems both vague and inadequate, but we clearly agree that something emerges.

        “In a computer, all we can see at the microscopic level is transistors and their voltages, currents and associated electrical activity.”

        Aren’t these the physical correlates of the running code? We have in both cases the manifestations of the physical operation of the system, yes?

        “The program that the computer is running is not a physical aspect, and cannot be found with a microscope.”

        Agree. But the “program that the computer is running” emerges from the physical operation of the machine, yes?

        In both cases, we have some combination of “hardware” and “software” that comprise a dynamic system.

        In one of those cases, we know mind emerges.

        I’m going to stop there and make sure we agree on all these points!

      • Steve Morris

        I think we agree on the points here.

        I’m drawing an analogy between the transistors and other components in a computer with the neurons and other cells in a brain. At the level of transistors, there is no data, no software, no code. There aren’t even any logic gates at this level. Likewise, at the level of individual neurons there are no thoughts, memories or anything else that we would recognise as being part of consciousness.

      • Wyrd Smythe

        “I’m drawing an analogy between the transistors and other components in a computer with the neurons and other cells in a brain.”

        Yes. Absolutely. I agree with the analogy.

        My argument is that the two systems operate so differently that the assumption of identical physical effects emerging seems suspect to me.

        I’ve used the analogy of lasers (and microwaves and heat and others). Laser light is the result of specific physical effects. This is notable for being direct cause and effect (electrons do this, photons do that, laser light emerges) and for being proportional (more in, more out).

        No software model of a lasing system emits a single photon of laser light.

        You can hook a software model up to something that does lase, but that just demonstrates that the lasing is in the physical system, not the algorithm.

        My question is simply this: What if consciousness, like laser light, only arises from specific physical processes?

        If so, then the process of a brain in operation, which is different in almost all regards from a software model of a mind, may be crucial and not accessible to software.

      • Steve Morris

        We diverged again as soon as you wrote “physical effects.” I thought we had agreed that the only physical processes are electro-chemical. The mind is not a physical process, just like a software program isn’t. What is this physical effect that you keep returning to?

        The mind is not laser light. I can see laser light. I can measure its wavelength, intensity, direction. I can’t measure any of those things for your mind, nor for a computer program, nor a poem, nor 1+1=2.

      • Wyrd Smythe

        I know we diverge. 😀

        I’m just trying to explain why I see it the way I do. Disagreeing on the conclusion is a given! It would be nice if people stopped saying they didn’t even understand my point.

        “What is this physical effect that you keep returning to?”

        Consciousness. Whatever it is, it arises from a physical system. We agree on this point.

        You say it’s not physical. I agree it’s not concrete, but it is emergent. Physicalism includes emergent properties, so it’s part of the physical world on that account.

        “I can measure its wavelength, intensity, direction. I can’t measure any of those things for your mind, nor for a computer program, nor a poem, nor 1+1=2.”

        I agree they don’t have the properties you list, but those things do have other properties. There are measurable properties of minds (IQ tests, crosswords, music), computer programs (complexity, minimum runtime) and poems (metre, tone) and mathematical equations (degree, complexity, type).

        [In point of fact, I rather suspect we can measure the intensity and direction of our minds! 😎 ]

        You believe (apparently with a great deal of certainty) the brain is some sort of Turing Machine, a blend of hardware+software. You further believe that the system can be separated into the “brain machine” and the “brain algorithm” with the result that the algorithm can be run on completely different hardware (such as 1000 monks with abacuses doing one calculation per day).

        You might be right. Maybe the brain is some sort of TM.

        But at this point that is a belief. It’s not a fact or certainty.

      • Steve Morris

        Let’s leave brains and Turing Machines out of this for now, because I suspect our disagreement lies elsewhere 🙂

        Let’s talk about computers. We agree that a computer is nothing more than a collection of transistors and other components and that nothing happens inside a computer that is not described by the laws of physics.

        And yet computers execute algorithms. They make mathematical calculations and process language and 3D graphics.

        I am arguing that this abstract information processing is essentially non-physical. Yes, it arises from a physical process, but it is something quite distinct.

        Let’s dive deeper. Consider an electron. The electron can exist in a state of spin “up” or spin “down”, or a superposition of those states. That’s the physics. But the electron encodes information. It can store a binary bit of information.

        Let’s go up to a higher level. Think of a poem handwritten on a piece of paper. The physical thing (paper, ink, indentations, etc) is quite distinct from the poem that is written on it. If we scrambled the order of the letters, or changed the writing to something equally complex but without any recognisable letters, then the poem would be lost.

        In each case, there is something present that is not described by the physics. It is possibly described by a word like “information” and you have listed some ways of measuring and characterising it – complexity, operations, metre, degree, etc.

        Now, intriguingly, you have argued elsewhere that mathematics has a kind of platonic existence beyond the physical world, and I have always countered that. Here, our roles seem to have switched, and you are denying the existence of abstract concepts, saying they are merely physical manifestations, and I am saying that the abstract objects are encoded in the physical world, but are not themselves part of it.

        Do you see what I am getting at, and can you resolve the paradox of why we have apparently switched sides at some point in the discussion?

      • Wyrd Smythe

        “Do you see what I am getting at, and can you resolve the paradox of why we have apparently switched sides at some point in the discussion?”

        I believe so, and I believe so…

        “I am arguing that this abstract information processing is essentially non-physical. Yes, it arises from a physical process, but it is something quite distinct.”

        If we agree it arises from physical processes, then we agree on the key point. The exact nature of mind doesn’t matter. I’m not defining it other than as “something that emerges from a physical process.”

        So there isn’t really any disagreement over “mind is abstract information” because it’s not a relevant point here. (Part of the problem is we don’t know what mind is. Another is that “information” is so poorly defined as to be meaningless. Depending on a specific definition, I may well agree with the phrase “mind is abstract information.”)

        My argument then is simply: Given mind emerges from physical process B (for brain), why is it certain it will also emerge from a completely different physical process C (for computer)?

        “…you have argued elsewhere that mathematics has a kind of platonic existence beyond the physical world…”

        I’ve always said I’m not strictly a Platonist in the literal sense of believing in his world of perfect forms as distinct from our reality, but that I absolutely do grant abstractions a level of ontological reality in this world.

        The position is entirely consistent with everything I’ve said here.

      • Steve Morris

        OK, then I think we are (to use a phrase of yours) on the same page with regards to what a mind actually is. It appeared to me that you were arguing something different for a while with the discussion of laser light. You still might be, in which case I am still missing the point.

        Our disagreement seems to revolve around physical process B for brain and C for computer. You are arguing that B and C are so different that C cannot give rise to mind. I am arguing that mind seems to me rather like software, and that therefore I am open to C creating a mind. I propose that neither of us can be sure about this, and therefore we leave this point now.

      • Wyrd Smythe

        No, it sounds like you’ve got the point.

        The laser (and other analogies) were to highlight the idea of effects arising from physical processes. The only point there is that laser light supervenes on specific physical processes. It causes me to ask the question: What if mind also supervenes on specific physical processes?

        “I am arguing that mind seems to me rather like software”

        Yes, understood. That’s what this is all about. As you say, neither of us knows. I’ve just laid out my observations and reasoning for why I see it as I do.

        The only thing I would say is that, while I’ve provided reasons why brains are not like computers, and why minds are not like software, it remains mostly an assertion that minds are anything at all like software (or brains like computers).

        You may absolutely assert that, and you may turn out to be right, but at the moment it seems more an article of faith than analysis.

      • Wyrd Smythe

        As a sidebar, remember we talked about how, if mind somehow transcends algorithmic processing, the Turing Halting problem and Gödel Incompleteness, which limit calculation, might not limit mental intuition?

        In that context, it’s occurred to me that the mind’s ability to contain paradox might be interesting. In the mathematical world (the world of algorithms and calculation) paradox indicates inconsistency — one of Gödel’s two main planks. Consistent mathematical systems, by definition, do not allow paradox (it’s what “consistent” means mathematically).

        Which suggests that if mind is mathematical, it must be a mathematics that allows inconsistency, which means it can be used to prove anything (1=0!).
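        As an illustration (mine, using ordinary arithmetic): once a system proves 1 = 0, every equation follows.

```latex
a \;=\; a \cdot 1 \;=\; a \cdot 0 \;=\; 0 \;=\; b \cdot 0 \;=\; b \cdot 1 \;=\; b
\qquad \text{for any } a, b
```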

        Which seems to suggest that if the mind is mathematical, it’s not mathematics as we know it. Or if it is, it’s gotten so complex as to allow paradox.

        (And you’ll admit, I’m sure, that most people’s thinking isn’t very mathematical.)

      • Steve Morris

        Yes, although people are usually troubled by paradoxes. Our minds have the ability to recognise a paradox and understand why it is a paradox, and we are often fascinated by them. We recognise that they are “not of this world,” i.e. don’t fit our usual experience of how the physical world behaves.

        In common thinking, we often hold inconsistent beliefs. We might believe two things that are mutually incompatible, and only when it is pointed out do we realise the problem. That’s because our thinking is often intuitive, not reasoned logically. So intuition seems like a shortcut to get to an answer.

        You talked about the travelling salesman problem and how bees solve it. It is an NP-hard problem if you want to do it analytically and find the shortest route, but if you’re happy with a short-enough route, it’s quite trivial, and newspaper delivery boys can solve it with ease.

        On another sidebar, I am convinced that much of our intelligence arises from the ability to discard information and to consider only the most salient points. In so many problems, finding the optimal solution has little survival advantage over finding a good-enough solution quickly. Of course, this can lead to arguments if two people have chosen to analyse a problem in different ways 🙂

      • Wyrd Smythe

        “We might believe two things that are mutually incompatible, and only when it is pointed out do we realise the problem.”

        And even then we are fully capable of ignoring the paradox and carrying on. That’s really what I’m getting at here. Our ability to operate just fine with paradox. And I’m thinking more about the little daily ones, the love-hate relationships, the little logical inconsistencies we cheerfully ignore all the time.

        That’s so not computer-like.

        “So intuition seems like a shortcut to get to an answer.”

        And it often turns out to be correct. Studies have shown that when you intuit something but then second-guess yourself by re-thinking it through logically, your intuition frequently turns out to have been right.

        Our ability to think things through logically often isn’t very good. We can’t consciously manage complex situations, but it seems our background processes are much more capable of it.

        “You talked about the travelling salesman problem…”

        Right. The reason it attracted attention is that bees were notably better at it than a newspaper delivery boy. They were apparently coming up with extremely good routes, not just pretty good ones.

        It may be a case of a network of routes seeking the lowest energy level. Essentially a kind of analog “computing” rather than algorithmic (where it’s intractable).
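        The “good-enough” version of the problem is easy to sketch (a hypothetical example; the points are made up, and `math.dist` is just Euclidean distance):

```python
import math

# Greedy nearest-neighbour heuristic for the travelling salesman
# problem: always visit the closest unvisited point next. Not optimal,
# but a decent route in roughly quadratic time instead of factorial.

def nearest_neighbour_route(points):
    route = [points[0]]
    remaining = list(points[1:])
    while remaining:
        last = route[-1]
        nearest = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nearest)
        route.append(nearest)
    return route

flowers = [(0, 0), (5, 1), (1, 1), (6, 0), (2, 0)]
route = nearest_neighbour_route(flowers)   # visits every point once
```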

        “I am convinced that much of our intelligence arises from the ability to discard information and to consider only the most salient points.”

        I don’t know about “arises” but it’s absolutely a huge part of how we experience reality. Some of the hallucinogenic drugs are thought to suppress that filtering process which results in a sensory overload (that can be interesting and enjoyable if you’re seeking that kind of thing).

  • Wyrd Smythe

    As a way of expressing the basic arguments here in symbolic logic form:

    given Brain → Mind
    if Computer → Mind
    ∴ Brain ≡ Computer ?


    Brain ≡ (HW+SW)brain
    given Brain → Mind
    ∴ (HW+SW)brain → Mind


    Computer ≡ (HW+SW)comp
    Computer ≡ TM


    if (HW+SW)comp → Mind
    ∴ (HW+SW)brain ≡ (HW+SW)comp ≡ TM
    but HWcomp ≠ HWbrain
    ∴ |SWcomp| ≡ |SWbrain| ?

    The primary given is that mind emerges from brain. The first three statements start with that and consider the implication of assuming mind can also emerge from a computer. That implies an equivalence between brains and computers, yet those systems are strikingly different in almost all regards.

    The second set of statements assumes that brains function as some combination of “hardware” and “software” (in some separable fashion). That means mind springs from that particular combination.

    The third set says what we know to be true of computers. They are a combination of hardware and software, and they are Turing Machines.

    The fourth set examines the consequences of again assuming that mind can emerge from computers. (Note that, as with the entire series, this in no way attacks the possibility that hard AI is true. Rather, it demonstrates the assumptions necessary for it to be true.)

    The final statement, per Church-Turing, indicates that at some absolute level, the software running in the brain “computer” has to be the equivalent of the software running in the digital computer.

  • rung2diotimasladder

    It seems to me we’re running with the assumption that consciousness is emergent from the brain. Okay. Fair enough. Not sure where I stand on this question, but I’m happy to take it as given.

    Next is the question of whether the brain-stuff matters. 😉

    I think this is a good question. It’s a pretty simple question in certain ways. Say you break a dish and want to rescue it. You want to know which type of glue you should use to make it dishwasher-safe. Rice and water (a grade school glue used in Korea back in the day) probably wouldn’t be a good idea. Sure, it’s glue. It’s similar in that respect, but it’s bound to fail. (Sorry…had to go with “bound” there.)

    Whether or not computers can give rise to consciousness is entirely outside my realm, but I see no problem with the question. If we assume that the mind (not brain) is something that emerges from physical components, it’s not clear that we can replicate the mind using altogether different material.

    What we need to figure out is how to have babies that are exactly like us, instead of some potentially disappointing new entity via standard procreation. Forget all this mind uploading stuff. If we want immortality, why not focus on creating a biological mini-me that creates its own mini-me ad infinitum? 🙂

    • Wyrd Smythe

      “It seems to me we’re running with the assumption that consciousness is emergent from the brain.”

      Yes. In particular, as opposed to any dualistic theory. (I happen to have some dualist sympathies, but for purposes of this series, I’m assuming physicalism. As you say, for this, it’s assumed as a given.)

      “Next is the question of whether the brain-stuff matters.”

      Yep. And whether biology is a necessary component (hard to see why it would be) or whether the physical network (of any type, biological or otherwise) is necessary.

      If not, then mind could be algorithmic.

      “It’s a pretty simple question in certain ways.”

      Exactly so. In certain lights, you have to ask how it could be otherwise? This idea that a completely different architecture running software (which is to say, doing simple binary logic) could accomplish the same thing seems almost absurd.

      “Sure, it’s glue. It’s similar in that respect, but it’s bound to fail.”

      And that’s an even more subtle case where you have two things that are, under one description (“glue”), identical. And they’re known to have similar properties.

      But not similar enough!

      So, again, that a completely different architecture doing stupid simple binary math should give rise to the same emergent phenomenon seems like a fantasy. (And so far, it is exactly that.)

      “If we assume that the mind (not brain) is something that emerges from physical components, it’s not clear that we can replicate the mind using altogether different material.”

      Seems pretty obvious, right?

      At best, right now it’s a leap. Might be a valid one, but all I’m saying is that it is a leap.

      “What we need to figure out is how to have babies that are exactly like us…”

      ROFL!

      But no. There’s only room in this world for one ego as big as mine. XD

      • rung2diotimasladder

        Yeah, the glue example sort of fell into my lap. I happened to have a dish that I broke while trying to lift a table with things on it. Absolutely everything on the table fell to the floor. I don’t know what I was thinking, but clearly I wasn’t. What we need is AI capable of stopping me from such errors!

        I then noticed the glue example was interesting in that way you point out.

        On the mini-me… suppose the mini version of yourself could contain your ego and all of its accompanying apperceptions, would it be worthwhile, after all? 🙂

        (Just after I wrote that, I thought of Hitler replicating himself and shuddered at that thought.)

      • Wyrd Smythe

        There’s an interesting question here… If I could actually meet myself as a distinct individual, what would that be like? Would I like myself? ❓

      • rung2diotimasladder

        I’m sure I’d be able to finish her sentences, and things could get boring very quickly. She wouldn’t be able to help me do a crossword puzzle, that’s for sure. 🙂
