System Levels

Moving on from system states (and states of the system), today I’d like to fly over the landscape of different systems. In particular, systems that are — or are not — viewed as conscious.

Two views make this especially interesting. The first holds that everything is computing everything and — under computationalism — this includes conscious computations. The second (if I understand it) holds that anything that processes input data into some kind of output is conscious. (I’m not clear if the view also sees an input-output system as a computer.)

So I want to explore what I see as major landmarks in the landscape of systems that… well, about the only thing we can probably all agree on is that they do something.

I’ll start a list of putative ways to generate human (or human-like or high-level or whatever, you know what I mean) consciousness:

  1. The Human Brain
  2. A “Positronic” Brain
  3. A Physics Simulation (p-zombies?)
  4. A Neural Net Simulation
  5. A Functional Emulation (b-zombies?)
  6. (something new)

We know consciousness arises in members of group #1.

Or, per skeptics, we know something happens (something often casually, and in many opinions sloppily or even incorrectly, named “consciousness” — probably by fools who don’t even know what’s happening inside their own heads).[1]

Whatever that something is, it seems to allow its possessors to reflect on their own (illusory) thought process in addition to processing (possibly illusory) thoughts about the world. It’s even given them the idea they have free will (as if)!

This something allows members of this group to build imaginative, creative mental models of things that don’t even exist and, in many cases, go on to build them. Or, in others, to write about them.

This something has allowed its holders to take over the planet. And possibly also break it. (Good going, humanity.)

This something also seems connected with language (especially literature) and art and music and mathematics.

This something, whatever it is, is pretty fucking amazing.[2]

§

So what about group #2 on the list?

This features a “brain” that replicates the structure and behavior of a human brain. It has “neurons” and they are connected with “synapses” — albeit possibly with a larger population, more connections, or faster operation.[3]

Given a sufficiently similar architecture and behavior, the question seems to be why it wouldn’t work. Why would biology matter?

If we replicate all the properties of the human brain, except its biology, it would be startling to discover the key properties were in the biology.

Let’s assume Positronic brains work approximately like ours and, therefore, the FA something arises for them, too.

§

That brings us to #3, a physics-level simulation of the human brain.

This is one of the few strong arguments I see for computationalism. Given a simulation of the physics of the brain, why wouldn’t it produce the same results as the brain?[4]

An important question involves the level of the simulation. There are many choices:

  1. Quantum — simulate the fabric of reality.
  2. Atomic — simulate the atoms.
  3. Chemical — simulate the valence electrons.
  4. Molecular — simulate the compounds.
  5. Cellular — simulate the basic bio-machinery.

Above cellular we get into level #4, which comes next. Note that, in all of these, the brain doesn’t exist, as such. These simulations could just as easily simulate a pot roast (the first four could simulate a bowling ball).

To the extent that “everything is chemistry,” quantum- or atomic-level simulations might be overkill. (Which would be good because they’d be huge! And slow.)

But we don’t know what properties are important yet, so it’s not a given that, when it comes to consciousness, “everything is chemistry.”

One thing about this group, and the next two, is that we’ve gone from the physical world to the world of numbers. Many believe this doesn’t matter, but it’s a huge difference in approach — literally the difference between analog and digital.

Kind of a Yin-Yang thing, in other words.

§

Now comes item #4, a simulation of the neural net.

We know such simulations can display learning and identification behavior similar to ours.
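
To make concrete what “simulating a neural net” means, here’s a minimal sketch (everything in it is invented for illustration): a single simulated “neuron” learning the logical OR function by adjusting its “synapse” weights.

```python
# A minimal "neural net simulation": one artificial neuron learning
# the logical OR function via the classic perceptron learning rule.
# All values are illustrative toys.

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]              # OR truth table

w = [0.0, 0.0]                      # "synapse" weights
b = 0.0                             # bias
rate = 0.1                          # learning rate

def fires(x):
    """Step activation: 1 if the weighted input exceeds threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, t in zip(inputs, targets):
        err = t - fires(x)          # compare output to target...
        w[0] += rate * err * x[0]   # ...and nudge the weights
        w[1] += rate * err * x[1]
        b    += rate * err

print([fires(x) for x in inputs])   # [0, 1, 1, 1] once trained
```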

But the objection for a physics simulation applies here also: We don’t know what properties are important for consciousness.

They might involve real-time physical behaviors. My example is how some physical materials, in the right circumstances, emit coherent photons (laser light).

This behavior can be simulated with great precision. But simulations don’t ever emit photons of any kind, let alone coherent ones generated by a physical process of “Light Amplification by Stimulated Emission of Radiation.”
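
As a sketch of that point, here’s a (very crude) laser simulation using toy two-level rate equations; the constants are made-up values, not real laser parameters. Note what the program actually emits: numbers describing photons, never photons.

```python
# Toy laser "simulation": two-level rate equations, Euler integration.
# N = excited-state population, n = photon number in the cavity.
# All constants are invented toy values.

N, n = 0.0, 1e-6           # population; seed photon number
pump, gain = 4.0, 1.0      # pump rate; stimulated-emission coefficient
t_N, t_c = 1.0, 0.5        # excited-state lifetime; cavity lifetime
dt = 0.001                 # time step

for step in range(20000):  # integrate to t = 20 (arbitrary units)
    dN = pump - N / t_N - gain * N * n
    dn = gain * N * n - n / t_c
    N += dN * dt
    n += dn * dt

# The "laser" reaches a steady state, but its output is just a float.
print(f"steady-state photon number: {n:.3f}")   # ~1.0 for these values
```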

(For me, this argument kills computationalism. It says the outputs are appearances and byproducts; it’s the process that matters.)

§

Lastly (for now) comes #5, a functional representation of consciousness with no necessary similarity to how our brains work and no attempt to emulate or simulate the brain’s states.

On a crude level, this is old-fashioned AI: the classic symbolic approach. It’s seen some success with expert and knowledge systems. And, I believe, in robotics (not an area I follow).

The arguments for, or against, computationalism apply here as well, and a functional representation raises even more strongly the question of whether something that acts conscious is conscious.

Given that the appearance comes from a different mechanism, perhaps one with no apparent phenomenal machinery, what might that suggest about our own consciousness? (A question some ask.)

The behavioral type of philosophical zombie lives here. The standard p-zombies live up on level #3. That zombies live on these two levels might suggest something about their coherency (or not).

(Item #6 is an unknown unknown I’ll leave alone for now.)

§ § §

Now here’s another list.

  1. A Human.
  2. Lt Cmdr Data (Star Trek).
  3. HAL (2001: A Space Odyssey).
  4. My Dell laptop.
  5. A Relay-based PBX.
  6. An old-fashioned Thermostat.

We can generally agree (minus a few holdouts) that members of group #1 are conscious. (Items #1 and #2 here more-or-less match items #1 and #2 on the first list. The lists diverge from there, though.)

Commander Data, we are told, is conscious. He certainly acts conscious. And he reports having phenomenal feelings. Data appears to have a Positronic brain.

HAL seems to form an interesting border case. He goes insane, and his breakdown as Dave shuts him down seems phenomenal (“What are you doing, Dave?”). I’d have to watch the movie again, but I wonder if it’s possible to argue HAL did not have phenomenal experience.[5]

The crucial distinction with these machines is that they do not (as far as I know) have Positronic brains, but work by computational principles.

Whether they can have phenomenal experience is the topic of heated debate.[6]

§

With some exceptions, most believe laptops do not have phenomenal experience. Specifically, there is nothing it is like to be a laptop.

I included the PBX as an example of a system with complex behaviors arising from lots of basic components (lots and lots of relays).

Note that a relay-based PBX is electro-mechanical (rather than electronic). It consists entirely of electromagnets and mechanical switches.
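
As a toy sketch of how such simple switches compose into switching logic (my own construction, not a real PBX circuit):

```python
# "Complex behavior from lots of basic components": a relay is just a
# switch thrown by a control signal; a PBX is thousands of them wired
# together. Everything below is invented for illustration.

def relay(coil_energized, input_signal):
    """A relay contact: passes the input only while the coil is energized."""
    return input_signal if coil_energized else False

def series(a, b, signal=True):      # relays in series act like AND
    return relay(b, relay(a, signal))

# A toy "exchange": ring through only if both phones are off-hook
# and the selector train has found a free path.
def connect(caller_offhook, callee_offhook, path_free):
    return relay(path_free, series(caller_offhook, callee_offhook))

print(connect(True, True, True))    # True: the call goes through
print(connect(True, True, False))   # False: no free path available
```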

Lastly, a thermostat is just a temperature-operated switch. There are also light-operated, time-operated, and motion-operated switches.
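
In fact, the thermostat’s entire input-to-output “processing” fits in one line of code (the setpoint and hysteresis values here are invented):

```python
# An old-fashioned thermostat as code: a temperature-operated switch.

def thermostat(temp_f, setpoint_f=68.0, hysteresis_f=1.0):
    """True means the furnace runs; the hysteresis prevents chatter."""
    return temp_f < setpoint_f - hysteresis_f

print(thermostat(65.0))  # True: too cold, switch closes
print(thermostat(70.0))  # False: warm enough, switch opens
```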

In all cases, from laptops down to thermostats, some believe there is phenomenal experience, even consciousness.

I don’t fathom that view at all. I don’t believe in Pixies or in the fundamental consciousness of rocks or thermostats or even my laptop.

I’ll explain why in the next post.

Stay conscious, my friends!


[1] Attitude? What attitude?? (Look, take all of this a little tongue-in-cheek. No one knows the right answers here, so I’m just amusing myself.)

[2] Sorry! There’s really just no other way that says it quite as well.

[3] Or they’re really hard to make, have fewer neurons and connections, and run slower. They make dim-witted robots that are still capable of complex tasks.

[4] Much of the argument’s strength comes from the success of such simulations in other aspects of reality, in particular biology.

Its weak point is that we can’t know if the properties of consciousness arise from a simulation. The laser simulation analogy (or weather simulation analogy) attacks that weak point. It all depends on whether consciousness is in the process or the outputs.

[5] If not, there are other computers from SF we could use. Colossus maybe? How about Robbie? Or his pal, Robot, from Lost in Space?

[6] I might argue that Data, HAL, and all the rest, being science fiction, speak to a real gap between #1 and #4, but, yes, I know exactly the counter-argument for that.

OTOH: The Lunar Hilton, my flying car, transporters, and FTL. 😀


14 responses to “System Levels”

  • David Davis

    Whenever I’m faced with a decision, I set up a model in my head. I will mentally list the factors on each side, assign a weight to each factor, and run a calculation. Even though the process depends on estimates and educated guesses, the outcome is better than if I had not used the model.
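
    In code, the model might look something like this (the factors and weights are invented examples):

    ```python
    # Weighted decision model: list the factors on each side, weight
    # them, and compare the sums.

    def decide(pros, cons):
        """Each argument maps factor -> weight (rough 0-10 estimates)."""
        return sum(pros.values()) - sum(cons.values())

    score = decide(
        pros={"salary": 8, "interesting work": 7},
        cons={"long commute": 6, "risk": 4},
    )
    print("go" if score > 0 else "no-go")   # go (score = +5)
    ```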

    • Wyrd Smythe

      So a reasoned approach is better than a wild guess? I think I can go along with that! 🙂

      OTOH, some studies seem to suggest our gut instincts can be better than we imagine (due, I guess, to our subconscious reasoning). It seems that sometimes that first instant response is your brain’s way of telling you what deeper processes already think. And it can be right more often than one might think reasonable. Then we go mess it up with a lot of second-guessing.

      But then the trick is figuring out when that gut instinct is right and when it’s just hungry or horny or scared. That’s when the analysis can be a lifesaver!

  • SelfAwarePatterns

    In terms of comparing systems, we can choose to ignore the concept of consciousness and compare them purely in terms of intelligence and data sophistication. Per your second list, in the Star Trek universe, Cmdr. Data may actually be more sophisticated than a human. (Although the show purposely kept this ambiguous.)

    And we can’t really say how sophisticated HAL was based on the information provided in 2001, except that he had to be far more sophisticated than a modern laptop. I don’t know where the PBX sits in relation to the laptop, but it seems like it would be less sophisticated than humans, Data, or HAL; obviously an old-style thermostat is the simplest.

    The question is, what does consciousness add to this conversation? If an alien from Andromeda, with no concept of consciousness, were evaluating these various systems, would there be anything it would see aside from varying rates of intelligence and data processing capabilities? If so, what?

    If the alien manages to establish communication with these systems, and some of them start talking about something called “consciousness” that only they possess, what would the alien think about that concept? Might it see it only as those ego-centered systems privileging the way they work in relation to everything else?

    • Wyrd Smythe

      “Per your second list, in the Star Trek universe, Cmdr. Data may actually be more sophisticated than a human.”

      In many ways, yes, I think he was. But Star Trek also (as they did with Spock originally) showed that, despite his superiority, humans were often the better beast. (Because they’re so human.)

      We might compare HAL to Alexa, which has server support, so definitely more advanced than a laptop. In terms of just complexity and intelligence, I’d rate a PBX below a laptop (as a single purpose machine versus a general purpose computer).

      “[W]ould there be anything [an alien from Andromeda with no concept of consciousness] would see aside from varying rates of intelligence and data processing capabilities?”

      It would see that the humans had taken over their planet. It would likely see Cmndr Data as on par, due to his obvious capabilities. It would recognize HAL by his capabilities. The others it would recognize as dumb tools.

      Do you really believe, even with no context whatsoever, observers would be confused between the relative abilities of humans versus laptops?

      “If the alien manages to establish communication with these systems,…”

      There are only two on the list that would attest to their own consciousness. (Assuming HAL isn’t and wouldn’t lie about it.)

      • SelfAwarePatterns

        ” showed that, despite his superiority, humans were often the better beast. (Because they’re so human.)”

        Yeah, Star Trek was big on that. That and the whole emotionless being thing, the idea that it was coherent for an entity to exist without its own primal impulses.

        “Do you really believe, even with no context whatsoever, observers would be confused between the relative abilities of humans versus laptops?”

        I don’t think I implied that. As noted, there would definitely be differences in levels of intelligence and capabilities. The alien would probably observe that humans had exteroception, able to take in and use information about their environment, something lacking from the laptop, although not necessarily from autonomous robots such as self-driving cars, as well as a host of other abilities.

        But assuming the alien had access to the intermediate examples, at what point would it conclude that something else is needed, something besides the information processing capabilities? At what point would it agree with the humans that there was something special about them?

        “There are only two on the list that would attest to their own consciousness. (Assuming HAL isn’t and wouldn’t lie about it.)”

        I think if you go back and watch the movie, HAL pretty much makes that attestation. He is interviewed by a reporter and says that he is happy to be productive, which, he says, is all any intelligent entity can ask for. That and he defends his existence when he perceives it’s in danger.

      • Wyrd Smythe

        “That and the whole emotionless being thing, the idea that it was coherent for an entity to exist without its own primal impulses.”

        Talking about Spock? I always wanted to sit down and have a chat with him to explain the simple basic facts about humanity he never seemed to understand. Like humans are actually pretty logical once you understand their inputs and axioms. I never understood why he couldn’t figure it out.

        Vulcans were very emotional; the unemotional thing was all an act. And they were usually pretty bad about it. Spock’s uber-Vulcan dad was pretty much an asshat, and that takes emotions.

        Yeah,… unemotional, my ass.

        (The friend I had dinner with tonight suggested that general human dickishness was a trait we’d never manage to program into computers, and therefore computers would never think like us. That’s actually a doubly interesting point: For one, it brilliantly speaks to the messiness and contrariness of human consciousness. And then, if being a dick is part of being human, would we want to build that in? (I didn’t mention to her that I knew a fair bit of SF with dickish robots. 🙂 ))

        “At what point would it agree with the humans that there was something special about them?”

        What other species has taken over the planet? What other species has created the complex tools we have? What other species has literature, mathematics, and science? Isn’t that noticeably special?

        (As it turns out, this is exactly the topic of tomorrow’s post.)

        “He is interviewed by a reporter and says that he is happy to be productive, which, he says, is all any intelligent entity can ask for.”

        Yeah, and there’s the scene when Dave shuts him down. That’s why I wasn’t sure if HAL qualified for what I had in mind. The category may have to be split in two: non-Positronic robots with, and without, phenomenal experience.

        Most pop-SF tends towards the C3-PO type, not just phenomenal, but downright emotional. I’ll have to come up with a decent example. I’m thinking maybe Robbie. Janet (on The Good Place) used to qualify, sort of; she’s kind of a supernatural being, not a “robot” as such, and now she’s evolved and is definitely phenomenal.

        “That and he defends his existence when he perceives it’s in danger.”

        I think I’ve seen you point out that doesn’t require what we casually define as consciousness — just some awareness and smarts.

      • SelfAwarePatterns

        “Talking about Spock?”

        I was actually talking about both Spock and Data. Definitely it became clear that the Vulcans had made logic their religion. In later series, it almost became a sort of caricature. And Data’s desire to have human emotions…was itself an emotion.

        I do think human and animal minds have a lot of idiosyncrasies from our evolutionary background that it probably won’t ever make sense to put in an artificial intelligence, unless we’re explicitly working to make that AI as human-like as possible.

        “What other species has taken over the planet?”

        Bacteria are in places we’ve never gone, were here long before us, and will likely be here long after us. Insects outnumber us. So a lot depends on how you define “taken over”.

        On tools, math, and science, is that anything that degrees of intelligence can’t account for? I actually think intelligence is a major part of our intuition of consciousness, but a lot of people seem to disagree.

        “I think I’ve seen you point out that doesn’t require what we casually define as consciousness — just some awareness and smarts.”

        I have written something like that before, noting that I see a type of consciousness in an autonomous robot that takes in and acts on information from its environment. But I’ve gotten a lot of pushback, because it’s an almost universal intuition that consciousness includes self concern and/or the ability to suffer. If we look at robots in fiction, a dawning self concern is almost always depicted as a dawning consciousness.

      • Wyrd Smythe

        “I was actually talking about both Spock and Data.”

        Oh, right. Data got a lot of Spock’s traits, didn’t he. (Troi got some, too. Alien, vague mental powers.) It is true, we tend to make robots in our own image.

        Some of this might be fiction-related. Truly unemotional robots or beings might be hard to write and keep interesting? (Can’t imagine why, although I’ve certainly never tried it.) Or we just can’t help but project emotions into storytelling?

        (One of my many complaints about Spielberg’s A.I. was that the “unemotional” robots were anything but.)

        “…unless we’re explicitly working to make that AI as human-like as possible.”

        Which raises some questions for a program of trying to replicate consciousness by replicating or simulating (or emulating) the human mind. Our “crap” seems pretty deeply entwined with our humanity. (That’s why I found my friend’s comment so interesting. It does target something essential about our consciousness. And raise a question about what we’re trying to recreate.)

        OTOH, in Brin’s Existence we do reach AGI and it turns out it needs to be “raised” a bit like an adolescent for a period. And its personality during that time is like that of all growing adolescents — a pain in the ass.

        Be pretty funny if it really turned out that way. 😀

        “So a lot depends on how you define ‘taken over’.”

        Aw, come on. I’ve always been very clear what I mean! 🙂 Same species inhabiting nearly every niche on the planet. Creating global travel and communication. A complex worldwide culture of trade and economics. The body of law. The body of art and literature.

        Stuff like that. Far beyond any bacteria or insects.

        “On tools, math, and science, is that anything that degrees of intelligence can’t account for?”

        Just on those? Is curiosity a trait of intelligence or consciousness? Is Cmndr. Data’s curiosity an aspect of his intelligence or his consciousness? (And on reflection, can we really even say Data was unemotional? He was judged to be conscious, he had a Positronic brain… He was absolutely Pinocchio wanting to be a real boy.)

        “I actually think intelligence is a major part of our intuition of consciousness, but a lot of people seem to disagree.”

        That’s a good point. Dogs have low intelligence and low consciousness. Humans rate high on both. There does seem a correlation. Huh. Given the correlation, I wonder which is the attribute of the other (or if either one is; maybe it’s just a parallel deal).

      • SelfAwarePatterns

        “Truly unemotional robots or beings might be hard to write and keep interesting?”

        I think all you’d have would be a general purpose information processing system with no programs loaded, an empty unloaded computer (or positronic brain, or whatever). Of course, you could write about one with purely utilitarian drives, but it’d just be the Enterprise ship computer or something along those lines.

        “Our “crap” seems pretty deeply entwined with our humanity.”

        One of the dilemmas that might come up is how much the original mind should be “fixed.” Every organic brain probably has at least some damage, the effects of which have been incorporated into the personality of the organism. Copying it might provide the ability to fix that damage, but would we necessarily want to? (Reminds me of the debate in the deaf community about Cochlear implants.)

        “Be pretty funny if it really turned out that way.”

        Depending on what other technology is available, it might make them redundant. Why raise a technological intelligence if you have plenty of the old fashioned kind around?

        “Aw, come on. I’ve always been very clear what I mean!”

        My point is that your assessment of specialness is anthropocentric. Of course humans are going to seem like the most special things around, to humans. The zoodalbart from Andromeda might not agree. Noting how little zoodalbart-ness we have, not being created in the image of the great zoodalbart in the sky, it might decide that we’re at best industrious automatons.

        “I wonder which is the attribute of the other (or if either one is; maybe it’s just a parallel deal).”

        I think consciousness can be productively thought of as a type of intelligence, or more accurately, as a class of intelligent capabilities.

      • Wyrd Smythe

        “I think all you’d have would be a general purpose information processing system with no programs loaded,…”

        (Why no programs loaded?) I like the comparison to the Enterprise computer. That’s about all it would be. Heh, given that I’m already using Cmndr. Data and HAL, I should just go ahead and use the Enterprise Computer.

        Which, very oddly, was capable of creating an AI Moriarty all based on Geordi’s vague description…

        “(Reminds me of the debate in the deaf community about Cochlear implants.)”

        Yeah, good comparison, and what if the imperfections are important, somehow. (That was always kinda Kirk’s argument about humanity. It’s a view I’ve seen other places, too.)

        (There is a similar question in the blind community with regard to technology enabling sight. A personal question, I think, and not one that should be seen as “betraying” the community.)

        “My point is that your assessment of specialness is anthropocentric.”

        How so? I see it as an objective assessment of what a single species has visibly done to the planet (including altering, probably damaging, its oceans and air, and certainly strewing garbage across the land). We’ve filled the air with RF, built transportation infrastructure, etc.

        Objectively speaking, with no context, isn’t that far beyond what any other species on the planet has done? Even granting an entire class, insects or bacteria, have they re-shaped the world as we have?

        I don’t think this is a matter of interpretation, it’s a matter of observation.

        “Noting how little zoodalbart-ness we have, not being created in the image of the great zoodalbart in the sky, it might decide that we’re at best industrious automatons.”

        But you’re attributing a pretty ignorant and culturally based view to the zoodalbart. If they’re just a bit more clear-minded, the physical reality is there to be observed.

      • SelfAwarePatterns

        “Why no programs loaded?”

        I think as soon as you add programming, you’re adding primal goals, the foundations of emotions. Of course, you could define “emotion” more narrowly to be only what humans feel, but Star Trek never made that stipulation.

        “Which, very oddly, was capable of creating an AI Moriarty all based on Geordi’s vague description…”

        Which does imply that the ship computer could be sentient, if it had a reason to, but its programming never required it. On the other hand, the show did make clear that Data was unique and that no one really understood how he worked, his inventor / father being dead / missing. The limitations of using a fictional show’s inconsistent worldbuilding.

        “Even granting an entire class, insects or bacteria, have they re-shaped the world as we have?”

        Cyanobacteria have arguably done far more to the environment. Indeed, all of complex life is a side effect of the oxygenation they provided. We tend to assess things within the size and time scales of our evolutionary affordances. If humans go extinct, a few million years from now the environment will likely have shrugged us off. A hundred million years from now, decay, erosion, and tectonic activity will have erased most, if not all evidence we were ever here.

        A zoodalbart just finishing a multi-million year trip from Andromeda might barely notice our 10,000 year eye blink of a civilization.

      • Wyrd Smythe

        “I think as soon as you add programming, you’re adding primal goals, the foundations of emotions.”

        Okay. What about the Enterprise computer (or any computer, really)? We’re talking about unemotional robots, so there has to be some programming, doesn’t there? Or are you saying any programming constitutes emotions by way of primal drives?

        “The limitations of using a fictional show’s inconsistent worldbuilding.”

        Yep, exactly. (That Moriarty business always stuck out like a sore thumb.)

        “Cyanobacteria have arguably done far more to the environment.”

        In how many hundreds of millions of years?

        “If humans go extinct, a few million years from now the environment will likely have shrugged us off.”

        Indeed. And, at least until tectonics buries all the evidence, our effect on the planet will be quite visible.

        “A zoodalbart just finishing a multi-million year trip from Andromeda might barely notice our 10,000 year eye blink of a civilization.”

        Sure, but you’ve changed the premise. If there’s no evidence of our existence, there’s nothing to judge.

        Look, I give up. If you really can’t agree human civilization stands out from everything else this planet has produced, then you can’t agree.
