Does Not Compute

I’ve been on a post-a-day marathon for two weeks now, and I’m seeing this as the penultimate post (for now). Over the course of these, I’ve written a lot about various low-level aspects of computing, truth tables and system state, for instance. And I’ve weighed in on what I think consciousness amounts to.

How we view, interpret, or define consciousness aside, a major point of debate involves whether machines can have the same “consciousness” properties we do. In particular, what is the role of subjective experience when it comes to us and to machines?

For me it boils down to a couple of key points.

The lesser point has to do with the contrast between designed machines and evolved machines (and I’m not talking at all about biology). It also has to do with what algorithms are.

The greater point (in my view) has to do with the contrast between the outputs of a system and the behavior of that system. The question involves the location and nature of the essential part of the system.

[In what follows, I assume two things: (1) Human brains definitely produce something we casually label “consciousness.” (2) It’s a striking (“loud”) property that creates a new and very powerful kind of thing in the universe.]

§ § §

With regard to designed machines versus evolved ones, this is not about biology, but about natural systems versus designed systems.

The difference that strikes me is that natural systems don’t have a designer and don’t have an end goal. There is no blueprint that results in a final product.

The Grand Canyon wasn’t designed (no watershed is). It was shaped by various physical factors over time and just turned out that way.

Similarly, whatever brains are, whatever it is they do, they are a result of myriad tiny successful solutions that occurred along the way with no specific end goal — other than survive!

In contrast, a designed machine starts with a blueprint created by an intelligent mind with a specific goal, a reason for the machine.

So, bottom line here, speaking in terms of ontological classes, it seems to me that conflating naturally evolved brains with intelligently designed machines might be a category error.

At the very least when it comes to certain kinds of machines.

§ §

Which brings me to the greater point: outputs versus behavior.

A notable characteristic of computation is that only the outputs matter. The platform doesn’t matter. The nature of the computation doesn’t matter. Only the result, the outputs, matter.
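
To make that concrete, here is a toy sketch (Python of my own devising, purely illustrative): two completely different processes that produce exactly the same output. From the computational standpoint, the difference between them simply doesn’t register.

```python
# Two entirely different processes; identical outputs.
# Computationally, only the outputs distinguish them (which is to say, nothing does).

def sort_by_swapping(values):
    """Repeatedly swap adjacent out-of-order items (bubble sort)."""
    result = list(values)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

def sort_by_counting(values):
    """Count how many times each value occurs, then rebuild the list."""
    result = []
    for v in range(min(values), max(values) + 1):
        result.extend([v] * values.count(v))
    return result

data = [5, 3, 8, 3, 1]
assert sort_by_swapping(data) == sort_by_counting(data) == [1, 3, 3, 5, 8]
```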

A notable characteristic of the real world is that, very often, process matters as much as, if not more than, the outputs.

If the “output” is being in New York, when you’re currently in Los Angeles, the end result may be the goal, but how you get there matters. (Walking? Biking? Driving? Flying? Star Trek transporter?)

The reason for the difference is that computation is abstract, like a blueprint. It’s always in reference to something that gives it meaning.

But physical things are only in reference to themselves. There is no interpretation to apply — there is only understanding the true account of the physical system.

So in a physical system, behavior matters — platform matters.

§

I’ve talked about three broad ways machines could try to be conscious.

Firstly, they can seek to duplicate the physical brain — the Positronic brain.

Since this is a physical system that behaves similarly to a brain, it’s hard to see why it wouldn’t also produce a conscious being.

Secondly, they can simulate the meat, the physics, or the biology.

A simulation of the real world can be arbitrarily precise if we have the data points. (One problem with weather modeling is how coarse-grained our data gathering is.)
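
As a toy illustration of the precision point (the Python below is mine, with made-up numbers, nothing like a real weather model): the same simple physical process simulated with coarse and with fine time steps. How close the simulation gets depends entirely on how much data it grinds through.

```python
import math

def simulate_decay(temp0, rate, total_time, steps):
    """Step-by-step (Euler) simulation of simple exponential decay."""
    temp = temp0
    dt = total_time / steps
    for _ in range(steps):
        temp += -rate * temp * dt   # each step uses only local, sampled data
    return temp

exact = 100.0 * math.exp(-0.5 * 4.0)            # closed form:    ~13.53
coarse = simulate_decay(100.0, 0.5, 4.0, 4)     # 4 samples:       6.25 (way off)
fine = simulate_decay(100.0, 0.5, 4.0, 4000)    # 4000 samples:   ~13.53
print(exact, coarse, fine)
```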

The crucial point in question is what, exactly, gets simulated. Specifically, if consciousness is a property of a physical system, can we account for it in a simulation of that system?

Thirdly, machines could emulate the neural network of the brain, or even seek to emulate higher functions of the brain.

The distinguishing feature here is modeling functional aspects of the system above its physics. Commonly this involves a model of the neural network, but it can involve higher functionality.
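
For a sense of what modeling “above the physics” looks like, here is a toy sketch (my own illustrative Python, using one common textbook abstraction, not a claim about what real neurons fully are): a leaky integrate-and-fire neuron that ignores the chemistry entirely and keeps only the functional story.

```python
def run_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate input, leak it away, spike past a threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:              # fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

print(run_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # -> [0, 0, 1, 0, 0, 1]
```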

§

The key point is that the second and third approaches behave quite differently from brains. They generate outputs through quite different processes.

On one side of a gap sit the physical systems of the human brain (case 0) and the Positronic brain (case 1). On the other side sit case 2 and case 3, which involve calculating the necessary outputs with numbers.

Everything depends on whether consciousness lies within those numbers, those outputs.

If it lies in the processes of the brain, it might not show up in the numbers.

§ § §

The analogy of laser light expresses what I mean.

Only certain physical materials, under certain physical conditions, emit the coherent photons we call laser light. The behavior of the physical system produces the light.

This behavior can be simulated, at various levels, on a computer, just as with brains. In all cases, the outputs produced describe — perhaps quite accurately — what the physical system is doing.

But, of course, the simulations don’t produce any photons.

Laser light cannot arise as a consequence of the behavior of the simulating system, nor can it arise from its outputs (which are just numbers).
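
To put the analogy in code (a deliberately crude sketch of my own; the names and equations are illustrative, not a real laser model): the numbers below grow the way a photon population would, yet the program never emits a single photon.

```python
def simulate_lasing(pump_rate, steps, gain=0.05, loss=0.02, dt=1.0):
    """Crude rate-equation-style sketch: pump atoms, let the photon count build up."""
    excited_atoms = 0.0
    photons = 1.0                # stand-in seed photon
    history = []
    for _ in range(steps):
        stimulated = gain * excited_atoms * photons * dt
        excited_atoms += (pump_rate - stimulated) * dt
        photons += (stimulated - loss * photons) * dt
        history.append(photons)
    return history               # numbers describing light; no light

print(simulate_lasing(pump_rate=0.5, steps=10))
```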

If consciousness results from the behavior of a physical system, then one has to question whether a completely different process will produce it.

§ § §

I think the strongest argument computationalism has — and I admit it’s a strong point — simply suggests that a good enough simulation of the system must produce whatever the system produces.

The counter-argument is what I’ve laid out above, that the process may matter more than the outputs.

The question, then, is what does an accurate simulation do (if not produce consciousness)? I’m already on record thinking it may just produce a biologically functioning — but comatose — brain.

Alternatively, it might produce: a gibberish personality; a raving lunatic; a zombie; or any of a wide variety of failure modes.

(One thing about humans writing software? We’re not that good at it. That alone should raise some concerns with regard to AI.)

§ § §

Some general objections I have to computationalism:

¶ Where’s the algorithm? If the brain is computational, there is an algorithm. Where is it? If the mind really is a computation, then that algorithm exists somehow.

I don’t mean where is it physically located. I mean, how do we find it in the natural analog behavior of a physical system? It’s hard to even tell what brain states might be.

When it comes to simulating the brain algorithm, we don’t have a clue where to even start. We don’t know such a thing even exists.

Undecidability is a computational problem (defined in terms of Gödel and Turing). It means certain inputs can cause a system to loop forever, and there is no general way to detect that in advance.

The upshot is that computations require escape conditions and error handling. It’s another aspect of how different a computational process is from a natural one — there’s no such thing as an infinite loop in physical processes.

(In fact, it’s the physical limits that ultimately end an infinite computation with no escape clause. As far as the computation is concerned, an infinite loop is no different from any other execution thread. It’s all just steps.)
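
A toy example of what I mean by escape conditions (illustrative Python of mine, nothing more): the loop below has no way of knowing, from the inside, that it is stuck; the bail-out has to be bolted on from the outside.

```python
def find_fixed_point(f, start, tolerance=1e-9, max_steps=10_000):
    """Iterate x = f(x) until it settles, or give up after max_steps."""
    x = start
    for _ in range(max_steps):              # the escape condition
        next_x = f(x)
        if abs(next_x - x) < tolerance:     # success: converged
            return next_x
        x = next_x
    raise RuntimeError("no fixed point found; giving up")  # error handling

print(find_fixed_point(lambda x: (x + 2.0 / x) / 2.0, start=1.0))  # ~sqrt(2)
try:
    find_fixed_point(lambda x: x + 1.0, start=0.0)   # would otherwise run forever
except RuntimeError as err:
    print(err)
```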

¶ Computers are not analog. They compute.

Analog processes and computation, no matter what broad umbrella is used to lump them together, are visibly and strikingly different.

One need only compare the bit pits on a CD with the grooves of a vinyl record (or magnetized domains on tape) to see that striking difference.

Alternatively, compare doing long division (a calculation) with the single real numerical quantity represented by the fraction.
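
Here is that contrast as a toy sketch (my own Python, purely illustrative): long division grinds out one digit at a time, a process of discrete steps, whereas the fraction simply is the quantity, all at once.

```python
from fractions import Fraction

def long_division_digits(numerator, denominator, count):
    """Produce decimal digits of numerator/denominator one step at a time."""
    digits = []
    remainder = numerator % denominator
    for _ in range(count):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

print(long_division_digits(1, 7, 12))  # [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
print(Fraction(1, 7))                  # the quantity itself: 1/7
```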

The difference is truly a Yin and Yang contrast. They are exclusive of each other.

¶ Computation is not that relative. Data is, because it’s abstract.

And while the exact meaning of a given computation is relative to how its data is interpreted, that’s not the same as the computation itself being relative.

We recognized what the Antikythera mechanism was despite it being corroded and missing pieces. We recognized the computational aspects of Stonehenge and other ancient computational efforts.

Computation is recognizable. And it’s not what’s happening in our brains.

¶ So the brain is not a computer.

It’s obviously not a “general purpose computer” — as pretty much everyone agrees. The question is whether it’s any kind of “computer.”

We can (I believe) agree the brain is absolutely an analog signal processor just on account of its network flow. The behavior of synapses and neurons arguably justifies calling it an analog computer.

[I think analog “computer” confuses the issue, so I don’t favor it.]

§ § §

My bottom line is that I’m very skeptical of computationalism.

Consciousness, as I read it, arises from a physical system behaving in a certain way. In a sense, its outputs are byproducts.

I suspect the right outputs cannot be generated without consciousness. There is a suggestion that generating the right outputs implies consciousness, with the further implication that generating those states or outputs is easier than generating consciousness itself.

My guess is it doesn’t work that way.

Stay non-computational, my friends!


[0] No footnotes today!


17 responses to “Does Not Compute”

  • SelfAwarePatterns

    On the output vs process discussion, a lot depends on what you define as “output”. For example, to me, behavior is just a type of output. And the time frame in which someone reaches Los Angeles is another variation in output.

    As I noted before, the distinctions between process and output can be problematic. Do we care about the output of every neuron? If so obviously the replicated version needs to be at least a neural network. Or is the output of every protein crucial? If so, reproducing the overall system starts to look impractical.

    But ultimately, whether a simulation is reproducing the original successfully is going to be a matter of judgment. If we can’t tell the difference, or the system itself can’t tell, then it will be a successful reproduction. What’s likely to be difficult is if we can detect small variances, and whether those variances will be enough to make us reject uploaded-grandma as grandma.

    Our response to the variances matters, because even a “perfect” reproduction is going to have variances. I have variances with myself from yesterday, much less last year. And adult-me has wide variances from child-me. Different stages of life, and being uploaded would be a major life transition.

    On the laser analogy, again I think it depends on exactly what you’re saying here. As we discussed before, the purpose of the laser matters. A laser used in a CD-ROM drive is an information processing component, and we’ll reproduce the relevant output if we can reproduce the information flow.

    But if you’re saying that it’s the physical output of the laser itself that is relevant, that the output separate and apart from the information content is what matters, then (and I know you disagree with this), I think you’re positing a type of soul generator.

    I’ll totally agree that no simulation will reproduce a soul generator. But I don’t think soul generators exist.

    None of this is to say that the physical implementation of an information processing system doesn’t matter. It always does. If I attempt to reproduce the function of my phone with mechanical switches, while possible in principle, the resulting device will be slow, loud, and huge, not at all reproducing the function of my phone.

    Attempting to reproduce the massive parallelism of a brain, operating within the same time frames, with a serial processor is probably not possible within the laws of physics. But it probably is possible with an alternate massively parallel processing strategy. Power requirements may further constrain the available physical strategies.

    It’s an interesting question whether the performance and efficiency constraints inevitably bring us back to wetware, or whether some metal based solution is feasible. Which is why I find the neuromorphic developments so interesting.

    • Wyrd Smythe

      “For example, to me, behavior is just a type of output.”

      In the sense of “how a person behaves” I agree. The sense here is “how a system behaves (due to physical forces)”. We’re talking about physics. The flight path of a thrown ball is not an output, but the actions of the person who threw it are. I’m talking about flight paths here.

      “As I noted before, the distinctions between process and output can be problematic. Do we care about the output of every neuron?”

      You lost me on how that’s a distinction between process and output, but in the past two weeks I’ve written quite a bit about how what level a simulation attempts is crucial because it needs to capture the properties of interest.

      So I agree the level of detail is important, but I’m not seeing the distinction between process and outputs?

      “If we can’t tell the difference, or the system itself can’t tell, then it will be a successful reproduction.”

      I’ve been thinking about that, and it seems improbable the problem will ever come up. I think that, by the time we can actually create such a system, we’d have such a good handle on the problem of consciousness that there wouldn’t be much question.

      You’re posing a “what if” scenario about the future, and my “room” can only reply, “I have no clue.” 🙂

      “I have variances with myself from yesterday, much less last year.”

      Here we seem to be talking about whether a simulation of grandma meets our expectations, which is kind of another topic. I know I’ve mentioned I consider brain uploading the least likely among computationalism scenarios. I’m even willing to go out on a limb on that one and say it will never happen.

      OTOH, this whole discussion is about variances between numerical simulations and reality. The laser light analogy is meant to target that…

      “On the laser analogy, again I think it depends on exactly what you’re saying here.”

      I have explained the model to you many times. (And I plan tomorrow’s post to center on it, so more details there.)

      “As we discussed before, the purpose of the laser matters.”

      No. It doesn’t. (It’s my analogy, and I should know.) The purpose has nothing to do with it. It is 100% about the generation of laser light. Period.

      You are turning this into a discussion about communication lasers versus power lasers. I debunked the difference when we talked about it. That thread left off with my acknowledging, “Ah, okay, I’m with ya now. I was conflating it with my laser light analogy. No connection.”

      So I think we did establish there is no connection between use scenarios and my analogy.

      To be clear, the laser is not a component in some scenario we’re simulating. This is entirely and only about simulating the actual lasing. Which is essentially the same regardless of how the laser is used.

      The scenario doesn’t care how the photons are used, only that they are generated.

      “But if you’re saying that it’s the physical output of the laser itself that is relevant,”

      Nope. Merely that it does occur. As I’ve said repeatedly, the outputs can almost be considered a byproduct.

      “I’ll totally agree that no simulation will reproduce a soul generator.”

      More to the point, no simulation will ever reproduce photons. It’s that simple.

      “None of this is to say that the physical implementation of an information processing system doesn’t matter. It always does.”

      For some value of “matter,” of course. Above, regarding the cross country trip, you seemed to think variances of process were less important. Compare and contrast.

      “Attempting to reproduce the massive parallelism of a brain, operating within the same time frames, with a serial processor is probably not possible within the laws of physics.”

      Yet a key tenet of computationalism is that time frame doesn’t matter. You once agreed that, per computationalism, a billion monks with abacuses and an equal number of acolytes acting as message runners could calculate a conscious being. Is that something you still agree with?

      That said, I do think there is a lot to explore with regard to physical limits and ways and means of trying to replicate the human mind. The System Levels post was an attempt to start a conversation along those lines.

      • SelfAwarePatterns

        “Is that something you still agree with?”

        Sounds like we had an interesting discussion. Wish I remembered it. In principle I do still agree. But obviously timing matters for an adaptive or useful system. Such an arrangement can also run Tetris, but not in a way that any game player is likely to find useful.

      • Wyrd Smythe

        It was pretty early on. I remember it because it established that you were (and are) all in on computationalism, so that memory is part of my Mike model.

        And, yes, as with Turing Machines, we’re speaking in principle. (I’m sure the monks wouldn’t stand for it.)

  • JamesOfSeattle

    [The following is a response in the thread on the Great File Room. When trying to post it I simply get the response that it could not be posted. I’m wondering/hoping that we simply hit a limit there and we can continue here. Let’s see:]

    I said: A semantic process is one where the input was created for the “purpose” of representing a pattern, the mechanism was created for the “purpose” of recognizing the input and generating an output which is “valuable” with respect to the meaning (the pattern represented) of the input.

    You said: I see no semantics in that without reference to what designed the IPO module.

    I guess the reference to the designer was too indirect? I specifically said the input must be generated for a purpose, and the mechanism must also be generated for a purpose. While the generator of the input need not be the generator of the mechanism, there must be some form of coordination between the two.

    As for the bacteria, I said that the input must be created for the purpose of representing a pattern. I could have been more explicit, as I have been in other places, and said the input was generated for the purpose of being a symbolic sign in the semiotics sense. The exact form of the material of the symbol is not important as long as the mechanism is coordinated to interpret that symbol correctly. This works for a neuron which responds to a neurotransmitter. There are lots of possibilities for the neurotransmitter that would work just as well. The same cannot be said for the bacteria responding to a chemical associated with a food source, unless the chemical in question is completely arbitrary and is designed to attract bacteria.

    Where did the “purpose” and “semantics” in the GFR come from? How did they get there?

    Semantics is not a substance or a pattern that exists in a place, so there is no semantics in the GFR. Semantics is about processes, and so there is semantics in the interactions between the room and the interlocutor. The interlocutor generates the input symbols. One or more designers of the room generated the mechanism. The input and the mechanism are coordinated in that they both rely on the cultural development of the Chinese language.

    *

    • Wyrd Smythe

      “When trying to post it I simply get the response that it could not be posted.”

      Weird! I can’t account for it. There’s no “max comments” setting. Possibly related to whatever online issues you were having previously?

      “I guess the reference to the designer was too indirect?”

      Or too confusing, because if we agree it’s put there by the designer, then we’d agree. That’s what I said many comments ago that I thought you disagreed with.

      “This works for a neuron which responds to a neurotransmitter.”

      Sure, but why not for a bacteria following a food signal? In both cases I see a mechanism responding usefully to input. Your explanation is:

      “…unless the chemical in question is completely arbitrary and is designed to attract bacteria”

      Well, “designed” in the same sense the neurotransmitter was “designed” — through the agency of evolution. The bacteria evolved to respond to food signals, the neuron evolved to respond to a different chemical signal.

      I see those as more similar than different.

      “Semantics is about processes, and so there is semantics in the interactions between the room and the interlocutor.”

      Disagree to the first clause, but do agree there is semantics in the interaction. That, in my view, is because both parties contain the semantic meaning necessary for communication at all. The questioner due to their experience as a human, the GFR due to another human coding their experience in the room.

      Under your definition, if no one interacts with the room, it has no semantics. But if someone does, then the room suddenly has semantics? I can’t agree with that at all. To me the semantics were always there in the room.

      I take the (mainstream) view that semantics and syntax are general concepts referring, respectively, to the meaning and structure of information, nothing more.

      Structure can exist independently. Meaning cannot (and I think we agree on that much). Meaning is always in reference to something that defines that meaning.

      Process, to me, is an entirely separate concept.

      Let me ask this: Is there such a thing, in your mind, as a process with no semantics? Or do you see them as synonyms for the same thing?

      • JamesOfSeattle

        Sure, but why not for a bacteria following a food signal? In both cases I see a mechanism responding usefully to input.
        […]
        The bacteria evolved to respond to food signals, the neuron evolved to respond to a different chemical signal.
        I see those as more similar than different.

        The difference is in the type of signal. In semiotic terms, the bacteria is responding to an indexical sign vehicle. The neuron is responding to a symbolic sign vehicle. An indexical sign vehicle is not designed. A symbolic sign vehicle is designed. An indexical sign vehicle serves no function. A symbolic sign vehicle serves a function. A billowing column of smoke is an indexical sign vehicle for fire. The word “fire”, spoken or written, is a symbolic sign vehicle.

        Is there such a thing, in your mind, as a process with no semantics? Or do you see them as synonyms for the same thing?

        I hope what I wrote above explains my answer. The rock rolling down the hill does not have semantics. If we use your definition of semantics as “meaning”, then I can see how you might view the bacterial response to a “food signal” as having meaning, and thus, semantics, in which case I would have to say that the kind of “meaning” I’m interested in for the purpose of this discussion is “intended meaning”, and when I say “intended”, I include teleonomically intended.

        *

      • Wyrd Smythe

        “In semiotic terms, the bacteria is responding to an indexical sign vehicle.”

        How does a chemical gradient index food when it essentially is food? It’s far more symbolic, if not iconic! (And, in any event, the modes are not exclusive.)

        “An indexical sign vehicle is not designed. A symbolic sign vehicle is designed.”

        I’ll need a source on that. As far as I know, symbolic, iconic, and indexical are basic modes with no reference to how they are created. All three can be designed, as far as I know.

        “An indexical sign vehicle serves no function. A symbolic sign vehicle serves a function.”

        They’re both signs that have the function of linking to concepts. What function, or lack, do you mean?

        “A billowing column of smoke is an indexical sign vehicle for fire. The word “fire”, spoken or written, is a symbolic sign vehicle.”

        Disagree. The column of smoke is iconic. (The word “fire” is symbolic, obviously.)

        “I would have to say that the kind of ‘meaning’ I’m interested in for the purpose of this discussion is ‘intended meaning’, and when I say ‘intended’, I include teleonomically intended.”

        That’s fine. What teleonomic difference do you see between bacteria and neurons? Both are evolved products of their respective environments. In particular, in neither case is there design intent.

        The GFR does have design intent.

        Do I gather you lump together teleonomy and teleology?

      • JamesOfSeattle

        Googling “indexical sign”:

        Symbolic (arbitrary) signs: signs where the relation between signifier and signified is purely conventional and culturally specific [James says “coordinated”], e.g., most words.

        Iconic signs: signs where the signifier resembles the signified, e.g., a picture.

        Indexical Signs: signs where the signifier is caused by the signified, e.g., smoke signifies fire.

        Granted that each type of sign can be designed, but only symbolic signs must be designed.

        What teleonomic difference do you see between bacteria and neurons?

        The important difference is not between the bacteria and the neuron. The difference is between the inputs, i.e., the sign vehicles. The neurotransmitter, the input to the neuron, was teleonomically designed to function as a symbolic sign. The input to the bacterium, the chemical gradient, was not teleonomically designed.

        In particular, in neither case is there design intent.

        So here’s the issue. There is design intent, it’s just not human-style conceptual design intent. If you’re going to make me come up with another word for it, fine. But it works pretty much the same as human-style conceptual design intent. “Teleonomic design” and “teleonomic intent” in contrast with teleologic design and intent works for me.

        *

      • Wyrd Smythe

        “Indexical Signs: signs where the signifier is caused by the signified, e.g., smoke signifies fire.”

        Smoke in general, okay, however you wrote, “A billowing column of smoke,” which I find at least a little iconic.

        “The input to the bacterium, the chemical gradient, was not teleonomically designed.”

        In my view, both are “designed” by evolution (although that’s not the word I’d use). Both serve their respective purposes. Both process a chemical signal. I don’t see a distinction.

        Do you, perhaps, see them as different on the grounds that the neurons are part of a contained system whereas the bacteria is responding to external signals? Is that why you see the inputs of neurons as symbols but the inputs to the bacteria as not?

        In your own language, aren’t both systems that respond to understood inputs with useful responses?

        “But it works pretty much the same as human-style conceptual design intent.”

        In my view, it absolutely does not. Design intent is entirely different from what results in an evolutionary process.

        I mentioned the watershed of a river. Do you consider that “designed”?

        “‘Teleonomic design’ and ‘teleonomic intent’ in contrast with teleologic design and intent works for me.”

        Okay.

        Do we agree the GFR is teleologic?

        Do we agree the semantics it has come from that teleology?

      • JamesOfSeattle

        In my view, both are “designed” by evolution (although that’s not the word I’d use). Both serve their respective purposes.

        In what sense is the chemical gradient designed by evolution? What is the purpose of the chemical gradient?

        Design intent is entirely different from what results in an evolutionary process.

        What’s the difference that makes a difference?

        [Agreed that GFR semantics are teleologic, and the point is … so what?]

        [Not sure about the watershed]

        *

      • Wyrd Smythe

        “In what sense is the chemical gradient designed by evolution? What is the purpose of the chemical gradient?”

        Presumably what the bacteria eats also evolved. The purpose to the bacteria is that it signals: “Food this way!”

        “What’s the difference that makes a difference?”

        One is designed by an intelligence with a specific end goal, the other is, essentially, a random outcome of tiny successes due to mutations with myriad failed mutations along the way.

        Note, for example, how with billions of years at her disposal, evolution never came up with internal combustion engines, prop planes, or computers. We did in a very short time.

        That’s the significant difference.

        “Agreed that GFR semantics are teleologic, and the point is … so what?”

        I was under the impression you disagreed and that’s what this whole discussion was about. Although recently it came up that you require process as a part of the room’s semantics, whereas I don’t. I think a book has semantics, for instance.

        “Not sure about the watershed”

        To me a watershed and what evolution produces occur through very similar processes.

        What makes teleonomy a bit interesting to me is to contemplate how inevitable the end result might be. To what extent is the watershed already there in the land before it begins shaping?

        Yet there are many random factors influencing that as well — weather being the biggest. A glacier moving through can change everything.

  • JamesOfSeattle

    Presumably what the bacteria eats also evolved. The purpose to the bacteria is that it signals: “Food this way!”

    So this is not what I’m saying. To be a symbol, the input has to be designed for the purpose of being a symbol. The chemical gradient was not designed for the purpose of being a symbol. The neurotransmitter was designed for the purpose of being a symbol. There is no other reason for the neurotransmitter to exist. The symbol has to be designed with the expectation that it will be interpreted correctly. That takes some form of coordination (or convention) with the mechanism that is expected to do the interpreting. These criteria do not hold for the chemical gradient.

    One is designed by an intelligence with a specific end goal, the other is, essentially, a random outcome of tiny successes due to mutations with myriad failed mutations along the way.

    Not a difference that makes a difference. The amount of time it takes to generate the mechanism is irrelevant. The specific method used to generate the mechanism is irrelevant. Teleological or teleonomic is irrelevant. What’s relevant is that the input is a symbol (generated for the purpose of being a symbol) and the mechanism is created for the purpose of interpreting that symbol.

    *

    • Wyrd Smythe

      “These criteria do not hold for the chemical gradient.”

      Okay, fair enough. So?

      “What’s relevant is that the input is a symbol (generated for the purpose of being a symbol) and the mechanism is created for the purpose of interpreting that symbol.”

      Okay. How does this apply to the GFR?

      • JamesOfSeattle

        Contra Searle, the GFR has everything a human has with respect to semantics, and consciousness. Everything relevant, anyways.

        *

      • Wyrd Smythe

        On your account of things, that would be true. (Which I think speaks to the coherence of the GFR as much as anything. If the room truly could answer any reasonable question, certainly under the “as if” view it would be indistinguishable from a human. It would essentially be a clone of the designer.)

        There is the question of phenomenal experience, which I don’t think the room would have. (Possibly merely on the grounds of timing — subjective experience might require some aspect of real-time.) But it still gets into p-zombie territory and “what’s the difference” questions.

        Ultimately I see these imaginary scenarios as a kind of science fiction. 😀

  • Wyrd Smythe

    Just happened to re-read this post… It’s actually a very good summary of my position. Years of debating this have certainly honed the view!
