Interpret This!

My illusion of free will decided the month of May must be made for Mind (and maybe a dash of Mandelbrot). Lately, online discussions about consciousness have me pondering it again. I never posted on topics such as Chinese Rooms or Philosophical Zombies, largely because sensible arguments exist both ways, and I never decided exactly where I fell in the argument space.

It’s not that I’ve decided[1] on the topics so much as I’ve decided to write about them (and other topics). I’ve found that writing about a topic does a lot to clarify my mind about it. (Trying to teach a topic does that even more.)

I’ll start today with some personal observations and points of view.

First, a disclaimer: This is just my take on things. I’m not claiming reality is the way I see it, but that this is how I see reality, if you catch my meaning. Reality is gonna be however it is!

This is, at best, half-assed philosophy and not science (except for some bits).

The main point of contention with online friends of mine will regard computationalism — the idea that a software program can be conscious. (I’m dubious.) That debate begins with ideas of what “conscious” even means, let alone considering whether software can emulate it.

There are more unknowns (and unknown unknowns) than knowns, so the matter is far from settled. Questions about effective and literal possibility remain. There is much to unpack.

Much of this turns on one’s axioms — one’s fundamental, typically unshakable, beliefs about the nature of reality and how things work.

Some of the things I’ll write about have scientific antecedents, and I’ll try to be clear about pointing that out. Everything else, as always, is just thinking.

And a kind of writing-out-loud type of thinking, on top of that!

§

One thing I’ve noticed in philosophy about the mind is what can amount almost to cargo cult thinking — seeing “A” as “B” because “A” has similarities to “B.”

In particular this happens when seeing the brain as a computer.

From my perspective, the brain looks nothing like a computer and works nothing like a computer.

Crucially, in the above sentence, the word “computer” means a computer as computer science currently defines one.

To me, if the comparison requires redefining a well-defined term to work, that means it works because the term is redefined. The comparison is circular, a self-defined tautology.

I’ll take up the brain=computer idea (which is essentially computationalism) throughout these posts. For now I want to focus on the idea of similarities.

§

Let me start with the basic terms: similar, different, identical. (One of them is a ringer.)

Consider those three terms in the context of matching two sets of things, call them set A and set B. Think of them as big bags of random objects.

We’ll go through them and match objects one-by-one to decide if the sets are similar, different, or identical. Given an object from one set, we seek its match in the other set.

Scenario #1: The first 1,000 items from A match items in B. So far the matching is perfect: A appears not just similar but identical to B.

But item 1001 does not match. The sets are still similar, but no longer identical. Note that, now, the sets are also different.

If all further matches fail, the sets are increasingly different. Ultimately, the total match ratio determines how similar or different the sets are.

Scenario #2: The first item doesn’t match. Immediately, the sets are different (no option on identical). No further match can change that. As in the first scenario, now it’s only a matter of the ratio of different to similar.
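The two scenarios above can be sketched as code. This is my own toy (the function name and examples are purely illustrative): treat the bags as multisets, count matches, and classify the result by the match ratio.

```python
from collections import Counter

def compare(a, b):
    """Match items one-by-one (as multisets) and report the match ratio."""
    matches = sum((Counter(a) & Counter(b)).values())  # items common to both bags
    total = max(len(a), len(b))
    ratio = matches / total if total else 1.0
    if ratio == 1.0 and len(a) == len(b):
        return "identical", ratio       # zero entropy: every item matches
    if ratio == 0.0:
        return "disjoint", ratio        # maximum entropy: nothing in common
    return "similar/different", ratio   # everything in between is a ratio

print(compare([1, 2, 3], [1, 2, 3]))   # ('identical', 1.0)
print(compare([1, 2, 3], [1, 2, 9]))   # one mismatch ruins identical
print(compare([1, 2], [8, 9]))         # ('disjoint', 0.0)
```

Note how a single mismatch permanently knocks a pair of sets out of the "identical" bucket, while the ratio only degrades gradually — the asymmetry the scenarios describe.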

§

The ringer, clearly, is identical. That’s a perfect condition; any mismatch ruins identical. This is entropy in action.

Similar is a low-entropy condition, with the highest possible value being identical, which is essentially zero entropy.

Different introduces entropy. One difference destroys zero entropy (identical). With the maximum amount of entropy, the two sets have nothing in common.

(We don’t really have a common word, opposing identical, that means two things with nothing in common — other than the phrase “nothing in common.” The technical term disjoint does, though.)

The main point is that objecting to differences is a more powerful argument than comparing similarities (because entropy). Differences are harder to ignore than similarities.[2]

My bottom line is the need to be wary of conflating things that seem similar when the differences argue more powerfully for distinction.

§ § §

The other thing I want to discuss is the idea of interpreting some A as some X.

One of the classics is Putnam’s interpretation of a rock as a computer. Others include Searle’s interpretation of a wall as a computer (running Wordstar) and Bishop’s interpretation of pixies (consciousnesses) everywhere.

Recently I ran into one where the truth table for a logical AND is interpreted as the table for a logical OR. (That one turns out to be a tautology. I’ll show you why in a future post.)
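Here is a minimal sketch (my own toy, not from the argument in question) of what such a relabeling looks like: the interpretation map is just a global swap of 0 and 1 applied to inputs and output, which is De Morgan’s law in disguise.

```python
# Truth tables as (inputs) -> output
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

flip = lambda bit: 1 - bit  # the "interpretation map": relabel 0 <-> 1

# Reinterpret the AND table: flip every input bit and the output bit
reinterpreted = {(flip(a), flip(b)): flip(out) for (a, b), out in AND.items()}

print(reinterpreted == OR)  # True
```

The map itself is trivially simple here — which is exactly what makes this interpretation worth taking seriously, in contrast to the Wall and the Rock below.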

The main thing about interpretations is that they tend to be fictions.

§

I don’t mean that in a bad way, just that they are made up.

The exception is the single actually correct interpretation — that one is true. And it isn’t an interpretation so much as a true accounting.

In quantum physics, for instance, there are many competing interpretations of experimental and theoretical results. But (assuming realism) there is only one correct account of reality.

On one level, science is about finding the true account, but it can move forward with just interpretation and instrumentalism. Philosophy also concerns itself with true accounts.

§

What we make up with an interpretation is a map, M_i, that translates properties of A into properties of X.

The quality of an interpretation is inversely proportional to its complexity and entropy. The more complex or entropic a map, the lower its value.

Put simply, A must do the heavy lifting of being X.

If A is an X, then A needs to look mostly like an X. The further A is from looking like an X — the less it participates in looking like an X — the more the map does the heavy lifting.
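The heavy-lifting point can be made concrete with a Kolmogorov-complexity-flavored toy (names and numbers here are my own, purely illustrative): a natural interpretation map can be stated as one short rule, while a contrived map must enumerate every pairing, so its description grows with the system.

```python
import json
import random

# Hypothetical toy: 8 physical states of system A, and 8 steps of
# some computation X we want to interpret A as performing.
states_A = list(range(8))
states_X = [f"step{i}" for i in range(8)]

# A "natural" map: one constant-size rule, independent of state count.
natural_rule = "state i -> step i"

# A contrived map: an arbitrary pairing that must be spelled out
# entry by entry, so its description grows with the state count.
random.seed(0)
shuffled = states_X[:]
random.shuffle(shuffled)
contrived = dict(zip(states_A, shuffled))

print(len(natural_rule))           # small and fixed
print(len(json.dumps(contrived)))  # grows with every state added
```

Both maps "work" in the sense of pairing every state of A with a step of X; the difference is where the information lives — in A itself, or in the map.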

By entropic I mean: Given map M_i, how indistinguishable is it from M_j, some other map providing a similar interpretation?

For instance, in Searle’s Wall, M_Wordstar is going to look a lot like M_email, M_Warcraft, and M_Excel. All those maps will be roughly similar.[3]

Conversely, consider an actual computer running Wordstar. It also has a map M_Wordstar — one that is obvious and easy to pick out.

Of course, that computer also has, for instance, a map M_Excel based on interpreting the particles of the computer in exactly the right way. As with the Wall, the computer can be interpreted to be running every other computation we can imagine.

But the true-account map, M_Wordstar, is a much different, vastly simpler interpretation that, as it turns out, mostly just links the abstraction with its reification, the computer. It’s essentially a very simple one-to-one map.

The M_Wordstar interpretation here has low entropy compared to the infinite number of mostly similar high-entropy interpretations for all possible computations.

§

The bottom line is that interpretations are not all created equal.

There are objective ways — complexity and entropy — to judge their value.

In particular, Searle’s Wall, and Putnam’s Rock, involve high-entropy, very complex interpretations that do all the heavy lifting.

That means any arguments based on these thought experiments require careful examination of the interpretations involved.

In general, this all involves computationalism, which I expect to be a central theme over this series of posts. So I’ll be back to the Wall and the Rock (and the Pixies).

Maybe I’ll try to take all your base with Smythe’s Quantum Computer Rock Wall of Dancing Chinese Zombie Pixies (with Clocks).

Stay conscious, my friends!


[1] Or rather that my illusion of free will decided…

[See: Free Will: Compatibilism]

[2] When dealing with logic, mathematics, and science in general. When dealing with other human beings it’s wiser to focus on similarities. (Because compassion, empathy, love.)

[3] That is, there will be a large, powerful software engine, common to each, that does the job of interpreting states of the Wall and linking them to Wordstar (or whatever program). The only thing that changes is the program being linked to.

About Wyrd Smythe

The canonical fool on the hill watching the sunset and the rotation of the planet and thinking what he imagines are large thoughts.

9 responses to “Interpret This!”

  • Athena Minerva

    Have you read Oliver Sacks’s contribution Rivers of Consciousness?

  • James Cross

    One could argue that no two things could ever be identical in the strictest sense of the word. Even two rubber duckies coming off a manufacturing line would differ by molecules and atoms here or there.

    In an even more strict sense, for two items to be identical, they would need to occupy the same space-time location – something logically impossible. Julian Barbour argues that motion is an illusion based on this. His cat that jumps into the air isn’t the same cat that lands back on the ground. It is different because it occupies a different location in space-time. Its molecules also have undergone subtle quantum changes.

    This becomes more relevant to the computer question when the idea of replicating a human brain-mind in a computer (or even another brain) comes up. The best that could ever be replicated would be a snapshot of each particle/quantum state disconnected from its history. Even then, capturing the state of each particle at exactly the same moment would be impossible.

    • Wyrd Smythe

      “One could argue that no two things could ever be identical in the strictest sense of the word.”

      One could, and in the strictest sense, one would be right!

      This is essentially the Ship of Theseus question. What constitutes identity? At some level of reduction, identity apparently vanishes.

      One approach is to talk about identity-significant properties. These allow us to make meaningful comparisons.

      The quantum particles comprising the cat have properties, of course, but not properties that are, in themselves, significant to the cat’s identity. Any other quantum particles can, and do, replace them without significantly altering the cat’s identity.

      The Ship of Theseus has significant ship properties, but the properties of the wood, iron, and hemp components aren’t on the same level. A “good enough” replacement is a flawless replacement from the perspective of ship-identity properties.

      (“Good enough” in the sense of, “Gee, this replacement bearing doesn’t have any of the same atoms and the dimensions are way off (at the micron scale), plus the metal alloy isn’t quite the same… but it’s the right general shape and strong enough… so, it’s good enough.”)

      Likewise, our location in time and space is not relevant to our identity.

      “This becomes more relevant to the computer question when the idea of replicating a human brain-mind in a computer (or even another brain) comes up.”

      Yes, totally!

      I think the question then is: What properties are significant to consciousness such that they must be correctly present? What constitutes a “good enough” replacement?

      The question even arose with Star Trek transporters. 🙂 By the time TNG came around, the writers decided they needed “Heisenberg Compensators” in the transporter circuits. 😀

      And as far as quantum level, the no-cloning theorem probably makes quantum-level duplication impossible, even in principle.
