My illusion of free will decided the month of May must be made for Mind (and maybe a dash of Mandelbrot). Lately, online discussions about consciousness have me pondering it again. I never posted on topics such as Chinese Rooms or Philosophical Zombies, largely because sensible arguments exist both ways, and I never decided exactly where I fell in the argument space.
It’s not that I’ve decided on the topics so much as I’ve decided to write about them (and other topics). I’ve found that writing about a topic does a lot to clarify my mind about it. (Trying to teach a topic does that even more.)
I’ll start today with some personal observations and points of view.
First, a disclaimer: This is just my take on things. I’m not claiming reality is the way I see it, but that this is how I see reality, if you catch my meaning. Reality is gonna be however it is!
This is, at best, half-assed philosophy and not science (except for some bits).
The main point of contention with online friends of mine will regard computationalism — the idea that a software program can be conscious. (I’m dubious.) That debate begins with ideas of what “conscious” even means, let alone considering whether software can emulate it.
There are more unknowns (and unknown unknowns) than knowns, so the matter is far from settled. Questions about effective and literal possibility remain. There is much to unpack.
Much of this turns on one’s axioms — one’s fundamental, typically unshakable, beliefs about the nature of reality and how things work.
Some of the things I’ll write about have scientific antecedents, and I’ll try to be clear about pointing that out. Everything else, as always, is just thinking.
And a kind of writing-out-loud type of thinking, on top of that!
One thing I’ve noticed in philosophy about the mind is what can amount almost to cargo cult thinking — seeing “A” as “B” because “A” has similarities to “B.”
In particular this happens when seeing the brain as a computer.
From my perspective, the brain looks nothing like a computer and works nothing like a computer.
Crucially, in the above sentence, the word “computer” means a computer as computer science currently defines one.
To me, if the comparison requires redefining a well-defined term to work, then the comparison works only because the term was redefined. The comparison is circular, a self-defined tautology.
I’ll take up the brain=computer idea (which is essentially computationalism) throughout these posts. For now I want to focus on the idea of similarities.
Let me start with the basic terms: similar, different, identical. (One of them is a ringer.)
Consider those three terms in the context of matching two sets of things, call them set A and set B. Think of them as big bags of random objects.
We’ll go through them and match objects one-by-one to decide if the sets are similar, different, or identical. Given an object from one set, we seek its match in the other set.
Scenario #1: The first 1000 items from A match items in B. So far the matching is perfect, A (so far) appears not just similar, but identical, to B.
But item 1001 does not match. The sets are still similar, but no longer identical. Note that, now, the sets are also different.
If all further matches fail, the sets are increasingly different. Ultimately, the total match ratio determines how similar or different the sets are.
Scenario #2: The first item doesn’t match. Immediately, the sets are different (no option on identical). No further match can change that. As in the first scenario, now it’s only a matter of the ratio of different to similar.
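The two scenarios can be sketched as a toy matcher. (This is my own illustration, not anything from the literature; the names and thresholds are made up.) It treats the bags as multisets, counts how many items find a partner in the other bag, and renders a verdict:

```python
from collections import Counter

def compare(bag_a, bag_b):
    """Classify two bags of items as identical, similar, or different,
    reporting the match ratio (matched items / larger bag size)."""
    a, b = Counter(bag_a), Counter(bag_b)
    matched = sum((a & b).values())   # items with a partner in the other bag
    total = max(sum(a.values()), sum(b.values()))
    ratio = matched / total if total else 1.0
    if a == b:
        verdict = "identical"        # zero entropy: every match perfect
    elif matched > 0:
        verdict = "similar (and different)"
    else:
        verdict = "different (disjoint)"
    return verdict, ratio

# Scenario #1: a single mismatch at item 1001 ruins "identical"
print(compare(range(1000), list(range(999)) + [-1]))  # similar, ratio 0.999
# Scenario #2: nothing matches at all
print(compare([1, 2, 3], [4, 5, 6]))                  # disjoint, ratio 0.0
```

Note how asymmetric the verdicts are: “identical” is an all-or-nothing condition, while “similar” and “different” are matters of ratio.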
The ringer, clearly, is identical. That’s a perfect condition; any mismatch ruins identical. This is entropy in action.
Similar is a low-entropy condition, with the highest possible value being identical, which is essentially zero entropy.
Different introduces entropy. One difference destroys zero entropy (identical). With the maximum amount of entropy, the two sets have nothing in common.
(We don’t really have a common word, opposing identical, that means two things with nothing in common — other than the phrase “nothing in common.” The technical term disjoint does, though.)
The main point is that objecting to differences is a more powerful argument than comparing similarities (because entropy). Differences are harder to ignore than similarities.
My bottom line is the need to be wary of conflating things that seem similar when the differences argue more powerfully for distinction.
§ § §
The other thing I want to discuss is the idea of interpreting some A as some X.
One of the classics is Putnam’s interpretation of a rock as a computer. Others include Searle’s interpretation of a wall as a computer (running Wordstar) and Bishop’s interpretation of pixies (consciousnesses) everywhere.
Recently I ran into one where the truth table for a logical AND is interpreted as the table for a logical OR. (That one turns out to be a tautology. I’ll show you why in a future post.)
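Without spoiling the future post, the mechanics of that truth-table interpretation can be sketched in a few lines. (This toy is my own; it just shows the standard relabeling trick, essentially De Morgan in disguise.) Swap the two truth values everywhere, and AND’s table becomes OR’s table:

```python
# Truth tables as dictionaries: (input, input) -> output
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

flip = {0: 1, 1: 0}  # the "interpretation map": read every bit upside down

# Re-read AND's table through the map, inputs and outputs alike
reinterpreted = {(flip[a], flip[b]): flip[out] for (a, b), out in AND.items()}
print(reinterpreted == OR)  # True — AND, read through the map, *is* OR
```

The map here does all the work; the table itself never changes.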
The main thing about interpretations is that they tend to be fictions.
I don’t mean that in a bad way, just that they are made up.
The exception is the single actually correct interpretation — that one is true. And it isn’t an interpretation so much as a true accounting.
In quantum physics, for instance, there are many competing interpretations of experimental and theoretical results. But (assuming realism) there is only one correct account of reality.
On one level, science is about finding the true account, but it can move forward with just interpretation and instrumentalism. Philosophy also concerns itself with true accounts.
What we make up with an interpretation is a map, M_i, that translates properties of A into properties of X.
The quality of an interpretation is inversely proportional to the complexity and entropy of its map. The more complex or entropic a map is, the lower its value.
Put simply, A must do the heavy lifting of being X.
If A is an X, then A needs to look mostly like an X. The further A is from looking like an X — the less it participates in looking like an X — the more the map does the heavy lifting.
By entropic I mean: given map M_i, how indistinguishable is M_i from M_j, some other map providing a similar interpretation?
For instance, in Searle’s Wall, M_Wordstar is going to look a lot like M_email and M_Warcraft and M_Excel. All those maps will be roughly similar.
Conversely, consider an actual computer running Wordstar. It also has a map M_Wordstar — one that is obvious and easy to pick out.
Of course, that computer also has, for instance, a map M_Excel based on interpreting the particles of the computer in exactly the right way. As with the Wall, the computer can be interpreted to be running every other computation we can imagine.
But the true account map, M_Wordstar, is a much different, vastly simpler interpretation that, as it turns out, mostly just links the abstraction with its reification, the computer. It’s essentially a very simple one-to-one map.
The M_Wordstar interpretation here has low entropy compared to the infinite number of mostly similar high-entropy interpretations for all possible computations.
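Here is a toy version of that asymmetry (my own construction; the “computation,” the wall, and the size measure are all stand-ins). A computer’s states track the abstract computation directly, so its map is one short rule; a wall’s arbitrary states carry no structure, so its map must tabulate every pairing:

```python
import json
import random

random.seed(0)

# The "computation" we want to find: states 0..7 of a 3-bit counter.
computation = list(range(8))

# A real computer running it: its physical states track the abstract
# ones, so the interpretation map is essentially the identity —
# one rule, constant size, no matter how long the computation runs.
computer_map = "state -> state"

# A "wall": eight arbitrary physical states in no meaningful order.
wall_states = random.sample(range(1000), 8)

# To see the wall "running" the counter we must tabulate every pairing;
# the map itself carries all the structure the wall lacks.
wall_map = {w: s for w, s in zip(wall_states, computation)}

print(len(computer_map))            # tiny and constant
print(len(json.dumps(wall_map)))    # grows with every state tabulated
```

The point is crude but objective: the wall’s map grows with the computation it is supposed to reveal, while the computer’s map does not. A (the wall) isn’t doing the heavy lifting; M is.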
The bottom line is that interpretations are not all created equal.
There are objective ways — complexity and entropy — to judge their value.
In particular, Searle’s Wall and Putnam’s Rock involve high-entropy, very complex interpretations that do all the heavy lifting.
That means any arguments based on these thought experiments require careful examination of the interpretations involved.
In general, this all involves computationalism, which I expect to be a central theme over this series of posts. So I’ll be back to the Wall and the Rock (and the Pixies).
Maybe I’ll try to take all your base with Smythe’s Quantum Computer Rock Wall of Dancing Chinese Zombie Pixies (with Clocks).
Stay conscious, my friends!
 Or rather that my illusion of free will decided…
[See: Free Will: Compatibilism]
 When dealing with logic, mathematics, and science in general. When dealing with other human beings it’s wiser to focus on similarities. (Because compassion, empathy, love.)
 That is, there will be a large, powerful software engine, common to each, that does the job of interpreting states of the Wall and linking them to the Wordstar or whatever program. The only thing that changes is the program being linked to.