Previously, I wrote that I’m skeptical of interpretation as an analytic tool. In physical reality, generally speaking, I think there is a single correct interpretation (more of a true account than an interpretation). Every other interpretation is a fiction, usually made obvious by complexity and entropy.
I recently encountered an argument for interpretation that involved the truth table for the boolean logical AND being seen — if one inverts the interpretation of all the values — as the truth table for the logical OR.
It turns out to be something of a tautology: a logical AND is the mirror of a logical OR.
The argument is due to John Mark Bishop from his 2009 paper, Why Computers Can’t Feel Pain. While I quite agree with Bishop’s conclusion, I disagree with his response to the two objections — primarily the latter, which is the main topic of this post.
I think I also disagree with his general thesis that computational states are observer-relative. I think they’re far more objective and obvious than that.
The objection is stated: “Computational states are not observer-relative but are intrinsic properties of any genuine computational system.”
I think the objection is correct.
Simply put: Genuine computational systems are obviously computational systems on account of their entropy, complexity, and intentionality.
Put another way: There is no obvious physical argument favoring the Pixies (where are they?). There is an obvious physical argument favoring the computation (just look at it).
Bishop uses, as an example of observer-relative computing, the truth table for an AND. He suggests that interpreting the signal values the other way around makes the truth table for an AND something else entirely.
But it really doesn’t.
Seeing them as different at all depends on seeing logical AND as something rather different from logical OR.
But that’s not the case.
They are intimately connected, somewhat like +4 is connected to -4.
That is, they’re both 4, one is a kind of “mirror image” of the other, and we can go back and forth freely, from one to the other, just by inverting the value.
The same thing is true of AND and OR.
To see how, we need these two important logical identities:
- NOT (a AND b) = (NOT a OR NOT b)
- NOT (a OR b) = (NOT a AND NOT b)
The first expression is known as a NAND (not-and); the second is known as a NOR (not-or). As you see, they are equivalent to their mirror partner with inverted inputs.
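Both identities are easy to confirm exhaustively; here’s a minimal sketch (my own illustration, not from Bishop’s paper) that checks every combination of boolean inputs:

```python
from itertools import product

# Check both De Morgan identities for every combination of inputs.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))  # NAND identity
    assert (not (a or b)) == ((not a) and (not b))  # NOR identity

print("Both identities hold for all inputs.")
```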
As a side note: The built-in NOT in the NAND and NOR gates, combined with the possibilities inherent in the identities above, makes those gates the most common and useful gates in logic circuits.
Here’s one way to illustrate their mirror identity:
- Given: (a AND b) = x
- Mirror: (NOT(a) AND NOT(b)) = NOT(x)
- Rule #2: NOT(a OR b) = NOT(x)
- NOTs cancel: (a OR b) = x
Or we can do it without involving x:
- Given: (a AND b)
- Mirror: NOT(NOT(a) AND NOT(b))
- Rule #2: NOT(NOT(a OR b))
- NOTs cancel: (a OR b)
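The same cancellation can be checked mechanically. This sketch (again my own, with made-up names) verifies that an AND with every signal inverted — both inputs going in and the output coming out — behaves exactly like an OR:

```python
from itertools import product

AND = lambda a, b: a and b
OR = lambda a, b: a or b

# "Mirror" the AND: invert both inputs and invert the output.
mirrored_and = lambda a, b: not AND(not a, not b)

# The mirrored AND matches OR for every input combination.
for a, b in product([False, True], repeat=2):
    assert mirrored_and(a, b) == OR(a, b)
```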
Either way, the magic happens in step 2, when we invert all the logic.
It’s the equivalent of multiplying +4×-1 to get -4. (Or -4×-1 to get +4.) Effectively, we’ve multiplied logical AND×NOT and gotten OR.
And, of course, it works the other way around. We can multiply OR×NOT to get back the AND.
In a sense, when we say, “interpret all the voltages the opposite way,” we’re actually performing the multiplication that inverts the logic.
But perhaps the reply is: Well, the logic still goes from AND logic to OR logic.
Well, no it doesn’t (I reply), on two counts.
Firstly, when we invert the values and see an OR, we’re also flipping the truth table top to bottom. (See the tables shown up top; the horizontal orange marks the same inputs.)
Looking at the actual gate, nothing changes. The inputs both being +5 volts still causes +5 volts on the output, and the other three combinations of inputs still result in 0 volts on the output.
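One way to see this (a sketch of my own, not Bishop’s) is to model the gate at the voltage level, where its behavior is fixed, and layer the two interpretations on top:

```python
from itertools import product

def gate(v1, v2):
    """The physical gate: +5 V out only when both inputs are +5 V."""
    return 5 if (v1 == 5 and v2 == 5) else 0

high_is_true = {0: False, 5: True}   # the conventional reading
high_is_false = {0: True, 5: False}  # the inverted reading

for v1, v2 in product([0, 5], repeat=2):
    out = gate(v1, v2)  # the voltages themselves never change
    # Under the first reading, the gate computes AND...
    assert high_is_true[out] == (high_is_true[v1] and high_is_true[v2])
    # ...and under the inverted reading, the same voltages compute OR.
    assert high_is_false[out] == (high_is_false[v1] or high_is_false[v2])
```

The physical input/output behavior is identical in both cases; only the labels differ.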
Secondly, you’d have to view the entire circuit in the mirror, and that might well show the mirror interpretation to be wrong. (I have to think about that.)
I very much suspect that, especially with regard to inputs and outputs, the one true account interpretation of the circuit would become clear.
Even if replacing an AND with an OR turns out to make sense in the entire inverted circuit, the nature of the logic is still super obvious.
For example, you can’t interpret the gate as an XOR or a half-adder. It’s clearly not performing those functions.
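This, too, can be checked by brute force (my own sketch): under either consistent relabeling of the voltages, the gate’s table never matches XOR:

```python
from itertools import product

def gate(v1, v2):
    # Physical behavior: +5 V out only when both inputs are +5 V.
    return 5 if (v1 == 5 and v2 == 5) else 0

for reading in ({0: False, 5: True}, {0: True, 5: False}):
    table = [(reading[v1], reading[v2], reading[gate(v1, v2)])
             for v1, v2 in product([0, 5], repeat=2)]
    # XOR would require output True exactly when the inputs differ;
    # some row always violates that, under either reading.
    assert any(out != (a != b) for a, b, out in table)
```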
It’s performing the functions described by the truth table. The overall circuit (not the observers) determines what that truth table means.
Something to consider as well is that logical truth tables in general are invented mathematical abstractions.
The operations are basic enough that there are some vague analogues in nature, but, generally speaking, logic gates are a creation of intelligence.
I think that’s an important point to keep in mind when we talk about algorithms. Just because we can model nature with an algorithm, that doesn’t mean nature uses them.
More to the point here, the putative physical logic gates and their voltages are a reification of that invented abstraction. Any analysis of such physical instances has to trace back to the abstract origins.
The point is the intentionality of an algorithmic implementation. I’m saying it always exists, and it’s always ultimately pretty obvious.
Unlike eyes, algorithms really are watches made by a watchmaker.
Bishop cites a second example to illustrate that computation is relative. This one involves a chess-playing computer.
In the first case, the computer displays a board and interacts with the user to play an obvious game of chess. Clearly a chess computer, right? Right.
In the second case, the computer displays the board as a one-dimensional strip of lights where colors indicate pieces. The user can still interact, but now the computer is… what? A work of art?
No, it’s still a chess computer. It just has a really weird display. But imagine someone learning to play chess according to the new system. They’d still be playing the same game offered by the underlying algorithm.
Which is clear and obvious.
I’m not necessarily suggesting an observer would identify it as chess (although I think they would after careful observation).
I think they would identify it as a system with complex rules right away.
Once they noticed the number of colored lights decreasing, and certain states ending the interaction, they might guess it’s a game.
If someone who knows chess studied it for a long time, they might well recognize it. (Especially if its opening moves were predictable.)
The first objection Bishop handles involves counterfactuals in the flow of states that are assumed to represent consciousness.
That’s actually an interesting enough topic on its own that I think I’ll get into it another time.
Briefly, the argument involves selection in algorithms (e.g. If-Then-Else). The branches taken represent a given set of states. The debate is whether the branches not taken, and the states they represent, matter in whether the actual states experience consciousness or not.
The discussion of states gets us into the Rock Wall of Dancing Pixies (with Clocks), and that’s definitely a topic for other posts.
To the extent Bishop argues against computationalism on the ground that computation is relative, I don’t agree. I don’t think it is.
I think a correct computational interpretation is obvious and clear from a system’s complexity, low entropy, and intentionality.
Stay logical, my friends!
The general idea is a set of values that is closed under a “multiplication” operation. The integers are closed under (actual) multiplication — multiplying two integers always results in an integer.
A requirement is a multiplicative identity: a value that, multiplied by any member of the set, returns that member unchanged. For integer multiplication, that value is 1.
Boolean values are likewise closed under the logical operations (the result of a logical operation is always a boolean). We can consider the boolean multiplicative identity to be a NOP (“no op”) gate — a gate that simply outputs its input.
That makes the NOT (or INV) gate the equivalent of -1 for the integers: the thing you “multiply” by to get the mirror version. And, like -1, NOT is its own inverse.
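Treating one-input gates as functions under composition makes the analogy concrete (a quick sketch; the NOP and INV names are from the post, the rest is mine):

```python
NOP = lambda x: x        # identity gate, like 1 for the integers
NOT = lambda x: not x    # the "mirror" operation, like -1

# "Multiplication" of gates is function composition.
compose = lambda f, g: lambda x: f(g(x))

for x in (False, True):
    assert compose(NOT, NOP)(x) == NOT(x)  # NOT "times" 1 is NOT
    assert compose(NOT, NOT)(x) == NOP(x)  # -1 times -1 is +1: NOT is its own inverse
```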
 Reality comes first; intelligence comes later. Physical processes come first and are primary; algorithms are secondary models created by intelligence.