Last time I began exploring what we mean by the terms “computer” or “computation.” Upon examination, these turn out to be not entirely obvious, so some resort to the edge cases: *Computers are Turing Machines*; or *Everything is a computer*.

Even then the situation remains stubbornly not obvious. Turing Machines are abstractions quite different from what we typically call computers. Saying *everything computes* creates such a broad umbrella that it renders the notion of computation nearly useless.

This series explores the territory between those edge cases.

I’ll reference the Stanford Encyclopedia of Philosophy article **“Computation in Physical Systems”**, which reviews the question of computation. (The question, *“What is a computer?”* is easily answered: *“A mechanism that computes!”* The actual — difficult — question is, *“What is computation?”*)

Last time I distinguished *computation* from the closely related *calculation* and both from the distinctly different *evaluation*. Roughly, computation is a series of related calculations. Evaluation speaks to the equivalence between two ways of writing the same specific value.

I also determined an abacus isn’t really a computer, but a slide rule is, at least a little bit. I left off with whether a rock-clock is (I lean towards no). Various versions of that rock are central players in the Stanford article.

I’ll get to that, but first I’ll mention that the intro and Section 1 offer an excellent overview of the territory. The intro looks at the many questions, and Section 1 explores why the answers might matter.

**§**

There are two important points in Section 1:

Firstly that most of the discussion concerns *digital* computers. Turing Machines and the lambda calculus are explicitly about digital computation — symbol processing. Note that *digital* is not necessarily *binary*, but the symbols are usually numbers. A key characteristic of digital computers is that they are all equivalent (given certain constraints). What one can do, all can do (although efficiency varies greatly).

Secondly that digital computers not only can’t compute everything, they can’t compute most things. The list of all possible algorithms can be enumerated, so it’s *countable*. But the domain of all possible functions is *uncountable* — a much larger infinity.
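The countability half of that claim is easy to see concretely. A minimal sketch (my own illustration, not from the article): every algorithm is a finite string over a finite alphabet, so we can list them all, first by length, then alphabetically, and no algorithm is missed.

```python
from itertools import count, product

def all_programs(alphabet="01"):
    """Enumerate every finite string over the alphabet:
    '', '0', '1', '00', '01', ... -- shorter strings first."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_programs()
first_seven = [next(gen) for _ in range(7)]
print(first_seven)  # ['', '0', '1', '00', '01', '10', '11']
```

The functions, by contrast, cannot be listed: Cantor’s diagonal argument constructs a function that differs from the *n*th listed function on input *n*, so any proposed enumeration is incomplete.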

Section 1 also claims a tension between the abstract definition of computing and concrete implementations. I’m not sure I see the problem. The number three is abstract, but three beers, three bears, or three boats, are all concrete instances of it. So what?

[The article later briefly brushes the idea that our computational abstractions are derived from our physical experience. Indeed, and I think the deeper truth is that our math in general is derived. Its regular patterns and notorious eerie effectiveness are *because* of that. Then it builds on itself, and new abstractions then lead back to new concrete implementations. Observe dogs sleeping in natural shelters; derive dog houses; design better ones; build them; happy dogs.]

The section ends with: *“Prima facie, only relatively few and quite special systems compute. Explaining what makes them special — or explaining away our feeling that they are special — is the job of an account of concrete computation.”*

I would just say that, at least in this case, our intuition is correct. Computation is a special feature of reality. Generally I think, like morality or math, it’s the product of intelligence. (In fact, computation essentially *is* math.)

**§**

Section 2 (and the rocks) triggered these posts. *‘I come neither to praise nor to bury; merely to comment.’* That said, some of this raised my eyebrows a bit. I think philosophy can have a problem with being either too reductive or too constructive.

As an aside I have to mention something about SEP articles. I don’t read a lot of formal philosophy (lack of time, not lack of interest), so maybe this is a common phrase, but SEP articles often use *“just in case”*, which threw me the first few times I saw it.

It’s one of those places where philosophy uses a common phrase in an uncommon way. Most might say bring an umbrella *just in case* it rains, and by that imply an outside chance. But what the statement really implies is that an umbrella will be useful, but *just in (the) case* where it rains (otherwise not). It’s a reference to a specific circumstance.

Anyway, the first part of Section 2 explains how we bridge the abstract-concrete gap. We map the physical states of some system to the abstract states of the computation.

We can do this because a digital computation consists of a set of states and transitions between them. This is apparent in the table that defines a Turing Machine. Such a table is exactly a list of possible states and, for each, how it transitions to others. More visually, a flowchart: the boxes are states, and the arrows are transitions.

**Algorithm** ≡ **Flowchart** ≡ **Turing Machine** ≡ **Lambda Calculus**
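To make the table idea concrete, here’s a minimal sketch (my own toy machine, not one from the article): a transition table mapping (state, symbol) to (write, move, next state), plus a small loop to execute it. This little machine just flips every bit on its tape until it hits a blank.

```python
# Transition table: (state, symbol) -> (symbol to write, head move, next state).
# The table *is* the machine; the loop below merely walks it.
TABLE = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape, state="flip", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = TABLE[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape)

print(run("0110_"))  # -> "1001_"
```

Each row of `TABLE` is a box-plus-arrows in the flowchart picture: a state, and how it hands off to the next one.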

I’ll reiterate one of my premises: No natural system is a computer. Both the design of the algorithm, and the mapping of some system’s physical states to that algorithm, supervene on intelligence.

**§**

Section 2 illustrates how deep the rabbit hole is (especially § 2.2). Mapping physical states to abstract ones gets complicated once philosophers start chewing at it. A lot of the complexity they create involves counterfactuals — things an algorithm *might* have done but didn’t in this case.

This brings me back to the **rock-clock**, which makes its first appearance in that section. I find some of the manifestations a bit suspect.

One such maps alternating temperature states to a NOT gate with its output looped to its input. The problem is this is clearly a two-state system, not an infinite set of linear states, even if actual execution might “unroll” the loop. *As defined* the second state loops back to the first.

A valid mapping of progressive t-states is to a *chain* of gates. Each t-state is distinct, so its abstract state must be unique.
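The distinction is easy to see in a toy simulation (my own illustration, not from the article): a looped NOT gate only ever revisits the same two states, while a chain of gates gives each time step its own distinct position.

```python
def not_gate(x):
    return 1 - x

# Looped NOT gate: as defined, the second state feeds back to the first.
# However long we run it, there are only ever two states to be in.
def loop_states(start, steps):
    x, seen = start, []
    for _ in range(steps):
        seen.append(x)
        x = not_gate(x)
    return seen

# Chain of gates: state i is the output of gate i. The *values* repeat,
# but each position in the chain is a distinct abstract state.
def chain_states(start, length):
    states = [start]
    for _ in range(length):
        states.append(not_gate(states[-1]))
    return states

print(loop_states(0, 6))   # [0, 1, 0, 1, 0, 1] -- two states, unrolled
print(chain_states(0, 5))  # [0, 1, 0, 1, 0, 1] -- six distinct positions
```

The output sequences look identical, which is exactly the trap: only the chain assigns each distinct t-state a unique abstract state.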

**§**

Section 2.3 continues down the rabbit hole seeking to justify a mapping between a physical system and an abstract computation, in this case through semantics. Quintessentially, *‘computation is what we say it is.’*

A key issue with a semantic account is to what extent computational states must be representative — must *mean* something — to be considered computational. With semantics, there is also the issue of whether identical computational states, but with semantic differences, are the same state.

Ultimately, as the text concludes, the fixation on representation makes a semantic account at odds with computer science notions, and it’s the latter that created the idea of computation in the first place.

**§**

Section 2.4 tries a syntactic account that seeks to avoid issues with representation. To some extent this parallels the philosophical view that math is just a syntactical manipulation of symbols. (And recall that, per lambda calculus, an algorithm is math.)

The article seems to get mildly confused about syntax by assigning it to language. In computer science, a language can be simple and so can syntax. Consider a simple language consisting of ‘A’ and ‘B’ and a syntax that says ‘A’ can follow ‘A’ and ‘B’ can follow ‘A’ or ‘B’. Sentences in this language are then either valid or invalid *syntactically*. (Pretty clearly they have no semantics.)

So ‘AAAAABBB’ is a valid sentence, but ‘ABBA’ is not.
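That validity check is mechanical. A minimal sketch of the toy grammar (the function names are mine):

```python
# The grammar from the text: 'A' may follow 'A'; 'B' may follow 'A' or 'B'.
# Equivalently: once a 'B' appears, no 'A' may follow it.
FOLLOWS = {"A": {"A", "B"}, "B": {"B"}}  # what may follow each symbol

def is_valid(sentence):
    if any(c not in "AB" for c in sentence):
        return False
    # Check every adjacent pair against the follow rules.
    return all(b in FOLLOWS[a] for a, b in zip(sentence, sentence[1:]))

print(is_valid("AAAAABBB"))  # True
print(is_valid("ABBA"))      # False -- 'A' cannot follow 'B'
```

Note the checker never asks what a sentence *means*; syntactic validity is decided entirely by the adjacency rules.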

But while syntax analysis is an aspect of computation, I don’t think it’s the whole story. It’s necessary, but not sufficient.

We’re getting there, though.

**§**

The last part of this section, Section 2.5, explores a mechanistic view that does an end run around both semantics and syntax. I think this is the best of the three. For one, it expresses the dual nature of computation.

It also includes, but still distinguishes, digital, analog, and quantum computing. They are included as computational mechanisms, but differentiated by their vehicles (symbols, numbers, magnitudes, qubits, etc.). As a mechanism, this view also has a notion of *malfunction*.

Ironically, the mechanistic view reifies the notion of a Turing Machine as physical mechanism while including analog and quantum systems. As I said, beers, bears, boats. We’ve wandered the map, looked under rocks, and ended up back where we started with the obvious. (But the territory must be explored. One never knows what’s under the next rock.)

The original Turing Machine definition of (digital) computation was actually pretty good despite its abstraction. Nailing down a working physical theory from it is a bit of an exercise, though, especially if we want the company of analog and quantum computing. I think the trick is not to overcomplicate things.

**§ §**

Section 3 turns from whether a given system computes to whether *every* system computes — **pancomputation** — the eponymous topic of this series.

There are two broad categories here: Firstly, that random physical states can be mapped *post hoc* to some computation. For instance, mapping the temperature of a rock to the hours of the day. Secondly, an equivalence between physics and computing. Essentially, reality computes itself.

**§**

The first category is interesting, but it can lead to pixies dancing in the walls. If random physical states can be interpreted as valid computation, then anything with enough physical states performs all computations. By “enough” I mean enough such that we can always find a subset of states that is the computation we desire. The rock-clock is one example of this.
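The triviality of such mappings is easy to demonstrate. A toy sketch of my own, in the spirit of the rock-clock: given any sequence of distinct physical states, we can always construct a *post hoc* mapping onto whatever “computation” we like.

```python
import random

random.seed(0)
# 24 hourly "temperature readings" of a rock; random.sample guarantees
# they are distinct, which is all the mapping needs.
temps = random.sample(range(100, 400), 24)

# The "mapping" is pure hindsight: whatever temperature occurred at
# hour h is declared to mean h.
mapping = {t: h for h, t in enumerate(temps)}

# Now the rock "tells time": look up the current temperature.
print(mapping[temps[13]])  # 13
```

The mapping works for any distinct state sequence and any target “computation” whatsoever, which is precisely why it tells us nothing about the rock.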

I think this category fails the mechanism account. Any subset of disconnected physical states almost certainly fails the syntax account. It also illustrates the weakness of the semantic account. Further, there’s no dualism, and obviously no prior intent or design. (Pancomputation most sharply raises the question of *post hoc* intent. Allowing it makes it hard to deny pixies.)

The key here, I think, is that an ordered list of computational states isn’t sufficient to define a computation. It’s better seen as the *result* of one — a listing of how a computation executed.

Imagine the computation of a complex fractal; something taking hours of computing time. The result, displayed on the screen, is an image composed of pixels. We can save the pixels, *or* we could save a recording of all the states of the computation.

By playing back the states into a system that mimics only the outputs to the graphics card we can see the image again, but without performing the computation. The playback will be faster and much less computationally intense (no hot CPU when there isn’t one). What’s more, we can skip any step that doesn’t output to the graphics card.

An ordered list of computational states is just a listing. What is significant is the *causal* system that generated them. Note we cannot generate a different image with a playback — it’s just a complicated way to save the image. With a computation we can generate new images.
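A small sketch of the distinction (my own illustration, using Collatz steps as a stand-in for the fractal): the trace of a computation can be replayed to reproduce its output, but the replay has no causal power and can never produce a different output.

```python
def compute(n):
    """A real (if tiny) computation, recording every intermediate state."""
    trace, x = [], n
    while x != 1:                      # Collatz steps as a stand-in
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        trace.append(x)
    return trace

trace = compute(6)       # the computation: causal, works for any input
replay = list(trace)     # the listing: just reproduces what happened
print(trace[-1])         # 1 -- the "image on the screen"
print(replay == trace)   # True -- but "replay a different input" isn't even a question
```

The `compute` function can generate new traces; the `replay` list is only ever a complicated way of saving this one.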

**§**

The second category, at a basic level, is trivially correct given the equivalence between physics and computing. Each particle “computes” its path. The rock is busily computing being a rock. I think this is like the punchline to the Microsoft joke (the one about the helicopter): it’s both true and useless.

However, consider the physical dynamics of a desktop computer. It uses voltages to represent numbers and logic to process those numbers. (It has memory to store them in.) This is a physical system; one that’s obviously a computer. There are two levels here: the low-level physics of matter (same as the rock) and the higher-level physics of the computer electronics (in between, planetary orbits).

Per the mechanism view, that higher-level physics is what defines the computer. What the atoms in the computer are “computing” is a matter for quantum physics, not computer science.

**§**

My take is that *unlimited* pancomputation (§ 3.2) is fairly meaningless, and I don’t see *limited* pancomputation (§ 3.3) as much better. It’s not limited *enough*.

Both those sections involve the first category, *post hoc* state mapping. The last section (§ 3.4) involves the second, physics-as-a-computer. In particular, systems like cellular automata. It’s a perfectly valid view, one of great interest to quantum mechanics, but meaningless for deciding what a “computer” is.

Bottom line, I don’t find much value in pancomputationalism, but I do want to circle back next time to explore certain aspects of it.

**§ §**

Section 4 gets into physical computability — what can be computed and by what kind of computers. It’s a complex topic all of its own, and I’m not going to get into it here.

The last section, though, about hypercomputing (§ 4.3) is kind of interesting. A bit on the science fiction side, but fun.

That’s it for the SEP article, but there’s more to discuss. In these two posts I’ve stayed away from any discussion of computationalism. (Ah, yes, that again.) In the next post I’ll brave those waters a little.

Stay rockin’, my friends! Go forth and spread beauty and light.

∇

September 2nd, 2021 at 12:08 pm

Pan-fried pancomputation! 🙂

September 2nd, 2021 at 5:16 pm

One theme I’ll explore in more detail down the road is the notion:

Computation is obvious.

This is one case where the Blind Watchmaker argument turns out to be correct. A computation is strong evidence of intelligent construction.

The “out-of-the-box” notion is again helpful. (Recall that it’s a clear-eyed knowledgeable look at the full apparent function of a device. Information that can be gleaned from a close inspection.) A computer, for instance, has a very apparent function of computing. So does a slide rule, but an abacus is less obvious — it could be just a toy for a child. That’s because the slide rule has computational juice whereas the abacus really doesn’t.

As an aside, humans aren’t dual (because they’re natural objects), and medicine metaphorically studies out-of-the-box humans. Psychiatry and sociology tend to study humans with apps installed, the former for when the app has bugs.

September 2nd, 2021 at 6:04 pm

If I do write a fourth post for the series, I really want to call it:

Pancomputation IV: A New Hope…

September 3rd, 2021 at 11:21 am

Quote for the Day:

Attributed to John R. Pierce, although I haven’t found a source for the quote. If it is the same John R. Pierce (and if he really did say it), well that’s the same guy who named the transistor.

I saw the quote in Frank Wilczek’s new book, *Fundamentals: Ten Keys to Reality*, which I just started reading.

(I also just started reading Lee Smolin’s *Time Reborn*. I had *Fundamentals* on hold with my library, and it just became available, so I had to borrow it as well. I’ve been waiting quite a few weeks for it to become available. Both books look to be hits with me.)