In this corner, philosopher John Searle (1932–), weighing in with what I like to call the Giant File Room (GFR). The essential idea is of a vast database capable of answering any question. The question it poses is whether we would count this ability as conscious behavior. (Searle’s implication is that we would not.)
In that corner, logician and philosopher Kurt Gödel (1906–1978), weighing in with his Incompleteness Theorems. The essential idea there is that no consistent formal system capable of arithmetic can prove all the true statements expressible within it.
It’s possible that Gödel has a knockout punch for Searle…
This post isn’t really about the GFR and its application as a thought experiment for consciousness. That’s been well explored. Seeing it in terms of Gödel, though; that seems a new twist.
[At least to me; this is likely plowed ground. I’d say “no doubt some ancient Greek thought of it first,” but Gödel came two millennia after them. (And even so, one probably did.)]
Just seeing those two philosopher names in juxtaposition may have made your light bulb go on. If so, bear with me while I lay some groundwork for any novitiates.
Firstly, John Searle and his Giant File Room.
We can imagine the Librarian of a vast Library — something on the order of the Library of Babel (as famously described by Jorge Luis Borges). The Librarian uses the Library to answer any question we ask.
A further wrinkle is that the Librarian doesn’t speak the language used in the Library (which is also the language of our questions). Instead, a special Librarian protocol links the text of our question with its correct answer. (Think of it as a magic Dewey Decimal system.)
Searle’s implication is that the Librarian seems to understand our questions, because we always get correct answers. But — according to Searle — there is no understanding present. Certainly the Librarian doesn’t understand our questions, and it’s hard to argue an inanimate object (such as a collection of books) is conscious.
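A minimal sketch of the setup, in code of my own invention (nothing here comes from Searle’s formulation): the Librarian is just a key-to-value mapping, and the lookup involves no understanding of either the question or the answer.

```python
# A toy Librarian: a pure lookup table mapping question text to answers.
# The entries are hypothetical; any question not in the table gets no answer.
library = {
    "What is the capital of France?": "Paris",
    "What color is the sky?": "Blue",
}

def librarian(question):
    # The "protocol": match the question text, return the linked answer.
    # No parsing, no semantics -- just string matching.
    return library.get(question, "No entry in the Library.")

print(librarian("What is the capital of France?"))  # Paris
```

The correct answer comes back every time the table has an entry, and nothing in the process requires understanding the words.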
Overall, Searle is saying a computer wouldn’t be conscious, even though it seems to understand our questions.
At the moment, the question is theoretical, since we don’t have systems complex enough (or the full understanding required to even design them). But it is certainly a question we’ll face down the road.
Secondly, Kurt Gödel and Incompleteness.
This one is a little harder. It’s important to emphasize this applies strictly to systems capable of doing arithmetic.
The basic idea is that an arithmetic system is logical and, assuming you do the math correctly, always gives true (logical) answers. The old hope for such systems is that they can mechanically crank out all possible truths.
Gödel dashed that hope. Arithmetic systems cannot enumerate all possible true statements.
The limit holds in principle, not merely in practice. No amount of added cleverness or computing power gets around it.
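Stated a bit more formally (a standard textbook phrasing, not Gödel’s own wording): for any consistent, effectively axiomatized theory $T$ that can express basic arithmetic, there is a sentence $G_T$ such that

```latex
T \nvdash G_T
\quad\text{and}\quad
T \nvdash \neg G_T
```

yet $G_T$ is true in the standard model of arithmetic. So $T$, cranking mechanically through its proofs, can never enumerate all the arithmetic truths.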
[However: A more powerful system can prove truths about a weaker system that the weaker system cannot prove about itself (its consistency, for one), but it still cannot prove all truths about itself. This continues up the complexity scale. No system can prove all truths about itself. I find the open-endedness comforting.]
Okay, so how do Searle and Gödel connect? And what’s the knockout punch?
Well, consider what happens if we ask the Librarian mathematical questions…
The basic premise is that the Library acts as a lookup system. By fiat (and probably in truth), there is no understanding in the process of looking up — of indexing — an answer to a given question.
[The understanding is in the design and construction of the Library, how it came to be capable of answering questions.]
As a lookup system, it would seem these have to be separate questions:
- What is two plus two?
- What is one plus three?
Even if they have the same answer. And there are an infinite number of such questions for which the answer is four. Likewise an infinite number of questions about all numbers!
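A quick sketch of the point (my own toy example): a lookup table needs one entry per question, not one per answer, and even a single family of questions is already unbounded.

```python
# Each phrasing is a distinct key, even though the answers coincide.
table = {
    "What is two plus two?": "Four",
    "What is one plus three?": "Four",
    "What is zero plus four?": "Four",
}

# Three entries, one shared answer:
assert len(table) == 3
assert set(table.values()) == {"Four"}

def key(n, m):
    # The family "What is n plus m?" alone is infinite;
    # here is just the finite slice of pairs summing to 4.
    return f"What is {n} plus {m}?"

keys_for_four = [key(n, 4 - n) for n in range(5)]
print(len(keys_for_four))  # 5
```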
Which, at first blush, seems like it might be okay. The whole premise is that we can — at least in theory — enumerate all possible questions and create the protocol that links them to correct answers.
But Gödel showed this isn’t possible, even in theory.
As a context, imagine one of those city-to-city mileage charts, such as usually found in a road atlas.
These charts are tables of rows and columns. A given set of major cities is listed twice, once along the rows, once along the columns. To find the distance between any two cities, locate one on a row and the other on a column, and find where they intersect. That table cell shows the mileage.

No calculation or geographical knowledge required. Just look up the cities and their intersection.
So easy a simple computer could do it.
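Here’s the mileage chart as code (the city pairs and distances are made-up illustrations, not real road mileages): a pure table lookup, no geography involved.

```python
# A toy mileage chart as a lookup table keyed by city pairs.
# Distances are invented for illustration.
mileage = {
    ("Chicago", "Denver"): 1003,
    ("Chicago", "Boston"): 983,
    ("Denver", "Boston"): 1969,
}

def distance(a, b):
    # Order doesn't matter, so check both orientations of the pair.
    return mileage.get((a, b)) or mileage.get((b, a))

print(distance("Boston", "Chicago"))  # 983
```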
Likewise, a simple computer — a hand calculator — can be programmed with the rules of mathematical symbol manipulation (perhaps along with some trig and log tables for lookup of the tetchier stuff).
And, indeed, no understanding is required in these simple lookup devices.
We might try to include those mathematical symbol manipulation rules in our Library protocol. (The formalist view sees math as nothing more than that anyway.)
Doing so would allow the Librarian to answer most mathematical questions.
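A sketch of what “rules of symbol manipulation” could mean here (my own toy example): a tiny rule that answers addition questions by mechanically rewriting symbols, with no table of answers at all.

```python
import re

def answer(question):
    # Mechanical rule: find "<n> plus <m>" in the question and
    # rewrite it as the sum. Pure symbol shuffling -- the rule,
    # not a lookup table, does the work.
    match = re.search(r"What is (\d+) plus (\d+)\?", question)
    if match is None:
        return "No applicable rule."
    n, m = int(match.group(1)), int(match.group(2))
    return str(n + m)

print(answer("What is 2 plus 2?"))  # 4
print(answer("What is 1 plus 3?"))  # 4
```

One rule now covers the entire infinite family of addition questions that the lookup table couldn’t.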
But here’s the gotcha: Gödel showed that any consistent symbol manipulation system is necessarily incomplete. Any rule-based math system we create for the Librarian will have that flaw.
I’ll note that Gödel’s idea of incompleteness is sometimes applied as a metaphor to non-arithmetic systems. While the poetry may be apt (or not), the technical points usually miss the mark, sometimes badly.
Applying Gödel to the Library, however, is not an error or a metaphor, because we are talking about a mathematical system — more specifically, a computer system.
Because a computational system (or the Librarian’s protocol) is a formal, rule-based system capable of arithmetic, Gödel absolutely applies. (As does Turing’s Halting Problem, a closely related limit.)
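The Turing side can be made concrete with the classic diagonal construction (my sketch, and the “oracle” here is a deliberately trivial stand-in): hand the construction any concrete would-be halting predictor, and it builds a program that does the opposite of whatever the predictor says about it.

```python
def toy_halts(program):
    # A stand-in for a claimed halting oracle. This one always
    # answers False ("will not halt"); any concrete predictor
    # can be defeated by the same move.
    return False

def contrarian():
    # Do the opposite of the oracle's prediction about ourselves.
    if toy_halts(contrarian):
        while True:      # oracle said "halts", so loop forever
            pass
    return "halted"      # oracle said "won't halt", so halt at once

# The oracle predicts contrarian never halts -- yet it does:
print(toy_halts(contrarian), contrarian())  # False halted
```

No possible implementation of `toy_halts` escapes this: whatever it answers about `contrarian`, the program does the reverse.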
Logic is math and both are subject to Gödel’s limits. Any purely logical, or purely mathematical, system is necessarily incomplete.
A computer, therefore, is necessarily incomplete.
The deeper question is: Is a human also incomplete that way? Is the human brain, as some hold, ultimately just a mechanical device fully limited by Gödel and Turing?
Or do our brains, if they are not computers, work in a way that transcends the limits of math and logic — which is to say, the limits of computation?
As I mentioned, it is usually considered a category error to apply Gödel to real life (or any non-arithmetic system), but if our brains really are computational devices, then Gödel has to apply.
An alternate view sees the brain as an analog signal processing system (of mind-boggling complexity) not necessarily subject to computational limits.
To tie this together with recent posts, the difference here involves algorithms and virtual versus concrete reality.
The Turing limit obviously directly concerns algorithms, but it may not be obvious that Gödel does, too. The process of enumerating true statements is fundamentally algorithmic.
For that matter, mathematics is deeply algorithmic. (Algorithms are a form of mathematics, as demonstrated by lambda calculus. Formal logic, in general, is explicitly mathematical.)
What links math, logic, and algorithms, is that they are all abstractions.
They are all descriptions of real things.
And those descriptions, those abstractions, are subject to the informational limits proved by Gödel and Turing.
As I said, the deep question is whether a concrete physical object such as a brain reifies mathematical computational abstractions.
If it does, it’s the only object nature has ever produced that works that way.
Stay non-Gödelean, my friends!