I’ve been on a post-a-day marathon for two weeks now, and I’m seeing this as the penultimate post (for now). Over the course of these, I’ve written a lot about various low-level aspects of computing, truth tables and system state, for instance. And I’ve weighed in on what I think consciousness amounts to.
How we view, interpret, or define consciousness aside, a major point of debate involves whether machines can have the same “consciousness” properties we do. In particular, what is the role of subjective experience when it comes to us and to machines?
For me it boils down to a couple of key points.
The lesser point has to do with the contrast between designed machines versus evolved machines (and I’m not talking at all about biology). It also has to do with what algorithms are.
The greater point (in my view) has to do with the contrast between the outputs of a system and the behavior of that system. The question involves the location and nature of the essential part of the system.
[In what follows, I assume two things: (1) Human brains definitely produce something we casually label “consciousness.” (2) It’s a striking (“loud”) property that creates a new and very powerful kind of thing in the universe.]
§ § §
With regard to designed machines versus evolved ones, this is not about biology, but about natural systems versus designed systems.
The difference that strikes me is that natural systems don’t have a designer and don’t have an end goal. There is no blueprint that results in a final product.
The Grand Canyon wasn’t designed (no watershed is). It was shaped by various physical factors over time and just turned out that way.
Similarly, whatever brains are, whatever it is they do, they are a result of myriad tiny successful solutions that occurred along the way with no specific end goal — other than survive!
In contrast, a designed machine starts with a blueprint created by an intelligent mind with a specific goal, a reason for the machine.
So, bottom line here, speaking in terms of ontological classes, it seems to me that conflating naturally evolved brains with intelligently designed machines might be a category error.
At the very least when it comes to certain kinds of machines.
Which brings me to the greater point: outputs versus behavior.
A notable characteristic of computation is that only the outputs matter. The platform doesn’t matter. The nature of the computation doesn’t matter. Only the result, the outputs, matter.
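A toy sketch of this platform-independence (my own illustration, not from the post): two utterly different processes that are, computationally speaking, the “same” because their outputs agree.

```python
# Two very different processes for summing 1..n: stepwise iteration
# versus a one-shot closed-form formula. Computation cares only that
# the outputs match, not how they were produced.

def sum_by_looping(n):
    """Add the integers 1..n one step at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    """Compute the same sum in a single arithmetic step: n(n+1)/2."""
    return n * (n + 1) // 2

print(sum_by_looping(100) == sum_by_formula(100))  # True
```

To a computationalist, these two functions are interchangeable; the point of the contrast below is that for physical systems, no such interchangeability holds.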
A notable characteristic of the real world is that, very often, process matters as much as, if not more than, the outputs.
If the “output” is being in New York, when you’re currently in Los Angeles, the end result may be the goal, but how you get there matters. (Walking? Biking? Driving? Flying? Star Trek transporter?)
The reason for the difference is that computation is abstract, like a blueprint. It’s always in reference to something that gives it meaning.
But physical things are only in reference to themselves. There is no interpretation to apply — there is only understanding the true account of the physical system.
So in a physical system, behavior matters — platform matters.
I’ve talked about three broad ways machines could try to be conscious.
Firstly, they can seek to duplicate the physical brain — the Positronic brain.
Since this is a physical system that behaves similarly to a brain, it’s hard to see why it wouldn’t also produce a conscious being.
Secondly, they can simulate the meat, the physics, or the biology.
A simulation of the real world can be arbitrarily precise if we have the data points. (One problem with weather modeling is how coarse-grained our data gathering is.)
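A tiny sketch of what grain size means for precision (my own toy example, assuming simple exponential decay as the “world” being simulated): the same simulation run with coarse and fine time steps, compared against the exact answer.

```python
import math

# Toy illustration: simulate dy/dt = -y with Euler steps.
# The finer the step (the more "data points"), the closer the
# simulation tracks the true physical curve.

def euler_decay(y0, t_end, dt):
    """March y forward from 0 to t_end in steps of dt."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (-y)   # one Euler step of dy/dt = -y
        t += dt
    return y

exact = math.exp(-1.0)                # true value of y(1) for y0 = 1
coarse = euler_decay(1.0, 1.0, 0.5)   # only 2 steps: coarse-grained
fine = euler_decay(1.0, 1.0, 0.001)   # 1000 steps: fine-grained

print(abs(coarse - exact) > abs(fine - exact))  # True: finer grain, smaller error
```

The open question in the text is orthogonal to this: even as the numbers converge on the true curve, they remain numbers.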
The crucial point in question is what, exactly, gets simulated. Specifically, if consciousness is a property of a physical system, can we account for it in a simulation of that system?
Thirdly, machines could emulate the neural network or the brain, or even seek to emulate higher functions of the brain.
The distinguishing feature here is modeling functional aspects of the system above its physics. Commonly this involves a model of the neural network, but it can involve higher functionality.
The key point is that the second and third approaches behave quite differently from brains. They generate outputs through quite different processes.
On one side of a gap sit the physical systems: the human brain (case 0) and the Positronic brain (case 1). On the other side sit case 2 and case 3, which calculate the necessary outputs with numbers.
Everything depends on whether consciousness lies within those numbers, those outputs.
If it lies in the processes of the brain, it might not show up in the numbers.
§ § §
The analogy of laser light expresses what I mean.
Only certain physical materials, under certain physical conditions, emit the coherent photons we call laser light. The behavior of the physical system produces the light.
This behavior can be simulated, at various levels, on a computer, just as with brains. In all cases, the outputs produced describe — perhaps quite accurately — what the physical system is doing.
But, of course, the simulations don’t produce any photons.
Laser light cannot arise as a consequence of the behavior of the simulating system, nor can it arise from its outputs (which are just numbers).
If consciousness results from the behavior of a physical system, then one has to question whether a completely different process will produce it.
§ § §
I think the strongest argument computationalism has — and I admit it’s a strong point — simply suggests that a good enough simulation of the system must produce whatever the system produces.
The counter-argument is what I’ve laid out above, that the process may matter more than the outputs.
The question, then, is what does an accurate simulation do (if not produce consciousness)? I’m already on record thinking it may just produce a biologically functioning — but comatose — brain.
Alternatively, it might produce a gibberish personality, a raving lunatic, a zombie, or any of a wide variety of failure modes.
(One thing about humans writing software? We’re not that good at it. That alone should raise some concerns with regard to AI.)
§ § §
Some general objections I have to computationalism:
¶ Where’s the algorithm? If the brain is computational, there is an algorithm, and if the mind really is a computation, that algorithm exists somehow. So where is it?
I don’t mean where is it physically located. I mean: how do we find it in the natural analog behavior of a physical system? It’s hard to even tell what brain states might be.
When it comes to simulating the brain’s algorithm, we don’t have a clue where to even start. We don’t even know such a thing exists.
¶ Undecidability is a computational problem (defined in terms of Gödel and Turing). It means certain inputs can cause a system to loop forever.
The upshot is that computations require escape conditions and error handling. It’s another aspect of how different a computational process is from a natural one — there’s no such thing as an infinite loop in physical processes.
(In fact, it’s the physical limits that ultimately end an infinite computation with no escape clause. As far as the computation is concerned, an infinite loop is no different from any other execution thread. It’s all just steps.)
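A small sketch of the point about escape conditions (my own illustration): a computation that may or may not halt, with an externally imposed step limit. From inside the computation, each step looks like any other; only the added guard distinguishes “looping forever” from “still working.”

```python
# The Collatz iteration: halve even numbers, map odd n to 3n+1.
# Whether this reaches 1 for every starting n is an open question,
# so a careful implementation needs an escape condition.

def iterate_until_one(n, max_steps=10_000):
    """Run the Collatz iteration from n, bailing out after max_steps.

    Returns the step count if we reach 1, or None if the guard fires.
    The guard is the externally imposed escape clause; the iteration
    itself has no notion of "too long".
    """
    steps = 0
    while n != 1:
        if steps >= max_steps:
            return None  # escape hatch: give up rather than loop on
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(iterate_until_one(8))              # 3  (8 -> 4 -> 2 -> 1)
print(iterate_until_one(27, max_steps=10))  # None: guard fires first
```

Note the asymmetry: the guard can only say “I gave up,” never “this would have run forever.” That’s undecidability in miniature.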
¶ Computers are not analog. They compute.
Analog processes and computation, no matter what broad umbrella is used to lump them together, are visibly and strikingly different.
One need only compare the bit pits on a CD with the grooves of a vinyl record (or magnetized domains on tape) to see that striking difference.
Alternatively, compare doing long division (a calculation) with the single real numerical quantity represented by the fraction.
The difference is truly a Yin and Yang contrast. They are exclusive of each other.
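The long-division contrast can be made concrete (a sketch of my own, using Python’s standard `fractions` module): the calculation grinds out digits one discrete step at a time, while the fraction simply *is* the quantity.

```python
from fractions import Fraction

# Long division computes 1/3 digit by digit: a stepwise, discrete
# calculation that never finishes. The Fraction object just holds
# the single exact quantity.

def long_division_digits(numerator, denominator, n_digits):
    """Generate the first n_digits decimal digits of a proper fraction."""
    digits = []
    remainder = numerator % denominator
    for _ in range(n_digits):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

print(long_division_digits(1, 3, 5))  # [3, 3, 3, 3, 3] -- always another step
print(Fraction(1, 3))                 # 1/3 -- the quantity itself
```

One is a process unfolding in steps; the other is a static value. That is the Yin and Yang contrast in code.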
¶ Computation is not that relative. Data is, because it’s abstract.
And while the exact meaning of a given computation is relative to how its data is interpreted, that’s not the same as the computation itself being relative.
Computation is recognizable. And it’s not what’s happening in our brains.
¶ So the brain is not a computer.
It’s obviously not a “general purpose computer” — as pretty much everyone agrees. The question is whether it’s any kind of “computer.”
We can (I believe) agree the brain is absolutely an analog signal processor, just on account of its network flow. The behavior of synapses and neurons is what leads some to call it an analog computer.
[I think analog “computer” confuses the issue, so I don’t favor it.]
§ § §
My bottom line is that I’m very skeptical of computationalism.
Consciousness, as I read it, arises from a physical system behaving in a certain way. In a sense, its outputs are byproducts.
I suspect the right outputs cannot be generated without consciousness. Computationalism suggests the reverse: that generating the right outputs implies consciousness, with the further implication that generating those states or outputs is easier than generating consciousness itself.
My guess is it doesn’t work that way.
Stay non-computational, my friends!
No footnotes today!