Over the last few weeks I’ve written a series of posts leading up to the idea of human consciousness in a machine. In particular, I focused on the difference between a physical model and a software model, and especially on the requirements of the software model.
The series is over and I have nothing particularly new to add, but I’d like to summarize my points and provide an index to the posts in this series. It seems I may have given readers a bit of information overload, more than was comfortable to process in one sitting.
Hopefully I can achieve better clarity and brevity here!
At the center of all of this is the “Holy Grail” of AI, replication of human self-aware consciousness, not by just a machine, but by software (which means some machine is required, but the exact kind of machine is irrelevant so long as it’s a Turing Complete machine).
The usual shorthand (more a reference than a description, really) for such self-aware consciousness is Thomas Nagel’s phrase “something it is like” (to be human).
We all have a real-time life-long first-person narration of our lives inside our heads.
A private inner movie that we experience.
The question is why?
What’s going on there?
I’ve tried to make three points.
#1. Perhaps mind supervenes on the physical brain.
Laser light, radio waves, pressure effects, sound vibration, heat from current or friction… these are physical effects that supervene on specific physical systems.
No software model of these systems does what the physical systems do. It can describe them, even very precisely, but it cannot replicate the physical effects of the thing it models.
Perhaps mind, likewise, supervenes on specific characteristics of the brain. I listed some possible sources of such dependence in the No Ouch! post, which focuses on this first point.
My unmet challenge has been this: Name one system for which the software model (without special hardware) gives rise to the same effects as the physical system it models.
There is only one system I can think of for which the software model accomplishes the same thing. And that is a software model itself.
For Church-Turing to guarantee that a software model of mind can replicate mind, mind must already be some kind of software.
But it doesn’t look like any software we know, nor is it clear to what extent the mind’s putative software can be separated from its hardware. It may be that, as with lasers and everything else, both are required.
#2. The digital and analog worlds are irreconcilably different.
It’s hard enough to calculate with the real numbers. Within them, the transcendental numbers (which comprise almost all reals) present an additional problem: they aren’t algebraic, so no polynomial equation with integer coefficients pins them down. The Transcendental Territory post focuses on this.
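To see the problem at its smallest scale, here’s a quick Python illustration (a toy of mine, not from the post): a 64-bit float can’t even hold the decimal 0.1 exactly, let alone a transcendental number like π.

```python
# A quick illustration that finite binary floats can't represent
# even simple reals exactly, let alone transcendental numbers.
from decimal import Decimal
import math

# What the machine actually stores when you write 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# And "pi" here is a 64-bit approximation, not pi itself:
print(repr(math.pi))  # 3.141592653589793
```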
Chaos theory tells us that calculation with finite-precision numbers (such as digital computers require) cannot model some analog systems with sufficient precision for any length of time.
The longer such systems run, the further they diverge.
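As a toy demonstration (the logistic map is a standard textbook chaotic system, nothing specific to brains), here’s the same iteration run twice in Python, once in 64-bit floats and once in 32-bit floats. Both start from “the same” number, 0.2, yet within a few dozen steps the trajectories bear no resemblance to each other:

```python
# Chaotic divergence under finite precision, using the logistic map
# x -> r*x*(1-x), which is chaotic at r = 4.0. The only difference
# between the two runs is how many bits represent the numbers.
import numpy as np

r = 4.0
x64 = np.float64(0.2)  # double precision
x32 = np.float32(0.2)  # single precision: the "same" 0.2, fewer bits

for step in range(1, 61):
    x64 = r * x64 * (1 - x64)
    x32 = np.float32(r) * x32 * (np.float32(1) - x32)
    if step % 10 == 0:
        print(f"step {step:2d}: float64={x64:.6f}  float32={x32:.6f}  "
              f"diff={abs(float(x64) - float(x32)):.6f}")
```

And note that neither trajectory is the “right” one; the exact real-number trajectory is a third thing entirely.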
Assuming software minds work at all, one possibility is that such a mind diverges from its physical original much the way identical twins diverge as they experience different things. It would be a normal mind, just a different one.
Or, still assuming software minds work, another possibility is that such divergence results in complete failure to launch or in an insane (or otherwise useless) mind.
And all of that assumes a digital system can even accomplish the necessary calculations with the necessary precision. Given that we have no clue what those calculations might be, no one can say.
#3. Calculation is limited.
I just mentioned how it’s limited in terms of numerical precision. It’s also limited in ways analogous to Gödel’s incompleteness theorems: there are things that simply cannot be calculated.
Simply put, you can’t calculate with numbers if you can’t calculate those numbers in the first place.
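The classic case is the Halting Problem from the Halt! (or not) post. Here’s a sketch of Turing’s diagonal argument in Python; the function halts below is hypothetical, and that’s the whole point: no correct, always-terminating version of it can exist. There’s a perfectly well-defined answer for every program, yet no calculation can produce it.

```python
# A sketch of Turing's diagonal argument. The `halts` oracle is
# hypothetical; the argument shows no correct, always-terminating
# implementation of it can exist.
def halts(program_source: str, input_data: str) -> bool:
    """Pretend oracle: True if the program halts on the input."""
    raise NotImplementedError("provably impossible in general")

def contrary(program_source: str) -> None:
    # Ask the oracle about a program run on its own source,
    # then do the opposite of whatever it predicts.
    if halts(program_source, program_source):
        while True:  # the oracle said "halts", so loop forever
            pass
    # the oracle said "loops forever", so halt immediately

# Feeding contrary() its own source contradicts the oracle either
# way, so `halts` (and anything that would compute it) is impossible.
```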
Fairly common in discussions is the assertion that mind is “information processing.”
The problem here is a lack of definition. The phrase “information processing” is too general to mean anything. All kinds of things can be said to process information.
Worse, all types of information processing we know show no indication of any aggregate or collective behavior that might give us reason to suspect more is going on than simple binary logic.
In fact, binary logic goes to great lengths to prevent that sort of thing!
The belief that consciousness emerges from such a system is just that: a belief. One with no supporting evidence. It’s extrapolation, at best.
It is clear that processing information is part of consciousness, just as the use of logic is, but it’s a stretch to argue those things are consciousness.
There is also the assertion “the brain is like a computer.”
I don’t see how. The brain’s architecture is nothing like that of any digital computer. Little about them matches.
Brains are analog; computers are discrete. Brains use bio-chemistry (in which electrons do play a role, of course); computers use only electron flows and voltages in metallic wires.
Low-level components in brains are extremely complex; low-level components in computers are dead simple. The interaction of those components in brains is also complex; in computers, again, dead simple.
Brains are a very large, hugely interconnected network of active nodes; computers have a very simple architecture. Brains are not von Neumann machines; computers are.
Computers are precise and have perfect memory and math skills; brains are imprecise, forgetful, and generally terrible at math.
The more correct phrase seems to be: “Brains are nothing like computers!”
So far no one has explained to me how 1+1=2 gives rise to consciousness.
And it’s not even 1+1=2; it’s 1+1=10, because it’s binary.
As I showed you last time, it’s not even that. It’s just some simple logic gates.
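To make that concrete, here’s binary addition built from nothing but gates, the standard textbook half-adder and full-adder, sketched in Python rather than silicon:

```python
# Binary addition from bare logic gates: the standard half-adder
# and full-adder decomposition, operating on single bits (0 or 1).
def xor_gate(a: int, b: int) -> int: return a ^ b
def and_gate(a: int, b: int) -> int: return a & b
def or_gate(a: int, b: int) -> int:  return a | b

def half_adder(a: int, b: int):
    # Returns (sum bit, carry bit) for two input bits.
    return xor_gate(a, b), and_gate(a, b)

def full_adder(a: int, b: int, carry_in: int):
    # Chains two half-adders plus an OR to handle the incoming carry.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, or_gate(c1, c2)

print(half_adder(1, 1))  # (0, 1): sum bit 0, carry bit 1 -- binary 10
```

That’s the entire substance of 1+1=10: three kinds of gate, wired together.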
So how do numbers create an illusion of consciousness? It’s claimed they do, but no one can say how.
And it’s not clear to me that proponents of this view have any precise idea of where, exactly, the consciousness comes from.
Is it in the operation of the algorithm — a byproduct of running the code?
Or is it in the numbers themselves?
Either answer assumes facts not in evidence.
Belief or disbelief in an algorithmic mind is one thing; set that aside. We can agree or disagree, and either way it’s a belief.
My views are based on the facts we know to be true — the one actual data point we have regarding consciousness: us.
I do assume physicalism (such that a physical brain replica most likely would be conscious).
The belief in an algorithmic brain assumes facts not in evidence. It even assumes facts contrary to observed evidence (no conscious algorithms, limits of algorithms and math).
It’s a leap of faith.
It might be a correct one; that remains to be seen. But it’s still a leap, is all I’m saying (and that I, for one, am skeptical it’s possible to make the leap).
For reference, here’s a list of the posts in this series:
- Inevitable Math: Foundations and universality of math.
- Beautiful Math: A look at Euler’s Identity.
- Moar Math!: As the name implies!
- Halt! (or not): A look at Turing’s Halting Problem.
- Calculated Math: Introduces the idea of calculation.
- Coded Math: Explores code and algorithms.
- Running Code: More about how computers run code.
- System Code: A look at operating systems and apps.
- Model Code: Introduces software models.
- The Computer Connectome: How to model a computer.
- The Human Connectome: How to model a brain.
- Model Minds: Different levels of mind modeling.
- Four Doors: More about those levels.
- No Ouch!: Comparing hardware and software models.
- Transcendental Territory: Explores more math limits.
- Turing’s Machine: A detailed look at algorithms.
- Logically Speaking: A detailed look at computer logic.