Last week we took a look at a simple computer software model of a human brain. (We discovered that it was big, requiring dozens of petabytes!) One goal of such models is replicating consciousness — a human mind. That can involve creating a (potentially superior) new mind or uploading an existing human mind (a very different goal).
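(For a sense of where a figure like that comes from, here's a rough back-of-envelope sketch. The neuron and synapse counts are common ballpark estimates, and the bytes-per-synapse figure is purely my illustrative assumption, not a parameter from last week's model.)

```python
# Back-of-envelope storage estimate for a brain model.
# All numbers are illustrative ballparks, not measured values.
NEURONS = 86e9               # ~86 billion neurons (a common estimate)
SYNAPSES_PER_NEURON = 1_000  # ~1,000 connections each -> ~10^14 synapses
BYTES_PER_SYNAPSE = 250      # assumed per-synapse state (weight, delay, etc.)

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
print(f"~{total_bytes / 1e15:.0f} petabytes")  # ~22 petabytes: "dozens" territory
```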
I’m dividing the possibilities into four basic levels.
We’ll start with (and quickly leave behind) level #4. This level takes a thing we know is true (the fact of human consciousness) and assumes it’s the only thing that can be true (because, so far, that’s the case).
Maybe minds come from god. Maybe there is some non-god form of dualism that’s true. Maybe this is a virtual reality and the “rules” only allow minds in living human bodies.
Maybe it’s a Tegmarkian universe, but the math only works in living people.
This level assumes there is some metaphysics — or possibly just physics — that restricts consciousness to “people” (where “people” includes intelligent aliens; potentially even elephants or other animals).
In particular, no non-living machine (of any kind) can be conscious, so AI will never accomplish that goal (let alone uploading).
Since that’s the end of the line on this one, we’ll leave it there.
Level #3 also takes a true fact — that biological human brains result in consciousness — and assumes it’s the only thing that can be true (because, again, so far that has been the case).
Since the one example of consciousness we have is organic and biological, this level assumes those are necessary conditions (we don’t know they aren’t).
Time may also be a factor. Maybe it turns out that minds require lots of real-time experience to function properly.
Note that from this level on, we’re assuming physicalism (that this physical universe is the only universe — no metaphysics). That means consciousness is strictly “something that brains do.” Given this, replicating a mind is an engineering problem.
But here we assume the physics requires a living, organic, squishy biological brain that is grown and trained. It might be a matter of how bio-electronics works compared to metallic electronics. Maybe electrons flowing through wires just doesn’t cut it.
It’s hard to see exactly why a brain has to be squishy, but maybe the only way to get the necessary brain cell density, or the right micro-structures, or the massive number of interconnections, amounts to growing them (like crystals or a plant).
Or constructing them with nano-machines. The distinction between biology and machinery can get fuzzy at this level. It sort of boils down to proportions of carbon, hydrogen, oxygen, and metals.
We’re assuming here that those proportions matter somehow.
At this point, I want to introduce two important criteria involving the consciousness we’re trying to replicate.
The first is that we humans experience qualia. There is “something it is like” to look at the color red. The question here is: what, in our model, are the correlates of the experience of redness? We should be able to identify them in any model we construct.
The second is that we humans have a sense of the «I» — the self narrator of our personal lifetime movie. Our identity as self-aware beings is based on the «I».
As Descartes said, “«I» think, therefore «I» am.”
With regard to qualia and organic brains, we are beginning to identify correlates between subjective experience and brain function.
In some cases, we’re able to say a great deal about what a subject is “thinking” based on what we see their brain doing.
For now, an understanding of the «I» eludes us rather completely. There are theories, but they amount mostly to best guesses — hypotheses worth investigating.
If we can clone or grow a brain, we may be able to create a new mind. Yet this isn’t far off from what parents do when they create children. And it’s possible the physics really does require both biology and time to create a functioning brain.
It seems unlikely uploading would be possible at this level, but new artificial minds (perhaps exceptionally powerful ones) might be. It may amount, essentially, to making a child very quickly and with very fine control over the resulting brain.
Level #2 focuses on the fact that brains are highly connected networks but ignores any requirement for biology, density, or other specific physical properties. It assumes consciousness lies in the complexity of the network, not in any specific physical aspect of it.
Any sufficiently complex network resembling a human connectome and capable of processing information like a brain should give rise to consciousness.
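To make the level #2 idea concrete, here's a minimal sketch of a "connectome as network": just nodes, weighted connections, and a threshold rule. The topology, weights, and threshold are arbitrary illustrations of mine, not a serious neural model; the point is only that structure alone determines how activity flows.

```python
# Toy "connectome as network" sketch: activity propagates through weighted
# connections, and a neuron fires when its summed input crosses a threshold.
connectome = {  # adjacency list: neuron -> [(target, weight), ...]
    0: [(1, 0.6), (2, 0.9)],
    1: [(2, 0.5)],
    2: [(0, 0.9)],
}
THRESHOLD = 0.8

def step(active):
    """One update: sum weighted input from firing neurons; fire over threshold."""
    inputs = {}
    for neuron in active:
        for target, weight in connectome[neuron]:
            inputs[target] = inputs.get(target, 0.0) + weight
    return {n for n, total in inputs.items() if total >= THRESHOLD}

active = {0}               # start with neuron 0 firing
for _ in range(3):
    active = step(active)
    print(sorted(active))  # which neurons fire at each step
```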
As with the previous level, it’s possible brains require real-time training to function.
However, the mechanical nature of this level allows the possibility that, even so, such “real-time” training could be accomplished in very short actual time.
It may be that the complexity of the network prohibits uploading or downloading and that only (perhaps very fast) “experience” can program it.
For the first time, we’re stepping away from having working examples that tell us “this much, at least, is possible.” We have no complex, non-organic, intelligent networks in our experience. We assume that in replicating the structure of the brain, we also replicate its function.
The key question is whether a machine brain that works closely enough to the way a human brain does will produce consciousness. Given the assumption of physicalism, it’s hard to see why it wouldn’t, so despite the lack of working examples, there is a good chance this level could work.
The mechanical nature of the network should make it easy to interface to more standard forms of digital information.
Uploading an existing mind might be possible, although there are some significant engineering challenges:
Firstly, we need a sufficiently accurate scan of the network of an existing mind. Secondly, we need a way to apply that scanned network to a new physical one.
Perhaps, once the scan is accomplished, a mind is produced (a nicer verb might be “woven”) from that scan. Formidable, but potentially possible.
Level #1 assumes that mind is an algorithmic process, that some Turing Machine represents it. This is pure assumption, with no working evidence (or even a model) suggesting it’s possible.
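(For concreteness, "has a Turing Machine that represents it" means the process can be captured by a simple read-write-move rule table. Here's a toy machine of my own devising that increments a binary number; it illustrates what the formalism is, and of course says nothing about whether minds fit it.)

```python
# Minimal Turing machine: increments a binary number on the tape.
# (state, symbol) -> (write, move, next_state); move is -1 (left) or +1 (right).
RULES = {
    ("right", "0"): ("0", +1, "right"),  # scan right to the end of the number
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "add"),    # hit blank: back up and add 1
    ("add",   "1"): ("0", -1, "add"),    # 1 + 1 = 0, carry left
    ("add",   "0"): ("1",  0, "halt"),   # 0 + 1 = 1, done
    ("add",   "_"): ("1",  0, "halt"),   # carried past the left edge
}

def run(tape_str):
    tape = dict(enumerate(tape_str))     # sparse tape: position -> symbol
    pos, state = 0, "right"
    while state != "halt":
        write, move, state = RULES[(state, tape.get(pos, "_"))]
        tape[pos] = write
        pos += move
    return "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1)).strip("_")

print(run("1011"))  # -> 1100 (binary 11 + 1 = 12)
```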
At this level, there are two basic options: A software model can attempt to replicate the complexity of the physical brain.
Or it can model consciousness functionally in a way that is not related to the brain’s structure.
When we created our crude brain model, we took the first option.
No one really knows enough at this time to do much with the latter option. We have no working model of consciousness.
We do have systems that act (somewhat) as if they were conscious — or at least intelligent. They can pass a limited Turing Test, so to speak. Software programs such as Siri and Watson are steps in that direction.
But the Holy Grail of AI is replicating true human consciousness. Something that can say, with the same authority we can, “«I» think, therefore «I» am!”
And obviously this is necessary if we are to upload ourselves, something that would seem to be almost trivial at this level.
So those are the four possible levels I see. The middle two are grounded in the one example we have (us); the first and last are speculation.
(The first is religion or metaphysics, and it sits these posts out.)
The last one is based on the key assumption that mind is algorithmic. If it is, the rest follows.
[For the record, I see this level as being nearly as much wishful thinking as the first one about humans having souls. There is literally not one shred of hard evidence supporting this.]
Levels #3 and #2 aren’t very different; they differ only on the matter of biology versus machinery. The former is obviously true of us. For the latter, I’m hard-pressed to come up with a good scientific reason a machine brain wouldn’t work.
The key point here is the very large gap between levels #2 and #1.
As we cross that line, the model changes from a recognizable physical replica to a simulation calculated in code and data, with no physical correspondence to the brain or its function.
A software model, even one that seeks to replicate the physical structure of the brain, is still a dance of numbers. Binary bits flowing in and out of the CPU. Simple math and logic implemented in metal and silicon.
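If that sounds abstract, here's what a single step of such a simulation actually looks like stripped down: a simulated neuron reduced to a handful of multiplications, additions, and a comparison. The constants are arbitrary illustrations; the point is that there's nothing here but arithmetic.

```python
# What "a dance of numbers" means in practice: a simulated neuron is just
# arithmetic. A weighted sum, a decay term, a comparison. Nothing else.
weights = [0.4, -0.2, 0.7]  # synaptic weights (numbers)
potential = 0.0             # membrane potential (a number)
DECAY, THRESHOLD = 0.9, 1.0

def tick(inputs):
    """One time step: decay, integrate, compare. Returns True on a 'spike'."""
    global potential
    potential = potential * DECAY + sum(w * x for w, x in zip(weights, inputs))
    if potential >= THRESHOLD:
        potential = 0.0     # reset after firing
        return True
    return False

for t, inputs in enumerate([[1, 0, 1], [1, 1, 0], [0, 0, 1]]):
    print(t, tick(inputs))  # spikes only on the first input pattern
```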
It requires a great deal of faith to believe that this can result in subjective qualia and something with an «I».
That faith may someday turn out to be justified.
But I’m extremely skeptical it will. In the next couple of posts, now that I’ve laid all the groundwork, I’ll try to explain exactly why.