Last time I introduced four levels of possibility regarding how mind is related to brain. Behind Door #1 is a Holy Grail of AI research, a fully algorithmic implementation of a human mind. Behind Door #4 is an ineffable metaphysical mind no machine can duplicate.
The two doors between them lead to physical models that recapitulate the structure of the human brain. Behind Door #3 is the biology of the brain, a model we know creates mind. Behind Door #2 is the network of the brain, which we presume encodes the mind regardless of its physical construction.
This time we’ll look more closely at some distinguishing details.
We’ll walk past Door #4 quickly again because it’s a dead-end as far as AI is concerned. Computers might be very powerful and useful, but they will always be mindless.
Behind Door #3 we find ourselves, our biological brain-mind, and the key point here is that humans experience qualia and a personal mind’s «I».
We assume that these define what it means to be conscious and self-aware.
Humans create biological minds routinely at the rate of about 350,000 per day.
It seems possible this could be sped up and done artificially.
Biological minds have the disadvantage of taking time to train, but advances in our understanding of human memory might result in ways to install knowledge (even experience, which is just memory of past events) quickly.
The biological similarity between artificial clones and humans raises some ethical questions. A cloned human would seem due the same rights as one made the old-fashioned way.
But Door #3 serves mainly as a reference point. It embodies what we know works (given an assumption of physicalism). If we never get Door #2 (let alone the more distant Door #1) open, then we’re stuck just making new “people.”
An interesting point involves somehow transferring an existing mind to a cloned one. Presumably a genetic clone could grow the same network.
But the synapse weightings, which encode a lifetime of learning, still need to be transferred from the existing brain to the clone, or all you have is the blank slate of a new-born human.
A more sensible engineering goal is replicating the network, the connectome, along with the synapse weightings, in a non-biological manner. That abstract network — physically implemented — is what’s behind Door #2.
The network itself is the blank slate of a new-born human. All our instincts and basic drives are encoded in that complex highly connected network. Some synapse weightings may even be part of our genetic heritage.
Everything we learn once born is encoded in the LTP of the synapses. But to the extent we grant a new-born human a mind (and we do), an unweighted network should likewise give rise to some sort of (infant) mind.
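To make the wiring-versus-weights distinction concrete, here's a toy sketch (the neuron labels and weight values are invented, and real synapses are vastly more complex): the connectome is the set of connections, while learning changes only the strengths attached to them.

```python
# Toy sketch: a "connectome" as a directed graph whose edges carry
# adjustable strengths (standing in for synaptic LTP weightings).
# Neuron names and weights are invented for illustration only.

connectome = {
    # (pre-synaptic, post-synaptic): weight
    ("A", "B"): 0.0,
    ("A", "C"): 0.0,
    ("B", "C"): 0.0,
}

def wiring(graph):
    """The 'blank slate': which cells connect to which."""
    return set(graph)

def learn(graph, pre, post, delta):
    """Learning adjusts only the weights, never the wiring."""
    graph[(pre, post)] += delta

newborn = dict(connectome)        # snapshot of the unweighted network
learn(connectome, "A", "B", 0.7)  # a bit of lived experience...

assert wiring(newborn) == wiring(connectome)  # same structure
assert newborn != connectome                  # different weights
```

The point of the sketch: the new-born and the adult share the identical graph; only the numbers on the edges differ.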
For example, infants experience qualia. Their «I» may not be well-developed, or even apparent, at first, but they experience the world. Presumably (through the appropriate I/O), a new-born network could do the same.
We should be able to see the activity of the network experiencing qualia (just as fMRI shows that activity in human brains).
The key point here is the physical correlation between experience or thought and what we see in the brain. The study of neural correlates is the study of exactly that.
To open Door #2 all we require is that, in replicating the physical operation of the brain, we also end up replicating the (physical!) operation of the mind.
Interestingly, we don’t even really need to understand exactly how the network gives rise to a mind. We just need to replicate the circumstances.
The goal of transfer becomes more feasible behind Door #2. Creating (I really like the term weaving) a mechanical network that precisely matches an existing one certainly seems doable.
That leaves two big challenges: Firstly, the need to scan an existing human mind in enough detail (including the synapse strengths!). Secondly, the need to transfer both the connectome and the synapse strengths to the network.
As we saw in The Human Connectome, the amount of information involved is (at least) in the petabyte range. Even at extremely high data rates, it can take many hours to transfer that much data.
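As a rough back-of-envelope (the 1 PB size and the 100 Gb/s link speed are illustrative assumptions, not measured figures):

```python
# How long to move a petabyte-scale brain scan over a fast link?
# Both numbers below are illustrative assumptions.

petabytes = 1
bits = petabytes * 8e15        # 1 PB = 8 * 10^15 bits
link_bps = 100e9               # a fast 100 Gb/s link

hours = bits / link_bps / 3600
print(f"{hours:.1f} hours")    # prints "22.2 hours"
```

Nearly a full day for a single petabyte at 100 Gb/s, and the real figure may be several petabytes.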
Perhaps one solution is a very high-resolution fMRI scan capable of capturing the brain’s structure in sufficient detail. Images, especially holographic ones, can carry huge amounts of data. Such a scan might capture the brain as holographic images in mere minutes.
Analysis of that image data and extraction of the connectome might take much, much longer, but at least the subject isn’t forced to sit there for it.
Then all that extracted data would allow (perhaps through some kind of 3D printing) the weaving of a machine network that matched.
If the printing process was good enough, even the synapse strengths might be printed. If not, we need another stage to implant the synapse LTPs. (But if 3D printing was sufficient to print the whole thing, creating dozens of copies of yourself wouldn’t be a problem! “This little mind-clone went shopping. This little mind-clone stayed home…”)
Of the two, scanning and transfer, the former seems the more formidable due to the need to obtain so much extremely high-resolution data. Even so, it seems primarily an engineering challenge.
Note also that the formidable scanning challenge exists for all types of mind transfer from human to machine. Regardless of how that data is ultimately used, it still needs to be scanned and obtained.
Which brings us to Door #1, behind which is a purely software mind.
This is truly the land of unknowns. We have no real reason to think mind is (purely) algorithmic. Nothing else natural is.
There is some small reason to doubt a physical model will work (maybe some form of dualism is true). That doubt becomes greatly magnified when we consider a software model.
Consider the difference between a slide rule and a calculator.
In a slide rule, printed scales physically recapitulate the reality of (base ten) logarithms. By sliding one scale against another, we set a ratio (a rational number), and that ratio appears along the entire length of the juxtaposed scales. We can read variations of it (e.g. 1/2, 2/4, 3/6, etc.) at any point along the scales.
In a calculator, a clocked flow of electrons through logic circuits results in a series of binary logic steps that ultimately delivers the specific number requested.
(And just that one answer, not the continuum of answers the slide rule provides.)
Both processes can deliver an answer to whatever precision we put into their construction. Both can achieve a similar result. But they use starkly different processes to get it.
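The contrast can be sketched in code (a loose analogy, not a model of actual hardware): one function "adds lengths" of logarithms the way juxtaposed scales do; the other grinds through discrete shift-and-add logic steps, as a binary multiplier circuit does.

```python
import math

def slide_rule_multiply(a, b):
    """Analog-style: 'adding lengths' proportional to base-ten logs.
    A continuum of products lies along the juxtaposed scales; we read
    one off at limited precision, like a real slide rule."""
    length = math.log10(a) + math.log10(b)  # slide one scale along another
    return round(10 ** length, 3)           # read the cursor, ~3 digits

def calculator_multiply(a, b):
    """Digital-style: discrete binary shift-and-add steps, delivering
    exactly the one non-negative integer requested."""
    result = 0
    while b:
        if b & 1:          # low bit set? add the shifted multiplicand
            result += a
        a <<= 1            # shift left
        b >>= 1            # next bit
    return result

print(slide_rule_multiply(2, 3))   # 6.0 (read off a continuous scale)
print(calculator_multiply(2, 3))   # 6   (one exact discrete answer)
```

Same answer, starkly different processes, which is exactly the distinction at issue.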
The crucial question with regard to consciousness is whether it arises from the result or from the process itself.
If consciousness is the result of a process, it can be calculated by an algorithm, and we can open Door #1. It’s as simple as calculating the right result.
If consciousness is the process itself, then it cannot be calculated, and we cannot open that door. Consciousness lies, not in the result, but in the process, so all we can do is emulate that process.
Next time we’ll wrap up this series by looking specifically at systems with physical effects (such as lasers) that emerge only from inherent properties of the physical system.
Specifically we’ll look at how a software model of such a system can very precisely emulate its behavior and describe what happens, but it cannot cause the same physical phenomena to arise.
That requires those inherent physical properties!
To put it simply: A software model of a laser does not emit laser light.
A software tree falling in a software forest only creates numbers representing the sound it makes. A virtual falling tree does not make a physical sound!
 It’s possible they could act like conscious beings, and it’s even possible the act could be so convincing it calls into question what we really mean by conscious or self-aware. That is a partly philosophical question beyond the scope of this series.
 The process involves some fun, a long wait, and then a large investment in training, but if it’s done right it usually produces decently functioning units.
 But do environmental conditions during growth affect the connectome? I suspect they do, but let’s assume technology allows somehow growing an identical physical brain.
 Which, again, is done routinely about 350,000 times a day by unskilled labor, so — really — how hard can it be?
 One little gotcha is the need to capture the synapse strengths that encode the lifetime of learning and experience. That requires extremely high resolution and the ability to determine those strengths just from the physical structure of the synapse.
 The more serious issue is that it might be an intractable engineering challenge. Or maybe too expensive for common use.
 That may forever pose a problem for uploading our minds. It may simply be too difficult to extract the needed information from a human brain.
(What if it could be done at the expense of dissecting the existing brain? Would you sacrifice yourself for the chance to live in a computer?)