In the last post I talked about software models for a full-adder logic circuit. I broke them into two broad categories: models of an abstraction, and models of a physical instance. Because the post was long, I was able to mention the code implementations only in passing (but there are links).
I want to talk a little more about those two categories, especially the latter, and in particular an implementation that bridges between the categories. It’s here that ideas about simulating the brain or mind become important. Most approaches involve some kind of simulation.
One type of simulation involves the states of a system.
I’ll start by defining state. It means essentially what we mean when we say, “You’re in quite a state today!”
Casually, we might say state means the condition of something (State of the Union).
In physics, the state of a system is likewise a description of its condition, but there it is meant to be a complete description of the system.
Or at least complete enough to fully and faithfully describe the system properties of interest.
§
Have you ever seen a scene where havoc happens, but things are restored to their original pristine state before the {parents, cops, boss} return?
That was possible because someone knew the original state of the system (either by saving a “snapshot” of it or by knowing how to recreate it).
If we make a copy of the system states at some point (or figure out how to clean and fix everything), we can let loose the dogs of havoc and the cats of chaos. When the dust and fur settle, we just restore the backup (or clean and fix).
You may have done this when your PC got infected or a drive crashed.
§
With a computer, it’s fairly easy to save its state. Computers are mathematical logic reified, so state is numeric and can be saved and restored. Further, because of the way computers work, those states are easily accessible.
This is happening in the computer in front of you right now, all the time.
Different applications share the CPU by time multiplexing. When the system passes control from process A to process B, it stops A and saves its state. Then it restores the saved state of process B and starts it up again.
It probably happened thousands of times while you read that paragraph. If you open up your computer’s task manager, you’ll see all the processes running.
§
The next logical step: what if we saved the state every time it changed?
We might end up with a lot of backups, but we could restore the system to any point we wanted.
More interestingly, we could play back the system state by state, as if it were running.
We could even run it one state per day, but over the years it would still be exactly as if the program were running.
Which brings us to the idea of using some numeric representation of system states, and a software engine, to simulate the system by running its states.
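The save-and-replay idea can be sketched in a few lines of Python. The “system” here is just a toy counter I’ve made up for illustration; each saved state is a full snapshot:

```python
# Sketch of saving the state on every change, then playing it back.
# The "system" is a trivial counter; each saved state is a full snapshot.
history = []

def save(snapshot):
    history.append(dict(snapshot))      # copy, so later changes don't leak in

system = {"count": 0}
save(system)
for _ in range(3):
    system["count"] += 1                # havoc happens...
    save(system)

# Playback: step through the saved states, at any speed we like.
for snap in history:
    print(snap["count"])                # prints 0, 1, 2, 3

# Restore: roll back to any earlier point.
system = dict(history[0])
```

The same pattern, scaled up enormously, is what a process context switch or a disk image restore does: copy out the state, do whatever, copy it back in.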
§
If we know the system states, we can just put them into a table and run the system with simple code that references the table.
That’s exactly what full_adder_5 does in the Full Adder Code post.
A basic sketch of a state engine goes something like this:
1. Begin with the [START] state.
2. Wait for {event} to proceed.
3. Use {event} to determine the next state.
4. Exit this state; enter the next state.
5. If the state is [EXIT], quit.
6. Go to step 2.
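The loop above can be sketched in Python. The table here is a toy of my own invention (a machine that exits after seeing two 1-bits), not the actual full-adder table from the code post:

```python
# Minimal state engine: loop over a table, letting each event pick
# the next state. The table is an assumed toy illustration, not the
# full-adder table.
def run(table, events):
    state = "START"                     # begin with the START state
    for event in events:                # wait for an event
        state = table[state][event]     # event determines the next state
        if state == "EXIT":             # on EXIT, quit
            break
    return state

# Toy table: exit after seeing two 1-bits of input.
table = {
    "START": {0: "START", 1: "ONE"},
    "ONE":   {0: "ONE",   1: "EXIT"},
}

print(run(table, [0, 1, 0, 1]))  # prints EXIT
```

Note that the engine itself knows nothing about the system being simulated; all the behavior lives in the table.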
The example code for the full-adder simplifies and compresses the steps above into single lines (repeated in #15 through #18), but all the steps are occurring.
The difference is that, rather than looping, each series of the above steps is on a separate line. That’s possible due to the simplicity of the full-adder state table:
As with the logic gate circuits, the state table also demonstrates the natural flow of the full-adder logic. There are no loops or user decision points (given the three bits of input).
As in all the cases involving the abstraction, the code models lend themselves to instant evaluation.[1]
Speaking of states, in a numerical simulation, a state generally has two key properties:
- A Name, Label, or Tag, to identify the state uniquely.
- A List, Table, or Dictionary, of possible next states.
The entries for the set of next states generally have three key properties:
- A Function that matches on an input condition (thus picking this entry as the next state).
- The Name (Label or Tag) of the next state. Or in some cases a direct link (pointer) to the next state.
- (In many cases) A Transition Function. This is something to do between one state and the next. (States themselves are static — nothing happens. All the action happens on transitions.)
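One way to encode those properties in Python, loosely modeled on the coin-machine example (the state names and events here are illustrative assumptions): each state maps to a list of (match function, next-state name, transition function) entries.

```python
# Each state name maps to a list of next-state entries:
#   (match function, next-state name, transition function)
states = {
    "IDLE": [
        (lambda ev: ev == "coin",   "PAID", lambda: print("clunk")),
        (lambda ev: True,           "IDLE", lambda: None),   # default: stay put
    ],
    "PAID": [
        (lambda ev: ev == "button", "IDLE", lambda: print("dispense")),
        (lambda ev: True,           "PAID", lambda: None),
    ],
}

def step(state, event):
    for matches, next_name, transition in states[state]:
        if matches(event):
            transition()     # states are static; the action is on transitions
            return next_name
```

The catch-all `lambda ev: True` entry at the end of each list plays the role of a default transition, which is why entry order matters.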
[See Turing’s Machine or State Engines, part 1, for more about state engines.]
§
The main point is that we can replicate the behavior of a system by “running” its states. In some cases we can be the system.[2]
The flow nature of the full-adder brings up an important point about running the system: What is the {event} that drives one state to another?
It can be one of two things: external input or an internal clock.
With an internal clock, the system moves from state to state on a time {event}. If we were running a set of saved PC states to find the exact point it crashed, we’d just run those saved states at some speed (which could be anything, slow or fast).
With external input, the system waits for an {event} from the outside. This might be a stream of data (say, video bytes) that keeps the system moving, or it might involve waiting for the user to input some coins.
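The distinction amounts to who calls the step function. A minimal sketch, with names of my own choosing:

```python
import time

# Two ways to drive the same step function: an external event stream,
# or an internal clock issuing "tick" events at any speed we choose.
def drive_with_events(step, events):
    for ev in events:          # e.g. video bytes, coins, keystrokes
        step(ev)

def drive_with_clock(step, n_ticks, interval=0.0):
    for _ in range(n_ticks):
        time.sleep(interval)   # playback speed can be anything
        step("tick")
```

Either way, the engine and the state table don’t change; only the source of {event} does.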
§
Which brings up one interesting conundrum when talking about states of the mind.
Let’s say, on the one hand, we manage to capture your mental states, just like we captured the states of the PC. This allows us (in theory) to restore you to a previous point.[3]
We could play back those states from that point. Effectively, it would be like a very accurate video recording.
But that says nothing about alternate states — which interacting with the recording requires.
Nor does it say anything about your future states — which going beyond the recording requires.
Both have the same problem: predicting future states, which requires that we understand the laws involved in creating new ones.
If the actual mental states (as in most software models) are finite and fixed, this amounts to figuring out the laws involved in moving among them.
§
That’s not the only problem I see for simulating the mind as a state engine.
What actually is a mental state?
It’s not like the brain’s neurons all change their state at once. Signals flow from one to another. (There is some synchronous behavior, but nothing like the rigid states of a clocked computer system.)
Even just “backing up” a mind (let alone playing it back) requires we figure out what the relevant states are and how to capture them.
How granular are the states we must capture to have a faithful recording? Quantum states? Atomic states? At what level do system states miss important properties of the system?
§
The full-adder simulations offer an illustration.
The state model algorithm (full_adder_5), and also examples #3 and #4 before it, have intermediate states not represented in the abstract model.
Further, those intermediate states are different between the algorithms.[4]
This issue of intermediate states becomes even more apparent when we look at the full-adder Sim v1 and Sim v2 (in the next post).
§
I’ll end by asking what it might mean to play back someone’s mental states.
There is a crucial question: What is required for mental states to have phenomenal content? (For the moment, let’s assume our mental states do have phenomenal content.[5])
Does playing them back in any form result in that phenomenal experience? That suggests subjectivity (real or apparent) lies in the states themselves.
Or are the states byproducts — mere descriptions of the system?
Assuming playback involves phenomenal states, what about playing back the suffering of a criminal as punishment? Or anyone.
It wouldn’t affect the original person, but it would affect an image of them.
Does it matter to them?
Does it matter to us?[6]
§
There is also the question of exactly how the mental states interact with external reality, but that’s probably mostly just I/O.
Stay stateful, my friends!
∇
[1] If you look at the code, you’ll see it can be implemented as a one-line recursive solution. That’s not usually (I’d say rarely) the case with state engine implementations. Most require an engine looping over the table, often visiting the same state repeatedly.
That recursive solution, again, shows the flow nature of the full-adder.
[2] I went through a stage as a programmer where just about everything I designed was a state engine of some kind. I fell in love with the capabilities. I still use the architecture sometimes.
State engines are a good choice for implementing some algorithms because the state graph can be a good match to the abstract graph represented by the algorithm. The coin machine state diagram shown at top is an example.
[3] I loved Russian Doll! But that reset allowed the character to remember previous experience (i.e. after the save point) and they could take new actions (as with Bill Murray in Groundhog Day).
[4] As an aside re state tables, #5 is the largest of the five abstract model algorithms. State tables for anything interesting do tend to be large — lots of states, and some states have lots of next states.
[5] Fie! on the doubters! Fie! I say! 😀
[6] A theme explored more than once in Black Mirror!
May 13th, 2019 at 3:42 am
Consider the difference between the playback of a person’s recorded suffering versus the playback of that same person’s original suffering on video.
What about a very close simulation of that person suffering?
May 13th, 2019 at 3:43 am
As a separate issue: Do the possibilities make you hope computationalism is false? That none of this is possible?
May 13th, 2019 at 7:05 pm
On playbacks of copied images, for me, a lot depends on when the copy is made. If you tell me that you’re going to copy me and then torture one of the copies, I have a 50% chance of being the original and a 50% chance of being the copy. In that case, I feel personally threatened.
But if you tell me that you’re going to torture a copy made last week, then I don’t feel personally threatened. There is zero chance that torture is in my subjective future timeline. The issue then becomes a moral one rather than one of personal jeopardy.
I can imagine a future where an accused criminal can be mentally dissected to ascertain their guilt or innocence, but only if they consent to it. But they can only consent to having it done on themselves or a copy yet to be made, in other words, on a copy in their future subjective timeline. They can’t give consent for it to be done on an existing backup, since the backup itself wouldn’t have consented.
As to hoping this is possible or not, I think we can safely assume it won’t be possible for us, although I’d take the downsides to get the benefits.
May 13th, 2019 at 10:11 pm
The benefits being immortality?
That’s an interesting point about when the copy is made. Parts of it are almost kind of a MWI thing.
On some level I find the proposition entirely science fiction and have a hard time taking it seriously. Makes for some great fiction, though.
I mean,… what even are brain states? How would they be recorded? How would they be played back? On what? The amount of data seems formidable, perhaps impossible to capture accurately. (Think about how CERN only records a tiny fraction of the data they detect, and they only detect a part of the data they could.)
I actually feel pretty safe saying it will never happen, that it really is just an SF fever dream. I absolutely feel safe saying it’ll never happen in my lifetime (or in the lifetime of anyone I know).
May 14th, 2019 at 8:12 am
It’s interesting that on this issue, people tend to fall into two broad camps. Those who think it’ll happen in 20 years, and those who think it’ll never ever happen.
I’m in the middle. For me, mind copying is like interstellar travel. I’ll never get to do it, but concluding that it’s forever impossible doesn’t seem justified. Maybe they’re both impossible, but I need to see something besides difficulty to convince me of it. (Something like the physical laws that probably do make FTL impossible.)
But I suspect the idea that mind copying might be possible someday, just not for us, is the worst scenario for a lot of people. No one wants to be among the last mortal generations.
I take some comfort from the realization that living until the heat death of the universe almost certainly wouldn’t be the walk in paradise many hope for. I think most people who got that option would end themselves long before then.
May 14th, 2019 at 11:54 am
“Those who think it’ll happen in 20 years,”
Likewise fusion power and a few other things. I think it’s the sense of being on the brink of something versus the sense that the issues involved are more significant. It’s an almost classic Pollyanna or Cassandra division.
“I’m in the middle. For me, mind copying is like interstellar travel.”
The comparison may be quite apt in that we’ll never do that, either. It may be that both are possible, in principle, but actually doing it may be too formidable. Or even effectively impossible.
For us, at best (and as suggested in Brin’s Existence), interstellar travel may be more like a plant sowing its seeds into the unknown hoping one or two might find hospitable conditions, take root, and flourish.
The Star Wars or Star Trek scenarios do seem pure fantasy to me. There is no Kessel run.
“No one wants to be among the last mortal generations.”
Been some pretty good SF stories along those lines. Have you ever seen Mr. Nobody? Very weird movie, but interesting. (I posted about it last year.)
“I think most people who got that option would end themselves long before then.”
SF has been pretty clear that immortality isn’t a great thing. (Even Doctor Who touches on that. The Doctor is a lonely guy and he’s just centuries old. That’s the whole pathos behind his companions.)