Over the last three posts I’ve been exploring the idea of system states and how they might connect with computational theories of mind. I’ve used a full-adder logic circuit as a simple stand-in for the brain — the analog flow and logical gating characteristics of the two are very similar.
In particular, I’ve explored the idea that the output state of the system doesn’t reflect its inner workings, especially the intermediate states the system passes through as it generates the desired output (which can fluctuate until it “settles” to a valid, correct value).
Here I plan to wrap up and summarize the system states exploration.
I left off last time talking about intermediate, or transitory, states of a system. The question is, if we only look at the system at certain key points that we think matter, do any intermediate states make a difference?
In a standard digital computer, the answer is a definite no. Even in many kinds of analog computers, transitory states exist for the same reason they do in digital computers (signals flowing through different paths and arriving at the key points at different times). In both cases they are ignored. Only the stable final state matters.
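To make those transitory states concrete, here is a minimal sketch (mine, not code from the earlier posts) of a full adder in which every gate has a one-tick propagation delay. The wire names `s1`, `c1`, `c2` are my own labels for the internal signals. Because the gates read last tick's values, the outputs briefly show a wrong answer before settling:

```python
def settle(a, b, cin, ticks=6):
    """Drive inputs a, b, cin and record (sum, cout) each tick."""
    # Internal wires start in an arbitrary state (here: all 0).
    s1 = c1 = c2 = s_out = c_out = 0
    trace = []
    for _ in range(ticks):
        # Each gate reads the previous tick's wire values,
        # which models a unit propagation delay per gate.
        s1_n = a ^ b            # first half-adder sum
        c1_n = a & b            # first half-adder carry
        s_n  = s1 ^ cin         # final sum bit
        c2_n = s1 & cin         # second half-adder carry
        co_n = c1 | c2          # carry-out
        s1, c1, s_out, c2, c_out = s1_n, c1_n, s_n, c2_n, co_n
        trace.append((s_out, c_out))
    return trace

trace = settle(1, 1, 1)
print(trace)  # early entries are transient; the tail is the settled state
```

With inputs 1+1+1, the first tick reports (sum=1, carry=0), i.e. the value 1, before the circuit settles on the correct (1, 1), the value 3. If we only sample the outputs after the settling time, that intermediate state is invisible, which is exactly the point: in a digital design, it is ignored by construction.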
So in the brain, what are the key points? What states matter?
In the last post I talked about software models for a full-adder logic circuit. I broke them into two broad categories: models of an abstraction, and models of a physical instance. Because the post was long, I was able to mention the code implementations only in passing (but there are links).
I want to talk a little more about those two categories, especially the latter, and in particular an implementation that bridges between the categories. It’s here that ideas about simulating the brain or mind become important. Most approaches involve some kind of simulation.
One type of simulation involves the states of a system.
Imagine the watershed for a river. Every drop of water that falls in that area, if it doesn’t evaporate or sink into the ground, eventually makes its way, through creeks, streams, and rivers, to the lake or ocean that is the watershed’s final destination. The visual image is somewhat like the veins in a leaf. Or the branches of the leaf’s tree.
In all cases, there is a natural flow through channels sculpted over time by physical forces. Water always flows downhill, and it erodes what it flows past, so gravity, time, and the resistance of rock and dirt, sculpt the watershed.
The question is whether the water “computes.”
On the one hand, a main theme here is theories of consciousness. On the other hand, I’ve been blogging for almost eight years, and I’ve covered my views pretty well in numerous posts and comment threads. Our understanding of consciousness currently seems stuck pending new discoveries, either ones that answer the hard questions or ones that open entirely new paths.
A while back I determined to step away from debates (even blogs) that center on topics with no resolution. Religion is a big one, but theories of mind is another. Your view depends on your axioms. Unless (or until) science provides objective answers, everyone is just guessing.
But it’s been three-and-a-half years, and, well… I have some notes…
Over the last few weeks I’ve written a series of posts leading up to the idea of human consciousness in a machine. In particular, I focused on the difference between a physical model and a software model, and especially on the requirements of the software model.
The series is over, and I have nothing particularly new to add, but I’d like to try to summarize my points and provide an index to the posts in this series. It seems I may have given readers a bit of information overload.
Hopefully I can achieve better clarity and brevity here!
Over the past few weeks we’ve explored background topics regarding calculation, code, and computers. That led to an exploration of software models — in particular a software model of the human brain.
The underlying question all along is whether a software model of a brain — in contrast to a physical model — can be conscious. A related, but separate, question is whether some algorithm (aka Turing Machine) functionally reproduces human consciousness without regard to the brain’s physical structure.
Now we focus on why a software model isn’t what it models!
Last time I introduced four levels of possibility regarding how mind is related to brain. Behind Door #1 is a Holy Grail of AI research, a fully algorithmic implementation of a human mind. Behind Door #4 is an ineffable metaphysical mind no machine can duplicate.
The two doors between lead to physical models that recapitulate the structure of the human brain. Behind Door #3 is the biology of the brain, a model we know creates mind. Behind Door #2 is the network of the brain, which we presume encodes the mind regardless of its physical construction.
This time we’ll look more closely at some distinguishing details.
Last week we took a look at a simple computer software model of a human brain. (We discovered that it was big, requiring dozens of petabytes!) One goal of such models is replicating consciousness — a human mind. That can involve creating a (potentially superior) new mind or uploading an existing human mind (a very different goal).
Now that we’ve explored the basics of calculation, code (software), computers, and (computer software) models, we’re ready to explore what’s involved in attempting to model a (human) mind.
I’m dividing the possibilities into four basic levels.