The ultimate goal is a consideration of how to create a working model of the human mind using a computer. Since no one knows how to do that yet (or if it’s even possible to do), there’s a lot of guesswork involved, and our best result can only be a very rough estimate. Perhaps all we can really do is figure out some minimal requirements.
Given the difficulty, we’ll start with some simpler software models. In particular, we’ll look at the (perhaps seemingly odd) practice of using a computer to model a computer (possibly even itself).
The goal today is to understand what a software model is and does.
Using one computer to model another computer (or even itself) might seem strange, but it’s actually very common.
- A Java program actually runs in the Java Virtual Machine, a system program that models a virtual computer capable of running Java code.
- Apple and Unix machines (among others) can run Windows™ programs on emulators — programs that model a virtual Windows™ computer.
- Developers of mobile apps use software models of real phones to test their code.
But first let’s take a step back and consider what a software model is.
There are many definitions of the word model. It can refer to a person who poses for a painting or who acts as a living manikin. It can also refer to the type or style of something (as in: “the latest smart phone model”).
In this case we mean the same thing as when we say “model car” or “model airplane” — a representation or example of something.
Note that models are often simplified or miniature versions of the thing they represent, but there is also a sense where something can be an example to follow (for instance, a model citizen — in which case the model is the ideal or desired version).
At its core the word means measure — as in: “to take the measure” of a person or situation. A model is an abstraction of something, and the abstraction can be simple or detailed.
So models can range from tiny, indistinct Monopoly pieces to full-sized, possibly working, models that are essentially copies (even idealizations) of the thing they model. Often such models are judged by how accurate and complete they are — by how closely they emulate the thing they model.
In some cases the desire is to simulate, even replicate, something. When anthropologists seek to make stone tools by chipping rocks, they are modeling lost processes used by our ancestors.
The goal there is mainly a demonstration of feasibility. By modeling a process exactly in context we prove our ancestors could have used it (proving they did might require a time machine).
We create models of the weather hoping to understand, and to predict, it.
One thing we’ve learned is that no digital computer model can ever do it completely successfully (that is: accurately). There are multiple reasons for this, some practical, some theoretical (I’ve explored some of them in the past).
A very real question is whether a software model of a human system might likewise have such limits. It’s possible chaos, uncertainty, data resolution, or even sheer data volume, make a working software mind impractical or even impossible.
In any event, it’s clear we eventually hope to create a software model of a mind so accurate and functional that we can envision ourselves living as conscious machine minds.
Let’s sneak up on that by considering some simpler software models. We’re interested in what it takes to model some real world object.
We’ll start with the humble (English) checker board.
And just the board. We won’t try to simulate a player who knows how to play. That is, we’re not going to model the game strategies, but we will need to model the progress of a game.
That means we need to model the physical board and pieces as well as the idea of two players taking turns making moves.
And at any given point during a game, our board model needs to reflect the state of the game (one way to view this: we can save and restore the game state).
That description of our model, plus the rules of checkers, gives us some requirements:
- A board has 64 squares in an 8×8 grid.
- Each player has 12 pieces (24 total).
- The game starts with all the pieces on the board.
- Pieces can be removed (“captured”) during play.
- The pieces have a starting arrangement.
- The pieces have legal moves.
- Pieces can transform into “King” pieces.
- Different move rules apply to King pieces.
- Players alternate taking turns making moves.
- Certain board states are a win for one player.
A software model needs to implement these requirements (and more).
A simple model might only implement the physical board and pieces and not enforce the move rules of checkers. Such a model could allow players to place pieces anywhere on the board.
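As a sketch of that simplest model, here’s what the static side might look like in Python (the class and method names here are purely illustrative, not a definitive design):

```python
# A minimal static model: the board and pieces, with no move rules.
# Pieces can be placed or removed anywhere -- just like a physical board.

class Board:
    """An 8x8 checkerboard that tracks piece positions but enforces no rules."""

    def __init__(self):
        # None means an empty square; otherwise a (color, is_king) tuple.
        self.squares = [[None] * 8 for _ in range(8)]

    def place(self, row, col, color, is_king=False):
        self.squares[row][col] = (color, is_king)

    def remove(self, row, col):
        self.squares[row][col] = None

board = Board()
board.place(0, 1, "black")             # any square, any piece -- no rules
board.place(7, 0, "red", is_king=True)
board.remove(0, 1)
```

Note that this model knows nothing about turns, captures, or legality — exactly like a physical board sitting on a table.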
A more complicated model might model a game and only allow legal configurations and moves. Such a model could even automatically crown a king or declare victory when the game reaches the right state.
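To give a taste of the dynamic side, here’s a sketch of just one fragment of the rules — a plain (non-capturing) move for an ordinary piece. A real game model needs much more (captures, forced jumps, kings, turn order), and the representation here is an assumption, but it shows the shape of rule enforcement:

```python
# One fragment of the dynamic model: validating a plain move.
# `squares` is an 8x8 grid; None is empty, otherwise a (color, is_king) tuple.

def is_legal_simple_move(squares, frm, to, color):
    """True if moving `color`'s non-king piece from `frm` to `to` is a
    legal plain move: one square diagonally forward onto an empty square."""
    (r1, c1), (r2, c2) = frm, to
    if squares[r1][c1] != (color, False):   # must be that player's plain piece
        return False
    if squares[r2][c2] is not None:         # destination must be empty
        return False
    forward = 1 if color == "black" else -1  # assume black moves "down" the grid
    return r2 - r1 == forward and abs(c2 - c1) == 1

squares = [[None] * 8 for _ in range(8)]
squares[2][1] = ("black", False)
```

With that in place, `is_legal_simple_move(squares, (2, 1), (3, 2), "black")` passes, while a two-square hop or a straight-ahead move does not.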
This points out a very important distinction I want to make here: There are two aspects to any real world model.
Firstly, there is the static representation of the object being modeled.
Secondly, there is the dynamic environment — the real-time environment — in which the object functions.
Modeling the checker board and pieces is the static side. Modeling a checkers game is the dynamic side.
Those are actually two different models!
Likewise, a software model of an airplane can model only the physical aspects of the plane (it can be an electronic version of a blueprint), or it can also model flight characteristics — how the plane would fly.
Consider what Google is doing with self-driving cars. They start with a static model of vehicles, streets, traffic rules, and so forth.
But the real meat is the model of the dynamic driving process. It is a model of a (hopefully perfect) driver.
Let’s finally apply all this to a software model of a computer.
We’ll start with a static model and then see about a dynamic one. First we’ll model a computer that’s just sitting there; then we’ll have a go at one that’s actually running.
A key question involves the level of our simulation. Are we simulating the computer at a low mechanical level or at a high functional level? Are we trying to emulate the individual small parts of the machine or how the thing works?
Even on the mechanical side, do we simulate the transistors, the logic gates they combine to form, or the logic blocks those combine to form?
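That build-up of levels can be sketched in code, with each level as a function (here using NAND as the primitive; real hardware would, of course, do this with transistors):

```python
# Everything from a single NAND primitive, level by level.

def nand(a, b):
    return not (a and b)

# Simple gates built from NAND:
def inv(a):     return nand(a, a)
def and_(a, b): return inv(nand(a, b))
def or_(a, b):  return nand(inv(a), inv(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# A building block made from those gates:
def half_adder(a, b):        # returns (sum, carry)
    return xor(a, b), and_(a, b)

# A bigger block made from building blocks:
def full_adder(a, b, cin):   # returns (sum, carry)
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, or_(c1, c2)
```

Chain eight full adders together and you have a byte-wide binary adder — each level is simple, but the levels stack.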
Figuring out exactly how to model a computer is a complicated proposition, let alone actually designing the code to pull it off. On one level it’s simple because (remember!) computers are just over-grown calculators and they only use two numbers (zero and one) to boot!
But they are involved. There may be only 26 letters in the Latin alphabet, but all the possible books using it will never be written. Simple things can combine to make complex things.
Fortunately all we want to do here is characterize the complexity of the task. That requires defining exactly what we mean by complex.
Which, ironically, turns out to be a complicated topic. There are many ways to define complexity.
We’re dealing with algorithms — information — so complexity as it applies to information (especially to algorithms) interests us here. One measure of algorithmic complexity is Kolmogorov complexity, a measure of what it takes to describe something.
Which is exactly what a software model does: It describes something.
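Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude upper bound on it: the compressed size of a description. A highly regular description compresses far more than a noisy one of the same length — a rough illustration (the specific sizes and seed are arbitrary):

```python
import random
import zlib

regular = b"01" * 5000     # 10,000 bytes with an obvious pattern
random.seed(42)
messy = bytes(random.randrange(256) for _ in range(10000))  # 10,000 noisy bytes

# The regular string has a very short description ("repeat '01' 5000 times"),
# and the compressor finds something close to it. The noisy string has no
# shorter description, so compression barely helps (or even adds overhead).
print(len(zlib.compress(regular)))
print(len(zlib.compress(messy)))
```

The same lengths, wildly different description sizes — which is the sense of complexity we care about here.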
So we’ll look at the size of the descriptions necessary to model a computer (or a brain) to give us some sense of minimal requirements.
We’ll also look at the size of the dynamic state space made possible by a model (or the thing it models), but we’ll mostly focus on the requirements of a static description.
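To see how fast a dynamic state space grows even for something as simple as checkers, consider a loose upper bound: only 32 squares are playable, and each can be empty or hold one of four piece types (red or black, plain or king). That ignores piece-count limits, so the true number is smaller, but the scale is the point:

```python
# Loose upper bound on checkers board states:
# 32 playable squares, 5 possible contents each.
upper_bound = 5 ** 32
print(upper_bound)   # roughly 2.3 x 10^22 possible configurations
```

Even this toy game has a state space far beyond anything we could enumerate square by square — which is why the static description, not the state space, is where we’ll start.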
Next time: Some details and numbers!
For now, I’ll leave you with this:
 Modeling the brain is likewise an attempt to study and prove things about the brain and mind. In particular, to prove that mind is algorithmic (with all that implies). But we’ll get to that.
 Full disclosure: I’m definitely from Missouri on this one. I’ll believe it when I actually see it. In the meantime consider me skeptical to the point of fairly strong disbelief in the possibility of software minds.
(In fact, to be clear, the underlying purpose of this series of posts is to explain exactly why I’m so skeptical.)
 And we can ponder just how accurate and functional it would have to be to convince us it’s really us in the machine.
 Exactly as you can do with a physical board!
 Although dynamic models are almost always built around static models (as is the case with the checkerboard and game).
 A major question in AI is whether human consciousness can be deconstructed into abstract functional modules that needn’t bear any resemblance to how the brain represents those functions.
Effectively: Is there an abstract Turing Machine that represents human consciousness? The answer has to be yes if computers as we know them are to run human consciousness as software.
 And even those blocks make bigger blocks, sometimes several times, before you’re talking about truly functional blocks.
For instance: Some transistors make a simple logic gate (INV, NAND). Some simple logic gates make more complex logic gates (XOR, half-adder). Some of those make useful logic building blocks (full-adder, flip-flop). Those make useful units like binary adders and registers. (And those can make a CPU.)