In a previous post I wrote a story about how the guns might work in the HBO show, Westworld. In this post I thought I’d take a stab at describing how the host brains might work — a much more challenging task!
As with the guns, and as with any work of fiction we fans love, our guesswork depends on the facts we can observe in the show (the official canon, so to speak). Additional facts can come from the Word Of God (the show's creators). Any theory of ours has to fit all these facts, and has to be logical and plausible within the context of the story.
So what do we know about host brains, and what might we guess about their operation, capabilities, and limits?
Foam: Lots of little bubbles. In this case, a dump of various news items that caught my eye but which didn’t — for whatever reason — fit into the previous bubbles. (Or which I just forgot to include.)
Truth be told, I’m actually getting a little bored with these bubble posts of news items. But I’d accumulated so many of them by the time I got the idea that it’s taken some effort to flush the queue. And it has been nice that other writers, and other events, have been making my points for me.
And now I’m down to the foam at the bottom of the glass…
In Greek mythology, the hero Theseus, who slew the Minotaur and escaped its maze, returned from Crete to Athens where the Athenians preserved his ship in seaworthy state for more than a thousand years. It was an emblem of courage and a reminder of a national hero that many Greeks considered more legendary than mythological.
The Ship of Theseus was carefully maintained. Parts that rotted away were replaced with exact replicas. And in a ship made almost entirely of wood, crude iron, rope, and sail, everything rots, so eventually everything gets replaced.
Which makes the identity of the ship an interesting question.
Over the last few weeks I’ve written a series of posts leading up to the idea of human consciousness in a machine. In particular, I focused on the difference between a physical model and a software model, and especially on the requirements of the software model.
The series is over, and I have nothing particularly new to add, but I'd like to summarize my points and provide an index to the posts in this series. It seems I may have given readers a bit of information overload.
Hopefully I can achieve better clarity and brevity here!
Over the past few weeks we’ve explored background topics regarding calculation, code, and computers. That led to an exploration of software models — in particular a software model of the human brain.
The underlying question all along is whether a software model of a brain — in contrast to a physical model — can be conscious. A related, but separate, question is whether some algorithm (aka Turing Machine) functionally reproduces human consciousness without regard to the brain’s physical structure.
Now we focus on why a software model isn’t what it models!
Last time I introduced four levels of possibility regarding how mind is related to brain. Behind Door #1 is a Holy Grail of AI research, a fully algorithmic implementation of a human mind. Behind Door #4 is an ineffable metaphysical mind no machine can duplicate.
The two doors between them lead to physical models that recapitulate the structure of the human brain. Behind Door #3 is the biology of the brain, a model we know creates mind. Behind Door #2 is the network of the brain, which we presume encodes the mind regardless of its physical construction.
This time we’ll look more closely at some distinguishing details.
Last week we took a look at a simple computer software model of a human brain. (We discovered that it was big, requiring dozens of petabytes!) One goal of such models is replicating consciousness — a human mind. That can involve creating a (potentially superior) new mind or uploading an existing human mind (a very different goal).
Now that we’ve explored the basics of calculation, code (software), computers, and (computer software) models, we’re ready to explore what’s involved in attempting to model a (human) mind.
I’m dividing the possibilities into four basic levels.
Last time we looked at the basic requirements for a software model of a computer and put a rough estimate on the size of such a model (about 2.5 terabytes). This time we'll consider a software model of a human brain. Admittedly, there's much we don't know (and probably need to know for a decent model), but we can make some rough guesses as a reference point.
We’ll start with a few basic facts — number of neurons, number of synapses — and try to figure out some minimal requirements. The architecture of a viable software brain model is likely to be much more complicated. This is just a sketch, a Tinkertoy® or LEGO® version.
Even so, we’re gonna need a lot of memory!
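To see where a "dozens of petabytes" figure can come from, here's a sketch of the arithmetic. The neuron and synapse counts below are commonly cited rough estimates, and the bytes-per-synapse value is purely my own assumption (a connection target plus a weight and a little state), not a figure from the post:

```python
# Back-of-envelope size estimate for a minimal software brain model.
# All three figures are assumptions, chosen only for illustration.
NEURONS = 86e9            # commonly cited estimate for the human brain
SYNAPSES_PER_NEURON = 10_000   # rough upper-range average
BYTES_PER_SYNAPSE = 30    # assumed: target address + weight + state

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15
print(f"{petabytes:.0f} PB")  # prints "26 PB"
```

Even at a stingy few bytes per synapse the total stays in the petabyte range, because the synapse count alone is on the order of 10¹⁴; the per-synapse overhead just sets how many dozen petabytes we need.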