It’s one of those days you remember better than any birthday or wedding. Those were planned; these hit you suddenly, stunning your mind, breaking your heart. “The shuttle blew up!” “The Towers fell!”
The impact was even greater if you saw it happen in real-time. If you watched the shuttle launches. If you caught the breaking news before the second tower was hit. Saw the second plane, realized at that moment, “This is no accident!”
Even if you saw it after, you saw it; saw it as an attack.
Foam: Lots of little bubbles. In this case, a dump of various news items that caught my eye but which didn’t — for whatever reason — fit into the previous bubbles. (Or which I just forgot to include.)
Truth be told, I’m actually getting a little bored with these bubble posts of news items. But I’d accumulated so many of them by the time I got the idea that it’s taken some effort to flush the queue. And it has been nice that other writers, and other events, have been making my points for me.
And now I’m down to the foam at the bottom of the glass…
In Greek mythology, the hero Theseus, who slew the Minotaur and escaped its maze, returned from Crete to Athens where the Athenians preserved his ship in seaworthy state for more than a thousand years. It was an emblem of courage and a reminder of a national hero that many Greeks considered more legendary than mythological.
The Ship of Theseus was carefully maintained. Parts that rotted away were replaced with exact replicas. And in a ship made almost entirely of wood, crude iron, rope, and sail, everything rots, so eventually everything gets replaced.
Which makes the identity of the ship an interesting question.
A while back I realized I had an Engineer’s Mind. I’ve always had a sense of that. What I realized was the significance of the Engineer’s Mind category. And of other categories of Mind — for example an Artist’s Mind (which I didn’t discover I also had until high school; see My Life 2.0).
Having a given Mind doesn’t mean one is necessarily good at something (skill takes practice), but it does suggest a predisposition or talent for it. Our minds seem to come pre-wired in two ways: core wiring that makes us human; and “flavor” wiring that gives us (some of our) basic traits. For instance, some people have — or strongly do not have — a Math Mind.
I’ve found Mind a useful metaphor as well as a game to play.
Over the last few weeks I’ve written a series of posts leading up to the idea of human consciousness in a machine. In particular, I focused on the difference between a physical model and a software model, and especially on the requirements of the software model.
The series is over, and I have nothing particularly new to add, but I’d like to try to summarize my points and provide an index to the posts in this series. It seems I may have given readers a bit of information overload — too much information to process.
Hopefully I can achieve better clarity and brevity here!
Over the past few weeks we’ve explored background topics regarding calculation, code, and computers. That led to an exploration of software models — in particular a software model of the human brain.
The underlying question all along is whether a software model of a brain — in contrast to a physical model — can be conscious. A related, but separate, question is whether some algorithm (aka Turing Machine) can functionally reproduce human consciousness without regard to the brain’s physical structure.
Now we focus on why a software model isn’t what it models!
Last week we took a look at a simple computer software model of a human brain. (We discovered that it was big, requiring dozens of petabytes!) One goal of such models is replicating consciousness — a human mind. That can involve creating a (potentially superior) new mind or uploading an existing human mind (a very different goal).
Now that we’ve explored the basics of calculation, code (software), computers, and (computer software) models, we’re ready to explore what’s involved in attempting to model a (human) mind.
I’m dividing the possibilities into four basic levels.
Last time we looked at the basic requirements for a software model of a computer and put a rough estimate on the size of such a model (about 2.5 terabytes). This time we’ll consider a software model of a human brain. Admittedly, there’s much we don’t know, and probably need for a decent model, but we can make some rough guesses as a reference point.
We’ll start with a few basic facts — number of neurons, number of synapses — and try to figure out some minimal requirements. The architecture of a viable software brain model is likely to be much more complicated. This is just a sketch, a Tinkertoy® or LEGO® version.
Even so, we’re gonna need a lot of memory!
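To give a feel for why the number comes out in petabytes, here’s a minimal back-of-envelope sketch. The neuron and synapse counts are common neuroscience ballpark figures, and the bytes-per-synapse value is purely an assumption for illustration — a real model would need far more state per connection.

```python
# Back-of-envelope sizing of a naive software brain model.
# All three figures below are assumptions/ballparks, not measurements.
NEURONS = 86e9               # ~86 billion neurons (common estimate)
SYNAPSES_PER_NEURON = 1000   # rough average connectivity
BYTES_PER_SYNAPSE = 256      # assumed state: weight, timing, connectivity

synapses = NEURONS * SYNAPSES_PER_NEURON
total_bytes = synapses * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15

print(f"{synapses:.2e} synapses -> about {petabytes:.0f} PB")
```

With these (generous but not crazy) assumptions, the total lands around 22 petabytes — squarely in the “dozens of petabytes” range, and that’s before any architecture, dynamics, or overhead.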