Last April I finally bought my way into the touchscreen generation with an iPad Air 2. After roughly four months of daily use, I have developed a very definite love-hate relationship with the pad itself as well as with the apps on it.
When I say “love-hate relationship” I don’t mean it in a casual ‘some stuff is good, some stuff not so much’ way. I mean I really love certain aspects of it and think they’re wonderful, but I’ve come to deeply and very seriously loathe a whole lot of others.
And I can’t quite make up my mind which one wins. The final analysis seems to be that I’ll go on using it — for better or worse, it’s a part of my life now. In that sense love won, and I really do love the parts that work for me.
In the last quarter of the 19th century — USA-centrically, call it 139 years ago — we began to experience having the sound of strangers’ voices in our lives, even in our homes. Not just voices, but music from concert halls and clubs. And other sounds, too: the clip-clop of horses’ hooves, the slam of a door, a gunshot. Less than 100 years ago, those sounds went electric, and we never looked back.
At the beginning of the 20th century, we started another love affair — this one with moving images on rectangular screens, a dance of light and shadow, windows to imaginary worlds. Or windows to recorded memories or news of distant places. When sound went electric, those moving images took voice and spoke and sang. No one alive in our society today remembers a time when moving images weren’t woven into our lives.
Here, now, into the 21st century, in an age of streaming video and music, from cloud to your pocket device (with its high-resolution display and built-in video camera), I can’t help but be impressed by how far we’ve come.
A long way, indeed.
Over the last few weeks I’ve written a series of posts leading up to the idea of human consciousness in a machine. In particular, I focused on the difference between a physical model and a software model, and especially on the requirements of the software model.
The series is over, and I have nothing particularly new to add, but I’d like to try to summarize my points and provide an index to the posts in this series. It seems I may have given readers a bit of information overload.
Hopefully I can achieve better clarity and brevity here!
If it hasn’t been apparent, I’ve been giving a bit of a fall semester in some computer science basics. If it seems complicated, well, the truth is all we’ve done is peek in some windows. From a safe distance. And most of the blinds were down.
I thought we’d finish (yes, finish!) with a bang and take a deep dive down into the lowest levels of a computer, both on the engineering side and on the abstract logic side. When they say, “It’s all ones and zeros,” these are the bits (in both senses!) they mean.
Attention: You need to be at least this ━▇━ geeky for this ride!
No, sorry, I don’t mean the Bletchley Park Bombe machine that cracked the Enigma cipher. I mean Turing’s theoretical machine; the one I’ve been referring to repeatedly over the past few weeks. (It wasn’t mentioned at the time, but it’s the secret star of the Halt! (or not) post.)
The Turing Machine (TM) is one of our fundamental definitions of calculation. The Church-Turing thesis says that every algorithm has a TM that implements it. On this view, any two actual programs implementing the same algorithm do the same thing.
Essentially, a Turing Machine is an algorithm!
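To make that concrete, here is a minimal sketch of a Turing Machine in Python. The state names, symbols, and rule table are my own illustrative choices (not from the post); the machine below increments a binary number written on the tape.

```python
# A toy Turing Machine: a tape of symbols, a read/write head, a
# current state, and a rule table mapping (state, symbol) to
# (new_symbol, head_move, new_state). That table IS the algorithm.

def run_tm(tape, rules, state="scan", blank="_", max_steps=1000):
    """Run the machine until it reaches the 'halt' state; return the tape."""
    cells = dict(enumerate(tape))  # sparse, two-way-infinite tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example rule table (an assumption for illustration): increment a
# binary number. Scan right to the end, then carry 1s leftward.
rules = {
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, carry on
    ("carry", "0"): ("1", "L", "done"),    # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "done"),    # overflow: new leading 1
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}

print(run_tm("1011", rules))  # binary 11 + 1 = 12 -> prints "1100"
```

The point of the exercise: nothing about the machine is “smart.” The entire computation lives in the rule table, which is why a TM and an algorithm amount to the same thing.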
Over the past few weeks we’ve explored background topics regarding calculation, code, and computers. That led to an exploration of software models — in particular a software model of the human brain.
The underlying question all along is whether a software model of a brain — in contrast to a physical model — can be conscious. A related, but separate, question is whether some algorithm (aka Turing Machine) functionally reproduces human consciousness without regard to the brain’s physical structure.
Now we focus on why a software model isn’t what it models!
As a diversion for the weekend: Have you ever wondered why computers run so hot? No? Okay, I’ll tell you. It’s actually kind of a hoot. (We’ll get back to the more serious topic of algorithms and AI, and wrap up that series, next week.)
You kind of have to wonder. Humankind has gone from oil and gas lamps, to incandescent tungsten filaments, to fluorescent lights, and now to LEDs. The trend here seems toward cooler, more efficient light sources. But computers seem to need bigger and bigger fans!
The short answer: It’s all those short circuits!
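The joke has real physics behind it: every clock tick, CMOS transistors briefly conduct while switching, and the charge they move becomes heat. The standard back-of-envelope formula for that dynamic power is P = α·C·V²·f. The numbers below are illustrative assumptions, not specs for any real chip.

```python
# CMOS dynamic power: P = alpha * C * V^2 * f
#   alpha : fraction of gates switching on a given cycle (activity factor)
#   C     : total switched capacitance in farads
#   V     : supply voltage in volts
#   f     : clock frequency in hertz
# All values below are made-up ballpark figures for illustration.

def dynamic_power(alpha, capacitance_f, volts, freq_hz):
    """Estimate CMOS dynamic (switching) power in watts."""
    return alpha * capacitance_f * volts**2 * freq_hz

watts = dynamic_power(alpha=0.1, capacitance_f=3e-7, volts=1.0, freq_hz=3e9)
print(f"{watts:.0f} W")  # -> 90 W for this hypothetical chip
```

Notice the V² term: this is why dropping the supply voltage even slightly cools a chip so much, and why frequency and voltage tend to get scaled together.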
Last time I introduced four levels of possibility regarding how mind is related to brain. Behind Door #1 is a Holy Grail of AI research, a fully algorithmic implementation of a human mind. Behind Door #4 is an ineffable metaphysical mind no machine can duplicate.
The two doors in between lead to physical models that recapitulate the structure of the human brain. Behind Door #3 is the biology of the brain, a model we know creates mind. Behind Door #2 is the network of the brain, which we presume encodes the mind regardless of its physical construction.
This time we’ll look more closely at some distinguishing details.
Last week we took a look at a simple computer software model of a human brain. (We discovered that it was big, requiring dozens of petabytes!) One goal of such models is replicating consciousness — a human mind. That can involve creating a (potentially superior) new mind or uploading an existing human mind (a very different goal).
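For readers who want to sanity-check the “dozens of petabytes” figure, here is one way the arithmetic can go. Every number below is a rough assumption (neuron and synapse counts vary considerably by source, and bytes-per-synapse depends entirely on what the model records).

```python
# Back-of-envelope storage estimate for a brain model.
# All figures are rough assumptions, not settled science.

NEURONS = 86e9             # ~86 billion neurons (a common estimate)
SYNAPSES_PER_NEURON = 7e3  # ~7,000 synapses each (rough average)
BYTES_PER_SYNAPSE = 64     # assumed: weight, state, connection info

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15
print(f"{petabytes:.0f} PB")  # -> 39 PB, i.e. "dozens of petabytes"
```

Change any assumption by a factor of two or three and the answer still lands in the tens-of-petabytes range, which is the real takeaway: the scale, not the exact number.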
Now that we’ve explored the basics of calculation, code (software), computers, and (computer software) models, we’re ready to explore what’s involved in attempting to model a (human) mind.
I’m dividing the possibilities into four basic levels.