At work, more than a decade ago, Wi-Fi let us take our laptops to meetings or the cafeteria or even nearby outside. At home, the old laptop couldn’t hold a connection, so it remained tethered to the DSL modem, but the new laptop does just fine. So does the iPad, going on two years now.
The new laptop uses a wireless keyboard and mouse. And I’ve been using wireless headphones to watch TV for a year or so. It’s really nice having no wires for devices I’ve used for so long in tethered form. Of course cell phones started it quite a few years ago.
It all does seem to come with a new set of (minor) headaches, though!
Over the last few months I’ve been making changes — some big, some trivial — to my life. (I bought a new dining room table, for instance.) Part of it is that, after five years of retirement, five years of goofing off, I’m finding myself a little restless, so I’ve applied myself to making some changes.
One of those changes was finally getting a new laptop. The old Sony Vaio (running Windows Vista!) I’ve had since 2011 worked well enough (even with the squished bug) that I never pursued getting something else, although I always meant to. As I’ve said before, sometimes “well enough” works well enough for me.
This fall I bought a new (Dell XPS 15; Windows 10) laptop, and as part of the whole “changes” thing I’ve been trying DuckDuckGo for my searches rather than good ol’ Google…
Last April I finally bought my way into the touchscreen generation when I bought an iPad Air 2. After roughly four months of daily use, I have developed a very definite love-hate relationship with the pad itself as well as with the apps on it.
When I say “love-hate relationship” I don’t mean that in a casual ‘some stuff is good, some stuff not so much’ way. I mean I really love certain aspects of it and think it’s wonderful, but I’ve come to deeply and very seriously loathe a whole lot of others.
And I can’t quite make up my mind which one wins. The final analysis seems to be that I’ll go on using it — for better or worse, it’s a part of my life now. In that sense love won, and I really do love the parts that work for me.
In the last quarter of the 19th century — USA-centrically, call it 139 years ago — we began to experience having the sound of strangers’ voices in our lives, even in our homes. Not just voices, but music from concert halls and clubs. And other sounds, too: the clip-clop of horses’ hooves, the slam of a door, a gunshot. Less than 100 years ago, those sounds went electric, and we never looked back.
At the beginning of the 20th century, we started another love affair — this one with moving images on rectangular screens, a dance of light and shadow, windows to imaginary worlds. Or windows to recorded memories or news of distant places. When sound went electric, those moving images took voice and spoke and sang. No one alive in our society today remembers a time when moving images weren’t woven into our lives.
Here, now, into the 21st century, in an age of streaming video and music, from cloud to your pocket device (with its high-resolution display and built-in video camera), I can’t help but be impressed by how far we’ve come.
A long way, indeed.
Over the last few weeks I’ve written a series of posts leading up to the idea of human consciousness in a machine. In particular, I focused on the difference between a physical model and a software model, and especially on the requirements of the software model.
The series is over, and I have nothing particularly new to add, but I’d like to try to summarize my points and provide an index to the posts in this series. It seems I may have given readers a bit of information overload.
Hopefully I can achieve better clarity and brevity here!
If it hasn’t been apparent, I’ve been giving a bit of a fall semester in some computer science basics. If it seems complicated, well, the truth is all we’ve done is peek in some windows. From a safe distance. And most of the blinds were down.
I thought we’d finish (yes, finish!) with a bang and take a deep dive down into the lowest levels of a computer, both on the engineering side and on the abstract logic side. When they say, “It’s all ones and zeros,” these are the bits (in both senses!) they mean.
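As a small taste of what “it’s all ones and zeros” means at the lowest level, here is a sketch (my own illustration, not code from the post) of a half adder — the simplest unit of binary addition — built from the two logic gates it actually uses:

```python
# A half adder built from two logic gates, here modeled with Python's
# bitwise operators: XOR produces the sum bit, AND produces the carry bit.
# (Illustrative example only; the post itself doesn't include code.)

def half_adder(a, b):
    s = a ^ b      # XOR: sum bit is 1 when exactly one input is 1
    carry = a & b  # AND: carry bit is 1 only when both inputs are 1
    return s, carry

# All four input combinations of one-bit addition:
for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
```

Adding 1 + 1 gives sum bit 0 with carry 1 — binary 10, which is decimal 2. Chain a few of these together (with full adders) and you have the arithmetic core of every CPU.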
Attention: You need to be at least this ━▇━ geeky for this ride!
No, sorry, I don’t mean the Bombe machine at Bletchley Park that cracked the Enigma cipher. I mean his theoretical machine; the one I’ve been referring to repeatedly the past few weeks. (It wasn’t mentioned at the time, but it’s the secret star of the Halt! (or not) post.)
The Turing Machine (TM) is one of our fundamental definitions of calculation. The Church-Turing thesis says that all algorithms have a TM that implements them. On this view, any two actual programs implementing the same algorithm do the same thing.
Essentially, a Turing Machine is an algorithm!
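To make that concrete, here is a minimal Turing Machine sketch (my own illustration — the function name, rule format, and sample machine are assumptions, not anything from the series): a transition table maps (state, symbol) to (symbol to write, head move, next state), and the machine loops until it reaches a halting state.

```python
# A minimal Turing Machine: a sparse tape (position -> symbol), a head,
# a current state, and a table of transition rules. "_" is the blank symbol.

def run_tm(rules, tape, state="start", halt="halt"):
    cells = dict(enumerate(tape))  # tape as a dict so it can grow either way
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back in order, trimming the blanks.
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flip, "10110"))  # -> 01001
```

The `flip` table *is* the algorithm here — swap in a different rule table and the same machinery computes something else, which is exactly the sense in which a Turing Machine and an algorithm are two views of one thing.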
Over the past few weeks we’ve explored background topics regarding calculation, code, and computers. That led to an exploration of software models — in particular a software model of the human brain.
The underlying question all along is whether a software model of a brain — in contrast to a physical model — can be conscious. A related, but separate, question is whether some algorithm (aka Turing Machine) functionally reproduces human consciousness without regard to the brain’s physical structure.
Now we focus on why a software model isn’t what it models!