This is the third post of a series exploring the duality I perceive in digital computation systems. In the first post I introduced the “mind stacks” — two parallel hierarchies of levels, one leading up to the human brain and mind, the other leading up to a digital computer and a putative computation of mind.
In the second post I began to explore in detail the level of the second stack, labeled Computer, in terms of the causal gap between the physical hardware and the abstract software. This gap, or dualism, is in sharp contrast to other physical systems that can, under a broad definition of “computation,” be said to compute something.
In this post I’ll continue, and hopefully finish, that exploration.
In the previous post I introduced the “mind stacks” — two essentially parallel hierarchies of organization (or maybe “zoom level” is a more apt term) — and the premise of a causal disconnect in the block labeled Computer. In this post I’ll pick up where I left off and discuss that disconnect in detail.
A key point involves what we mean by digital computation — as opposed to more informal, or even speculative, notions sometimes used to expand the meaning of computation. The question is whether digital computing is significantly different from these.
The goal of these posts is to demonstrate that it is.
The Age of Fire is a key milestone for a would-be technological civilization. Fire is a dividing line, a technology that vastly increased our effectiveness. Fire provides heat, light, cooking, defense, fire-hardened wood and clay, and eventually metallurgy.
The Age of the Electron is another key technological milestone. Electricity provides heat and light without fire’s dangers and difficulties, it drives motors, and enables long-distance communication. It leads to an incredible array of technologies.
The Age of the Algorithm is just as much of a game-changer.
Resistance is Futile!
You will be assimilated!
Because why not? At some point one gets exhausted avoiding the Kool-Aid. (Which, for some probably neurologically depressing reason, I always type as “Kook-Aid” — or maybe it’s just a Freudian negligee. I mean slip. Underwear of some kind anyway.)
It’s a matter of not fighting an unwinnable battle. I used to use screen captures to recreate my various exquisitely customized toolbars after app updates. Exhausting. Finally, I just gave up and used the defaults.
The Kook-Aid in this case is the Microsoft Edge browser.
Since I retired, I’ve been learning and exploring the mathematics and details of quantum mechanics. There is a point with quantum theory where language and intuition fail, and only the math expresses our understanding. The irony of quantum theory is that no one understands what the math means (but it works really well).
Recently I’ve felt comfortable enough with the math to start exploring a more challenging aspect of the mechanics: quantum computing. As with quantum anything, part of the challenge involves “impossible” ideas.
Like the square root of NOT.
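To make the "impossible" idea concrete: the square root of NOT is a gate M such that applying it twice performs an ordinary bit-flip, even though a single application has no classical interpretation. A minimal sketch in plain Python (using one standard matrix choice for the gate; the helper `matmul2` is just an illustrative 2×2 multiply):

```python
def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists of complex numbers."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# One standard choice for sqrt(NOT): (1/2) * [[1+i, 1-i], [1-i, 1+i]].
SQRT_NOT = [[(1 + 1j) / 2, (1 - 1j) / 2],
            [(1 - 1j) / 2, (1 + 1j) / 2]]

# The classical NOT (bit-flip) matrix.
NOT = [[0, 1], [1, 0]]

# Applying sqrt(NOT) twice yields NOT: the off-diagonal entries come out
# to 1 and the diagonal entries to 0 (up to floating-point rounding).
squared = matmul2(SQRT_NOT, SQRT_NOT)
print(squared)
```

Note that neither individual application of the matrix corresponds to any classical operation on a bit; only the composition does, which is exactly the kind of "impossible" idea the math expresses cleanly.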
In the nearly nine years of this blog I’ve written many posts about human consciousness with regard to computers. Human consciousness was a key topic from the beginning. So was the idea of conscious computers.
In the years since, there have been myriad posts and comment debates. It's provided a nice opportunity to explore and test ideas (mine and others'), and my views have evolved over time. One idea I've grown increasingly skeptical of is computationalism, though it depends on which of two flavors of it we mean.
I find one flavor fascinating, but can see the other as only metaphor.
This is part five of a series celebrating the passing of BOOL, the “ship in a bottle” computer language I’ve been tinkering with for three decades. It’s a design dream, and I’ve decided to wake up.
Last time I talked about how BOOL handles data and why that was such an issue. This time I’ll ramble on about some of the other snarls that ultimately made things more complicated than I wanted. Simplicity and elegance were key design goals. I intended the run-time environment, especially, to be utterly straightforward.
Unfortunately, the behavioral design goals (the way BOOL should act at run-time) ended up in direct conflict with that.
This is part four of a series commemorating BOOL, a computer language I started designing somewhere around 1990. After 30 years of sporadic progress I finally gave up. There were so many contradictions and (for lack of a better word) “epicycles” in the design goals that it just wasn’t viable.
So I’m mourning the passing of an idea that’s shared my headspace for three decades. Previously I’ve introduced BOOL and provided a tour of its basic aspects. Now I have to start talking about why it failed.
It has a lot to do with data, but that wasn’t the only issue.
This is part three of a series mourning the death of a computer language I birthed around 1990. Now it’s turning 30, and I’ve decided it’s too old for this sort of thing. I’ve retired and now I’m retiring it (in the “sleeps with fishes” permanent retirement sense). These posts are part of a retirement party. BOOL might not be here to celebrate, but I’ll raise glasses in its honor.
First I introduced BOOL, a deliberate grotesquery, an exercise in “and now for something completely different!” Then I illustrated basic procedural programming in BOOL. This time I’ll get into the object-oriented side.
This aspect of BOOL is one of several that changed repeatedly over the years.
This is part two of a series commemorating a computer language I started designing somewhere around 1990. After 30 years of tinkering I’ve finally accepted that it’s just not meant to be, and I’m letting it go. These posts are part of that letting go process.
Last time I introduced BOOL, said a bit about what motivated it, and started laying out what made it a language only a parent could love. Later I'll explain why things didn't work out, but for now I'd like to tell you about what BOOL was supposed to be:
A glorious deliberate useless Frankenstein’s Monster (insert mad laughter).