After a weekend of transistorized baseball, it’s time to get back to pondering consciousness. I laid down a few cobblestones last week; time to add a few more to the road. Eventually I’ll have something on which I can drive an argument.
There are a number of classic, or at least well-known, arguments for and against computationalism. They variously involve Pixies, different kinds of Zombies, people trapped in different kinds of rooms, and rock walls that compute. (In fact, they compute rooms that trap Pixies. And everything else.)
Today I’m going to ruminate on the world’s most unfortunate file clerk.
Talk about mixed feelings! It was both very exhilarating — and slightly painful — to watch my Minnesota Twins rout the Seattle Mariners over the last three nights. The Mariners get a chance to get back some of their own this afternoon, and I almost hope they win. Being swept this badly is awful.
How awful? Well, so far: 25 more runs (36 total), 22 more hits (45 total), and 7 more home runs (11 total). The Twins pounded the Mariners’ starters, who only averaged three innings of work each (giving up 20 ER and 9 HR in 10.1 innings), while our own starters averaged six innings (and gave up only 8 ER and 3 HR in 18.1 innings).
Suffice it to say the Twins are off to an awesome start this year!
I think this may be the most (unintentionally) hysterical thing I’ve seen in a good long time (oh, the world of the future):
I mean seriously side-splitting, tears streaming down the face, really truly, delightfully, must-see funny. (I love the wrist device! Dick Tracy has come true in that regard. And just imagine: portable televisions!)
When it comes to consciousness, one of the top challenges is defining what it is. (Some insist it doesn’t even exist, which makes defining it even more of a challenge.) Part of the problem is that there is no single correct definition. There never really has been.
There is also the distinction between sentience (essentially, the ability to feel pain as pain) and sapience (roughly: wisdom). Lots of animals are sentient, but sapience seems to be a property of human consciousness.
Which raises the question: Are humans just a point on a spectrum, or is there some sort of “band gap” between higher and lower forms?
Moving on from system states (and states of the system), today I’d like to fly over the landscape of different systems. In particular, systems that are — or are not — viewed as conscious.
Two views make this especially interesting. The first holds that everything is computing everything and — under computationalism — this includes conscious computations. The second (if I understand it) holds that anything that processes input data into some kind of output is conscious. (I’m not clear if the view also sees an input-output system as a computer.)
So I want to explore what I see as major landmarks in the landscape of systems that… well, about the only thing we can probably all agree on is that they do something.
Over the last three posts I’ve been exploring the idea of system states and how they might connect with computational theories of mind. I’ve used a full-adder logic circuit as a simple stand-in for the brain — the analog flow and logical gating characteristics of the two are very similar.
In particular I’ve explored the idea that the output state of the system doesn’t reflect its inner working, especially with regard to intermediate states of the system as it generates the desired output (and that output can fluctuate until it “settles” to a valid correct value).
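As a reference point, here is a minimal sketch (mine, in Python — the original posts link to their own implementations) of the full-adder logic being used as the stand-in:

```python
# A full adder as a pure abstraction: just the Boolean functions,
# with no notion of gates, wiring, or propagation delay.

def full_adder(a, b, cin):
    """Add three one-bit values; return (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Exhaustive check against ordinary arithmetic.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert cout * 2 + s == a + b + cin

print(full_adder(1, 1, 1))  # (1, 1): 1+1+1 = 3 = binary 11
```

Note that this version has no intermediate states at all — the whole computation is one atomic expression — which is exactly what makes it a model of the abstraction rather than of a physical circuit.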
Here I plan to wrap up and summarize the system states exploration.
I left off last time talking about intermediate, or transitory, states of a system. The question is, if we only look at the system at certain key points that we think matter, do any intermediate states make a difference?
In a standard digital computer, the answer is a definite no. Even in many kinds of analog computers, transitory states exist for the same reason they do in digital computers (signals flowing through different paths and arriving at the key points at different times). In both cases they are ignored. Only the stable final state matters.
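A toy simulation makes the transitory states visible. The model below is a simplifying assumption of mine, not a real circuit: every gate takes exactly one tick, and all gates update simultaneously from the previous tick’s values. Because the full adder’s carry-out depends on two paths of different lengths, a single input change can make the output briefly read wrong before it settles — the classic hazard that digital designs wait out.

```python
# Time-stepped, unit-delay simulation of the full adder's carry-out.
# state holds the current outputs of the four internal gates.

def settle_trace(a, b, cin, state, ticks=6):
    """Advance the circuit `ticks` steps; return final state and the
    carry-out value observed at each tick."""
    trace = []
    for _ in range(ticks):
        xor1, and1, and2, cout = state
        state = (
            a ^ b,        # xor1: half-sum of a and b
            a & b,        # and1: direct carry
            cin & xor1,   # and2: propagated carry (uses OLD xor1)
            and1 | and2,  # cout: uses OLD and1 and and2
        )
        trace.append(state[3])
    return state, trace

# Settle the circuit with a=1, b=1, cin=1 (cout ends up 1)...
state, _ = settle_trace(1, 1, 1, (0, 0, 0, 0))

# ...then drop a to 0. The correct cout is still 1 (0+1+1 carries),
# but the longer path through xor1/and2 causes a transient dip to 0.
state, trace = settle_trace(0, 1, 1, state)
print(trace)  # [1, 0, 1, 1, 1, 1] -- a momentary wrong answer
```

A clocked digital system never looks at `cout` during that dip; it samples only after everything has settled. The question for the brain is whether there is any analogous “clock edge” that makes its transitory states equally ignorable.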
So in the brain, what are the key points? What states matter?
In the last post I talked about software models for a full-adder logic circuit. I broke them into two broad categories: models of an abstraction, and models of a physical instance. Because the post was long, I was able to mention the code implementations only in passing (but there are links).
I want to talk a little more about those two categories, especially the latter, and in particular an implementation that bridges between the categories. It’s here that ideas about simulating the brain or mind become important. Most approaches involve some kind of simulation.
One type of simulation involves the states of a system.
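To illustrate the two categories with a sketch of my own (the function names are just for illustration): a model of the abstraction computes the answer directly, while a model of a physical instance names every gate and wire — and it’s the latter that gives a state-oriented simulation something to inspect.

```python
# Two models of the same full adder.

def adder_abstract(a, b, cin):
    """Model of the abstraction: the Boolean formula, nothing more."""
    return (a ^ b ^ cin, (a & b) | (cin & (a ^ b)))

def adder_netlist(a, b, cin):
    """Model of a physical instance: an explicit netlist of gates.
    Every intermediate wire becomes an inspectable piece of state."""
    w = {"a": a, "b": b, "cin": cin}
    w["xor1"] = w["a"] ^ w["b"]       # half-sum
    w["and1"] = w["a"] & w["b"]       # direct carry
    w["and2"] = w["cin"] & w["xor1"]  # propagated carry
    w["sum"]  = w["xor1"] ^ w["cin"]
    w["cout"] = w["and1"] | w["and2"]
    return w

# The two models agree on all eight input combinations...
for i in range(8):
    a, b, cin = (i >> 2) & 1, (i >> 1) & 1, i & 1
    w = adder_netlist(a, b, cin)
    assert (w["sum"], w["cout"]) == adder_abstract(a, b, cin)

# ...but only the netlist exposes the internal states of the system.
print(adder_netlist(1, 0, 1))
```

The netlist model is the bridge: it still computes the abstraction, but its internal dictionary of wire values is a snapshot of the system’s state — the kind of thing a brain simulation would have to track in vastly greater quantity.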
Imagine the watershed for a river. Every drop of water that falls in that area, if it doesn’t evaporate or sink into the ground, eventually makes its way, through creeks, streams, and rivers, to the lake or ocean that is the shed’s final destination. The visual image is somewhat like the veins in a leaf. Or the branches of the leaf’s tree.
In all cases, there is a natural flow through channels sculpted over time by physical forces. Water always flows downhill, and it erodes what it flows past, so gravity, time, and the resistance of rock and dirt sculpt the watershed.
The question is whether the water “computes.”
Previously, I wrote that I’m skeptical of interpretation as an analytic tool. In physical reality, generally speaking, I think there is a single correct interpretation (more of a true account than an interpretation). Every other interpretation is a fiction, usually made obvious by complexity and entropy.
I recently encountered an argument for interpretation that involved the truth table for the Boolean logical AND being seen — if one inverts the interpretation of all the values — as the truth table for the logical OR.
It turns out to be a tautology: under inversion, a logical AND mirrors a logical OR (this is De Morgan’s law).