I’ve contemplated the voice(s) in my head all my adult life, though it’s only recently that I’ve thought deeply about them. One big question I’ve had is why it’s sometimes a dialog rather than a monolog.
To be clear, I am fully aware that it’s all me; it’s my voice(s). “They” (or rather “we”) are aspects of my own mind — my inner voice. Something I’ve naturally assumed everyone had.
But some say they have no inner voice!
At the beginning of the week, I mentioned I’m reading Our Mathematical Universe (2014), by Max Tegmark. His stance on inflation, and especially on eternal inflation, got me really thinking about it. Then all that thinking turned into a post.
It happened again last night. That strong sense of, “Yeah, but…” With this book, that’s happening a lot. I find something slightly, but fundamentally, off about Tegmark’s arguments. There seems to be an over-willingness to accept wild conclusions. This may all say much more about me than about Tegmark, which in this case is perfect irony.
Because what set me off this time was his chapter about human intuition.
I cracked up when I saw the headline: Why your brain is not a computer. I kept on grinning while reading it because it makes some of the same points I’ve tried to make here. It’s nice to know other people see these things, too; it’s not just me.
Because, to quote an old gag line, “If you can keep your head when all about you are losing theirs,… perhaps you’ve misunderstood the situation.” The prevailing attitude seems to be that brains are just machines that we’ll figure out, no big deal. So it’s certainly (and ever) possible my skepticism represents my misunderstanding of the situation.
But if so I’m apparently not the only one…
In the last post I explored how algorithms are defined and what I think is — or is not — an algorithm. The dividing line for me has mainly to do with the requirement for an ordered list of instructions and an execution engine. Physical mechanisms, from what I can see, don’t have those.
For me, the behavior of machines is only metaphorically algorithmic. Living things are biological machines, so this applies to them, too. I would not be inclined to view my kidneys, liver, or heart as embodied algorithms (their behavior can be described by algorithms, though).
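To make the distinction concrete, here’s a minimal sketch of what I mean by an algorithm in the formal sense: an ordered list of instructions stepped through by an execution engine. (The particular example, Euclid’s GCD, is just an illustration; any simple procedure would do.)

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: an ordered list of instructions.

    The Python interpreter acts as the execution engine that
    steps through those instructions one at a time.
    """
    while b != 0:        # step 1: test a condition
        a, b = b, a % b  # step 2: update state per a fixed rule
    return a             # step 3: halt with a result

print(gcd(48, 18))  # → 6
```

A heart’s behavior can be *described* by code like this, but the heart itself contains no instruction list and no engine reading one, and that’s the dividing line for me.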
Of course, this also applies to the brain and, therefore, the mind.
As a result of lurking on various online discussions, I’ve been thinking about computationalism in the context of structure versus function. It’s another way to frame the Yin-Yang tension between a simulation of a system’s functionality and that system’s physical structure.
In the end, I think it does boil down to the two opposing propositions I discussed in my Real vs Simulated post: an arbitrarily precise numerical simulation of a system’s function; and simulated X isn’t Y.
It all depends on exactly what consciousness is. What can structure provide that could not be functionally simulated?
Philosopher and cognitive scientist Dave Chalmers, who coined the term hard problem (of consciousness), also coined the term meta-problem (of consciousness), which asks why we think the hard problem is so hard. Ever since I was introduced to the term, I’ve been trying to figure out what to make of it.
While the hard problem addresses a real problem — how phenomenal experience arises from the physics of information processing — the latter is about our opinions regarding that problem. What it tries to get at, I think, is why we’re so inclined to believe there’s some sort of “magic sauce” required for consciousness.
It’s an easy step when consciousness, so far, is quite mysterious.
Over the last few days I’ve found myself once again carefully reading a paper by philosopher and cognitive scientist David Chalmers. As I said last time, I find myself more aligned with Chalmers than not, although those three posts turned on a point of disagreement.
This time, with his paper Facing Up to the Problem of Consciousness (1995), I’m especially aligned with him, because the paper is about the phenomenal aspects of consciousness and doesn’t touch on computationalism at all. My only point of real disagreement is with his dual-aspects of information idea, which he admits is “extremely speculative” and “also underdetermined.”
This post is my reactions and responses to his paper.
Did someone say walkies?
I’m spending the weekend dog-sitting my pal, Bentley (who seems to have fully recovered from eating a cotton towel!), while her mom follows strict Minnesota tradition by “going up north for the weekend.” So I have a nice furry end to the two-week posting marathon. Time for lots of walkies!
As a posted footnote to that marathon, this post contains various odds and ends left over from the assembly. Extra bits of this and that. And I finally found a place to tell you about a metaphor I stumbled over long ago and which I’ve found quite illustrative and fun. (It’s in my metaphor toolkit along with “Doing a Boston” and “Star Trekking It.”)
It involves the idea of making a bad ROM call…
Last Friday I ended the week with some ruminations about what (higher) consciousness looks like from the outside. I end this week — and this posting mini-marathon — with some rambling ruminations about how I think consciousness seems to work on the inside.
When I say “seems to work” I don’t have any functional explanation to offer. I mean that in a far more general sense (and, of course, it’s a complete wild-ass guess on my part). Mostly I want to expand on why a precise simulation of a physical system may not produce everything the physical system does.
For me, the obvious example is laser light.
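To gesture at what I mean: a numerical model of laser light can compute the field’s behavior to any precision you like, yet the computation emits no photons. A toy sketch (the functions and numbers here are my own illustration, not real optics code):

```python
# Toy model of why laser light is special: N in-phase ("coherent")
# emitters add by amplitude, so intensity scales as N squared,
# while random-phase emitters add by intensity, scaling only as N.

def coherent_intensity(n_emitters: int, amplitude: float = 1.0) -> float:
    # In-phase amplitudes add linearly; intensity is the square.
    total_amplitude = n_emitters * amplitude
    return total_amplitude ** 2

def incoherent_intensity(n_emitters: int, amplitude: float = 1.0) -> float:
    # Random phases: the individual intensities add instead.
    return n_emitters * amplitude ** 2

print(coherent_intensity(100))    # 10000.0
print(incoherent_intensity(100))  # 100.0
```

The numbers correctly describe the light, but the room stays dark. That, in a nutshell, is the gap I see between simulating a system’s function and having its physical structure.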