I’ve contemplated the voice(s) in my head all my adult life, though it’s only recently that I’ve thought deeply about them. One big question I’ve had is why it’s sometimes a dialog rather than a monolog.
To be clear, I am fully aware that it’s all me; it’s my voice(s). “They” (or rather “we”) are aspects of my own mind — my inner voice. Something I’ve naturally assumed everyone had.
But some say they have no inner voice!
I’ve got stuff on my mind!
My post last month about Dr. Gregory Berns and his studies of animal minds ran long because I also discussed Thomas Nagel and his infamous paper. Dr. Berns referenced an aspect of that paper many times. It seemed like a bone of contention, and I wanted to explore it, so I needed to include details about Nagel’s paper.
The point is, at the end of the post, there’s a segue from the “Sebald Gap” between humans and animals to the idea that we can never really understand even another human (let alone an animal). My notes for the post included more discussion about that, but the post ran long, so I only mentioned it.
It’s taken a while to circle back to it, but better late than never?
Initially I thought, for the first time in the Brain Bubbles series, I had a bubble actually related to the brain. When I went through the list, though, I saw that #17, Pointers!, was about the brain-mind problem, although the ideas expressed there were very speculative.
As is usually the case when talking about the mind and consciousness, considerable speculation is involved — there remain so many unknowns. A big one involves the notion of free will.
I just read an article that seems to support an idea I have about that.
Humans have long had fertile imaginations. It isn’t just that we see patterns everywhere, but that we see them and make up stories about them. Whether it be the forest, the wind, or the stars, we have long read into the world around us a rich tapestry of our own imagination.
A thread that runs through it all is the agency we ascribe to the patterns. The gods control our fates, the spirits reward or punish us, the stars foretell our future. Even the remnant of tea leaves in the bottom of a cup gives us an important and relevant message.
But what happens when we don’t exercise our imagination?
In the nearly nine years of this blog I’ve written many posts about human consciousness with regard to computers. Human consciousness was a key topic from the beginning. So was the idea of conscious computers.
In the years since, there have been myriad posts and comment debates. It’s provided a nice opportunity to explore and test ideas (mine and others’), and my views have evolved over time. One idea I’ve grown increasingly skeptical of is computationalism, but it depends on which of two flavors of it we mean.
I find one flavor fascinating, but can see the other as only metaphor.
At the beginning of the week, I mentioned I’m reading Our Mathematical Universe (2014), by Max Tegmark. His stance on inflation, and especially on eternal inflation, got me really thinking about it. Then all that thinking turned into a post.
It happened again last night. That strong sense of, “Yeah, but…” With this book, that’s happening a lot. I find something slightly, but fundamentally, off about Tegmark’s arguments. There seems an over-willingness to accept wild conclusions. This may all say much more about me than about Tegmark, which in this case is perfect irony.
Because what set me off this time was his chapter about human intuition.
I cracked up when I saw the headline: Why your brain is not a computer. I kept on grinning while reading it because it makes some of the same points I’ve tried to make here. It’s nice to know other people see these things, too; it’s not just me.
Because, to quote an old gag line, “If you can keep your head when all about you are losing theirs,… perhaps you’ve misunderstood the situation.” The prevailing attitude seems to be that brains are just machines that we’ll figure out, no big deal. So it’s certainly (and ever) possible my skepticism represents my misunderstanding of the situation.
But if so I’m apparently not the only one…
In the last post I explored how algorithms are defined and what I think is — or is not — an algorithm. The dividing line for me has mainly to do with the requirement for an ordered list of instructions and an execution engine. Physical mechanisms, from what I can see, don’t have those.
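The dividing line described here — an ordered list of instructions plus an execution engine that steps through them — can be made concrete with a toy sketch. This is purely illustrative (a made-up mini-language, not anything from the earlier post): the list is the algorithm; the `run` function plays the role of the execution engine.

```python
# A toy illustration of the two ingredients of an algorithm:
# an ordered list of instructions, and an execution engine.
# (Hypothetical mini-language, for illustration only.)

program = [
    ("set", "x", 0),   # x = 0
    ("add", "x", 5),   # x += 5
    ("mul", "x", 3),   # x *= 3
]

def run(program):
    """The execution engine: steps through the instructions in order."""
    env = {}
    for op, name, value in program:
        if op == "set":
            env[name] = value
        elif op == "add":
            env[name] += value
        elif op == "mul":
            env[name] *= value
        else:
            raise ValueError(f"unknown instruction: {op}")
    return env

print(run(program))  # {'x': 15}
```

Note that the instruction list does nothing by itself; only the engine, reading and obeying it step by step, produces behavior. A rock rolling downhill has neither ingredient, which is the sense in which physical mechanisms seem only metaphorically algorithmic.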
For me, the behavior of machines is only metaphorically algorithmic. Living things are biological machines, so this applies to them, too. I would not be inclined to view my kidneys, liver, or heart as embodied algorithms (their behavior can be described by algorithms, though).
Of course, this also applies to the brain and, therefore, the mind.
As a result of lurking on various online discussions, I’ve been thinking about computationalism in the context of structure versus function. It’s another way to frame the Yin-Yang tension between a simulation of a system’s functionality and that system’s physical structure.
In the end, I think it does boil down to the two opposing propositions I discussed in my Real vs Simulated post: an arbitrarily precise numerical simulation of a system’s function, versus “simulated X isn’t Y.”
It all depends on exactly what consciousness is. What can structure provide that could not be functionally simulated?