One of the great philosophical conundrums involves the origin of numbers and mathematics. I first learned of it as Platonic vs Aristotelian views, but these days it’s generally called Platonism vs Nominalism. I usually think of it as the question of whether numbers are invented or discovered.
Whatever it’s called, there is something transcendental about numbers and math. It’s hard not to discover (or invent) the natural numbers. Even from a theory standpoint, the natural numbers are very simply defined. Yet they directly invoke infinity — which doesn’t exist in the physical world.
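That "very simply defined" claim can be made concrete. The Peano-style construction needs only a zero and a successor rule, yet nothing in the rules ever terminates the sequence, which is where the implicit infinity sneaks in. A toy sketch (the names `Nat`, `succ`, and `to_int` are mine, purely illustrative):

```python
class Nat:
    """A natural number: either zero or the successor of another Nat."""
    def __init__(self, pred=None):
        self.pred = pred  # None means zero; otherwise the predecessor

ZERO = Nat()

def succ(n: Nat) -> Nat:
    """Successor rule: every number has a strictly larger next number."""
    return Nat(n)

def to_int(n: Nat) -> int:
    """Count the successor links back down to zero."""
    count = 0
    while n.pred is not None:
        count += 1
        n = n.pred
    return count

three = succ(succ(succ(ZERO)))
print(to_int(three))  # 3
```

Two rules, and yet `succ` can be applied forever, so the definition quietly commits us to an endless supply of numbers no physical process could exhaust.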
There is also the “unreasonable effectiveness” of numbers in describing our world.
I cracked up when I saw the headline: Why your brain is not a computer. I kept on grinning while reading it because it makes some of the same points I’ve tried to make here. It’s nice to know other people see these things, too; it’s not just me.
Because, to quote an old gag line, “If you can keep your head when all about you are losing theirs… perhaps you’ve misunderstood the situation.” The prevailing attitude seems to be that brains are just machines that we’ll figure out, no big deal. So it’s certainly (and always will be) possible that my skepticism represents my misunderstanding of the situation.
But if so I’m apparently not the only one…
In this corner, philosopher John Searle (1932–), weighing in with what I like to call the Giant File Room (GFR). The essential idea is of a vast database capable of answering any question. The question it poses is whether we would see that ability as “conscious” behavior. (Searle’s implication is that we would not.)
In that corner, philosopher and mathematician Kurt Gödel (1906–1978), weighing in with his Incompleteness Theorems. The essential idea there is that no consistent formal system capable of basic arithmetic can prove all the truths expressible within it (nor, per the second theorem, its own consistency).
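A standard modern rendering of the first theorem (this is the textbook formulation, not Gödel's own wording):

```latex
% First Incompleteness Theorem (standard modern statement):
% If $F$ is a consistent, effectively axiomatized formal system
% that can express elementary arithmetic, then there is a
% sentence $G_F$ such that neither it nor its negation is provable:
F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F
% The sentence $G_F$ is constructed so that, within $F$, it is
% equivalent to the assertion of its own unprovability:
G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
```

The relevance to Searle: if the GFR is a formal system in this sense, there are truths it demonstrably cannot contain, no matter how vast the file room.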
It’s possible that Gödel has a knockout punch for Searle…
Lately I’ve been hearing a lot of talk about (philosophical) idealism. I qualify it as philosophical to distinguish it from the casual sense of being optimistic. In philosophy, idealism is a metaphysical view about the nature of reality — one that I’ve always seen as standing in contrast to realism.
What caught my eye in all the talk was that I couldn’t always tell if people were speaking of epistemological or ontological idealism. I agree, of course, with the former — one way or another, it’s the common understanding — but I’m not a fan of the various flavors of ontological idealism.
It seems downright Ptolemaic to me.
Venus emerging from the sea.
I’ve been thinking about emergence. That things emerge seems clear, but one question involves precisely what it is that emerges. The more I think about it, the more I think it may amount to word slicing. Things do emerge. Whether or not we call them truly “new” seems definitional.
There is a common distinction made between weak and strong emergence (alternately epistemological and ontological emergence, respectively). Some reject the distinction, and I find myself leaning that way. I think — at least under physicalism — there really is only weak (epistemological) emergence.
But I also think it amounts to strong (ontological) emergence.
The ideas of free will, causality, and determinism often factor into discussions about religion, morality, society, consciousness, or life in general. The first and last of these ideas seem at odds; if the world is strictly determined, there can be no free will.
But we are confronted with the appearance of free will — choices we make appear to affect the future. Even choosing not to make choices seems to affect our future. If reality is just a ride on fixed rails, then all that choosing must be a trick our brains play.
These questions are central to our lives, but answers have remained elusive, in part because of differing views of what the key ideas even mean.
Philosopher and cognitive scientist Dave Chalmers, who coined the term hard problem (of consciousness), also coined the term meta-problem, which asks why we think the hard problem is so hard. Ever since I was introduced to the term, I’ve been trying to figure out what to make of it.
While the hard problem addresses a real problem — how phenomenal experience arises from the physics of information processing — the meta-problem is about our opinions regarding that problem. What it tries to get at, I think, is why we’re so inclined to believe there’s some sort of “magic sauce” required for consciousness.
It’s an easy step when consciousness, so far, is quite mysterious.
Over the last few days I’ve found myself once again carefully reading a paper by philosopher and cognitive scientist David Chalmers. As I said last time, I find myself more aligned with Chalmers than not, although those three posts turned on a point of disagreement.
This time, with his paper Facing Up to the Problem of Consciousness (1995), I’m especially aligned with him, because the paper is about the phenomenal aspects of consciousness and doesn’t touch on computationalism at all. My only point of real disagreement is with his double-aspect view of information, which he admits is “extremely speculative” and “also underdetermined.”
This post is my reactions and responses to his paper.
I’ve always liked (philosopher and cognitive scientist) David Chalmers. Of those working on a Theory of Mind, I often find myself aligned with how he sees things. Even when I don’t, I still find his views rational and well-constructed. I also like how he conditions his views and acknowledges controversy without disdain. A guy I’d love to have a beer with!
Back during the May Mind Marathon, I followed someone’s link to a paper Chalmers wrote. I looked at it briefly, found it interesting, and shelved it for later. Recently it popped up again on my friend Mike’s blog, plus my name was mentioned in connection with it, so I took a closer look and thought about it…
Then I thought about it some more…
Last Friday I ended the week with some ruminations about what (higher) consciousness looks like from the outside. I end this week — and this posting mini-marathon — with some rambling ruminations about how I think consciousness seems to work on the inside.
When I say “seems to work” I don’t have any functional explanation to offer. I mean that in a far more general sense (and, of course, it’s a complete wild-ass guess on my part). Mostly I want to expand on why a precise simulation of a physical system may not produce everything the physical system does.
For me, the obvious example is laser light.