Expanding the Middle

My blog has such low engagement that it’s hard to tell, but I get the sense the last three posts about configuration space were only slightly more interesting than my baseball posts (which, apparently, are one of the least interesting things I do here (tough; I love baseball; gotta talk about it sometimes)).

So I’m thinking: fair enough; rather than go on about it at length, wrap it up. It’ll be enough to use as a reference when I mention configuration space in the future. (There have been blog posts where I couldn’t use the metaphor due to not having a decent reference for it. Now the idea is out there for use.)

And, at the least, I should record where the whole idea started.

To review: the idea of a configuration space is a metaphor we can apply to many real-life situations.

Its biggest value is that it removes the tug-of-war between distinct feelings by recognizing their orthogonality (that they don’t affect each other).

Another value is that looking at things this way expands the middle zone of “mixed feelings” from a knife-edge “on the fence” feeling into a spectrum that ranges from “don’t care” to “decisively agnostic!”

There may also be value in a visual metaphor of a larger space that embraces nuances of opinion as opposed to the one-dimensional space of “pick a number (or side)!”
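
To make that concrete, here’s a minimal sketch in Python (the names and numbers are mine, purely for illustration; there’s no standard library for this, just the metaphor in code):

    from dataclasses import dataclass

    @dataclass
    class Opinion:
        """A point in a 2D configuration space: two orthogonal feelings."""
        love: float  # strength of feeling for one proposition (0-10)
        hate: float  # strength of feeling for the other (0-10)

    # The 1D view forces a single number; the 2D view keeps both feelings,
    # which expands the knife-edge middle into a whole spectrum:
    dont_care = Opinion(love=0.5, hate=0.5)
    decisively_agnostic = Opinion(love=9.0, hate=9.0)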

§

Often, when dealing with just two propositions, the situation amounts to the classic Love-Hate Thing (as in, “it’s a”):

Love v. Hate

Love wins (by a hair), but both feelings are strong!

Which has never been a zero-sum game, in part because the reasons we love aren’t the same as the reasons we hate. Often they are unrelated to each other — orthogonal.

The main point here is that, on the chart, the farther up or to the right (or both), the stronger the feelings. (Especially both.)

The shading in the chart above shows what we might think of as zones of intensity. Note how the upper-right has the most intense zone because there are strong feelings about both propositions.

When important issues have strong arguments on both sides, reasonable people tend to cluster somewhere along that diagonal line. Most will be on one side or the other, but will recognize the validity of the opposing view.
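
In code terms (again a toy sketch with made-up helper functions), intensity is just distance from the origin, and the diagonal is where the two feelings balance:

    import math

    def intensity(love: float, hate: float) -> float:
        # Distance from the origin: the farther up, to the right, or both,
        # the stronger the overall feelings (especially both).
        return math.hypot(love, hate)

    def imbalance(love: float, hate: float) -> float:
        # Distance from the diagonal: near zero means the two feelings
        # roughly balance -- where reasonable people tend to cluster.
        return abs(love - hate) / math.sqrt(2)

    print(intensity(9, 8))  # upper-right: the most intense zone
    print(imbalance(9, 8))  # small: close to the diagonal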

§

Unless things become polarized:

Left v. Right

Polarized Left versus Polarized Right.

Then both sides withdraw to their corners, having strong feelings about their own proposition and none at all for the other. They may even deny the validity of the opposing proposition.

The deeper into its respective corner an opinion sits, the more polarized it is. The ultimate polarization is [10,0] — or [0,10] — along the respective axes.
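
One toy way to put a number on it (my own ad hoc scoring, nothing standard) is to measure how close an opinion sits to the nearest “pure” corner:

    import math

    def polarization(own: float, other: float, top: float = 10.0) -> float:
        # 1.0 at a pure corner ([10,0] or [0,10]); lower as the opinion
        # moves away from the corners toward the middle of the space.
        to_corner = min(math.hypot(own - top, other),
                        math.hypot(own, other - top))
        return 1.0 - to_corner / math.hypot(top, top)

    print(polarization(10, 0))  # 1.0 -- the ultimate polarization
    print(polarization(6, 5))   # mixed feelings score much lower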

Generally, with real-life situations of any complexity, if there is controversy at all, it’s usually because both sides have a point. (If they didn’t, there wouldn’t be a controversy; the matter would be easy to settle.)

But this can break down in two ways:

Firstly, it breaks down when people don’t respond to rational, logical, or factual arguments. People who can’t be reasoned with… well, that pretty much says it.

Secondly, outrage has become a social addiction, and it is outrage that leads to polarization and hate. (Just think about how we feel about an umpire or referee who blows a call.)

Tragically, our current culture is tribal and polarized. Worse, we devalue facts, rational argument, and honesty.

§

I’ll wrap up this segment about two-dimensional spaces with some real-life examples (including the one that planted the idea in my mind).

Let’s consider two of our culture’s most fraught social issues: gun ownership and the availability of abortion. Both are extremely polarizing issues with strong feelings on both sides.

Configuration space won’t help us solve them, but it can help to illustrate the territory of the opinions as well as give us a reasonable place to stand if we have strong feelings about both arguments.

It might even be helpful in communicating the nuance of our opinions to others. Perhaps it helps if we can show someone how much we do care about their side of things.

§

The gun ownership issue, to me, comes down to balancing two properties:

Dangerous v. Useful

You’d hope most folks would be on the right side of the graph.

Which seems the central point. Cars are also dangerous, but their value far exceeds the risk of using them.

While we’d hope most people would recognize the danger, and thus be on the right-ish side of the graph, their perceived usefulness legitimately varies depending on one’s perspective.

Note the opinion in the lower-right, which says guns are very dangerous and have no useful value (a polarized view). Compare that with the upper-center opinion that sees guns as reasonably dangerous, but far more useful.

We could chart opinions about cars (or self-driving cars) or nukes or AI or any of a variety of things that are (in truth) both useful and dangerous.

§

There is a somewhat similar conundrum with the right to choose abortion:

Choice v. Killing

One of the hardest personal decisions a person has to make.

Again, I’d hope most people could agree abortion does kill a living thing, and further that this living thing, at least often and eventually, would become human.

The harder question is whether we can make that choice.

The chart shows three idealized opinions: one in each “true believer” camp and one more that is, in this case, literally on the fence. All three represent very strong feelings.

(And, yes, while this might help sort through mixed feelings, nothing can save us from genuinely seeing both sides equally. Sometimes we just have to accept there is good reason to see it both ways.)

This is such a personal thing that I’m not going to get into it; everyone needs to decide this for themselves.

I’ll note in passing we could borrow from the two above charts and make a Useful-v-Killing chart for animal-use issues. Currently, like gasoline, animals are the easiest means to accomplish goals we deem vital. One hopes technology moots this issue someday soon.

Attention: Let’s not discuss any of these social issues in the comments. This isn’t a door to those debates. Restrict any comments to the metaphor or directly related topics.

§

As long as we’ve gotten into social issues, here’s the chart that gave me the idea for this metaphor:

Homo v. Hetero

She had a lotta love for everyone!

Back in the late 1980s, I thought it might help an online friend who complained (bitterly) about how, as a bisexual, both straights and gays disdained her — the usual charge being “fence-sitter (pick a side)” or “indecisive.”

As a Decisive Agnostic, I’ve noticed the same thing when it comes to religion. People want you to declare yourself. They want to know your tribe.

Unfortunately, my friend didn’t care much for the idea. She saw it as labeling or pigeon-holing. The irony is that I see it as a means of getting away from those things. Rather than a point on a line, an area in space!

Maybe it would have gone over better if I’d had decent graphics at my disposal back then. I tried to express the idea with ASCII graphics, which, as crude representations of abstractions, are almost a new language to be learned.

(It is, perhaps, also a lesson for me that, just because I see a great and useful idea, that doesn’t mean anyone else necessarily does. Value, along with beauty, is very much in the beholder’s eye.)

§

And that’s about it for two-dimensional spaces.

I’ll revisit the well one more time to talk about some interesting spaces with more than two dimensions.

Beer space, anyone?

Stay expanded, my friends!


64 responses to “Expanding the Middle”

  • SelfAwarePatterns

    On engagement, I’ve given up trying to figure out what attracts it. There’s a lot to be said for just blogging what you find interesting. I know if I let engagement define what I blogged, I would do nothing but posts about consciousness and writing. (I do a lot of posts on consciousness, but it would be too restricting to confine myself to that.)

    I’m surprised the baseball posts don’t attract more interaction. I personally have no interest in sports, but I would have thought you’d get traffic from the WP reader with the baseball tags. Maybe you’re not being controversial enough 😉

    • Wyrd Smythe

      I know I’m doing a lot of things “wrong” from a blogging engagement perspective: too eclectic, no presence on Facebook or Twitter (nor will there ever be), no SEO, and I don’t throw in any cute hooks. I’ve always seen it as primarily a way to express myself; I feel that need strongly.

      The baseball posts being so widely ignored surprises me a bit, too. I suspect baseball fans get all they need from venues that specialize in baseball and from bloggers who only talk baseball.

      Or maybe the awful truth is just that I suck as a blogger, and I’m just off in my corner doing my geek thing. [shrug] Kind of the key conundrum of my entire life: Do I try to fit in or be myself? Neither has proven very successful socially, so I might as well do what’s most comfortable: be myself.

      If no one gives a shit, well,… that much I’m used to. 😉

      Now that my 4D rotation self-seminar is complete, I’m looking for a new main project. I enjoyed the debate about computationalism, and it’s gotten me thinking about consciousness again, so I may write about that. I’ve been watching some videos and reading some blogs about it.

      And I’ve been thinking about p-zombies lately — I never quite knew what to make of the idea — and I think I’ve finally worked out what I do think about them. (Mostly that I don’t find the argument compelling. In reporting phenomenal experience they are frauds, even given the counter-argument they believe what they report. They’re still frauds, and very carefully constructed frauds at that. I don’t see why I should care or take them seriously. And, of course, I don’t for a moment take seriously the idea that, likewise, we’re frauds, too. I think illusionism is incoherent.)

      • SelfAwarePatterns

        I’m on Twitter and Facebook, although I can’t really say they’ve enhanced my blog traffic all that much. People on those platforms tend to want to stay there. I end up discussing the title and preview text of blog posts more than the post content itself, which is annoying. It seems like I get much more traffic from the WP reader, or from just meeting people in other blog discussions.

        On p-zombies, I actually did a post a while back on them. It seems to me that the reasoning for classic p-zombies, those physically identical to a conscious being, is circular, requiring non-physicalism for the concept to be coherent, which it then purports to demonstrate. We can rescue the concept a bit as behavioral zombies, those with a different internal structure than an equivalent conscious being, but that version obviously doesn’t have the same implications for physicalism, and it assumes that consciousness can only be implemented in one way.

        I actually agree with the ontology of the illusionists, but I’m sympathetic to the assertion that if experience is an illusion, the illusion is the experience. Of course, a lot always depends on what we mean by “consciousness”, which itself is a matter of philosophical disposition. Both of which is why I say consciousness only exists subjectively.

      • Wyrd Smythe

        “It seems like I get much more traffic from the WP reader, or from just meeting people in other blog discussions.”

        I’ve realized I get more people from the Reader than from email notifications (which I don’t think most use). That means time of day has some significance, I think. Where your post falls in the stream. That’s why I like email notifications; posts that might interest me don’t fall through the cracks.

        (Or maybe some people just have over-full inboxes. I’ve seen people with double-digit unread counts. I had a manager once who had 1287 messages in his inbox. A manager with no clue how to manage his own email. Yikes.)

        I should try to get out more and participate on more blogs. I’m really picky about who I follow. A lot of them are so science-y that I’m not really qualified to comment. (I may ask a dumb question once in a while.) Peter Woit’s blog is a good example. Much of it is way over my head, but even the stuff at my level is pure gold. Sabine Hossenfelder’s blog used to get that way, but she’s been on more of a social bent lately, arguing against getting “lost in math” or building another big collider.

        What can get interesting is that Woit agrees with Hossenfelder about “lost in math” but thinks we should build another collider. (I’m with Woit.) The back-and-forth was interesting.

        “It seems to me that the reasoning for classic p-zombies, those physically identical to a conscious being, is circular,”

        I think an argument can be circular if the circle is a Möbius strip! (That is to say, proof by contradiction.) I think that’s what the argument reaches for. It does seem more effective if you already agree with the premise. 🙂

        For me, it’s seeing all forms of zombies as constructs, even as deliberate frauds, that muddles the argument. I think you have to include a story about how the zombies came to be. We have one. Zombies need one.

        And then, how much weight do we even assign to things we think are coherent? I can imagine a lot of coherent things that have nothing to do with reality as I know it.

        “Both of which is why I say consciousness only exists subjectively.”

        How do you mean that, in the context that everyone reports consciousness and those reports have much in common? There are also the neurological correlates. Consciousness seems an objective fact about our world, so how do you mean subjective?

      • SelfAwarePatterns

        I used to fret about the optimum time to do posts. Was it better to do them early in the morning, mid or late morning, afternoon, evening? And what day of the week was best? I used to use the scheduled publish option all the time so the posts would show up at those optimum times. The problem is that the publish to social media stuff would sporadically fail, which was annoying.

        But this was one of the things that made posting a hassle, which caused me to post less, until I decided it wasn’t worth it. Now I just post when I think the content is ready. These days I hit Publish at times I would never have back in the old days, on weekends, in the evening, etc. Each individual post probably doesn’t get discovered as much, but I do more of them, which probably more than offsets it.

        On consciousness and subjectivity, everyone does report their own consciousness, so the self-report is real. And the reports are somewhat consistent, indicating that the information they’re drawn from is consistent.

        But neural correlates are problematic. The problem is that talking about the neural correlates of consciousness is like talking about the neural correlates of being cool. We first have to define what we’re talking about, and that turns out to be controversial. Do we need to include volition, emotions, self reflection? Is bottom up attention sufficient or do we need to include top down attention?

        All of which is to say that we can find neural correlates, with increasing levels of precision, for identifiable cognitive capabilities. But which of those cognitive capabilities are the minimal and sufficient components of consciousness? I’ve been forced to conclude that there is no fact of the matter answer. There are many subsets of those capabilities that people are prepared to call “conscious.”

        And then there is the conviction that so many people have that there is something more, something aside and apart from the functionality. I don’t think that version of consciousness exists, but the feeling of it does. We’ll find the correlates of that feeling, but not the correlates of what the feeling purports to be about. The feeling is real. It exists subjectively, but what it’s about doesn’t exist objectively.

        Hopefully that clarifies more than it muddies 🙂

      • Wyrd Smythe

        “I used to fret about the optimum time to do posts.”

        I wonder if reading something you wrote about that years ago planted the idea in my head that posting time and day mattered. (Quite some time ago, I did read a blogger talking about that, but I don’t recall it being you, though. OTOH, aging brain, so maybe. Did you track the info in a spreadsheet or similar tool? That blogger did.)

        “But this was one of the things that made posting a hassle,”

        I can see how it would. I agree it’s better to just post what you want when you want and que sera. Blog, after all, comes from “web log” and originally was more of an online diary sort of thing. I keep trying for that mode, but presentation is too deeply ingrained, I guess.

        “On consciousness and subjectivity,…”

        Let me try asking: Do you agree consciousness is a true fact of reality? Do you agree that the right sort of physical object (i.e. a brain) has consciousness (in the default case)?

        Because it occurs to me the word “subjective” has some pitfalls! Beauty, for instance, is “subjective” — in the beholder’s eye — but that’s not the sense I mean. (The physical properties that we subjectively see as beautiful do exist and can be quantified.)

        I did mean it more in the illusionist sense, questioning whether consciousness is real. Clearly there are differences in how people think about consciousness, their varying opinions about it.

        The two questions I just asked, I think, state the question I’m asking more clearly.

        “But neural correlates are problematic.”

        I agree they are problematic in the ways you elaborated. I mentioned them in the more general sense that activity in the brain does seem to correlate with being conscious. Different areas light up during different tasks.

        Really I’m just asserting that the brain seems the seat of consciousness, that it does something that strongly correlates with our conscious experience. (I was seeking to provide evidence for the objective fact of consciousness.)

        “And then there is the conviction that so many people have that there is something more, something aside and apart from the functionality.”

        Do you mean something like a soul? Maybe the better question is: What does that functionality entail? (Joy? Art? Storytelling? Music? Dancing? Jokes? All the little things we think of as making us human? You refer to the functionality of the human mind?)

      • SelfAwarePatterns

        “Did you track the info in a spreadsheet or similar tool?”

        Can’t say I ever took it to that level. I read the recommendations on when the best time to blog was, also when peak times existed on Facebook and Twitter, and tried to follow them, adjusting as I saw volume go up or down at various times. It was a lot of effort, most of which I now feel was not particularly productive.

        “Do you agree consciousness is a true fact of reality?”

        I think the most popular conceptions of it don’t exist. The various cognitive capabilities do exist, so a deflated version that refers to some collection of those capabilities also exists, but there’s no one specific collection that is objectively the one true set that makes up consciousness.

        “Do you agree that the right sort of physical object (i.e. a brain) has consciousness (in the default case)?”

        I would say that brains process information in a certain manner. As brains, we have a tendency to privilege that manner, to treat it as something separate and apart from the rest of the universe. And from the perspective of those systems, it is. But only from that perspective, and perhaps from similar ones.

        “Do you mean something like a soul?”

        In the sense of an immaterial soul? I do, and I know I do. The problem is that a lot of people do and don’t realize it. The habitual vestiges of dualism permeate much of the consciousness discussion.

        “What does that functionality entail?”

        I agree with everything you listed. But would also add all the things we think of as making us a thinking living entity.

      • Wyrd Smythe

        “Can’t say I ever took it to that level.”

        A favorite line from Bab5: “Not the one!” 😀

        (Zathras was one of my favorite characters.)

        “I think the most popular conceptions of it don’t exist.”

        Such a careful answer to what I’d hoped was a simple question. I think there’s a “yes” buried in there? I’m not trying to parse any fine points here or debate the nature of consciousness. I’m just talking about a collective intuitive general sense.

        “I would say that brains process information in a certain manner.”

        Another careful answer. 🙂

        “As brains, we have a tendency to privilege that manner, to treat it as something separate and apart from the rest of the universe.”

        You feel that is a false view? That something capable of studying the universe is not special? (I would say on that count alone it is.)

        “And from the perspective of those systems, it is. But only from that perspective, and perhaps from similar ones.”

        Is the translation that only conscious systems think consciousness is special, or that only some conscious systems think they’re special?

        “I agree with everything you listed.”

        I think this gets into your support for illusionism. We seem to agree completely that consciousness objectively exists (the question I asked) and to a great extent about its content. We seem to think it amounts to something different, and that’s kind of what I’ve been trying to tease out. That, I think, is your meaning of “subjective” in this context?

        Again, I’m not trying to debate the nature of anything. I’m just trying to get a full sense of how our views sync or don’t. We agree on many points, so the differences are intriguing to me.

      • SelfAwarePatterns

        “Such a careful answer to what I’d hoped was a simple question.”

        Wasn’t it Einstein who said that things should be as simple as possible, but not any simpler?

        “I think there’s a “yes” buried in there?”

        Hmmm, I appear not to have been clear. I tried to give a flat no for the ghost in the machine version of consciousness, and pointed out that there’s no objective definition for the deflated version, but admittedly clouded the verbiage. So to be crystal clear: no, nada, nein, negatory 🙂

        “You feel that is a false view?”

        I see it as a meaningless proposition. It’s like saying “humans are cool”. It’s a value judgement that is only meaningful within human concerns.

        “Is the translation that only conscious systems think consciousness is special, or that only some conscious systems think they’re special?”

        Some systems think systems like them are special. The scope of specialness varies tremendously, from the individual (solipsism), to the local tribe, to humanity, to various scopes of animal life, to including potentially engineered minds, to everything in the universe. (The last, of course, actually means nothing’s special.) “Consciousness” is one of the labels we’ve come up with to express that specialness.

        “We seem to agree completely that consciousness objectively exists”

        Hopefully I corrected that impression above 😀 When I say consciousness is in the eye of the beholder, I’m not being poetic or metaphorical.

        “We agree on many points, so the differences are intriguing to me.”

        Definitely, although maybe this response confirms I’m demented from your point of view.

      • Wyrd Smythe

        Oh, no, not demented! More a worthy opponent kinda thing. I’m big on the idea of testing ideas against other opinions. It can come down to different worldviews, different axioms, which is fine, but it’s gratifying when the coherency of the argument endures.

        “I tried to give a flat no for the ghost in the machine version of consciousness, and pointed out that there’s no objective definition for the deflated version,”

        You were clear about the immaterial soul idea, but I’ve used “ghost” in a broader sense that embraces the phenomenal experience and agency we feel (whatever that turns out to be).

        (The problem is the movie uses the term, I think, incoherently. A ghost is that sense of agency and experience, but the show’s claim is that, although a ghost can exist in a machine, mankind doesn’t know how to create one, except the old-fashioned way. Brains, once they exist, can be uploaded, and their ghosts can inhabit the machine. But apparently ghosts are too complicated in some way to be created from scratch. That seems contradictory to me.)

        ((In the TV series, there are some advanced AI robots that appear to be developing ghosts. They try to keep it secret for fear of being rebooted. They’re not supposed to have their own minds.))

        I think what’s confusing me regards the deflated version, which you said did exist. Do you think there is no common subset that speaks to consciousness being an objective property of reality? If not, what is the deflated version that does exist?

        “It’s like saying ‘humans are cool’. It’s a value judgement that is only meaningful within human concerns.”

        I can see why a statement like, “humans are cool/beautiful/clever/dangerous,” is a value judgement. But I feel I’m saying something more like, “humans can run,” naming an objective ability.

        Reality creates many things, from bacteria to galaxies, but only one thing we know of asks questions about reality. Why is that not (objectively) special? How do you feel about “unique”?

        “When I say consciousness is in the eye of the beholder, I’m not being poetic or metaphorical.”

        So it seems. We do have a significantly different worldview in some ways. Keeps it interesting. 😀

        I wonder if there might be a labeling problem…

        Human brains (objectively) do something that allows the species to ask questions, seek answers, and create new things (artistic and utilitarian). This ability is unlike anything we’ve encountered. (Similarly evolved beings — “aliens” — could have similar capabilities.)

        All definitions, labels, interpretations, and maybes aside, do we agree on the last paragraph? I would have guessed we do, but now I’m not sure.

      • SelfAwarePatterns

        I have to say that the most interesting conversations I have are with people who thoughtfully disagree with me.

        I agree with you on the ghosts in the Ghost in the Shell world. I actually think the author used the ghost as a plot mechanism to prevent his characters from just restoring from backup whenever they get killed, a way to maintain a sense of jeopardy and dramatic tension.

        “I think what’s confusing me regards the deflated version, which you said did exist.”

        I think it’s possible to take a set of cognitive capabilities, draw a boundary around them and label them as “consciousness”. For everyone who agrees with that definition, that version of consciousness is then objective. The difficulty is getting agreement on that definition.

        You can also get a more objective concept by qualifying the word, although even that can have hangups. Feinberg and Mallatt in their books make clear they’re investigating sensory consciousness or primary consciousness, which they define as the ability to have mental imagery.

        But they also include affective consciousness in that framework which I don’t think is really a part of sensory consciousness, being more a part of the motor systems than the sensory ones. They themselves admit that affects are different than sensory image maps since affects represent global mental states rather than modal ones mapped to any specific sense organ.

        They do make clear that they’re not investigating the self consciousness of human level consciousness. Many people would say that they’re not then actually investigating consciousness at all.

        All of which is to say that even the deflated version of consciousness is a definitional morass, and most people who write or talk about it fail to make clear what they’re talking about, which seems like it leads to a lot of arguments where people are talking past each other with different semantics.

        “Humans can run” is a pretty specific statement, but given the above, I don’t think “humans are conscious” is very specific. We could say something like humans have exteroception, and I think we’d be on firmer ground, but then most animals and some robots also have exteroception, albeit in a far less sophisticated fashion so far.

        “but only one thing we know of asks questions about reality. Why is that not (objectively) special? How do you feel about “unique”?”

        I guess the question is, what do you mean by “asks questions about reality”? Is a mouse looking for a place to hide not asking questions about reality? Or a bear looking for food? Or a self driving car attempting to distinguish a white truck from the horizon? Each of these systems can make hypotheses and then test them, albeit in starkly varying scopes.

        On that last paragraph, I fear it’s mostly a matter of extent rather than sharp distinctions. The one distinction that might exist is our capacity for symbolic thought, for creating volitional placeholders for sensory or action concepts. It seems to dramatically widen the scope of our mental life.

        I’m not aware of any evidence for it in non-human animals. Although it’s worth noting that every other capability we’ve ever thought was unique to us has eventually been discovered somewhere else in the animal kingdom: tool use, altruism, culture, etc. Some great apes do seem able to learn individual (sign language) words, but their faculty for manipulating them never seems to reach the level of a two year old.

      • Wyrd Smythe

        “I have to say that the most interesting conversations I have is with people who thoughtfully disagree with me.”

        Exactly. I have to admit this conversation has been an eye-opener. I didn’t realize how far apart we were on the objective fact of human consciousness. I knew we disagreed about its nature, especially wrt computationalism, but I’ve assumed we agreed it existed.

        What are the philosophers, neuroscientists, psychologists, and so forth, studying, if not an objective property of reality? Do you perceive no objective core there whatsoever?

        “The difficulty is getting agreement on that definition.”

        Right, but why aren’t definitional problems a separate matter? Is defining necessary to objective reality? What if it’s too irreducible to be defined (because definitions are necessarily reductive)? What if we can only ever describe it?

        You point out different people study different aspects of it, but why aren’t they studying different aspects of an objectively real thing? (The blind men and the elephant… the elephant is real despite that no man has a real picture of it.)

        I don’t understand why consciousness would be different. We recognize many things by their effects. What makes consciousness so elusive, so subjective?

        “All of which is to say that even the deflated version of consciousness is a definitional morass,…”

        Okay, sure, but why is a definition so crucial to something being objectively real? Aren’t definition problems more about our limitations? We can’t define an electron. Is it equally subjective?

        “Is a mouse looking for a place to hide not asking questions about reality?”

        No, not in the meta-cognition sense I mean. We think about thinking; we think about things that don’t matter (art, storytelling, justice, pretty much all of philosophy).

        Dude, are you playing devil’s advocate here? Are you really conflating looking for cheese with Aristotle and Kant? (I thought meta-cognition was part of your hierarchy? I’m confused!)

        “Each of these systems can make hypotheses and then test them, albeit in starkly varying scopes.”

        Only humans ask, “Why?” Only humans make stories or art about it.

        “On that last paragraph, I fear it’s mostly a matter of extent rather than sharp distinctions.”

        What does that have to do with the truth of that paragraph, though?

        If anything, doesn’t seeing similar consciousness (or just call it behavior, if you like) in our animal cousins, and even possibly self-driving cars, speak to its objective reality?

        Despite all the evidence, why is the statement, “Consciousness (whatever it is) exists (objectively),” unacceptable?

      • SelfAwarePatterns

        “but I’ve assumed we agreed it existed.”

        Remember, I do think it exists subjectively. Which does mean we have a model of it, but the model is a simplification, one adaptive for certain purposes, but not an accurate representation of how the mind works.

        “What are the philosophers, neuroscientists, psychologies, and so forth, studying, if not an objective property of reality?”

        Philosophers seem to be all over the map, but people like Keith Frankish, Daniel Dennett, Nicholas Humphrey, and others are illusionists, or like me, agree with the illusionists without using the “i” word. Other philosophers like Philip Goff and David Chalmers appear to still be convinced that there’s something there beyond the functionality.

        Neuroscientists and psychologists, when they’re being the most careful, study cognition, which does objectively exist. And as I’ve said before, if we designate certain cognitive capabilities as “consciousness”, then that conception becomes objective, at least for the people who accept that definition.

        “Do you perceive no objective core there whatsoever?”

        We could define it as the neural machinery necessary for construction of subjective experience, or the machinery that enables self report. But as always, many won’t accept those definitions and will insist we’re studying something else. Many in particular take umbrage at the idea that experience is constructed.

        “Is defining necessary to objective reality?”

        The problem is once you constrain yourself to study a concrete version of a nebulous concept, many people will say you aren’t studying the real thing.

        “What if it’s too irreducible to be defined (because definitions are necessarily reductive)?”

        That would require that it be something other than the functionality we observe in the brain. I haven’t seen any evidence for it. Until some surfaces, it’s the version I feel most comfortable concluding isn’t there.

        “but why aren’t they studying different aspects of an objectively real thing?”

        Well, they are studying the brain’s functionality.

        “No, not in the meta-cognition sense I mean. We think about thinking; we think about things that don’t matter”

        We’re not the only species with metacognition, although the number of species that can be conclusively shown to have it is small, some primates, and perhaps cetaceans. There is disputed evidence for it in other species, but if they have it, it seems far weaker than the version in primates.

        “I thought meta-cognition was part of your hierarchy? I’m confused!”

        Just a reminder of the hierarchy (which is just a simplified mental crutch)
        1. Survival reflexes
        2. Perception
        3. Attention
        4. Imagination / sentience
        5. Metacognition

        I do sometimes add “symbolic thought” either as part of 5 or as a 6th layer. It does seem to require a very developed metacognitive capability. I’ve been tempted in the past to regard consciousness as metacognition, but like any specific definition, it’s controversial.

        “Despite all the evidence, why is the statement, “Consciousness (whatever it is) exists (objectively),” unacceptable?”

        I guess it depends on what you consider the evidence to be evidence of.

      • Wyrd Smythe

        “Remember, I do think it exists subjectively.”

        This is where the two meanings of subjective come into play. Some subjective things can be denied. If you think fried eggs are wonderful, that’s a subjective opinion I can (and very much do) deny. But I cannot deny the subjective experience you have eating fried eggs.

        I think we’ve always agreed on the latter. This conversation gives me the impression you think consciousness can be entirely denied (or is that my misinterpretation)?

        Let me emphasize this isn’t about definition but whether it exists at all.

        “Philosophers seem to be all over the map,”

        But they’re not talking about how the heart functions, or how plants grow, or orbital dynamics, or poker stats, etc. They’re all talking about something brains do, aren’t they?

        “But as always, many won’t accept those definitions and will insist we’re studying something else.”

        I really wish we could get away from definitions or various people’s opinions. That’s the blind men and elephant situation. The point I want to focus on is that all the blind men are studying an elephant. (That exists objectively.)

        Maybe this is my misunderstanding, but it feels like you might feel the elephant doesn’t exist at all. But then what are the blind men studying? (Maybe it’s not an elephant at all, but something else, but isn’t it still a something else, then?)

        “[Being irreducible] would require that it be something other than the functionality we observe in the brain.”

        Can that functionality be irreducible, holistic?

        “We’re not the only species with metacognition,”

        You do agree we do it on a level far above even our closest primate cousins. I’m willing to include other brains doing what brains do under the umbrella. It’s still a distinctive ability that the universe has created, isn’t it?

        “I guess it depends on what you consider the evidence to be evidence of.”

        Well, when it appears all X is Y (all healthy brains report a similar subjective experience of consciousness), I’m willing to believe there is some sort of Y until shown an exception or good reason to believe otherwise.

        And I’m comfortable replacing “consciousness” with “cognition” in much of this. It seems we can agree that brains do cognition.

        It’s all down to what started this, “Consciousness is subjective.”

        Totally with you that it’s a personal experience, one that is perhaps “undeniable and incorrigible” (to quote Richard Brown). I think it may be irreducible, but that’s just an opinion.

        So I think the question I finally asked might be in the right form now:

        In the correct physical system, the subjective experience of consciousness (whatever that amounts to) is an objective fact.

        Yea? Nay? (Maybe? 🙂 )

      • SelfAwarePatterns

        Now you have me craving fried eggs! (I need to get a better breakfast.)

        We may be beating a dead horse here. But I’ll give you a version of your statement I could support. (But which will likely make you groan. 🙂 )

        That some systems are capable of reporting subjective experience is an objective fact. Other systems can exhibit behavior that in some ways resembles the behavior of systems capable of making self-reports. We can define the underlying mechanisms that produce these reports or behavior as “consciousness”.

        On definition, I’ll just note that it is important to know what we’re trying to ascertain the existence of. If I ask you if the shimmersslock exists, your first question to me will be, what is it? If my answers are vague, amorphous, and contradictory, you’ll likely wonder if we’re talking about anything coherent. I may reference attributes that can be observed, but whether they’re part of a shimmersslock may be a meaningless question.

        (In fact, it would be meaningless since I just pulled that word out of nowhere. BTW, creating a word that google doesn’t cough up some kind of meaning for is harder than it looks.)

      • Wyrd Smythe

        “We may be beating a dead horse here.”

        I do think we’ve come to the end of the discussion. And you’re right that I don’t find your answer very satisfying.

        “If I ask you if the shimmersslock exists, you’re first question to me will be, what is it?”

        We’re not talking about some new word you pulled out of your imagination. We’re talking about what appears to be a universal property of reality. An experience known to every normal human being.

      • Wyrd Smythe

        Watching a Twins game, so just a quick question:

        “Remember, I do think it exists subjectively.”

        …and…

        “…cognition, which does objectively exist.”

        Okay, I think maybe I can get us on the same page.

        Much of what I’ve been asking could easily replace “consciousness” with “cognition,” which you do agree is an objective fact:

        Human brains (objectively) do something that allows the species to ask questions, seek answers, and create new things (artistic and utilitarian). This ability is unlike anything we’ve encountered.

        Read that as talking about cognition, and it would be okay?

        Back to consciousness, how about this:

        In the correct physical system, the subjective experience of consciousness (whatever that amounts to) is an objective fact.

        Does that help any?

    • JamesOfSeattle

      Hey, watching you guys dance is interesting. Quick question: does “running” objectively exist? If you think about “consciousness” not as a thing that exists, but as a set of capabilities, like running, does that change anything?

      *

      • Wyrd Smythe

        FWIW, my answer is that, yes, running objectively exists. Clearly, in my view. It’s an objective capability that some systems have.

      • SelfAwarePatterns

        I agree, running exists objectively as something like rapid movement where all limbs are regularly off the ground.

        But here’s another question, one that gets to your thoughts about symbolic processing. Does the color orange exist objectively? If so, how could we demonstrate it to a blind person?

      • Wyrd Smythe

        Photons of the appropriate wavelength certainly exist. Isn’t this Mary’s Room? A blind person can educate themselves as Mary did and know all there is to know about the orange color except experiencing it subjectively.

        But does the inability to experience something subjectively mean it can’t be an objective property of reality?

      • SelfAwarePatterns

        Certainly the photons exist, as does the activation of photoreceptor cells in certain intensities and the cascade of electrochemical signals up the optic nerve to the thalamus and occipital lobe. And the brain uses a certain convention (symbol?), that conceivably may be different in mine than yours, to represent those patterns of activation to the parietal and frontal lobes. And there is a circuit somewhere that registers something like “I’m seeing orange right now.”

        But where in that is the objective existence of orange?

        Keith Frankish describes the resulting answers as falling into two horns. One horn is to posit that there’s something superphysical going on. (The path chosen by Chalmers and others, which leads to the hard problem.) The other is that we have a model of reality that is simplified for effectiveness. Orange is part of that model, but it’s a symbol, a convention, again one that may vary across different nervous systems, rather than an objective reality.

        I think Frankish would describe orange as an illusion. I prefer to say it exists subjectively.

        Of course, you could define the entire causal history, from photon to self report, as “orange”. For those who accepted that definition, you’d have something with an objective existence.

      • Wyrd Smythe

        @James:

        “…the input to the process in question is a symbol, an arbitrary physical arrangement that “means” something…”

        Okay. One distinction I would make is that the meaning of the symbol lies high in the cognitive system. The original stimulus passes through many lower layers that shape and condition it into something the higher cognitive system “recognizes” meaningfully.

        “…the symbol is created for the purpose of ‘meaning’ X, and the interpreting mechanism is designed to interpret the symbol as meaning X and respond accordingly.”

        I’m not clear if we’re talking about the same thing or not. I’m a little hung up on the idea that the symbol is created for meaning anything.

        Consider seeing orange.

        Originally, in a newborn, visual stimulus has little or no association, no meaning. Certainly there aren’t labels for any of it. At some point the network is trained about seeing and color, and over time, through many repeated exposures to various orange colors, a strong gestalt (an intention) of orange forms.

        As such, there is actually a wide range of inputs that will classify as orange. The actual abstraction, orange, exists only high in the network as a gestalt.

        Does that sound like what you’re saying?

        “The symbol would not have that meaning for any other mechanism unless that other mechanism was also designed to interpret that symbol to mean X.”

        Yes. Given that brain networks form somewhat randomly, every human neural net is slightly different. We do find general commonality, regions associated with specific functions, and a general architectural similarity, but the details differ in every brain.

        Which makes one wonder about brain uploading.

        “It would be possible to determine/describe the meaning of the symbol objectively, but it would not make sense to ‘have’ that meaning objectively.”

        That sounds self-contradictory. If I can analyze a brain using advanced neuroscience, why can’t I objectively determine all the circuitry involved in its subjective experience of orange?

        Although, yes, absolutely, the experience of consciousness is entirely subjective, and the opinions about the nature of consciousness are subjective (and much debated). I hope we’re past discussing either of those forms of subjective consciousness, though.

        @Mike:

        “But where in that is the objective existence of orange?”

        My answer: In what you just described. Especially when very similar circuitry and behavior exists in every normal human ever.

        If you mean the subjective experience of seeing orange, the something-it-is-like, then I’m on the hard problem side of things. We don’t understand why what you just described leads to a subjective experience.

        “For those who accepted that definition, you’d have something with an objective existence.”

        It certainly matches the data and appears to be universal (modulo special cases).

        To me that makes it an apparent objectively existing thing in the universe, although I realize you take a more skeptical view. (I just wish you were equally as skeptical wrt computationalism. 😀 )

      • JamesOfSeattle

        Cool. So we’re talking about an objective process. In both cases, a “running” process or a “conscious” process, there is a subject, let’s call it a mechanism, that is performing the process.

        So, as Mike suggests, I think a conscious process necessarily involves symbolic processing, i.e., the input to the process in question is a symbol, an arbitrary physical arrangement that “means” something, thus is intentional. For this to happen, the system has to be “designed” in a way that the symbol is created for the purpose of “meaning” X, and the interpreting mechanism is designed to interpret the symbol as meaning X and respond accordingly. So it’s possible that X is “the color orange”. The symbol would not have that meaning for any other mechanism unless that other mechanism was also designed to interpret that symbol to mean X. Given that the symbols generated in our brain are not designed to be interpreted by other brains, the only mechanisms to interpret those symbols are also only in our brain.

        Thus, the meaning of the symbol is subjective. It would be possible to determine/describe the meaning of the symbol objectively, but it would not make sense to “have” that meaning objectively.

        *

  • JamesOfSeattle

    @Wyrd, I think you’re using “symbol” at too high a level. Consider a red cone cell in the retina which captures a red photon and generates neurotransmitter X. A neuron sees neurotransmitter X and fires, starting an effect which is a useful response to a red photon. That neurotransmitter X is a symbol. It “means” the red photon happened, but only because the cone cell and the neuron were set up to respond that way. It could have been a different neurotransmitter, say Y, in which case neurotransmitter Y would be the pertinent symbol. So, when I say “symbol”, this is what I mean. Even though the newborn hasn’t created a concept for orange, it has an orange experience because it has hardwired mechanisms to create and interpret symbols which mean “orange”.

    *

    • Wyrd Smythe

      Ah, okay, you’re down at that level. That’s fine. Do we agree, then, that experiencing orange is [A] the result of myriad such symbols (lots of photons, lots of neurons firing), and [B] can result from a slightly different set of input symbols (i.e. there are many variations on seeing orange)?

      Agreed re the newborn. That’s the network training I mentioned.

      • JamesOfSeattle

        You’re good with my explanation? Really? Because if so, there are all kinds of ramifications. It means “experiencing” is going on all over your brain (which many people think, like Marvin Minsky did), but these experiences aren’t what “you” are experiencing. And yes, what “you” are experiencing is the result of those myriad neurons firing. But if we’re talking about the high-level experience that “you” have, I believe (with some reason but not much evidence) it is structurally the same but with some profound additions. Namely, the symbol representing “orange” is not the output of a single neuron, but instead is the output of a coordinated set of neurons firing in a particular pattern such that they constitute a multi-dimensional vector. All of those sensory inputs, as well as potentially other inputs, contribute to the direction of that vector. But what you “experience” is the “meaning” of that vector.

        *
        [so then we get to ask how many such sets/vectors are there? Just one would be consistent with the “unified field of consciousness” (having just one experience at a time), but given certain facts of anatomy I could see there being a few]

      • Wyrd Smythe

        “You’re good with my explanation? Really? Because if so, there are all kinds of ramifications.”

        Whoa, hang on there. I said I understood that, by “symbol,” you mean events at a neuron level. This is somewhat analogous to saying my keyboard generates a symbol for every keypress. Or every move of the mouse generates move symbols.

        “It means ‘experiencing’ is going on all over your brain…”

        I’m not sure why you think that follows. Are you equating the processing of symbols with experience?

        Experience certainly involves symbolic processing, but the processing alone is not sufficient for experience.

        “…the symbol representing “orange” is not the output of a single neuron…”

        With you so far.

        “…a coordinated set of neurons firing in a particular pattern such that they constitute a multi-dimensional vector…”

        We’ve talked about vectors before. You seem, to me, to be conflating a way of describing something with some kind of underlying ontology.

        Yes, we can absolutely create a hugely multi-dimensional phase space where each neuron is a component. The brain’s state at any moment is some point in this phase space, and we can think of that point as a vector if we draw a line from the space’s origin to the point. And there are regions of that space that represent similar mental states.

        The idea of phase space is a standard way of thinking about a system. This post is the fourth in a series I’ve done about the idea.

        Where I have trouble is:

        “But what you ‘experience’ is the ‘meaning’ of that vector.”

        I’m not sure what you’re saying here, if you grant this vector ontological reality.

        My experiences at any given moment can be represented as a vector (or just a point) moving around in my personal phase space. If that’s all you mean, I’m fine with that.

        BTW: The phase space we construct doesn’t have to be as granular as neuron-level. If we were able to deconstruct the mind into a set of “basis” thoughts — irreducible basic axes of thought — then the phase space would be more abstract.

        Or it could be super simple, having axes of “happy,” “pain,” “hungry,” and a few other simple things. The result would be low-resolution, but could still be used to compare your self over time or to compare to others.

        That’s one advantage of using more abstract axes: we all potentially fit into the same phase space. With a neuron-based space, everyone has their own.
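
        Here’s a toy sketch in Python of that kind of low-resolution space (the axes and numbers are invented for illustration):

            import numpy as np

            AXES = ["happy", "pain", "hungry"]  # low-resolution basis axes

            def state(**feelings) -> np.ndarray:
                # A mental state as a point (or vector) in the shared space.
                return np.array([float(feelings.get(a, 0.0)) for a in AXES])

            def similarity(a: np.ndarray, b: np.ndarray) -> float:
                # Cosine similarity: 1.0 means two states point the same way,
                # regardless of their overall intensity.
                return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

            # Because the axes are abstract, different people (or the same
            # person at different times) fit into the same space:
            me_today = state(happy=7, pain=1, hungry=3)
            me_tomorrow = state(happy=2, pain=5, hungry=9)
            print(similarity(me_today, me_tomorrow))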

        “[S]o then we get to ask how many such sets/vectors are there?”

        Certainly just one from a conscious train of thought point of view. But our sub-conscious is often believed to have its own inner life of sorts, and if so, it would have its own vector. Possibly also an entirely different phase space.

      • JamesOfSeattle

        Am I equating processing of symbols with experience? Pretty much. What more do you require for “experience”? Classically, what’s required is phenomenal intentionality. Something it “feels like”. I’m saying any symbol processing has that. I admit that the major human-type experience has a lot more to it, but the essence is there in every symbolic process.

        As for the human-level process, I’ll use your word of phase space, even though I’m not sure what “phase” has to do with it. I’m postulating a distinct, identifiable subset of neurons whose sole purpose is to instantiate symbols in the same way that a given computer memory location has the purpose of instantiating symbols. The mechanisms for instantiating the symbols are not voltages but firing rates of various neurons in the set. These firing rates could be represented as a vector in a multidimensional, I mean phase, space. These firing neurons are a symbol, but there is only an experience if some mechanism interprets the symbol as having a meaning. If said interpretation happens, that is the experience, the physical process we have discussed.

        I’m further postulating that these neurons are exclusive of the neocortex, but take input mostly if not exclusively from the neocortex.

        Which part needs work?

        *

      • Wyrd Smythe

        “I’m saying any symbol processing has [something it feels like].”

        Ah. Okay. That’s not a view I have any sympathy for.

        “What more do you require for ‘experience’?”

        Meaning, context, texture, depth, import, understanding.

        The field of consciousness, maybe more than any other, runs into word definition problems. It is one thing for a rock to experience the warmth of the sun and quite another for a mind to have an experience. I think it’s a category error to conflate them.

        But I’m not sure this has any real connection with a discussion about [vector, configuration, parameter, phase, state, multi-dimensional, whatever] spaces. The concept of mapping orthogonal concepts to a multi-dimensional space, as I’ve said, is general and applies to a system’s behavior, not its ontology.

        “These firing rates could be represented as a vector in a multidimensional, I mean phase, space.”

        All those terms refer to different applications of the same thing. Call it anything you’re comfortable with. I’d only mention that “multi-dimensional” is redundant — all such spaces are multi-dimensional (the dimensions are kind of the point), so it goes without saying.

        (You might be using “multi-dimensional” as a stand-in for “mind-bogglingly huge number of dimensions,” which would be understandable. We’re talking about a space with many billions of individual axes!)

        Conversely, as you describe it, our (let’s call it) Mind Space might not be very deep. It has billions of axes, one per neuron (or neuron group, if that’s more sensible), but each of those axes has only a few values — however many are necessary to distinguish different firing rates.

        OTOH, if neuron firing is as complex as I suspect, each axis might need many values, and that would make our space not just mind-blowingly vast, but fairly deep as well.

        “Which part needs work?”

        What are you seeking to accomplish with this? I’ve said several times now that it’s a fine way to describe the system — quantum physics uses something similar to describe quantum interactions.

        Our consciousness can be imagined as a point (or vector, if you like) constantly moving through Mind Space… And?

      • JamesOfSeattle

        Wyrd, I’m not looking for sympathy, I’m looking for reasons why my understanding succeeds or fails.

        I’m trying to distill Consciousness down to its bare minimum unit. To compare, it’s like trying to identify the bare minimum of what we call water. I want to talk about hydrogen and oxygen atoms, but it seems like you might respond “where is the wetness? how does that explain clouds?”. “Texture, depth, import, understanding” are rivers, snowfall, glaciers. Complicated versions of water. Human consciousness is the most complicated version of consciousness, but the thermostat may be the simplest, the H2O molecule. The rock being warmed by the sun does not have it at all.

        When I talk about the multidimensional vector, it is because I want to try to explain how the smallest unit of experience (symbol—>[mechanism]—>output) can start to be applied at the high level of humans.

        [BTW, “multidimensional” because the number of dimensions is not determined, but is certainly more than 3, probably in the hundreds. ]

        When I talk about a set of neurons used to create symbols, it’s a specific set of neurons. I’m not talking about a configuration space of all the neurons in the brain. I’m talking about a (comparatively) small subset of neurons which were designed for the sole purpose of representing multiple possible symbols, just like the memory space in a computer that represents the screen. There is (or could be) a memory space for each pixel, and different possible values for each pixel. Consider all the possible symbols that could be displayed on a screen. Now if there were other systems watching that screen and responding appropriately to the symbols displayed there, those symbol—>[mechanism]—>output processes would be experiences much closer to human-type experiences. But of course the actual human processes are likely to be more complicated still.

        And to be sure, the idea of these multidimensional vector-type neural systems is not coming out of my own head. Chris Eliasmith has demonstrated using such systems in arguably biologically plausible neural networks. Just google Semantic Pointer Architecture.

        So to restate my postulate, I am guessing there is one (or more) locus in the brain where both sensory and non-sensory concepts are represented symbolically by the firing pattern of particular neurons, and when other systems in the brain respond to those patterns appropriately, those processes are conscious experiences. I further postulate that that locus or those loci are in the thalamus, or possibly some other subcortical structure, but almost certainly not in the cortex. The mechanisms watching those loci, thus generating the experience, likely include some in the cortex.

        I should point out that these postulates are (mostly) compatible with the current major theories of consciousness, including Integrated Information and Global Workspace.

        *

      • Wyrd Smythe

        “I’m not looking for sympathy, I’m looking for reasons why my understanding succeeds or fails.”

        😀 But you understand that was a polite idiom for saying I very much disagree with the position, right? You know I’ve argued strongly against computationalism (let alone IIT, which I don’t find credible), yes?

        I just want to be sure you know you’re not in, um,… unskeptical territory, so to speak. 😀 But I’m happy to entertain a conversation!

        “I’m trying to distill Consciousness down to its bare minimum unit.”

        What if it doesn’t have one? What if it’s a holistic irreducible emergent phenomenon?

        “I want to talk about hydrogen and oxygen atoms, but it seems like you might respond ‘where is the wetness? how does that explain clouds?’.”

        If you want, I can talk about atoms. The thing is, if we are talking about clouds, hydrogen and oxygen atoms say much the same things about clouds as they do about steam from a kettle or the moisture in the air.

        So talking about clouds in terms of atoms is likely to ignore important aspects about clouds that don’t apply to other contexts of hydrogen and oxygen atoms.

        That said, clouds definitely are made, at least in large part, from hydrogen and oxygen atoms, so let’s get atomic.

        “[T]he thermostat may be the simplest, the H2O molecule.”

        And the rock is not. Because?

        “When I talk about the multidimensional vector…”

        (Just keep in mind a vector is multi-dimensional by definition. There is no such thing as a vector that isn’t.)

        Okay, so you want to define vectors for related groups of neurons, that’s fine. Then the vector will move around in the Neuron Space for that group. You’re describing the dynamic activity of that group.

        “There is (or could be) a memory space for each pixel, and different possible values for each pixel. Consider all the possible symbols that could be displayed on a screen.”

        Okay. We’re talking about two very different sets of symbols, right? The symbols used to control pixels — your neuron groups — are low-level “operating system” symbols. The symbols displayed on the screen (the mind?) are completely distinct and very high-level.

        For instance, right now, all those low-level pixels are showing a Firefox browser open to this comment on my blog and an editor window I’m typing in right now. Some of them are just showing my background wallpaper. There’s a group along the lower edge showing a task bar.

        (This is exactly why I question the value of talking about hydrogen and oxygen atoms with regard to clouds. I don’t see how talking about pixels says much about what’s actually represented on the screen.)

        “I am guessing there is one (or more) locus in the brain where both sensory and non-sensory concepts are represented symbolically by the firing pattern of particular neurons,”

        Given that you’ve identified these neuron groups as pixel level (I think you once said Jennifer Aniston level), there must be a vast number of them.

        Or are you saying there’s a limited number, like video RAM, where all consciousness takes place? (That doesn’t match my understanding of how the brain works at all.)

        But I don’t see where vectors have anything to do with it.

        If you’re suggesting a given neuron group is capable of pointing at both Jennifer Aniston and my feelings about a good long walk with a dog, doesn’t that require a very large group?

        And what do vectors have to do with it? I don’t at all see what they bring to the table.

  • JamesOfSeattle

    Wyrd, I understood you were disagreeing with my position, and I was trying to politely ask for specific reasons, logical reasons, why my position is not tenable. But you did ask some questions, and so we can hopefully get to your specific reasons.

    For reference, here is my definition of a process that counts as a conscious process:

    A process in the form Input —> [mechanism] —> Output wherein the Input constitutes a symbolic sign, the mechanism is designed to interpret that sign for a specific purpose, and the output is a valuable response relative to the meaning of the sign.

    You ask, what if [consciousness] is a holistic irreducible emergent phenomenon. It is. It emerges when the conditions described above are met. It cannot be further reduced. A digital thermostat meets those conditions. A rock warming in the sun, without more, does not.
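    To make the definition concrete, here is a toy digital thermostat written as such a process (a Python sketch, purely illustrative; whether it constitutes “experience” is exactly what’s in dispute):

        # Input: a sensor reading, serving as a sign of room temperature.
        # Mechanism: designed to interpret that sign for a specific purpose.
        # Output: a response valuable relative to the sign's meaning.
        def thermostat(reading_celsius, setpoint=20.0):
            if reading_celsius < setpoint:
                return "heat_on"
            return "heat_off"

        print(thermostat(17.5))  # -> heat_on
        print(thermostat(22.0))  # -> heat_off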

    So now we can talk about possible human-style variations on this theme, which brings us to the vectors and pixels.

    Everything you stated about the pixels and the screen, two sets of symbols, etc., was correct. The point of the pixel/screen analogy is to demonstrate that you can have one smallish set of elements, pixels or neurons, which, when taken as a group, can display/represent a vast repertoire of symbols.

    For example, one set of values for those pixels could represent a photo of Jennifer Anniston. Another set could represent the words “Jennifer Anniston”. Another set could represent the concept of a particular person, such as Jennifer Anniston.

    So how does this relate to consciousness? If I’m right, then everything you experience is experienced because it is represented in this way by one or more such sets of elements (pixels/neurons). Each “experience” could be described as input (set of neurons firing in a way which represents the words “Jennifer Anniston”) —> [Mechanism (which recognizes words)] —> output (motor actions causing you to say out loud “Jennifer Anniston”), or some variation. Note: the output might be as simple as storing in memory.

    So now what I’m looking for is a sentence like, “No it couldn’t work that way because …”

    *

    • Wyrd Smythe

      “I was trying to politely ask for specific reasons, logical reasons, why my position is not tenable.”

      Ah, great, we understood each other. 🙂 Let’s defer the conversation about why I disagree with computationalism for just a bit. I’m planning some posts this month about exactly that topic (not just computationalism, but why I disagree with it). That will be the perfect place for it.

      And, FWIW, I did respond to your polite request. I pointed you to a multi-post series that explores computationalism as I see it. The post I linked to has a list of links at the bottom that access the whole series.

      “A rock warming in the sun, without more, does not.”

      Okay, got it. I understand the distinction you’re making.

      Moving on to pixels of the esteemed Ms Aniston (one ‘n’, btw; you’ve probably blown your chance she’ll ever call you)…

      “So now what I’m looking for is a sentence like, ‘No it couldn’t work that way because …'”

      I hate to disappoint you, but on some level my response is more along the lines of, “Isn’t that more or less the way everyone thinks the brain works?”

      That clusters of neurons are associated with specific things?

      Our brain network isn’t anything like a general computer, where memory can be used for anything — the address (location) of a chunk of code or data doesn’t matter at all. Modern computers routinely move things around RAM to free space.

      Our brain is a super complex physical network that we train over a lifetime. Definitely parts of it are associated with memories or functions or behaviors.

      A while back, I apparently lost the (hopefully very small) section that could easily recall Cameron Diaz’s name. There was a period of over a year where I just couldn’t think of her name. It was always that actress in Something About Mary. I finally retrained my brain, and now her name comes to mind easily, but that was weird. It really felt like a genuine loss.

      So to the extent you believe specific areas of the brain involve specific things, we’re on the same page. (And I kinda thought that was generally believed?)

      One question I have:

      “For example, one set of values for those pixels could represent a photo of Jennifer Anniston. Another set could represent the words ‘Jennifer Anniston’. Another set could represent the concept of a particular person, such as Jennifer Anniston.”

      Those would all involve different groups of neurons, though, right? They’d be linked, no doubt, but images, words, and thoughts, even about the same person, are still very different mental constructs and involve different aspects of the brain.

      My only objection, for now, is that your axioms make your conclusions trivially necessary. If you deem a thermostat “conscious” (because it processes symbols) then a brain obviously is conscious. Pointing to the symbols responsible doesn’t really add anything.

      Of course they’re there; you defined them to be there! 🙂

      So how do vectors factor in?

  • JamesOfSeattle

    “Those would involve different groups of neurons, though, right?”

    No, no, no. One group of neurons. Firing this way = J. Aniston, firing that way = C. Diaz, firing another way = Pittsburgh.

    There are various places in the cortex associated with specific concepts, but I will speculate that recalling a concept involves those places in the cortex activating in such a way as to reestablish the pertinent pattern in the central group.

    As for vectors, you really should google Semantic Pointers. Or you could try this paper: http://peterblouw.org/files/cogscijournal2016.pdf

    *

    • Wyrd Smythe

      Wait. Are you talking about the screen or the pixels?

      If pixels: How does a pixel represent all those things? If the screen: Then aren’t we talking about a very large group of neurons? Essentially all those involved in consciousness?

    • JamesOfSeattle

      We’re talking about the screen (well, the memory locations that generate the screen, actually, but whatever).

      But when we’re talking about the screen, I’m not saying all of consciousness happens there. I’m saying that for one particular consciousness, the one we call our stream of consciousness, the autobiographical self, the screen is used for the Input of the pertinent processes, and all of our reportable experiences get represented on that screen at some point. The processing of what gets onto the screen happens elsewhere. What goes onto the screen gets determined elsewhere, but if it doesn’t get onto the screen, it’s not part of the “stream of consciousness”.

      *
      [and don’t get hung up on one screen. There could be a few screens]

      • Wyrd Smythe

        “We’re talking about the screen…”

        Okay. That’s a very large group of neurons, right?

        Let me use my screen as an example. It’s got a resolution of 1920 × 1080. That’s 2,073,600 pixels. They’re RGB pixels, so, assuming 32-bit color, that’s four bytes per pixel, for a total of 8,294,400 bytes of RAM.

        Let me go a little further with the math.

        Each pixel, with its three RGB components, has a 3D Pixel Space, a Color Cube of its own. (See my post Cubes and Beyond for details about color cubes.) The “vector” in that space can point to 16,777,216 distinct points in the cube — all the possible values a pixel can have.

        The screen has its own Screen Space, with 2+ million dimensions, one for each pixel. Each of those dimensions has almost 17 million possible pixel values.

        Quick math: 2×10^6 × 17×10^6 = 34×10^12.

        Calculator math: 34,789,235,097,600

        So my screen has 34 trillion possible states it can be in at any given instant.

        That’s a lot.

        “[A]ll of our reportable experiences get represented on that screen at some point. The processing of what gets onto the screen happens elsewhere.”

        Sure. That seems a general consensus. There is the machinery of consciousness and we can think of one area of that machine as the “screen” — the seat of our spotlight of thought.

        This all seems pretty standard in what it’s describing.

        Do vectors come in at the pixel level or the screen level?

  • JamesOfSeattle

    Wyrd, I think you have underestimated the number of possible states by many orders of magnitude. Consider a pixel that instead of 17M states, has two possible states, on or off. Two of those pixels have four possible states. Three of those pixels have 8 possible states.

    So your screen does not have 2M x 17M possible states. It has 17M raised to the power of 2M possible states.

    So let’s say neurons have 5 distinguishable states. 500 such neurons could then have 5^500 possible states. Seems like enough.

    Vectors come in at the screen level; one vector is one of those 5^500 possible states. That would be a 500-dimensional vector.

    *

    • Wyrd Smythe

      D’oh! Yes, you’re absolutely correct; I wasn’t thinking straight there!

      5^500 is 3.055×10^349, so definitely a pretty big number!

      Now that I’m caffeinated, let me try again with my screen. 🙂 Exactly as you say, 2^24 states per pixel raised to the power of 1920×1080 pixels. Which, yes indeed, is a much, vastly, bigger number.

      (I’m not even sure how to calculate it. Maybe with logarithms. Even 2^2,073,600 is beyond the calculator’s ability, let alone 17 million raised that high.)
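      For what it’s worth, logarithms do handle it. A quick sketch (Python; same numbers as above):

          import math

          per_pixel = 2 ** 24      # 16,777,216 color states per pixel
          pixels = 1920 * 1080     # 2,073,600 pixels

          # per_pixel ** pixels is far too big to compute directly, but its
          # size in decimal digits is just a logarithm:
          digits = pixels * math.log10(per_pixel)
          print(f"screen states ≈ 10^{digits:,.0f}")  # ≈ 10^14,981,179

          # Same trick for 5^500:
          print(f"5^500 ≈ 10^{500 * math.log10(5):.1f}")  # ≈ 10^349.5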

      I actually did think the number I came up with kinda low, but yes, my Screen Space is mind-bogglingly big.

      “So let’s say neurons have 5 distinguishable states. 500 such neurons could then have 5^500 possible states. Seems like enough.”

      These groups of 500 are what you’re equating with pixels?

      “Vectors come in at the screen level; one vector is one of those 5^500 possible states. That would be a 500-dimensional vector.”

      But it’s a pixel or the entire screen?

  • JamesOfSeattle

    Ack. My superscripts don’t seem to have worked. 5500 above should be 5 to the power of 500. Feel free to edit and/or tell me how to do superscripts.

    *

    • Wyrd Smythe

      WordPress strips out the {sup} tags, unfortunately. I can put them back in (and I’ve done so) when I notice or you ask.

      One alternative is using one of the programming language styles: 2^24, 10^6; or 2**24, 10**6. Another, for powers of ten only, is scientific notation: 2E24 meaning 2×10^24 (or 2e24, which I think is more readable).

      If you want to get really fancy, WordPress does support LaTeX:
      \lim_{x \to a} \frac{f(x) - f(a)}{\sqrt{x^2 - a^2}}
      😀

  • JamesOfSeattle

    How can your internal “screen of consciousness” (the spotlight of your attention) be so vastly less complex than the screen it’s beholding right now?

    1. Because that’s enough for it to do what it has to do.
    2. I’m not sure it’s less complex, especially given that those neurons are interacting amongst themselves, as opposed to those pixel memories.

    *

    • Wyrd Smythe

      “Because that’s enough for it to do what it has to do.”

      No, sorry, not even close. We’ve definitely gotten to “No it couldn’t work that way because…”

      Because a space of 10^350 isn’t anywhere near enough (by tens of thousands of orders of magnitude) to embody a representation of our moment-to-moment thoughts.

      We just agreed my lowly computer monitor has a state space of 16,777,216^2,073,600 possible states, and all it can do is represent a two-dimensional 1920×1080 image.

      Focus your attention on any complex topic, and your conscious representation has vastly more information content to it.

      Information theory alone tells us there’s no way 500 neurons is anywhere close to ballpark. Just try to imagine the best possible way to capture the data of, for instance, what you’re looking at right now.

      There’s no way to fit that into 500 neurons. It’s not possible. It takes millions, if not billions.

      And consider this: Estimates seem all over the map about how many neurons the human cortex has. I find figures from 10-billion to 26-billion. One study found 8% of the neurons are in the prefrontal cortex — apparently this is true in most animals. Combining those facts, it seems our prefrontal cortex has roughly from 1- to 2-billion neurons. Let’s err low and call it 1-billion.

      You’re suggesting that 0.00005 percent (50 millionths of one percent) of those are this inner consciousness screen. That seems utterly unlikely to me.

      So: (1) Because 500 is orders of magnitude too small for the job; (2) Because 500 is a tiny fraction of the part of the brain usually considered our conscious executive function. I’m confident much larger sections are involved in our waking thoughts.

      “I’m not sure it’s less complex, especially given that those neurons are interacting amongst themselves, as opposed to those pixel memories.”

      It’s vastly smaller. Interconnections won’t help that.

      BTW, there’s something you haven’t mentioned so far: each interconnection adds another dimension and becomes part of the state space. But you need to specify how these neurons are interconnected and what that does to aid representation.

      You can’t claim “interconnections” without accounting for how they help!

  • JamesOfSeattle

    Keep in mind, I’m not saying there is only one screen, but let’s just talk about one screen for simplicity.

    Why do you think your conscious representation has so much information? I’m not saying the amount of information you suggest is not available in the environment, but that doesn’t mean all that information is simultaneously represented in your consciousness. Take your reading this sentence. At any one moment you have the visual content of about one word in your consciousness. And now you don’t even have a visual memory of this whole sentence. Close your eyes and think of the second word in this sentence. How about the sentence before that one? All of this is on your screen, but it’s not all in your consciousness at once.

    Also, and you’re gonna love this, a lot of the information gets integrated, so, condensed into concepts, before it gets to the screen. And even these concepts may or may not get to the screen. Attention determines what gets to the screen. So the gorilla on the basketball court can get into the visual process, but never get to the screen.

    Also, you missed the part where I’m speculating that none of the “screen” neurons are in the cortex. I’m guessing they’re in the thalamus. All the processing as to what goes on to the “screen” happens in the cortex, and at least some of the processing based on the screen happens in the cortex, but the screen[s] itself is in the thalamus. (My current guess)

    *

    • Wyrd Smythe

      “Why do you think your conscious representation has so much information?”

      Because I don’t think what you say later about reading the sentence is true. For instance, right now I’m listening to Pink Floyd (DSotM) and aware of that. I’m aware of the temperature of the room, the pressure of the seat against my body, how tired or alert I feel, how hungry or not I feel, whether I’m rushed or not, and lots of other things that are always there at a low level.

      I’m going to guess your reply will be that my attention actually jumps from thing to thing. Okay fair enough, but there is still a huge amount of information in any one of those things.

      The words are completely reduced to word-level symbols. That symbol stream is accompanied by analog data involving how the words appear on screen — how clear they are, how my eyes feel, where line breaks are, error-correcting, and — in a debate — how those words work logically and in terms of the model being discussed.

      So, yeah, no way is 500 nearly enough.

      Because…

      [I just saw your reply about my computer screen. That’s the perfect place for me to pick up with “because …” Be there directly.]

      “Also, you missed […] I’m guessing they’re in the thalamus.”

      I didn’t miss it so much as ignore it. I’m not read into neurophysiology enough to discuss it one way or the other. Regardless of location, 500 neurons seems vastly too small.

  • JamesOfSeattle

    Oh, and I just noticed this sentence in your post: “… all [your screen] can do is represent a two-dimensional 1920×1080 image.”

    Um, even treating that image as only an image, that screen can (theoretically) represent every human face that ever lived, every animal, every plant, every phrase ever printed or uttered or conjectured. It can’t represent them all at once. Only one (or a few) at a time.

    *

    • Wyrd Smythe

      Yes, but that’s all it can do.

      It can display, as we’ve talked about, 16,777,216^2,073,600 different images, which is a mind-blowing number of images even considering the vast bulk of possible combinations results in “gibberish” pictures (if not just visual noise).

      So one important point is that not all possible locations in that huge image space are “valid” images. They’re images, but not meaningful images. This is often the case in state spaces — only a tiny subset is actually meaningful.

      The more important point is this: My monitor can only display 2D images of that resolution.

      I can imagine things in 3D, for example. If I think about my condo, there’s a 3D map in my head. If I think about driving to work, there’s a different 3D map in my head.

      I can think in terms of time relationships. I can think about something being in the past or happening in the future. I can think about colors or music or stories. All of these are very different representations.

      More to the point, a 2D monitor can only represent data in one very limited form and even so its state space is vast beyond imagination.

      I have to believe a system capable of containing any form of real-world representation has to have a bigger state space than my computer monitor.

      Try to imagine your moment-to-moment stream of consciousness as a silent 2D movie in 1920×1080. Doesn’t your consciousness seem many orders of magnitude richer than that?

      • JamesOfSeattle

        You’re still thinking of your screen as 2D, but we just showed that it’s 2MillionD. And when we talk about “screen”, we’re talking about the memory locations, not the actual monitor. You can put anything you want in those memory locations.

        And you’re right that most of those vectors are junk. So we go back to the definition of a conscious process:

        Input —> [mechanism] —> Output, where Input is a symbolic sign, created for the purpose of being a symbol, the mechanism is designed to interpret that symbol, and the output is a valuable response relative to the meaning of that sign.

        So the point is, some process puts one, or a series, of specific symbols in the screen, and other processes recognize those symbols and do things appropriate to the meanings of those symbols.

        *

      • Wyrd Smythe

        “You’re still thinking of your screen as 2D, but we just showed that it’s 2MillionD.”

        Yes, agreed, but that doesn’t mean as much as you think it does. (Remember, most of the space is junk. Only specific points are valid images. Their number is essentially zero compared to the total space.)

        “You can put anything you want in those memory locations.”

        No, not “anything you want” — any eight-bit binary pattern I want.

        That’s the crucial distinction.

        That collection of bit patterns is limited in what it can represent. Yes, there are myriad contexts they can be interpreted as (video memory, audio waveform, baseball stats, 3D model, etc.), but they can only hold so much information.

        I said before that the video RAM for my display is probably 8 megs (four bytes per pixel).

        Lots of images in my library won’t fit into that, but I can think of the whole image.

        Most of the songs on my iPod won’t fit into that, but I can think of a song as a whole piece. Especially songs I know well. Huge amount of data when thinking about them.

        Lots of map or 3D model databases won’t fit into that, but I can think of the territory of the map or the 3D model.

        And keep in mind we’re talking about eight-million “neurons” each having 256 possible states.

        5^500 ≪ 256^8,000,000

        I mean, insanely, much greater! (Just 256^500 is 1.3×10^1204, and 500 is also ≪ 8,000,000.)

        Another way to look at it is that, as I mentioned earlier: 5^500 = 3.055×10^349 and 2^1,200 = 1.722×10^361. The latter, obviously, is considerably bigger (by about 10^12 — a trillion times).

        But 2^1,200 is just twelve-hundred bits — a paltry 150 bytes. You’re suggesting (with 500 neurons) that the content of my conscious thought fits into 150 bytes.

        So, again, no, 500 neurons doesn’t have a prayer of cutting the mustard. It has just under two-thousandths of one percent of the information capacity of my video RAM.

        I know 5^500 seems like a lot, but it’s actually small potatoes.

        Look at it this way: My calculator can handle these numbers. It can’t come close to calculating the size of the monitor or video RAM space. (And that’s just the 8 megs. As I said, there are data packages much larger.)
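        (If you want to check that arithmetic, a few lines of Python do it, purely as illustration:)

            import math

            bits_500 = 500 * math.log2(5)       # ≈ 1,161 bits of capacity
            print(bits_500 / 8)                 # ≈ 145 bytes, the paltry ~150

            vram_bits = 8 * 1024 * 1024 * 8     # 8 megs of video RAM, in bits
            print(100 * bits_500 / vram_bits)   # ≈ 0.0017%, two-thousandths of 1%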

        “So we go back to the definition of a conscious process:”

        I’ll pick that up later. It’s getting late, and I need to finish something for my other blog.

      • Wyrd Smythe

        “Input —> [mechanism] —> Output, where Input is a symbolic sign, created for the purpose of being a symbol, the mechanism is designed to interpret that symbol, and the output is a valuable response relative to the meaning of that sign.”

        You’ve described what any software designer knows very well as “IPO” — Input – Process – Output. It’s a fundamental building block of software design.

        Any complex process has a top-level IPO; think of it as three boxes, “I,” “P,” and “O.” Each of those is, itself, an IPO with its own three sub-boxes, and so forth, as far down as needed, until the “I,” “P,” and “O” boxes involve extremely simple “atomic” functions.

        Decomposing a process like that is one form of “top-down” analysis. Very standard stuff.

        I find no merit in the idea of referring to IPO as conscious.

        My thermostat is not conscious; it does not “experience” anything; there is nothing it is like to be my thermostat.

        Consciousness attests to its own consciousness. We can ask questions of a consciousness. Consciousness involves “something it is like” to be that consciousness. (Even my dog responds to queries. My thermostat is forever mute.)

        You’re free to define IPO as a unit of consciousness if you like, and then argue from there, but it’s not something I’ll ever agree with, so it’s not likely I’ll find arguments based on it persuasive.

  • JamesOfSeattle

    Wyrd, you’re missing the purpose of the 500 neurons. It doesn’t have to hold all the memory to reproduce all of your consciousness. It only has to be able to reproduce a small part at a time. You can’t hold an entire song in your consciousness at once. You can hold a concept of a song, a symbol that represents a song, and you might be able to use that to begin retrieving the lyrics of the song, but you don’t retrieve all of the lyrics all at once. You retrieve them in a specific order, and a phrase or so at a time.

    *

    • Wyrd Smythe

      I understand we’re talking about the focus of attention in the moment. I’m saying it can’t possibly fit in 150 bytes. I’m extremely skeptical it fits in 8 megabytes.

      (FWIW, I’ve played music all my life, so I might hold more of a song in my focus than you’d expect. I often think in terms of verses and chorus, the chord pattern for the block. I play improvisationally, so I have to hold the whole structure in my focus.)

  • JamesOfSeattle

    “My thermostat is not conscious; it does not “experience” anything; there is nothing it is like to be my thermostat.”

    A molecule of H2O is not water. It doesn’t freeze, it doesn’t flow, it has no surface tension. A molecule of H2O does nothing that water does.

    “You’re free to define IPO as a unit of consciousness if you like, and then argue from there, but it’s not something I’ll ever agree with”

    I suspected as much. If you decide beforehand that you will not agree with something, you can always make that happen. But if you require reality to match your intuition, you will miss out on a lot.

    As for myself, I much appreciated this discussion. I had not considered the screen metaphor before. It has problems we didn’t get into. Instead I maybe should have aimed it more towards a “dedicated memory space”. In any case, good practice.

    Thanks.

    *

    • Wyrd Smythe

      “A molecule of H2O does nothing that water does.”

      Dude, yes, exactly! The components of water do not demonstrate the properties of the ensemble.

      A thermostat, or any IPO module, might be a component of a conscious system, but that doesn’t make the component conscious. Exactly as H2O does not have the properties of water.

      I realize that can’t be the argument you meant to make … but the example you gave is pretty clear. H2O is not water! Thermostats are not conscious.

      “If you decide beforehand that you will not agree with something, you can always make that happen.”

      That’s a disappointing take-away, James. You’re making the ad hominem argument that I’m too prejudiced to see the truth of your arguments. I don’t think that’s a fair assessment.

      You are not facing my prejudices. You are facing my conclusions.

      Formed over decades of consideration, discussion, and study.

      My axioms are different from yours, that’s all.

      If you define an IPO module as conscious, then your arguments follow from that. I deliberately ignored that aspect of things because (1) I knew we had different axioms on the matter, and (2) it didn’t appear relevant to the vector space discussion.

      As I mentioned, I’m writing some posts this month on consciousness topics, and those would be a fine place to debate the axiom that an IPO module is conscious.

      “Instead I maybe should have aimed it more towards a ‘dedicated memory space’.”

      We already did that when we moved from the screen to the video memory. It doesn’t get around the problem so long as you’re talking about small groups of neurons.

      As you saw, 8 megs of video memory blows the doors off anything 500 neurons can do. Or even 2000 neurons.

      Simple information theory: what 8 megs of RAM can do requires many times more than 8-million neurons, because neurons are simpler than a RAM location. You were granting neurons five states; RAM locations have 256.

      Your approach requires billions of neurons. That’s the math.
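      (The RAM-to-neuron equivalence is easy to check; a sketch, assuming five-state neurons:)

          import math

          ram_bits = 8 * 1024 * 1024 * 8     # 8 megs of RAM, in bits
          bits_per_neuron = math.log2(5)     # ≈ 2.32 bits per five-state neuron
          print(ram_bits / bits_per_neuron)  # ≈ 28.9 million neurons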

  • JamesOfSeattle

    But Wyrd, I don’t want to face your conclusions. I want to face your arguments. And sorry to be ad hominem, but I don’t know how else to take the statement “it’s not something I’ll ever agree with”, other than as saying the door is closed regardless of the argument.

    And if you are willing to agree that consciousness is more or less made of IPOs, that all of figuring out consciousness is about figuring out the particular IPOs involved and how they interact, then that much of my work is done and I’m happy. [suspecting this is not the case] If you want to say a single H2O molecule is not water, that’s fine. But some people will take the attitude that one molecule is water, and there is no truth of the matter between those.

    Regarding the math: the question isn’t whether 8 megs of RAM can do more than 500 neurons. The question is whether 500 neurons can do what they need to do. If so, then using 8 megs of RAM for the same thing is just overkill.

    What the 500 neurons need to be able to do is represent concepts one at a time. The question then is how many different concepts can 500 neurons possibly represent, one at a time. And the (naive) answer is 5^500. But that assumes perfect accuracy, which is obviously unlikely. So how do we manage noise? I don’t know the specific answer to that, but let’s say after taking noise into account, the 500 neurons can distinguish, say, 5^100 separate concepts. I’m pretty sure that’s still a big number.

    Don’t know if this is the right time to talk about this, but at some point we will talk about combining concepts. So one configuration of the 500 neurons can represent “green”, and a different configuration can represent “ball”, and a different configuration can represent “base” (as in baseball base), and a different configuration can represent “green+(base+ball)”, or “green baseball”. You can get to a single concept “Subject:John+Object:Sushi+Verb:eats”, which is a single configuration of the 500 neurons. Does that change anything? [BTW, this is not coming out of my head. This is the Semantic Pointers stuff.]
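    [A toy sketch of that binding idea, using circular convolution as in holographic reduced representations, the math behind semantic pointers. The vectors and the dimension here are made up:]

        import numpy as np

        D = 500                                  # one concept = one D-dim vector
        rng = np.random.default_rng(0)
        vec = lambda: rng.normal(0.0, 1.0 / np.sqrt(D), D)

        def bind(a, b):
            # Circular convolution combines two concept vectors into a single
            # vector of the same dimension.
            return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

        def inverse(a):
            # Approximate inverse under circular convolution.
            return np.roll(a[::-1], 1)

        green, base, ball = vec(), vec(), vec()
        green_baseball = bind(green, bind(base, ball))  # a single configuration

        # Unbinding recovers a noisy copy of "green" from the compound:
        recovered = bind(green_baseball, inverse(bind(base, ball)))
        cos = recovered @ green / (np.linalg.norm(recovered) * np.linalg.norm(green))
        print(cos)  # well above chance (chance similarity ≈ 0)

    [The compound lives in the same 500-dimensional space as its parts, which is what lets one configuration of the group stand for “green baseball”.]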

    *

    • Wyrd Smythe

      “I want to face your arguments.”

      Which you have been. Every reply I’ve made has contained them.

      As far as challenging my axioms, you’re welcome to try. (I don’t mean that in the sarcastic sense, but in the literal welcome sense.) But I’m not new to this topic by any stretch, and I’ve heard the arguments, and drawn my own conclusions.

      “And if you are willing to agree that consciousness is more or less made of IPOs, that all of figuring out consciousness is about figuring out the particular IPOs involved and how they interact,”

      As you anticipate, I don’t agree to either of those premises. 🙂

      I think a structure of IPO modules is certainly a part of conscious structure. They may be more virtual or functional than physical, but (as I said) IPO is a generic concept that applies to any system that processes data. So of course they are an aspect of consciousness.

      “If you want to say a single H2O molecule is not water, that’s fine. But some people will take the attitude that one molecule is water, and there is no truth of the matter between those.”

      In your previous comment you wrote:

      “A molecule of H2O is not water. It doesn’t freeze, it doesn’t flow, it has no surface tension. A molecule of H2O does nothing that water does.”

      I was just agreeing with you. The misfortune is it works against the point you’re trying to make. 😉

      “Regarding the math: the question isn’t whether 8 megs of RAM can do more than 500 neurons. The question is whether 500 neurons can do what they need to do. If so, then using 8 megs of RAM for the same thing is just overkill.”

      As I’ve shown, 500 neurons amounts to 150 bytes of RAM. I find it hard to believe my moment-to-moment mental concepts fit in 150 bytes.

      The point about 8 megs was that it is limited, yet it’s much bigger than 500, so if 8 megs is limited, 500 necessarily is that much more limited. You can’t get around that.

      “The question then is how many different concepts can 500 neurons possibly represent, one at a time.”

      Yes, and the more important question is what concepts actually fit in 500 neurons (150 bytes)?

      I’m saying individual concepts can’t possibly fit in such a small space.

      “[S]ay, 5^100 separate concepts. I’m pretty sure that’s still a big number.”

      5^100 = 7.8886×10^69 … about 29 bytes.

      It’s a 29-byte number, which is a pretty big number, but you really think mental concepts fit in 29 bytes?

      The thing is, when I thought we were talking whole cortex, or at least prefrontal cortex, I was with you. When you made it clear your inner “screen” was only 500 neurons you lost me. That doesn’t match anything I know about information theory.

      “BTW, this is not coming out of my head. This is the Semantic Pointers stuff.”

      Understood. Let me add some thoughts in a new comment below. It might be worth starting a new post (if I can think what to write) to start a fresh thread.

      See you below.

      • JamesOfSeattle

        [not sure if I should wait … not waiting]

        When I said “a molecule of H2O is not water”, I thought I was making a patently false statement. I didn’t realize at the time you might consider it a true statement. My bad.

        Regarding all the rest: I think you are missing the concept of a symbol. The word “dog” is a symbol. It does not contain all the information that goes along with the concept of dog. It doesn’t have to. It refers to the concept. It “means” the concept. It “points at” the concept.

        Likewise, a single configuration of the 500 neurons can be a symbol. If it is, it refers to a concept. It “means” the concept. It represents the concept.
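        [A crude sketch of the point, if it helps. The patterns are invented; what matters is that the symbol refers rather than contains:]

            # A symbol needn't contain what it means; it only has to be
            # reliably mapped to it by some interpreting mechanism.
            concepts = {
                (1, 4, 0, 2, 3): "dog",                # a firing pattern over
                (0, 0, 4, 1, 2): "Jennifer Anniston",  # five-state "neurons"
            }

            pattern = (1, 4, 0, 2, 3)  # the group fires "this way"...
            print(concepts[pattern])   # ...and the mechanism reads "dog"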

        *

      • Wyrd Smythe

        “When I said ‘a molecule of H2O is not water’, I thought I was making a patently false statement.”

        Ah,… you meant to say a molecule of H2O is water? Despite having none of the properties of water?

        How is that possible?

        “I think you are missing the concept of a symbol.”

        Dude. I’m a retired software designer who studied computer science. I know what symbols are. 😛

        My reply about vector space got long, so I’m turning it into a post for elbow room and a fresh start. Should be posting in the next hour or so.
