Failed States (part 2)

This is a continuation of an exploration of an idea by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I’m trying to better express ideas I first wrote about in these three posts.

The previous post explored the state vector part of a CSA intended to emulate human cognition. There I described how illegal transitory states seem to violate any isomorphism between mental states in the brain and the binary numbers in RAM locations that represent them. I’ll return to that in the next post.

In this post I want to explore the function that generates the states.

As a quick review, I introduced three systems:

  1. FSA:Brain — essentially, the dynamic living brain itself.
  2. CSA:Brain — a program; a state-driven brain simulation.
  3. FSA:Computer — a machine that can run CSA programs.

The third item is a computer that we’ll stipulate as up to the task of running a CSA program. There is nothing particularly remarkable about it.

Here I want to consider the second item, CSA:Brain. We also stipulate it is up to the task — specifically, that it is software that successfully calculates successive brain state numbers in a memory array.

The question I want to consider is how CSA:Brain must work (per computational abilities such as we know them today).

§ §

Last time I considered the result of CSA:Brain: the changing values in the CSA state vector. These are meant to represent physical brain states.

The problem is the many billions of illegal transitory states that necessarily occur in the state vector. If the state vector is meant to represent a mental state, then the system is generating many billions of invalid mental states for every valid one.

This seems to undermine the claim that the state vector represents the causal behavior of the system (since state N does not immediately lead to state N+1, as the CSA requires).

What makes the CSA system different from a Dancing Pixies situation of appropriate — but not causally connected — states is the causality behind each state change.

That causality has to reside in the function that generates new states. To be more precise, it must reside in the combined system of FSA:Computer running CSA:Brain. I’ll look at the system next time; here I focus on the function that calculates states.

Let’s call it fBrain.

This function is the core (and possibly bulk) of CSA:Brain. It takes a state (the current one) and generates a new state (the next one). We could notate that like this:

fBrain(N) ⇒ N+1

The idea is that, given some current “mental state” (set of numbers in the state vector) it can generate the next “mental state” (a different set of numbers in the state vector).
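In programming terms (a minimal sketch; the type and function names here are mine, not Chalmers’), fBrain is just a function from one state vector to the next:

    # Hypothetical signature only; the entire question is what goes
    # in the body. Each element of the vector is one substate.
    StateVector = list[int]

    def f_brain(current: StateVector) -> StateVector:
        """Given the current mental state, produce the next one."""
        ...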

How can it do that?

§ §

One approach is to simulate the behavior of the brain.

So, for each “neuron” the function finds all the neurons that connect to it and uses their states to figure out what this neuron should do.

This means fBrain has to have a map of neuron connections. The state vector is just a list — it has nothing to say about which neuron connects to another.
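As a minimal sketch of this approach (the toy threshold model and connection map are mine, purely illustrative, and not a claim about real neurons):

    # The connection map is extra information: the state vector alone
    # says nothing about which neuron feeds which.
    THRESHOLD = 2
    CONNECTIONS = {          # neuron index -> indices of its inputs
        0: [1, 2],
        1: [0, 2],
        2: [0, 1],
    }

    def f_brain(current):
        """Compute the next state vector neuron by neuron."""
        nxt = []
        for n in range(len(current)):
            drive = sum(current[i] for i in CONNECTIONS[n])
            nxt.append(1 if drive >= THRESHOLD else 0)
        return nxt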

This approach highlights how the state vector is just memory used by fBrain to do its work, which seems to undercut the idea of a CSA.

In a CSA, the state vector as a whole determines the state, which implies that fBrain is supposed to use the entire vector as a “key” to “unlock” the next state. Otherwise why talk about Combinatorial-State Automata at all?

The approach I’ve just described is a brain simulation. It potentially generates successive “state vectors” representing neuron changes, but there is no real sense of a finite-state automaton in how it behaves.

This really isn’t a CSA.

§

So let’s try to get closer to an actual CSA.

Let’s swing the pendulum all the way. In this approach, fBrain takes the current state vector as a key (with 86 billion parts!) and uses it to find the next state in a database.

A very, very large database.
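A sketch of that version (toy entries, of course; a real table would need a row for every reachable combination of those 86 billion substates):

    # Pure lookup: the whole current vector is the key, and the
    # "database" holds exactly one successor for it.
    TRANSITIONS = {
        (0, 1, 0): (1, 1, 0),
        (1, 1, 0): (0, 0, 1),
    }

    def f_brain(current):
        return TRANSITIONS[tuple(current)]   # no calculation, just lookup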

But in this case fBrain isn’t doing any kind of calculation we might see as causal — it’s just using a given key to look something up. (In some regards, it’s Searle’s Chinese Room on a very fine granularity.)

So this approach also seems to lack the causal topology of the brain. It’s more like running a movie. The only reason one frame follows another is because one frame follows another — the causality is imposed by simple sequence.

A further problem is that the database has only one reply to any given key.

§ §

There’s something I haven’t touched on yet: What drives the system from state to state?

On the one hand, our mind kinda runs on its own doing its thing. We can imagine someone in sensory deprivation having thoughts with very little input. On the other hand, our world is filled with external inputs, and even our own thoughts are a kind of input to further thoughts.

In a state-based system, something has to initiate going from one state to the next.

In the last post I mentioned the idea of slicing states based on clock ticks. If we did that, then clock ticks would drive the system from state to state.

In the state-based systems I’ve designed, it’s typically inputs that drive the system. The content of that input determines which of the possible next states the system moves to.

As a simple example, if an email address parser is in the state of receiving characters from the name part, an at-sign (“@”) in the input steers the system into a state of receiving host name characters.

An important point here is that, in the first state, the system has three legal transitions it can take, depending on the input: (1) another name character (stay in the same state); (2) the “@” symbol (move to the host-name state); (3) an illegal character or unexpected end of string (move to an error state).

So a state system is a directed graph of states and transitions. There always has to be some event that moves the system from one state to another.
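A toy sketch of that parser as an explicit transition table (simplified; a real address grammar is far messier):

    # The machine knows all its states up front; input merely selects
    # among the legal transitions out of the current state.
    TABLE = {
        ("NAME", "CHAR"): "NAME",   # another name character: stay
        ("NAME", "AT"):   "HOST",   # "@" steers into the host-name state
        ("HOST", "CHAR"): "HOST",
    }                               # anything else falls into ERROR

    def event(ch):
        if ch == "@":
            return "AT"
        return "CHAR" if (ch.isalnum() or ch in ".-_") else "BAD"

    def parse_email(text):
        state = "NAME"
        for ch in text:
            state = TABLE.get((state, event(ch)), "ERROR")
        return state == "HOST"      # ending mid-name is also illegal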

§

Another thing, again based on systems I’ve designed: state-based systems, by necessity, know all their states in advance.

They do not calculate the next state; they use input to determine which of the possible legal next states the system moves to (as the simple email parser example shows).

So the CSA system Chalmers describes doesn’t actually fit this model. His version seems to assume new states are calculated. Despite framing it as a CSA, what he describes comes off like it could be a brain simulation rather than a finite-state automaton.

Or, if not, then all we’re doing is rolling off existing states depending on some input (which might just be clock ticks).

Again, I have a hard time seeing the organizational invariance here. These are radically different systems. (But the claim involves the abstractions of these systems; I’ll get to that next post.)

§ §

Fundamentally, we want something that, given the current mental state, reacts to some kind of input, either internal or external (or timing).

We need a notion of: in this state, given this input, go to that state.

We can do that by simulating the neurons, or we can do that by assuming we know all the states and looking up the next one (as state-based systems usually do).

What we’re really going for here is a stream of states in the state vector that represent mental states such that, if we hooked them up to the right interpreters, they could generate muscle outputs (just like the brain).

Simulating the neurons is certainly a viable approach. (One I have doubts about, but I’ve covered that extensively.) But that’s not what Chalmers seems to suggest here.

§

A tricky aspect here is the distinction between what I’ve called system states and states of the system.

What Chalmers seems to be reaching for is a system that generates (new) states of the system — that is, the states of a dynamic system in operation.

What state-based systems usually do is move among predetermined system states (the FSA of the system) to generate output states of the system.

The difference is subtle because both sets of states are the same. But a given system state occurs only once in the state table (system states always have a state table or diagram). Contrariwise, states of the system can repeat, because the operation of the system can take it to the same state repeatedly.
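A tiny illustration, reusing the toy parser states from earlier (purely illustrative): the system states form a fixed table in which each state appears exactly once; the states of the system form a trace over time in which states can recur.

    SYSTEM_STATES = {"NAME", "HOST", "ERROR"}    # each defined once
    trace = ["NAME", "NAME", "HOST", "HOST"]     # a run; states repeat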

So I think there might be some confusion (perhaps on my part) as to exactly what the putative CSA:Brain is supposed to be.

On the one hand, it could be a list of states of the system that could be played back to recreate that same line of cognition.

On the other hand, it could be a finite set of system states and transitions that define the system. This is what Chalmers seems to mean.

But that requires fBrain to move the system from state to state, and it’s very hard for me to conceive of what fBrain might be (even in principle).

§

But let’s set that all aside and assume fBrain is possible.

Let’s assume there is a middle ground between looking up states and generating them based on simulating brain physiology. Let’s assume there is some function that analytically determines the next vector state.

Can it then be true that:

FSA:Brain ≡ FSA:Computer(CSA:Brain)

This turns on the claim that organizational invariance preserves the causal topology. And it involves the abstractions of those two systems being seen as matching.

I’ll pick things up there next time.

Stay invariant, my friends!


21 responses to “Failed States (part 2)”

  • Wyrd Smythe

    [sigh] So many wyrds, and I’m not sure I’m making this as clear as it should be. I keep getting lost in the finer points, but some of this is kinda nuanced. Explaining what I’m seeing is a bit like those theoretical physics books: authors have to start with detailed explanations of the supporting context before they get to their thesis. (As a result one ends up reading the same physics explanations over and over.)

    And, to some extent, this is just me working out my own thinking. These posts are almost a form of working notes. This series gets closer to where I’m trying to get than the first one. Maybe somewhere down the line I’ll finally get it right. 🙂

  • Wyrd Smythe

    While reading GEB, I have to admit that sometimes Hofstadter’s little puzzles really give me a headache. I have a feeling all the time I spent trying to produce “MU” was wasted, because I have a feeling Hofstadter is going to eventually announce it’s not a theorem.

    Or the recent word games. Trying to think of a word that contains “ADAC” was one, and there was a parallel one involving a word that both starts and ends with “HE” (and while the word “he” technically qualifies, it’s considered a degenerate case and thereby disqualified).

    As I said, sometimes he gives me a headache, especially when it turns out the answer was in plain sight all along.

  • SelfAwarePatterns

    I can certainly understand the difficulty of finding the right explanation. I sometimes mull things for years before finding it. There are some I wonder if I’ll ever find. My drafts section is filled with false starts, not to mention Google docs. It’s the nature of philosophical concepts that the limitations of language are an issue.

    GEB sounds like entirely too much work. What is “MU”?

    • Wyrd Smythe

      I’m an old fart, so for me it’s local text files and hand-written notes, but, yeah, likewise. This little three-post exercise (last post tomorrow) ends up back at square one: It all boils down to the same question about simulation versus original system.

      In particular, two key questions remain in tension: (A) Why wouldn’t a detailed enough simulation work? (B) Simulated “X” isn’t “Y” (for various values of “X” and “Y”). I imagine they’ll remain in tension until we actually build a good enough simulation. [shrug]

      GEB is a bit of an interactive book if you follow Hofstadter’s lead. I usually don’t, but some have caught my eye, like that little word game with “ADAC” (or the one with “HE”). The “MU” thing is something else. Hofstadter is setting up to explain Gödel — one third of his “braid” — so he’s explaining formal systems, one of which he calls “MIU”.

      It has the alphabet “M”, “I”, “U”, a single axiom, “MI”, and the following production rules:

      1. xI → xIU
      2. Mx → Mxx
      3. III → U
      4. UU → null

      And then he asks if it’s possible to produce “MU”. I only spent 20 minutes trying, but I get the impression it’s not a valid theorem. Or it’s one with a very long production.

      (I’m semi-tempted to whip up a quick Python script to automate productions and see if it can be found. But, as Hofstadter has already pointed out, there’s a bit of a Turing halting problem, due to how the production rules can shrink the string. More likely I’ll wait for his big reveal. 🙂 )
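      If I did, something like this breadth-first sketch would be the idea (a length cap stands in for that halting issue, since the rules can grow strings without bound):

          from collections import deque

          def successors(s):
              """Apply each MIU production rule everywhere it fits."""
              if s.endswith("I"):                  # rule 1: xI -> xIU
                  yield s + "U"
              if s.startswith("M"):                # rule 2: Mx -> Mxx
                  yield "M" + s[1:] * 2
              for i in range(len(s) - 2):          # rule 3: III -> U
                  if s[i:i + 3] == "III":
                      yield s[:i] + "U" + s[i + 3:]
              for i in range(len(s) - 1):          # rule 4: UU -> (nothing)
                  if s[i:i + 2] == "UU":
                      yield s[:i] + s[i + 2:]

          def search(target="MU", max_len=12):
              seen, queue = {"MI"}, deque(["MI"])
              while queue:
                  s = queue.popleft()
                  if s == target:
                      return True
                  for t in successors(s):
                      if t not in seen and len(t) <= max_len:
                          seen.add(t)
                          queue.append(t)
              return False     # not found under the cap; proves nothing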

      • SelfAwarePatterns

        LOLS! I’m not that much less of an old fart. But I use multiple machines, which drives me to put my musings in places I can access from all of them. Not that I’m organized about it or anything.

        On the simulation, yeah, it all comes down to what needs to be reproduced. The real question is which theory of mind is more accurate.

        GEB is definitely too much work for my tastes. 🙂

      • Wyrd Smythe

        Yeah, age-wise there may not be much difference, but I have pronounced Luddite leanings, hence still often using pen and paper. (And lately I’ve been focusing on handwriting for its mental acuity aspects. Aging brain!) I only got my first smart phone last Friday. 😀

        “GEB is definitely too much work for my tastes.”

        I’m not much of a fan of reader exercises, either. I usually ignore him when he suggests I stop and figure something on my own (unless it really catches my eye).

        Unrelated to any of this, but you may appreciate the story…

        Just now I was reading my news feed, looking at some stories I’d saved for later reading. One was about the quantum physics of fireworks. I like both those things a lot. (I actually subscribe to a YouTube channel that posts high-quality videos of Japanese fireworks. Quite spectacular some of them.)

        So I’m halfway through reading this article and it suddenly occurs to me: “Man,… this really feels like an Ethan Siegel article!” I keep reading; the feeling remains. I get to the bottom and see “More on Forbes” and think, “Ah, I’ll betcha…” I scroll to the top, and sure enough.

        Talk about pattern recognition. 😀

        And it’s the second time in a week my brain has picked out a pattern that really surprised me. The other was while having a ballgame on in the background while I did other stuff. I happened to hear a guest sportscaster say something I thought rose to “Tim Laudner levels of obvious.”

        Tim Laudner is a member of the crew that broadcasts Twins games. He’s been there for many years, but always in post-game or pre-game scenarios. He’s a sweetheart of a guy, a former Twins catcher, but seems unable to ever say anything that isn’t blindingly obvious. Weirdly noticeably so.

        “The Twins will really have to bring their ‘A’ game if they wanna win against the Yankees.”

        Oh, gee, Tim, ya think? 😀 😛

        But he’d never been in the booth in all these years (and I’m guessing they’ll never have him do another stint). But I heard an obvious phrase and thought that thought.

        A little later another obvious phrase caught my attention, and this time I thought the voice was deep enough it might actually be Tim. I started watching the TV, which had closed captioning on, and in a while they stuck a name in front of the text. Sure enough: “Tim: …” And by then, paying attention, it was certainly him.

        Very weird that one sentence caused a flag in my mind. A second one, plus a slight voice match, caused a stronger one, but that first one impressed me. (Although having the thought turn out right might have marked the event in a way that others that turned out wrong didn’t. There could be an element of self-selection here. But twice in one week seeing the power of pattern recognition was kinda fun.)

        Cool how the mind works!

        Oh, and another unrelated thought that’s been in the back of my mind since we talked about whether dreams are remembered memories or happen in real time.

        Have you ever been, say kicking back in a hammock on a nice day just watching clouds and thinking, and you drift off to sleep without actually realizing it, so your thoughts go from orderly structured day dreams to disordered unstructured actual dreams, but in a smooth transition. Suddenly you realize your day dreaming isn’t under your conscious control and you’re dreaming (’cause shit got weird) — which wakes you up?

        Happens to me all the time. It might be another data point in the real-time nature of dreams. That transition from controlled day dreaming, which obviously is linear and real-time, to dreaming, which isn’t, and being startled awake to realize only a few moments have passed. FWIW.

      • SelfAwarePatterns

        The predicting brain in action! It’s amazing how little our brain needs to predict a complete pattern. And much of it often happens below the conscious level. But you’re right, we have to be careful about confirmation bias. Sometimes the pattern detector is wrong.

        I remember one day several years ago when I was going out to my car, I had a brief feeling of unspecific dread. It passed quickly, and I got in my car and drove away. About two minutes later someone plowed into the side of my vehicle, totaling it. Of course, I remembered the feeling of dread that time and don’t think about all the times I’ve had similar feelings that fail to correlate with anything. Still, I remember it as a visceral reminder of why many people see such patterns.

        On dreams, I read something the other day about how to realize you’re in a dream. The best sounding advice, to me, was to get into the habit of checking your reality (they recommended covering your nose and trying to blow out, but I think the old fashioned pinch might work too), so that when you’re in a dream, you’ll do it by habit and realize you’re dreaming. I’m doing the check whenever I think about it, waiting to see if I’m able to remember it in a dream.

      • Wyrd Smythe

        “It’s amazing how little our brain needs to predict a complete pattern.”

        It likely accounts for why we see things in clouds. (I recently read an article about how Philip K. Dick once saw a giant face of evil that affected him profoundly. IIRC, it caused him to undergo a religious conversion.)

        “Still, I remember it as visceral reminder why many people see such patterns.”

        Very true; I’ve got a couple of those myself: coincidences that could be read as something else.

        It did happen once that I got a strong feeling I should call a friend of mine. And that isn’t, and wasn’t, the sort of thing that happens to me. I don’t get feelings like that. But this time, out of the blue, really strong: call Valerie.

        So I did and found out she’d just undergone a profoundly life-changing event (discovered she was gay). She was in emotional turmoil, and my call was a life preserver.

        My ontology does include the possibility that minds that interact a lot might “entangle” and remain connected. It’s fringe, but there are odd accounts that make me wonder sometimes.

        “The best sounding advice, to me, was to get into the habit of checking your reality…”

        I heard once that, supposedly, you can’t read in dreams, so one way to check is to try to read something.

        My first lucid dream I was able to conjure up some papers in my hand, but somehow in the dream I then ignored them — never really looked at them.

        The second one I recall reading writing on crates, and the writing seemed to make sense, but I couldn’t recall what I read. Mostly, in that one, I remember using my dream power to hurl crates against the wall or crush them in place (without touching them). 😀

      • SelfAwarePatterns

        “It’s fringe, but there are odd accounts that make me wonder sometimes.”

        Not saying this is what you had, but one thing I think can be a factor is that we often figure something out unconsciously, which then surfaces in our consciousness as an unexplained intuition.

        “I heard once that, supposedly, you can’t read in dreams, so one way to check is to try to read something.”

        I hadn’t heard that one. Someone told me they once pinched themselves in a dream, but that it hurt in the dream. The one time I remember thinking to pinch myself, I had a very hard time actually doing it. I can’t remember what was stopping me, but it seemed like it took a lot of effort. When I finally did it, sure enough I felt nothing. But then I “woke up” from the dream. However, I was still dreaming. I eventually suspected as much, which finally led to me really waking up. It’s the only time I recall realizing I was in a dream, or dreaming I woke up.

      • Wyrd Smythe

        “Not saying this is what you had, but one thing I think can be a factor is that we often figure something out unconsciously, which then surfaces in our consciousness as an unexplained intuition.”

        Absolutely, and usually I can account for weird things like this that way. This time, there was no background information, and I hadn’t talked to her in a while, so it was really “out of the blue” both in being (ultimately) new information and in getting that feeling I should call.

        When I called, it was about an hour after the events that had rocked her world, so very weird situation.

        Once I was looking out a window and saw someone I knew (gal who cut my hair — she worked downstairs from where I worked then) walk from her car into the building. I realized I needed to go see her, but that was a case of reading posture and gait clues, I’m sure. (I was looking down from the second story and just saw her for a few seconds, but pattern recog kicked in big time there.)

        ((Stuff like that is how I know that whatever is wrong with my brain, it’s not traditional autism, but some other kind of disconnect.))

        “I had a very hard time actually doing it.”

        Like me with the papers I conjured. (It was more like I thought, “I could be carrying papers,” and then I was, but I couldn’t manage to read them.)

        “But then I “woke up” from the dream. However, I was still dreaming.”

        Ha, yeah, I’ve had that happen, too. Sometimes those intermediate dreams are so realistic it’s a shock realizing you were still dreaming. Last time that happened, I dreamed I woke up and heard someone prowling around my living room.

        It all ties into phenomenology, doesn’t it. Experiencing things that aren’t real, but seem so real. What does that say about our (supposedly) real experiences.

      • SelfAwarePatterns

        On trying to do things in dreams like my pinching myself or you reading, I wonder if the fact that the frontal lobes are inhibited has something to do with that. Maybe our ability to will things in a dream is compromised in some manner.

        “Experiencing things that aren’t real, but seem so real. What does that say about our (supposedly) real experiences.”

        I saw this tweet of someone at a conference taking exactly that stance:

      • Wyrd Smythe

        “Maybe our ability to will things in a dream is compromised in some manner.”

        Interesting point. You’d assume the brain “knows” about the inhibition, so maybe that does feed back. Isn’t there a common dream people have involving various inabilities to move? Wonder if that might be connected. (I never have the canonical dreams — naked in public, not studying for the test, etc.)

        Come to think of it, isn’t there a state people can wake up in and find themselves unable to move at first? Like the inhibition isn’t gone, yet, or something?

        “I saw this tweet of someone at a conference taking exactly that stance:”

        “Consciousness is for generating representations disconnected from the present environment”

        Oh, interesting way to look at it! I like that idea. Ties in with the whole modeling futures and selecting between them thing.

        There is also some analysis involving the “green flash” you can see after staring at red and then looking away — an after-effect of the vision system recovering. There is no green, obviously, so it supposedly raises questions about phenomenal experience.

        But it seems more like a system or signalling issue to me. I never saw much value in optical illusions or hallucinations. I do like the idea in that quote, though.

      • SelfAwarePatterns

        “Ties in with the whole modeling futures and selecting between them thing.”

        I see it as really a description of imagination, but I think imagination is central to our subjective experience. If we’re not actively imagining the current situation, then we’re not really conscious of it, acting on habit or instinct instead.

        “There is also some analysis involving the “green flash” you can see after staring at red and then looking away”

        In the stuff I’ve been reading, this comes down to the color opponency. We see green after being saturated with red, because the sudden absence of red seems like a green stimulus to our retinal ganglion cells. For the ganglion cells, it comes down to differentials.

        I’ve still been trying to find something on how the ganglion cells “know” which axon is a red vs green vs blue one. Not sure if it’s just something we don’t know yet, all the writers regard it as obvious, or it’s so hideously complicated they just don’t cover it. (I run into all three scenarios with this stuff.)

      • Wyrd Smythe

        “I see it as really a description of imagination, but I think imagination is central to our subjective experience.”

        Very much so. I’ve been trying to get meta with regard to how I make choices — trying to watch myself making them. It really does seem to involve imagining, or predicting, different futures and deciding which appeals most.

        The problem of epiphenomenalism has always lingered in the background for me, but recent discussions and articles have brought it into the foreground a bit more along with thinking about attention and perception.

        (It may be another unsolvable one. Our intuitions about choice and free will versus a first-order physical account, especially with regard to determinism (which I’m not entirely sure I accept — it’s true at the quantum level, but is it really true at higher levels?).)

        I’ve been thinking about experiments that briefly display a grid or circle of objects, numbers, or letters. Subjects are asked how much they can remember (only 3-4 items typically). But added cues focusing attention on sub-sets of the display result in better reports.

        So it seems to show a divide between what we perceive at low and high levels. It doesn’t seem that mysterious to me. I think we need time to identify what we’re seeing — certainly when it’s suddenly flashed in front of us.

        It’s one thing to process a visual stream in context, right? Moment to moment information builds a model that each new moment doesn’t require all that much modification of. Each “frame” doesn’t add a lot of new information.

        Flash a picture seen for the first time, and the whole thing has to make its way up through the brain’s analytical system to reach a level of understanding. And I think it’s what we understand that we can remember and report. There doesn’t seem a lot of memory at the low levels. (They may be entirely preoccupied with conditioning the raw inputs.)

        Given that you and I have pretty much exhausted the hard problem or computationalism, it gives us some new material. 😉

        “In the stuff I’ve been reading, this comes down to the color opponency.”

        Absolutely. I see it as fully explained by the facts of the system. (And I don’t see it as having much to do with the “something it is like” to either experience red or the green after-flash. Or any optical illusion.)

        “Not sure if it’s just something we don’t know yet, all the writers regard it as obvious, or it’s so hideously complicated they just don’t cover it.”

        Heh, yeah, I know what you mean.

        WRT the L, M, and S cones: Where is distinct color information fully encoded into some combined signal? Does any distinct color information go as far as the visual cortex (or is it all combined and encoded by then)?

        Are there any studies that examine the brain while the subject is seeing only red light (or green light or blue light)?

        I keep thinking about those monkeys. I’m caught between two scenarios.

        A. The color wiring is essentially random with different cone sets causing different firing patterns that we identify as subjective color. That suggests the monkeys might see a new color when neurons that used to fire together now differentiate into “red reacting” and “green reacting” neurons.

        B. The color wiring is somehow predetermined enough that there are “red channels” (and green and blue). There is a “leg up” on the wiring architecture such that color identification “comes with” the system. (You’ve talked before about how we’re not born entirely tabula rasa — color could be part of “O/S”.) And that suggests the monkeys are now seeing two shades of green.

        Wish we could ask them. 😀

        (The original study raised hopes for treating color blindness in humans, but I’m not aware of any progress along those lines in the ten years since the original article.)

      • SelfAwarePatterns

        On epiphenomenalism, I still can’t get past the fact that we can discuss experience. Although someone recently pointed out to me that I keep asking people to describe what they mean, and they can’t, meaning that maybe we can’t discuss it. Still, our motivation to try seems inconsistent with it.

        On determinism, it’s at the quantum level where I’m not sure if it exists. (Obviously it depends on which interpretation we favor, and I vacillate.) As we scale up, I become pretty sure of its reliable existence. QM seems so bizarre to us precisely because it’s so different than classical level physics.

        On quick pictures, Dehaene reported that we can often unconsciously identify shapes we only see subliminally. It can affect subsequent deliberations, but it’s just not enough to engage the full system. It appears to stay local. (In his parlance, it doesn’t enter the global workspace.)

        “Given that you and I have pretty much exhausted the hard problem or computationalism, it gives us some new material.”

        It somewhat gets us into the problem of demarcation, that is, what is or isn’t consciousness. Block says it is. Dehaene regards it as the unconscious. I tend to agree with Dehaene. It seems strange to call something we can’t remember, introspect, or self report on, as conscious. But Block’s conception of consciousness is different than mine.

        “Does any distinct color information go as far as the visual cortex (or is it all combined and encoded by then)?”

        From what I’ve read, the topological relationships among the retinal ganglion cells are preserved through the optic nerve, the LGN in the thalamus, and into the V1 area in the visual cortex. After that, things start getting transformed as subsequent layers become more selective in what excites them.

        Color seems to come from the differential patterns that the ganglion cells encode in their firing patterns. Reading this stuff, I’m gradually starting to see why most neuroscientists resist calling the cones “red”, “green”, or “blue” cones, preferring L, M, and S cones.

        But the question, where does red become red, doesn’t appear to be known yet. Both V2 and V4 appear to be involved in color. People with lesions in V4 can have their view of the world reduced to black and white. V4 is obviously crucial, but exactly why isn’t understood yet.

        “Wish we could ask them.”

        I do too, although even then we’d still wonder exactly what they are seeing. Many people are red-green color blind due to the M and L cones overlapping too much. When they put on special glasses that filter out intermediate frequencies so that there’s differentiation between the M and L cones, they suddenly can discriminate a lot more colors. But when I watch them closely in the videos, it’s not clear whether they’re actually seeing the same colors we see. In fact, the guy in the video at this link even appears to have his perception of blue affected.
        https://www.allaboutvision.com/conditions/color-blind-glasses.htm

      • Wyrd Smythe

        “On epiphenomenalism, I still can’t get past the fact that we can discuss experience.”

        Or that we can buy aspirin.

        Trying to read that Chalmers book, I’m reminded why I never finished it. So much of his argument turns on counterfactuals that increasingly seem like science fiction to me, and I’ve never viewed science fiction ideas as strong arguments about what is possible.

        I mean, seriously, what am I supposed to make of logically coherent ideas that are metaphysically ridiculous? It irritates me, because once I strip through all the hand-wavy SF-based arguments, it keeps boiling down to the same old question: Why experience?

        So my mind drifts as I try to at least get through a couple of key chapters, and amid the talk of “pain” versus C-fibers, it occurred to me we have a whole industry that makes, distributes, and sells OTC pain-killers.

        Because people experience pain and apparently it causes us to create what turns out to be a huge industry based on addressing that need. (A feature that would be weird to find on any zombie world. Why would zombies need pain-killers?)

        “Although someone recently pointed out to me that I keep asking people to describe what they mean, and they can’t, meaning that maybe we can’t discuss it.”

        They can’t give an account you accept, which isn’t quite the same thing. I gave you a fairly long list of things we experience, but how we view such reports is another matter.

        (It may be there is no other general topic where careful definition of words is so important before any progress is possible. “Consciousness.” “Experience.” What a minefield.)

        ((Speaking of materialism, my drifting mind also thought, “If materialism is true, then it cannot be said Mary is in possession of all the facts about seeing red, because the actual mental states involved in seeing red are, under materialism, a physical fact Mary cannot experience in the room.” Thus the argument has no power.))

        “On determinism, it’s at the quantum level where I’m not sure if it exists.”

        In the measurement sense, yes, absolutely. I was referring to the conservation of quantum information (the source of the black hole information paradox).

        But exactly. Quantum interactions have an apparently random aspect that potentially makes a mockery of any causality built on it. I wonder if even at higher, supposedly deterministic, levels things aren’t as determined as they seem.

        Take pressure. Perhaps a full computational account of, say, a cylinder of gas isn’t possible, except at the quantum level, and then Heisenberg gets involved. (It’s occurred to me that, if the universe is viewed as a quantum computation, then any accurate simulation has to be at that level — which makes good simulations really big.)

        So I do wonder if systems with billions of components really are as deterministic as physics suggests.

        “Dehaene reported that we can often unconsciously identify shapes we only see subliminally. It can affect subsequent deliberations, but it’s just not enough to engage the full system.”

        So low-level processing can affect low-level decision making. I guess I’m not surprised inputs could condition unconscious processes. It seems clear consciousness rides at a pretty high level. Much never trickles up, and what does takes time.

        “From what I’ve read, the topological relationships among the retinal ganglion cells are preserved through the optic nerve, the LGN in the thalamus, and into the V1 area in the visual cortex.”

        Okay, good, that’s kind of what I thought. The monkeys had a weird topology change when a previously linked group bifurcated! (That 20 week time delay is intriguing.)

        “When they put on special glasses that filter out intermediate frequencies so that there’s differentiation between the M and L cones, they suddenly can discriminate a lot more colors.”

        I have a buddy with R-G blindness. (It’s so weird he can’t see the numbers on those test images. They stand out to me.) He got a pair of those glasses, and he mostly reports that it makes the colors more vivid. (I heard he spent some time outside checking out his wife’s Morning Glories.)

        (As you no doubt know, the M and L cones are pretty close to begin with. It’s easy to see how they could be even closer. White light is only 11% blue, and I’ve wondered if it has anything to do with the separation of the S cones.)

      • SelfAwarePatterns

        That aspirin example is a good one. I’ll have to remember it. It reminds me that one of the tests for affect awareness in animals is to see if, once injured, they initiate action (pushing a lever, etc) for analgesics.

        I definitely think Mary’s room is question begging, for exactly what you point out. Under physicalism, Mary can’t have all the information if she learns new information when she has the sensory perception. The premise becomes incoherent, only working under dualism. Like too many philosophical thought experiments, it only makes sense if you already buy the conclusion.

        “He got a pair of those glasses, and he mostly reports that it makes the colors more vivid.”

        That’s the thing. The color blind people with the glasses don’t seem to perceive seeing new colors so much as simply being able to more easily distinguish what’s already there.

        It seems like our nervous system uses colors primarily to signal distinctiveness. The color of the letters in the Chromatic induction image at this link is the same wavelength, except for the bars going across it.
        http://www.scholarpedia.org/article/Color_vision#Spatial_and_temporal_factors_in_color_vision

      • Wyrd Smythe

        “The color blind people with the glasses don’t seem to perceive seeing new colors so much as simply being able to more easily distinguish what’s already there.”

        Makes sense. That optical notch filter just takes away some of the overlap; it reduces error. I assume purples (the Morning Glories) are more vivid because he’s getting a clearer L signal. Without the glasses, the (incorrect) M signal would muddy the color.

        “The color of the letters in the Chromatic induction image at this link is the same wavelength, except for the bars going across it.”

        I can see it’s meant to be, except, I think due to JPEG processing (which would have factored in the crossing bars), the RGB values are slightly different between “Chromatic” and “Induction.” Even so, they look strikingly different with the crossing bars. Seen in color patches side by side, they’re visually a lot closer.

        (The RGB values vary a bit within each of the two colors (JPEG processing!), but two example points I picked: (48, 225, 196) versus (50, 227, 199). Very close, but distinguishably different if one is used to working with color.)

        I recently pointed someone to a Phil Plait post that explores the same topic. Plait did some image work to really bring out the trick.

        It is pretty amazing how relative our color vision really is. The whole white balance thing shows that. (I started working with color in high school — theatre lighting — and then got into film and TV, where color is even more important. Also worked in graphic arts, where color is crucial, so it’s kind of been a thing in my life. About the only place it matters any more is when I do those 3D models for fun.)

      • SelfAwarePatterns

        Ah, someone was a bit careless with the image. You made me go plop those numbers into an RGB viewer to verify they at least looked identical. But as you note, the Plait post covers the same issue.

        What’s interesting about this relative color is that we still generally perceive objects to be one particular color. Our world theory includes permanent object colors, and for the most part, it’s predictive. (Internet pictures of a dress notwithstanding.) It implies there’s a lot of revision going on in the brain before it becomes consciously available.

      • Wyrd Smythe

        That Phil Plait post was a bit of a mind-bender seeing the modified versions he created. It’s almost hard to believe the effect is that strong, but “seeing is believing” (a phrase with rather contingent meaning 🙂 ).

        “Our world theory includes permanent object colors, and for the most part, it’s predictive.”

        Certainly from a physics point of view, objects reflect specific frequencies, so that view of permanent color is correct. But it’s one of those places where how heavily we process inputs really stands out.

        We used to leverage that in theatre lighting: slowly changing the color balance so the audience doesn’t really notice it, but it sets up for a quick change later for impact.

        One of my favorites is how, in a dark room, I might notice a “white light” out of the corner of my eye, but when I turn to look (because there are no white lights there) it’s the (pure!) red LED for a device. But with no cones to speak of in the peripheral vision, it looks white.

      • Wyrd Smythe

        “Ah, someone was a bit careless with the image.”

        Yeah, many don’t realize how much JPEG compression corrupts an image. It’s tuned for natural scenes and does the absolute worst for graphic art with hard lines. It can’t handle images with high visual frequencies.

        I don’t know if you got much into my post, Fourier Curves, but I touched on how JPEGs work there and included a link to a YT video that explains it very well. It’s a weird process!
