Octopus Brains

I’ve long been fascinated by stories about octopuses. I confess I’ve eaten a few, too, and it’s obviously worse than eating dog, which I could never do. (OTOH, properly done calamari is really yummy!)

It’s not just that octopuses (and it is octopuses, by the way; the root is Greek, not Latin) are jaw-droppingly smart. It’s that their intelligence operates in a completely different brain from ours — an evolutionary branch that considerably predates the dinosaurs. It isn’t just that they have a top brain and eight satellite brains; it’s that their entire body, in some sense, and especially their skin, is their brain.

Check out this 13-minute TED Talk by marine biologist Roger Hanlon:

If you have any interest in brains or intelligence, you will definitely want to take the time. They really are astonishing.

As many have said, octopuses are very probably the closest thing we’ll come to actually meeting an alien intelligence.

The “something it is like” to be an octopus must put being a bat (a fellow mammal, at least) to shame.

I do wonder what the smarter cephalopods and cetaceans make of each other. They would be equally alien to each other.

Alternate intelligences co-existing in the briny deep! 🙂

§

If you found that interesting, you might want to read the 2017 Scientific American article by Peter Godfrey-Smith, “The Mind of an Octopus.” It’s a thoroughly enthralling read.

Comparing octopus brains with vertebrate brains:

Different animals are good at different things, as makes sense given the different lives they live. When cephalopods are compared with mammals, the lack of any common anatomy only increases the difficulties. Vertebrate brains all have a common architecture. But when vertebrate brains are compared with octopus brains, all bets — or rather all mappings — are off. Octopuses have not even collected the majority of their neurons inside their brains; most of the neurons are in their arms.

We have to go back about 600 million years to find a common ancestor. It’s thought to be some sort of flatworm with a very simple cluster of neurons in its front end that acted as a coordination center for neurons throughout the body.

From there we went our separate ways.

The author mentions that stories about octopuses leaving their aquariums to steal fish from other aquariums aren’t terribly different from their natural behavior of moving overland from tidal pool to tidal pool in search of prey.

He continues:

But here is a behavior I find more intriguing: in at least two aquariums, octopuses have learned to turn off the lights by squirting jets of water at the bulbs and short-circuiting the power supply. At the University of Otago in New Zealand, this game became so expensive that the octopus had to be released back to the wild.

As with many animals, they recognize individuals, including human individuals. They even have apparent feelings about those individuals:

In the same lab in New Zealand that had the “lights-out” problem, an octopus took a dislike to one member of the staff, for no obvious reason. Whenever that person passed by on the walkway behind the tank, she received a half-gallon jet of water down the back of her neck.

Something the author quoted really struck me (and made me a little sad):

“When you work with fish, they have no idea they are in a tank, somewhere unnatural. With octopuses it is totally different. They know that they are inside this special place, and you are outside it. All their behaviors are affected by their awareness of captivity.” ~Philosopher Stefan Linquist, University of Guelph, Ontario

Well, that kinda sucks. Dogs, at least, like being with humans and having a human home. You’d hope researchers either study them in the wild or provide nice big homes for them, not prison cells.

§

I was at an aquarium once where we could pet a small, tightly held nurse shark to see what its skin was like. We were told that, as we petted it, it was tasting us through its skin.

Octopus suckers have a similar thing going on:

For instance, in an octopus, the majority of neurons are in the arms themselves—nearly twice as many in total as in the central brain. The arms have their own sensors and controllers. They have not only the sense of touch but also the capacity to sense chemicals—to smell or taste. Each sucker on an octopus’s arm may have 10,000 neurons to handle taste and touch.

Apparently the exact relationship between the main brain and the eight satellite brains, one in each limb, isn’t well understood. The limbs clearly have some degree of autonomy.

The author describes an experiment (see the article) that sought to test how much control the main brain has over the limb brains. The result suggests a command structure with fine control (and some autonomy) in the limb:

So it seems that two forms of control are operating in tandem: there is central control of the arm’s overall path, via the eyes, combined with a fine-tuning of the search by the arm itself.

It reminds me of the Gaea trilogy, by John Varley. In that story we meet an alien life form, Gaea, an intelligent living “space station,” in orbit around Saturn.

Gaea is a hexagonal wheel. Her main brain is in the hub with six satellite brains running the six sections of the wheel. When the Earth mission first gets there, Gaea is getting old and senile, and some of her sub-brains are rebelling.

Do octopus sub-brains ever rebel? Maybe when they’re teenagers?

§

One thing Roger Hanlon talks about in the TED Talk is how, despite branching off so early in evolution, octopuses show convergent functionality with us.

We separately evolved “camera” eyes with a lens that focuses an image on a light-sensitive retina.

It appears a related cephalopod, the cuttlefish, experiences REM sleep.

But more importantly, it seems that despite their intelligence being implemented in a very different way, it still converges on certain necessities of physical reality.

This might offer some hope of being able to communicate with extra-terrestrial aliens.

An intriguing aspect of the difference involves the idea, held by some, that our physical shape is important to how we think. But an octopus has no definite physical shape, no rigid bones or joints.

On some level, their brain and their body are the same thing, and their body is entirely fluid. What might that suggest about intelligence requiring, or not requiring, a shape?

§

I once spent about 20 minutes watching a squid in the Boston Aquarium.

And he (or she, I didn’t ask) just hung there watching me back.

I can’t say there was a connection. I’m not even sure there was “someone” there behind the eyes as much as, say, with dogs or other mammals.

Of course, it’s an alien intelligence, so maybe I wouldn’t know. Maybe it was wondering how the hell it came to be held captive by such obviously stupid animals, who have a disgusting shortage of limbs and can’t even recognize the most infantile of visual signals.

§

That’s the other thing that blows me away about octopuses and cuttlefish: Their skin is a video display, and a pretty damned good one.

And they can change the 3D shape of their skin to better imitate seaweed or other surroundings. (Pity that poor cuttlefish trying to imitate a checkerboard, although it gives it a good try.)

Last night I was watching fireflies out my window, which always delights our inner child — so magical. Many creatures seem to have harnessed the ability to selectively generate light or change their skin (some lizards can do the latter, too).

Bottom line, it fascinates me that such an early evolutionary branch found a way to rise to very high intelligence. (At least on the level of dogs, if not higher.)

One way to look at it is that we’ve already experienced First Contact.

And no more octopus sushi! (I really didn’t like it, anyway. Very rubbery.)

Stay flexible, my friends!

44 responses to “Octopus Brains”

  • David Davis

    Amazing, 600m years to a common ancestor flatworm.

  • SelfAwarePatterns

    Peter Godfrey-Smith has a book, ‘Other Minds’, which has a lot of details on octopuses and other cephalopods. I’m still slowly working my way through it (although distracted by other reading material).

    I recently read about a study that showed that the arms are capable of coordination between themselves, completely independent of the central brain. Although there have been plenty of studies showing coordination between the vision processes in the brain and the arm ganglion.

    Given the arm independence, you have to wonder what that coordination is like. Adrian Tchaikovsky in his book, ‘Children of Ruin’, recently took a stab at it, portraying it from the uplifted octopuses’ POV as the “crown” influencing its “reach” to do things, but with the reach often taking independent action.

    The convergent evolution of octopus intelligence does seem like it gives a boost to the possibility of extraterrestrial intelligence. I think their radically different architecture also gives hope for artificial intelligence. It seems to show that there are many ways to skin the intelligence cat, at least up to their level of intelligence.

    • Wyrd Smythe

      “Peter Godfrey-Smith has a book, ‘Other Minds’”

      I’ll keep that in mind. Ever since I basically finished looking into rotation I’ve been kind of looking for a new interest. I wrote this post because learning about octopuses just might be it.

      As I mentioned to David Davis above, I’m most fascinated by octopus-human interaction.

      “I recently read about a study that showed that the arms are capable of coordination between themselves, completely independent of the central brain.”

      That doesn’t surprise me. Several of the images I’ve seen of their main nervous system show a ring connecting those eight sub-systems.

      “Given the arm independence, you have to wonder what that coordination is like. Adrian Tchaikovsky in his book, ‘Children of Ruin’, recently took a stab at it,”

      Those are definitely on my Buy List (although there’s a bit of a queue).

      “The convergent evolution of octopus intelligence does seem like it gives a boost to the possibility of extraterrestrial intelligence.”

      Evolution found two paths right here on Earth, so it does seem intelligence is a valuable convergent goal. (Which makes perfect sense. Intelligence seriously extends your capabilities.)

      “I think their radically different architecture also gives hope for artificial intelligence. It seems to show that there are many ways to skin the intelligence cat, at least up to their level of intelligence.”

      Different physical ways, yes, indeed! 😀

      [grinning, ducking, & running away…]

      • SelfAwarePatterns

        ‘Other Minds’ is a good read that I do recommend. Its focus is octopuses, but it also broadly discusses brain evolution. That said, at this point it’s a bit entry level for me, so it keeps getting paused when I find more hard core material.

        “Different physical ways, yes, indeed!”

        Absolutely, like the different physical ways an AI might be implemented. 🙂
        [dives into own hiding place…]

      • Wyrd Smythe

        ROFL! 😀 😀

        I shared with you the amusing thought I had about hostile uncooperative AGI. I had a more serious thought I’d like to one day (but maybe not today) explore in more detail:

        When talking with computationalists about the difference between brains and computers, the argument is that information isn’t physical, so the difference doesn’t matter. But when you then point out that brains are physical while computers are informational, the argument becomes that information is physical.

        I don’t think it can be both ways. And to be clear, I’m not saying this is deliberate or hypocritical or bad faith arguing. I think it’s a consequence of there being something inherently contradictory about computationalism. I think what happens is that when we look at it from two different angles, we see two different things. Which suggests there might be something wrong with what we’re seeing. Or how we’re seeing it.

        (I’m still chewing on this, so feel free to ignore me.)

      • SelfAwarePatterns

        I don’t think the argument is that information isn’t physical. For a physicalist, which most computationalists are, everything is physical, including information. The argument is that it’s multi-realizable.

        Lots of things are multi-realizable: books, software, movies, music, etc. Any instance of these things is physical. But the most relevant aspects of them, at least for our purposes, can be copied and reproduced in other formats. Yet at no point in their chain of existence do these cease being physical.

        Maybe human or animal minds individually aren’t multi-realizable. I think that conclusion is hasty, and much hinges on how we define an individual mind, but I can see someone reaching that conclusion. But AI doesn’t require that. All it needs is for the generic functionality to be multi-realizable.

      • Wyrd Smythe

        “For a physicalist, which most computationalists are, everything is physical, including information. The argument is that it’s multi-realizable.”

        The counter being that physical systems (which contain information) and information systems (which are reified physically) are two completely different things.

        “Lots of things are multi-realizable:”

        Everything you named is a form of information with no primary form.

        Physical systems have a primary form that other realizations refer to. A table can be realized in wood, plastic, stone, or metal, but to perform the same function as the original, it must have the same physical form — that of the primary (original).

        There is no primary physical form for books, software, movies, or music. Those are a different kind of system — one whose primary form is abstract.

        “All it needs is for the generic functionality to be multi-realizable.”

        Which assumes that the brain is the one physical thing in all of reality that is actually an information system. (Which would make it very special, indeed. 🙂 )

        That’s the big leap I think computationalism takes.

      • SelfAwarePatterns

        I’m not sure what you mean by “primary form”, although it sounds like you’re saying it’s a necessary form to function. But none of the things I listed can function except in some necessary physical form.

        To read a book, the text must be on some two dimensional display: paper, screen, or something else. Listening to music requires something that can play the sounds. Watching a movie has similar requirements. Physicality is always required, and the comparison must be made between the actual physical systems.

        What does distinguish these systems is that they are combinations: a physical pattern paired with a malleable engine to produce the final physical product, a combination that exists only as long as it’s needed. It’s worth noting that things like books once held much more primal physical forms. In ancient times they were manuscripts that had to be copied by hand on vellum or some other expensive substance, usually with varying skill and lots of mistakes, making each copy a unique work in its own right.

        I can see a possible future where tables might become things that are assembled on demand out of “smart matter”, where the design of the table is more important than any particular instantiation, which might become as transitory as a showing of a movie is today.

      • Wyrd Smythe

        “I’m not sure what you mean by ‘primary form’, although it sounds like you’re saying it’s a necessary form to function.”

        What I’m referring to might be thought of as a thing’s “birth form” — what is its form when it is first created.

        Brains, tables, all actual physical objects have a primary form from their birth. Brains form and grow in the womb. Tables grow and form due to the actions of table-makers. When we copy or imitate such objects, we have to copy their physical form for the copy to be valid.

        Books, software, movies, and music, are all ideas. They are born as abstract information. They have no required form to function (although, absolutely, to use them at all requires some kind of physical form).

        You see the table and the brain as just information, anyway, so you don’t see these as distinct. I consider physical systems and information systems significantly different, so I do. So it goes. 🙂

        “I can see a possible future where tables might become things that are assembled on demand out of ‘smart matter’, where the design of the table is more important than any particular instantiation, “

        😀 Even so, you can’t sit down at a virtual table. To be a copy of a table requires another table. The design of the table is just a description.

        Likewise, the brain has a design, which if copied, ought to work in another physical instance. But I don’t see the design as the same thing as the instances. (More specifically, I don’t see running a numeric simulation of a table as being anything like a real table.)

        I can add it to the list: Simulated rain storms aren’t wet, simulated lasers don’t emit photons, simulated earthquakes don’t knock down buildings, and simulated tables don’t hold coffee mugs. 🙂

        Pity we probably won’t live long enough to see the debate resolved one way or the other. 😮

      • Philosopher Eric

        I don’t know Wyrd… It doesn’t seem to me that Mike thinks he can sit down at a table which is virtual. I doubt he’d agree to walk over a virtual bridge, and regardless of how accurately simulated. I think he just means that information can be used to build functional tables, bridges, brains, and so on. So it could be that you and he will need to find other disputes.

        The thing that has tripped me up is his thoughts on valence — one part of the brain “talking” with another. But I guess that will depend upon what he means by “talking”. Surely he means something more than just communication in itself? Surely he wouldn’t tell me that simulated pains feel bad any more than simulated bridges hold weight? I would like confirmation about this however. And in that case perhaps he and I will need to find other disputes to work on as well.

      • SelfAwarePatterns

        Eric,
        On furniture designs, thanks for confirming that I was being at least somewhat clear.

        “Surely he means something more than just communication in itself? ”

        Well, there is lots of computation, or information processing if you prefer. The brain receives sensory input, produces motor and hormonal output, which are communication. And the various regions spend a lot of time communicating with each other, but also processing what they’re receiving.

        But I haven’t read anything in neuroscience that indicates it’s more than information processing (along with maintenance of the physical structure of that information system). I understand the strong intuition that there must be more. This is, after all, us, and we certainly feel like there’s more. What can I say other than that intuitions are unreliable?

      • Philosopher Eric

        Okay Mike, then let’s take this further. You’re saying that there is a non-conscious brain which processes information, or communicates with itself, to produce valence by means of motor and hormonal output (or whatever). I’m good with that, but then are you good with where I go next?

        By the definition of “valence”, there must be an experiencer — otherwise there is no valence. Is it more useful to say that the experiencer creates itself (which is valence), or rather that it is a product of the brain? Surely it’s more useful to call this experiencer a product of the brain than to say it exists as “brain” itself. A similar example would be heat. I’d call heat an output of the brain that is not itself usefully called “brain”.

        The brain thus produces the conscious entity (as I personally define “consciousness”), and this entity is not itself brain. Note that this is a standard definition, since there is something it is like to exist as valence. And if the brain communication which creates this valence functions entirely through the causality of this world, then there is no substance dualism here, even though (like heat) the brain produces something that is not itself brain.

        To go on just a bit further, valence itself does not yet bring functionality. All we have at this point is something which has the potential to feel good/bad. Theoretically evolution took this dynamic and created the conscious form of function by which you and I perceive existence. Along with informational senses and memory inputs, valence motivates us to interpret our inputs and construct scenarios from which to feel better. Our only conscious output to this end seems to be muscle operation.

      • Wyrd Smythe

        @Eric: “It doesn’t seem to me that Mike thinks he can sit down at a table which is virtual.”

        🙂 Obviously not.

        Just as I hit submit I realized that table was an unfortunate choice since, obviously, the table exists as an abstract pattern in the table-maker’s mind before the table is actually made. I should have gone with tree or some other natural not-designed object.

        @Mike: “I understand the strong intuition that there must be more.”

        Don’t all those same biases apply to you, too? You seem to be suggesting you’ve risen above all that. What makes your view so much more valid and not based on your intuitions?

        @Mike: “This is, after all, us, and we certainly feel like there’s more.”

        What if that’s an accurate feeling? Yes, many of our intuitions are wrong, but many of them are right. What if your intuition about this is wrong? What if you really are missing something?

        @Mike: “What can I say other than that intuitions are unreliable.”

        Including yours, right?

        @Eric: “All we have at this point is something which has the potential to feel good/bad.”

        So does your theory apply to our common ancestor flatworm?

      • SelfAwarePatterns

        Wyrd,
        My intuitions are no more reliable than anyone else’s. I do my best not to trust them. Which is why I try to ground my views on this stuff in the science as much as possible. Do I always succeed? I’m sure I don’t. But that’s one of the reasons I blog, and why I often ask people to tell me what I’m missing.

        I accept the bizarre aspects of quantum mechanics and general relativity because the data leaves no choice. At the same time I’m skeptical of ungrounded speculation on these topics. I’m the same way on the mind. If you want to convince me, find data that forces me to accept it. (Reliably attested and reproducible or otherwise verifiable data.)

      • Wyrd Smythe

        “Which is why I try to ground my views on this stuff in the science as much as possible.”

        I know you do, and likewise. (There’s a reason yours is one of the very few non-scientist blogs I follow. Like one of three.)

        When it comes to neurophysiology and functional analysis of mind, I perceive you as more well-read than I, and I rarely find what views I do have on these matters different from yours. We have no quibble I know of when it comes to the known science.

        But the question I have is: where is the “[r]eliably attested and reproducible or otherwise verifiable data” for computationalism? That’s just a guess at this point.

        As I’ve said, if you’re so skeptical of what isn’t proved, why aren’t you skeptical of computationalism?

        “But that’s one of the reasons I blog, and why I often ask people to tell me what I’m missing.”

        FWIW, pardon me for saying so, just one man’s data point, I’ve come to wonder how much you truly mean that tag line. Sometimes it feels more like a challenge, that you don’t really believe you’re missing anything.

      • Wyrd Smythe

        By analogy, I’m something of an atheist when it comes to belief in computationalism. 😉

      • SelfAwarePatterns

        “But the question I have is: where is the “[r]eliably attested and reproducible or otherwise verifiable data” for computationalism?”

        For me, it’s in the way we understand neurons to work. It seems natural to interpret a neuron as summing up its positive and negative inputs, firing a spike if its threshold is met. That shouts logical processing to me. You used the term “evaluation” the other day. I can agree with that. But logic gates seem like evaluations too, just simpler, more discrete ones.

        I’m comfortable with the term “computation” for what neurons are doing. You’re not. For me, this often feels like a debate about whether the color of the sky is blue or azure.

        “Sometimes it feels more like a challenge, that you don’t really believe you’re missing anything.”

        Honestly, I usually don’t. I’m reasonably confident in my thinking. (Many would say arrogant.) But I am open to the possibility. It’s really a test of how solid my conclusions are. One I expect to pass. But maybe I won’t. (James of Seattle almost had me thinking I hadn’t the other day on the Ned Block discussion.)

        I’ll give you an example of one I didn’t. Years ago (pre-blog) I used to argue with theists. At the time, I was testing my conclusions about God and the problems with religion. My opponents never convinced me that God exists, but after many debates I was forced to admit that the evils of religion were vastly overstated.

        So, I mostly don’t think I’m missing anything. But I’m always watching to see if I am.

      • Wyrd Smythe

        “But I’m always watching to see if I am.”

        One of my favorite quotes: “An intellectual is someone whose mind watches itself.” ~Albert Camus, 1951, Notebooks

        “You used the term “evaluation” the other day. I can agree with that. But logic gates seem like evaluations too, just simpler more discrete ones.”

        Totally agree. A main point of my Full-Adder “Computing” post was exactly that. Logic gates (and neurons) don’t “compute” per the Computer Science definition, but clearly in both cases input signals are processed to create an output signal.

        (And remember that, when it comes to neurons, I don’t care what they’re made of. It’s that gate operation and the network that matters. (Obviously, I do require a physical network.))

        “For me, this often feels like a debate about whether the color of the sky is blue or azure.”

        But it’s really not. The word doesn’t matter, it’s what we mean by whatever word we do use. You conflate what I distinguish as physical systems (which use information) and information systems (which are physically reified).

        I don’t object to the word, I object to the conflation of seemingly distinct processes. (Because I think process matters. You don’t, so the conflation doesn’t bother you.)

        “I’m reasonably confident in my thinking. (Many would say arrogant.)”

        We have a lot in common. Someone once said to me, “No one should be as confident as you are!” And I know many find me arrogant. Thing is, that arrogance is earned by a long string of more successes than failures. If I’m confident, it’s because my history has taught me that I usually get it right (or not far off the mark). I suspect it’s been that way with you, too.

        And I quite agree about testing opinions in debate!

        “…but after many debates I was forced to admit that the evils of religion were vastly overstated.”

        Many atheists are reacting to bad experiences at the hands of religion, so a lot of atheism is directed at the worst aspects — which are usually the human failings of any organization. What’s a little different is the power religion has over peoples’ lives, so the power to damage is greater. (Although having the bank evict you from your home is pretty damaging, too.)

        Very few militant atheists view religion as a whole and admit that it does a great deal of good in the world. I think a fair reading of it probably puts the balance in the positive column.

        The problem with religions is that they’re man-made. We create god in our image.

      • Wyrd Smythe

        “For me, it’s in the way we understand neurons to work.”

        I’ve been thinking about that sentence in the context of my comment that started this thread. I believe I’ve heard you mention it before, and it seems a common point.

        The thing is, if that’s the main argument, computationalism does seem on questionable ground, because logic gates do not a computer make. (Engine parts can make a car, but not everything with engine parts is a car.)

        A key argument of computationalism refers to the platform-agnostic nature of information and computation, so it can’t very well rest on a foundation of “looks like a logic gate”.

        Logic gates do process inputs into an output, no question there, but the class of things that process inputs into outputs is much larger than the class of things that create outputs through algorithmic processing (aka “computation as defined by CS”).

        (Why I keep bringing up the CS definition is that any machine built to run a mind will compute according to that definition, so whether that definition applies to the brain (which it doesn’t) seems significant. We’re talking about two systems that work according to very different principles.)

      • SelfAwarePatterns

        Eric,
        I think where I keep having issues with your descriptions is the mixing of different layers of abstraction. My take is that when you talk about the brain generating something for some separate experiencer to consume, but not in any substance-dualism manner, you’re talking about an experiencer at a higher level of organization. If so, that’s fine, and if you don’t want to talk about brain operations, then no worries, but I think it’s best to just leave that layer out then.

        The rest of what you describe can work at a phenomenal or psychological level. In that sense, we definitely feel affects (including valences), the good and the bad. And they obviously factor into our deliberations. So, no objections there, at least from me.

      • Philosopher Eric

        Okay Mike, perhaps in the past I’ve thrown you off by not sufficiently clarifying my abstraction level. It sounds like we’re square now. Back when I was developing my ideas I was quite concerned about never referencing any physiology, including even “brain”. I felt that would compromise my ideas by getting into the wrong kind of stuff. And I did look down on people in psychology who seemed to pepper their discussions with gratuitous neuroscientific terms.

        I kept this stance up for at least a couple of years blogging, but found my level of purity untenable. Regardless of what was being discussed, everyone seemed to want to get into brain mechanics. Thus referencing a non-conscious machine which produces something conscious, or two forms of computer, further marginalized me. I was speaking the wrong language. Hopefully I’ve improved a bit by now.

        So today I commonly do use the “brain” term, or sometimes “central organism processor”. I even mention “neurons” in passing from time to time. But what I refuse to do is mention specific brain parts in relation to my “big picture” ideas. That level of abstraction simply does not apply. I am an architect rather than an engineer. And as you know, I believe that it’s the failure of effective architecture, possibly given a void of generally accepted principles of philosophy, that mandates the softness of our mental and behavioral sciences in general.

  • Philosopher Eric

    Thanks for this information Wyrd. I’ve occasionally heard about the strangeness of “octopus intelligence” in the past, though without looking into it. The interesting thing to me is that any reasonable model of “primal” consciousness (by which I mean “phenomenal”) will need to account for the behavior of the octopus form of it as well, which apparently evolved quite separately from our form.

    One interesting attribute is the “autonomous” tentacle function. I suspect it’s useful to say that each has a non-conscious processor which does things individually and simultaneously by means of associated “programming”. Furthermore, a central organism processor should exist which harbors a conscious component. This should be able to operate the tentacles for unified function as well. Per Mike’s comment, I suppose that individual tentacles could also work together non-consciously. These “brains” are ultimately all connected.

    Under my model the conscious component takes in non-valence current information (such as images), as well as non-valence memory information, and finally all valenced information (which thus motivates such function through good/bad sensations). It interprets such inputs and constructs scenarios about what will make it feel better. The only non-thought output I know of here, as with the human, is “muscle operation”.

    • Wyrd Smythe

      “I suspect it’s useful to say that each have non-conscious processors which do things individually and simultaneously by means of associated ‘programming’.”

      Why?

      Why wouldn’t the satellite brains be conscious in their own right?

      Alternately, perhaps there’s an integrated consciousness with a strong sense of distinct sub-actors, not unlike we can debate something within ourselves.

      • Philosopher Eric

        Why wouldn’t the satellite brains be conscious in their own right?
        Alternately, perhaps there’s an integrated consciousness with a strong sense of distinct sub-actors, not unlike we can debate something within ourselves.

        Good question Wyrd. The reason I would not think it helpful to have consciousnesses in individual tentacles is that there would then be a potential competition dynamic going on between them. We see this between individual people as well. Of course there is reason for marriage between individuals, though it ain’t always easy!

        I have considered how useful it might be to have various integrated modes of consciousness in the human. Thus I could write to you with one mode of consciousness, while also having a telephone conversation with a friend by means of another. Double productivity! But I suspect that the same issue arises here. Each form of consciousness would thus need to have its own valence from which to compel thought, and so organism competition could be problematic. Conversely, debating something from a single consciousness should simply be trying to figure out what would make this single entity feel best.

      • Wyrd Smythe

        “The reason that I would not think it would be helpful to have consciouses in individual tentacles, is because then there would be a potential competition dynamic going on between them.”

        I think that might be projecting our sense of “separate consciousness” onto a situation that could be very different. For instance, the system having evolved together, and the instances having existed together since birth, there may be a high degree of cooperation. Or maybe there’s a server-client relationship where the satellite brains are highly subservient.

        “We see this between individual people as well.”

        Exactly, in people. Octopuses are very definitely not people. We shouldn’t project our worldview on theirs.

        “Thus I could write to you with one mode of consciousness, while also having a telephone conversation with a friend by means of another.”

        True parallelism! I don’t know if that level of parallelism exists in the octopus or not. I’m not sure if researchers know that. Just my observations, which are scant, suggest they are mostly centered on a single “thought” line (so to speak), but the tentacles have some autonomy.

        No idea what level of consciousness they might have!

      • Philosopher Eric

        Well this isn’t so much me projecting my own experiences upon them, but rather predicting their function given my theory. This is to say that my ideas are indeed falsifiable. I haven’t really observed them much myself.

        If it’s found that individual tentacles seem to experience existence phenomenally, do they fight? That would seem evolutionarily problematic. Or if they do have independent phenomena but don’t compete, then why? None of this would sit well with my ideas. But “non-conscious autonomy”? That yea works.

      • Wyrd Smythe

        “This is to say that my ideas are indeed falsifiable. I haven’t really observed them much myself.”

        Heh, yeah, nor I, and we’re not likely to. I do find them interesting enough that I may start reading about the research that pertains to them. You might consider the same route with a view to testing your ideas.

        “Or if they do have independent phenomena but don’t compete, then why? None of this would sit well with my ideas.”

        There may not be a contradiction. You’re assuming they would have conflicting values, but maybe they don’t. Maybe they all “want” the same thing on behalf of the creature overall.

        Or, indeed, there may be no phenomenal experience, but that may depend on how richly one defines that. The tentacles very clearly experience, and react to, the environment independently. The real question is how rich or high-level that experience might be.

        OTOH, doesn’t your valence theory start very early, probably before the flatworm that is our common ancestor? If so, you just have to account for divergent paths.

      • Philosopher Eric

        There may not be a contradiction. You’re assuming they would have conflicting values, but maybe they don’t. Maybe they all “want” the same thing on behalf of the creature overall.

        My delayed response is because my position seems difficult to express effectively. Better not to say anything at all than to say something in a confused way! But let’s try this anyway:

        My ideas do not concern “values” (as in “liberty versus law enforcement”, or “country music versus rap”), but rather what founds values. This can be referred to as “value” itself. (We’ve discussed this a bit in the past. According to my single principle of axiology, it’s possible for a machine which has no phenomenal experience to output something which does have it, that is, “value” for something other than itself to experience. I consider value to be the strangest stuff in the universe.)

        Furthermore, value seems private — one conscious entity cannot directly feel what another does. Thus each proposed phenomenally conscious octopus arm will have its own ultimately unique interests in a conscious capacity (which necessarily will be to feel as good as it possibly can each moment). Thus if two such arms should affect each other, one should naturally tend to conflict with the other’s inherently separate interests — one valence is not the same as another valence. And indeed the main octopus, as a conscious entity, will inherently have its own unique valence apart from theirs. The only way for there to be perfect alignment here would be for them all to experience the exact same value, or thus be the exact same conscious entity. So with evidence of single-organism multi-consciousness, my own ideas on the matter would become suspect. Instead I suspect that these arms function non-consciously.

        OTOH, doesn’t your valence theory start very early, probably before the flatworm that is our common ancestor? If so, you just have to account for divergent paths.

        Right. I’ve stated that the Cambrian explosion, dating from around 541 to 516 million years ago, was probably incited by central organism processors. Furthermore, given how much emerged during this period, it does seem likely to me that the conscious form of function evolved here as well, that is, given apparently more “open” environments.

        The flatworm, evolving only 270 million years ago, may or may not have possessed a value dynamic. Do they function under sufficiently open environments today? Perhaps, though I wouldn’t think so. If they were “conscious”, however, then yes, we should have common phenomenal origins. And given our very different circumstances, it’s not strange to me how different we are nonetheless.

      • Wyrd Smythe

        “The flat worm, evolving only 270 million years ago, may or may not have possessed a value dynamic.”

        There might be some confusion here. From the SciAm article I referred to:

        The history of large brains has, very roughly, the shape of a letter Y. At the branching center of the Y is the last common ancestor of vertebrates and mollusks—some 600 million years ago. That ancestor was probably a flattened, wormlike creature with a simple nervous system. It may have had simple eyes. Its neurons may have been partly bunched together at its front, but there would not have been much of a brain there.

        The common ancestor shared by humans and octopuses is 600 million YA, so it apparently predates when your theory kicks in.

        It then separately occurred in both humans and octopuses?

      • Philosopher Eric

        Ah, much better Wyrd. My quick search must have given me a strange number, given how common flatworms happen to be. 600 million YA is the sort of number I expected. So right, surely these lines of “consciousness” evolved independently, and who knows how many examples in these lines lost phenomenality, or even evolved it once again, since then.

        Anyway my point is about value being inherently private and unique, thus setting up conflict between interacting conscious entities (yes, like us). And actually I could go on to say that the failure to formally acknowledge such natural conflict (given the social tool of “morality”) largely holds back our mental and behavioral sciences. Here “prescription” defiles “description”, and so we fail to grasp ourselves.

  • Wyrd Smythe

    FWIW, after years of debating this, I think it turns on two propositions:

    1. On The One Hand: Given an accurate enough numerical simulation of the brain, why wouldn’t it produce the same results as a brain?

    2. On The Other Hand: Simulated X isn’t Y, for various pairs of X and Y (arguably for all physical X).

    I think these are the tough questions for the respective sides.

    As I’ve mentioned before, I used to assume computationalism. It’s only in the last decade or so reading about it and discussing it that I’ve become so skeptical. What I really am is agnostic with my skepticism leaning me away from computationalism.

    And I think, in the context of those two questions, one reason is that, while I can sketch an answer to the first question, I have no idea how to answer the second, nor have I ever heard a good answer to it.

    So the second seems the stronger, more serious, objection.

  • Wyrd Smythe

    In a recent blog post, Scott Aaronson makes a really interesting point:

    In other words, a physical system becomes a “computer” when, and only when, you have sufficient understanding of, and control over, its state space and time evolution that you can ask the system to simulate something other than itself, and then judge whether it succeeded or failed at that goal.

    In other words, one qualification for calling a system a “computer” is that it can give you the wrong answer. This is in contrast to, for instance, referring to a molecule as a “computer” because any “answer” such a “computer” gives you cannot be wrong.

    Physical systems are what they are. With computation, there is always the issue of GIGO (garbage in, garbage out), and of incorrect (buggy) computation. In fact, as we’ve learned, it’s almost impossible to have bug-free code.

    Food for thought here…

    • SelfAwarePatterns

      James,
      That was interesting. (Although vaguely familiar. Either I read it when it came out or you’ve shared it before.)

      To add to this, I’m currently reading ‘Brain Structure and Its Origins’ by Gerald Schneider. He points out something interesting. Sponges do not have a true nervous system, but they do have cells that resemble neurons. (Schneider refers to them as operating in a “neuroid” manner.)

      So convergent evolution of neurons might not be the heavy lift the article implies. Although, given that their initial functional role was reproducing functionality originally found in unicellular organisms, Schneider sees their evolution as inevitable, so maybe the sponge proto-neurons are yet another independent line.

    • Philosopher Eric

      Yes definitely interesting James. I don’t get the “Because they are so complicated, they are unlikely to have evolved twice” argument regarding neurons. Evolution has built all sorts of complicated things. Once seems far too pessimistic for something this basic. If one did happen and there’s lots of time, then in general I’d expect more occurrences as well.

    • Wyrd Smythe

      Interesting article, thanks for linking it, James.

      If sponges have something “neuroid” then perhaps ur-neurons go back much further than we think. The article leads me to believe this is all a topic of much debate and uncertainty.

      The eye seems a clear example of convergent evolution. The article points out we seem fine with that idea, but insist brains and neurons only happened once (presumably due to their complexity). And evolution was working with a common ancestor and a common set of tools and a common environment, so perhaps it’s not so surprising.
