Morell: Animal Wise

I recently read Animal Wise: How We Know Animals Think and Feel (2013) by Virginia Morell, a correspondent for Science and contributor to National Geographic, Smithsonian, and other publications. She’s the author of several books, including Wildlife Wars (2001), which she co-authored with Richard Leakey.

Morell takes us on a tour of current research into the minds of animals, starting with ants and working up through various species to our primate relatives. In a choice dear to my heart, she reserves the last chapter for our best friends, dogs.

I found it a wonderful exploration with some real eye-openers.

The tour is nicely outlined by the chapters:

  1. The Ant Teachers
  2. Among Fish
  3. Birds with Brains
  4. Parrots in Translation
  5. The Laughter of Rats
  6. Elephant Memories
  7. The Educated Dolphin
  8. The Wild Minds of Dolphins
  9. What It Means to Be a Chimpanzee
  10. Of Dogs and Wolves

The only (minor) disappointment for me (after anticipating it up to that final chapter) was that Morell didn’t mention dog researcher Clive Wynne. (See post: Wynne: Dog is Love)

As the chapters suggest, the tour starts with “lowly” ants and proceeds “upwards” through fish, birds, and rats, to “intelligent” creatures such as elephants, dolphins, and chimpanzees. Each chapter focuses on a different researcher.

(I quote “lowly” and “upwards” and even “intelligent” here because there can be some question as to how accurate they are depending on what we mean.)

Dogs get final billing, not for their intelligence (which isn’t superlative), but for their amazing relationship with humans. They’ve been domesticated for at least 10,000 years (longer than any other animal), and it turns out their brains have a lot to teach us.

§ §

Morell’s journey begins with Nigel Franks, who studies ants (and wears khaki pants). His main interest is in how ants decide things. For example, how do they find and choose a new nest if their current one is destroyed?

In order to identify and track individual rock ants in their colonies, Franks and his colleagues code them with tiny dots of paint (in places harmless to the ant, of course). High-speed cameras allow observation of each ant.

One discovery is that ants apparently teach other ants.

Which is, or at least was, a source of controversy. Firstly, that any animals teach; secondly, that tiny simple ants teach.

Of course, it depends on what is meant by “teach.”

Animals certainly learn, but teaching is a different process. An early definition required that a ‘teacher’ modify its behavior in response to a ‘student.’ Franks observed this in ants and reported it.

Others, rejecting the notion that ants teach, insisted the definition be expanded with further requirements (in some cases to the point that it included only humans). Under the expanded definition, the teacher must have an understanding of the mental state of the student. The teacher must know whether the student is learning.

The experiments Franks and others have done seem to indicate communication and awareness between ant teachers and students. An ant teaching another ant the path to a new nest site pauses while the student takes in the landmarks. It waits for the student to tap the teacher’s legs to signal, “Okay, got it!”

Physiologically, ant brains contain neuronal structures called mushroom bodies, which appear to be associated with learning. That says they can learn — as almost any animal can — but doesn’t imply they can teach, so it remains somewhat controversial.

More significantly, teaching is a social behavior, and there seems to be a strong link between intelligence and social behavior. Simply put, interacting with a group requires a brain. This theme recurs more and more throughout the book.

§

Nigel Franks, like many of the ethologists Morell visits, adores his subjects. His garage at home is filled with ant colonies no longer useful in his studies, but which he doesn’t have the heart to let perish.

Another theme repeated through the book is the love researchers have for their subjects. Some of them refer to the animals as “colleagues” with whom they’ve worked, in some cases, for many years or even decades. (That does not mean their studies aren’t objective. They take great care in how they describe animal behavior, but Morell digs into what they really think.)

Their back stories vary. Some discovered a love of animals early in life; some stumbled into it due to circumstances. One planned to be an accountant but fell in love with dolphins and the idea of not having a desk job. (Ironically, he found himself entering lots of spreadsheet data anyway.)

What’s universal (and to me very enjoyable) is the commitment, both to the search for understanding, for bridging that gap, and to the animals themselves. Morell mentions a 1991 survey of ethologists who overwhelmingly responded that their primary motivation was the desire to know what it was like to be that animal (because there is something it is like to be a bat).

§

Morell’s second stop is Stefan Schuster, a neuroscientist in Germany who studies archerfish.

These fish have a squirt-gun ability (due to a specially shaped tongue and mouth) which they use to knock prey down from low-hanging branches. They’re extremely accurate, both in targeting and in judging the force required (and where the prey will land).

(The stories of the fish targeting the eyeballs of researchers — including Morell — are pretty funny.)

It’s not surprising the fish must learn this capability. Their neural nets are doing action-result programming. Athletes and musicians do something similar. It’s sometimes called “muscle memory.”

The eye-opener for me was that these fish apparently can learn by watching.

The fish don’t normally target moving objects, so Schuster and his (human) colleagues have trained groups of them to do so. It takes the fish a while to learn the new skill. Sometimes aggressive fish hog the best spots, forcing the other fish to watch.

When the bully is removed, the other fish demonstrate the skill without practice. In contrast, a new group of fish allowed to watch the tantalizing moving target — but prevented by researchers from taking a good firing position — do not demonstrate the skill when allowed into position.

Something about watching another fish accomplish the task teaches them the skill (despite the other fish definitely not being a deliberate teacher).

§

In chapter three, Morell takes us to Irene Pepperberg and her famous colleague, the exceptionally intelligent Alex the grey parrot.

Alex, who lived to be 31, does seem to have been exceptional. It’s not clear if this was due to circumstances (Pepperberg got him when he was one) or was innate with Alex. Two other parrots in Pepperberg’s lab are not performing as Alex did.

One thing I found interesting about Alex was his desire to learn. (To the point of practicing words when he was alone.) He even assisted in training the other birds (including scolding them).

I’m not really a bird guy, but Alex sounds fascinating, and his intelligence is impressive.

Pepperberg’s intent is teaching parrots to communicate with us. Morell’s fourth stop is with Karl Berg, who is trying to decode how parrots “talk” in the wild.

Birds turn out to be vocal learners. Only a handful of groups (humans, whales, dolphins, seals, bats, elephants, and some birds) can learn new vocalizations. In contrast, many animals are auditory learners — they can learn what new sounds mean.

Another thing about birds is that some appear to have individual names, which may be given to them by their parents. Each bird has a vocal pattern that’s apparently its name, and they use the patterns of others to call and interact.

These two chapters also illustrate the difference between studying animals in captivity and studying them in the wild. That theme is repeated with dolphins and chimpanzees. It’s also significant between dogs and wolves.

§

Chapter five introduces laughing rats. Laughing because they’re being tickled.

The head rat tickler is Jaak Panksepp, a neuroscientist at Washington State University. To quote Morell:

Panksepp and his laughing rats have helped overturn the old, Cartesian idea that emotion and reason are separate entities. Today, emotion and cognition are acknowledged to be inextricably intertwined — at least in humans. Some researchers are still reluctant to assign anything more than just a few emotional behaviors to other animals.

Panksepp has identified seven basic mammalian emotional systems in the brain. He labels them: FEAR, RAGE, LUST, CARE, PANIC/GRIEF, PLAY, and SEEKING. (The capital letters indicate these are scientific labels for those brain systems.)

As with all these animals, the rats are fascinating. One thing that strikes me is their apparent love of play. I’m convinced dogs play, and it doesn’t surprise me that rats do. (I’m pretty sure whales breach, at least in part, because it must be fun.)

Most dog owners are pretty convinced their dogs grin. Turns out rats laugh.

§

This has gotten long, so I’ll skip getting into the elephants (they have long memories and express a special interest in elephant bones), the dolphins (very complex and competitive societies), the chimpanzees (our closest cousins), or the dogs (finely tuned best friends who love us instinctively).

Suffice to say those chapters are as engaging and fascinating as the earlier ones.

I recommend the book without qualification for those with an interest in animals. Morell is easy and enjoyable to read, and the tour of animal research is riveting.

§ §

A key point here is that “think and feel” both speak to mental content. We use the metaphor of head versus heart, but it’s really all just the head.

One can even question whether intelligence requires emotional thinking. If one determines a goal, makes a plan, and then is frustrated in accomplishing it, is an emotional reaction part of a full evaluation of the situation? (In contrast with the way machines keep mindlessly trying, apparently without frustration.)

In any event, the link between intelligence and emotion is intriguing. The forms of intelligence we know of all seem to experience emotion, in some cases quite profoundly. Is that merely due to biology and chemicals, or is emotion a fundamental aspect of intelligence?

One thing I find striking about humans is that we can easily fake or suppress our emotions. We can control them, almost without effort. (Granted, some people less than others.)

With animals, that’s generally not true. Animals tend to be transparent about their emotions. They also generally act on those emotions. (Their behavior can sometimes be deceitful, which could be seen as crude emotional control.)

§ §

One of my favorite quotes is due to W.G. Sebald: “Men and animals regard each other across a gulf of mutual incomprehension.” The researchers in this book are trying to narrow that gap.

One thing I can’t help but note with regard to dolphins, elephants, primates, even dogs: why don’t they try to communicate with us as we do with them?

Instead (all through the book) researchers treat them carefully due to their animal nature. (Especially the larger primates and wolves. With birds, fish, and rats, it’s more a matter of not stressing them out. It turns out even the distant scent of a cat disturbs rats, so cat-owning researchers must make special effort to be scent-free.)

Eye contact is a biggie. If communication were a mutual goal, eye contact shouldn’t be a problem. This is what sets dogs hugely apart — all that eye contact.

Stay safe, my friends! Wear your masks — COVID-19 is airborne!


35 responses to “Morell: Animal Wise”

  • Wyrd Smythe

    Wynne didn’t publish Dog is Love until 2019, so it’s possible Morell didn’t mention him because he wasn’t really active when she wrote the book. He only arrived at Arizona State in 2013, which is when Animal Wise was published.

  • Wyrd Smythe

    Currently I’m a bit over halfway through Sean Carroll’s Something Deeply Hidden — his testimony to the Joy of MWI. I find it… unpersuasive.

  • SelfAwarePatterns

    Sounds like an interesting book. It’s always interesting finding out what animals are capable of. Many of the capabilities we take to be uniquely human are really more matters of degree than sharp delineation.

    On the other hand, studying animal intelligence and cognition is subject to a lot of pitfalls. You can be too skeptical, so that you don’t accept any evidence for sophisticated cognition. But it’s also extremely easy to fall into anthropomorphism, projecting mental states onto the animal that it doesn’t have.

    A good example is Panksepp’s primary emotions. He based them on observed animal behavior. But from what he wrote, he didn’t make much of an effort to separate affect displays from actual affects. He just assumed all affect displays demonstrated affects. Constructivists like Lisa Feldman Barrett and Joseph LeDoux make a distinction between survival circuits and actual felt emotions. Although I think Barrett and LeDoux go too far the other way.

    I’ve also found it profitable when hearing about sophisticated capabilities in simple animals, like ants and fish, to follow the citations and see what the studies actually say. Often the evidence is far murkier and requires a lot more interpretation than is implied.

    • Wyrd Smythe

      I do think there is a spectrum, but I find the gap between humans and all other animals intriguing. As I remarked in the post, I’m struck by how they don’t seem to work at communicating with us anywhere near the degree we do with them. You’d think elephants and whales, especially, would be invested in letting us know someone is home.

      Even as expressive as dogs can be, there’s something missing in their ability to communicate with us effectively. I take it as significant that, despite fair facility with sign language, the apes never ask questions. (Alex the parrot apparently did, which I find impressive.)

      But I absolutely do see crude simple versions of our intellectual skills in animals. The book is a good tour of seeing those traits in them.

      “But it’s also extremely easy to fall into anthropomorphism, projecting mental states onto the animal that it doesn’t have.”

      It was interesting how most of the researchers were painfully aware of the necessity of not projecting. None of them are blind to these issues; they live with them daily. They are very careful in what they publish.

      At the same time, they’re literally living with these animals and their personal intuitions (which they fully realize are subjective) universally seem to see a great deal of mind present in animals.

      There were a few places in the book where I thought behaviors might have less going on than the researcher thought, but to their credit, in most cases, the researchers were cautious about their perceptions. (Morell was expressly interested in what they really thought, so she tended to go after those perceptions.)

      “Constructivists like Lisa Feldman Barrett and Joseph LeDoux make a distinction between survival circuits and actual felt emotions.”

      Yes, and I can’t help wonder the same thing as with Illusionists: what’s the difference?

      Emotion and sentience seem to go hand-in-hand, perhaps understandably since sentience essentially means the ability to feel. If animal minds are on a spectrum, perhaps they simply have crude versions of whatever we have.

      That said, I do wonder about fish and insects with regard to pain. But I’m not convinced fish and insects (and lizards) are anything more than biological algorithms. I think they’re entirely survival wiring, no cognition to speak of. Possibly not even anything it is like.

      Although maybe they’re just that far down the spectrum.

      “I’ve also found it profitable when hearing about sophisticated capabilities in simple animals, like ants and fish, to follow the citations and see what the studies actually say.”

      Yeah, the book is just a tour, and there is some of it I wouldn’t mind following up on. I will say that, if the actual research is anything like the high points she touched on, it’s pretty impressive. At the same time, given it’s a spectrum, it’s also not surprising.

    • Wyrd Smythe

      FWIW, quoting from the book:

      “By electrically stimulating the brains of rats and guinea pigs Panksepp has mapped and defined seven fundamental emotional systems found in the mammalian subcortex. He calls these FEAR, RAGE, LUST, CARE, PANIC/GRIEF, PLAY, and SEEKING — and uses all caps to spell them to emphasize that they are scientific terms. Each one represents a specific system in the brain and not simply the sensation, say, of fear or lust. The seven networks serve similar functions in all mammals, from rats to humans.”

      So it doesn’t sound like those are based on behavior so much as specific neural systems in the brain.

      • SelfAwarePatterns

        It seems like the big difference between humans and other animals is symbolic thought, although even that may be more a matter of degree than sharp distinction. Animals seem able to form simple symbol-concept associations, but they show limited to no ability to manipulate them. For example, some apes can do sign language, but we’re not talking about sentences with grammar. It’s more learning their own name, the name of food items, and simple actions, like “feed”.

        The difference between survival circuits and felt emotions is that survival circuits are more reflexive. The felt emotions are used in reasoning, and enable higher level circuitry to decide which survival circuits to allow or inhibit. I think they’re right to make the distinction, but wrong to conclude that only humans have felt emotions. They are right that actually demonstrating it in animals is tough.

        On Panksepp, yes, he stimulated the circuits, but the way he decided what each circuit was about was based on the behavioral results. The problem is distinguishing survival circuits’ motor output from something where felt emotion was in the loop. A principle of animal research is you don’t attribute something to higher cognitive function that can be attributed to simpler mechanisms.

        It’s worth noting that Panksepp’s current camp considers the consciousness of these primal emotions to be an anoetic form of consciousness, a version that only comes about if we accept that there can be phenomenal consciousness without access consciousness.

      • Wyrd Smythe

        Symbolic thought is certainly necessary. As you say, animals can understand representation, but have no grammar or syntax. (Alex the parrot was unique in having some rudiments of that.) Our complex, rich language is part of that gap I mentioned. Maybe it’s not symbolic thought itself so much as the creative ability to manipulate symbols and build on them.

        One of the points of the book is that (at least in biological beings) emotions and thoughts are the same thing (when talking about animals capable of having thoughts in the first place).

        Consider rats. Panksepp discovered they are incredibly sensitive to the scent of cats. Researchers who own cats got unresponsive results where others didn’t. Cat scent essentially makes them fearful, even in rats with no cat experience for generations. It’s deeply programmed.

        It’s a survival circuit, clearly, but the fear is real. Human phobias are often based on imaginary things, but the fear they cause is real enough.

        “On Panksepp, yes, he stimulated the circuits, but the way he decided what each circuit was about was based on the behavioral results.”

        But as opposed to what? It’s not like he could ask the rats.

        You’ve said before that a black-box that seemed conscious, say it could pass a Rich Turing Test, pretty much needs to be considered as conscious regardless of the mechanism. Doesn’t similar logic apply here? If rats act in ways that very clearly seem fearful, can’t that be taken as fear? What more would be required?

        “It’s worth noting that Panksepp’s current camp consider the consciousness of these primal emotions to be an anoetic form of consciousness,”

        I suspect to some extent that’s true. That we can fake or suppress emotions doesn’t mean we can literally control them — you can’t not feel what you feel in the moment. Our emotions are primal and have deep roots. (We have our own versions of cat scent. Heights, for instance.)

        I don’t know about P.C. versus A.C. here, but I do think emotions come from our “unconscious” thoughts sometimes. (I do also like Freud’s Id-Ego-Superego model, at least as a metaphor. Our stronger emotions tend to come from the Id in that model.)

      • SelfAwarePatterns

        Definitely I think complex symbolic thought is more a result of a lot of foundational capabilities that, while not necessarily unique to humans, are much stronger in us, such as recursive metacognition.

        On comparison with the Turing test, first I’m not sure the behavioral evidence Panksepp uses would qualify as passing that test, even one modified for a non-language animal. He stimulates a certain circuit, a fragment of the system, and gets a set of motor responses he interprets as, say, RAGE. Is that actually rage as we understand it? Suppose we built a robot that growled and bared its teeth when a certain circuit was stimulated. Would we accept that the robot was feeling rage?

        A more careful test would be to put the animal in a situation where it receives multiple stimuli, and then has to choose which one to allow and which to inhibit. Doing so demonstrates that it feels the valence, in that it can use that valence in decision making, that it’s not just a reflexive reaction. But notably, the animal’s ability to engage in that kind of processing seems to require more than just Panksepp’s primal emotion circuits.

        In the end, consciousness is in the eye of the beholder. There’s no agreed upon definition we can use to resolve different interpretations. But something like volition is a more precise term, and subject to testing. Whether or not we associate consciousness with volition is, of course, a philosophical matter. But whether the capability of volition is present seems like a testable proposition.

        Most importantly, without volition, what adaptive reason is there for an animal to actually feel something it can only react reflexively to? What is a “feeling” except the information to the reasoning parts of the brain that the reflexive parts are reacting a certain way?

        One of the problems with discussing emotions is that the terminology is a mess. Does “emotion” refer to the lower level action program? Or the conscious feeling? Or both?

        It’s worth noting that the feeling is a complex psychological state with many causal factors other than just the lower level action program(s).

      • Wyrd Smythe

        Recursive meta-cognition, for sure, and also that creative ability. Humans name things (which is why I was struck by birds possibly having names).

        “On comparison with the Turing test, first I’m not sure the behavioral evidence Panksepp uses would qualify as passing that test,”

        To be clear, the Turing Test was with regard to a putative “conscious” black box. I was saying that by analogy if we observe behaviors under artificial stimulation that match behaviors we see volitionally — in circumstances where the emotion seems appropriate — then I think the assignment is a good first approximation.

        Something I’d take provisionally until shown evidence otherwise; put it that way.

        “Suppose we built a robot that growled and bared its teeth when a certain circuit was stimulated. Would we accept that the robot was feeling rage?”

        I’m sure some would, but we don’t create the rats. Better to say we discover a machine that has complex behaviors with strong parallels to both human and animal behaviors. We might, for instance, observe rage behavior when the machine was seriously frustrated. (Loud vocalizations, destructive or frantic behavior, etc. Tantrums, basically.)

        If we then artificially stimulated parts of this machine and saw the same behaviors, it seems reasonable (at a first approximation, anyway) to assume the stimulated system had something to do with the behavior.

        My impression (per the book) is that Panksepp is defining neural correlates, but I’d have to dig into his papers to really know what more there is to it. (I would assume you’d be on board with neural correlates, so I also assume there must be more to the picture.)

        “But notably, the animal’s ability to engage in that kind of processing seems to require more than just Panksepp’s primal emotion circuits.”

        That makes sense to me. AIUI, those are just meant as basic building blocks. Brains are holistic; these fundamental systems don’t work in isolation.

        I think, too, that even if the circuitry is simpler and more reflexive (again, we’re talking spectrum here), that doesn’t mean emotions aren’t felt — they still have cognitive effects even if cognition is rudimentary.

        A hooked fish in the boat behaves in a way that appears as distress — violent thrashing and gasping. An automatic system might behave the same, but even such a system’s circuits in such a state are probably varying wildly in something similar to mental panic. A fish brain, much more complicated than any machine we can make now, would likely also have wildly firing circuits. The neural net would be in a situation it has no programming for and which contains damage-indicating inputs.

        In such a system that seems a good analogue for “panic” — more complicated systems have more complicated responses, but I think there is a spectrum here.

      • SelfAwarePatterns

        I’m on board with neural correlates. But often a phenomenon that seems like a clear concept to us doesn’t map cleanly to any discrete neural correlates. Barrett’s point is that individual cases of anger, for instance, don’t have the same neural correlates. It isn’t even guaranteed that they intersect, except in what may be the reporting systems.

        Barrett does allow for the four Fs (feeding, fighting, fleeing, mating) mapping to survival circuitry, which actually accounts for four of Panksepp’s primal emotions. And the stuff about anoetic consciousness really makes me wonder how much the differences between the two camps amount to terminology.

        I do agree that it’s a spectrum. It’s why you so often see me talk about hierarchies. Whether another species is feeling an emotion as we understand “feeling” is somewhat of a meaningless question. As they get closer to us in the taxonomy, their experience will be closer to ours, with no sharp line, no point where a light bulb suddenly comes on.

      • Wyrd Smythe

        I don’t know that there’s a lot of daylight between us when it comes to consciousness in biological Earth beings. (What difference there is seems mostly to be where we are on the subjective-objective scale. I’m, perhaps, a bit more inclined towards provisional duck typing, so to speak.)

      • SelfAwarePatterns

        You might be right. Although I’m not sure what you mean by subjective-objective scale. On duck typing, I’m probably more inclined toward reductionist categorizations.

      • Wyrd Smythe

        I meant you seem to see more things as subjective than I do. We just talked about morality on your blog. Consciousness is obviously another one. Maybe it’s more accurate to say I get the impression you see the subjective nature (which I agree exists) as more of a showstopper than I do. I think we can, at least to some extent, get past that with analysis.

        Definitely you’re more reductionist than I am, although it can depend on the topic. 😉

      • SelfAwarePatterns

        Ah, ok. I actually don’t see an understanding of something’s subjectivity as a showstopper. I think it tells us to focus our efforts elsewhere. For example, instead of trying to determine some elusive moral truth, maybe we should just focus on whether a rule is in our collective best interests. And instead of trying to figure out whether X is conscious, let’s figure out what specific capabilities X has.

        But this is probably me being more reductionist. 🙂

      • Wyrd Smythe

        I think it really does take all types — no one’s view is complete. I think we converge on truth by considering lots of inputs and finding the ones that make the most sense. And there are many things where I think there is no one right way to view them. (For example, conservative and progressive views of life are both valid, and I think a society needs both.)

        Besides, if we both saw life the same way, we’d have nothing to talk about. 🙂

        (FWIW: I noticed your use of the phrase “some elusive moral truth”. The difference is I don’t think they’re so elusive. It’s that the basically simple ideas are so complex to apply. Kant’s C.I., for instance, or the Golden Rule. Simple ideas, but real life is complex.)

        Also FWIW, the ironic thing to me about our views on consciousness is that you’re a type-A materialist, so you should believe (and I think you do) that there is a mechanism of consciousness in brains and we can discover that mechanism. Consciousness in type-A materialist brains is an objective phenomenon. The subjective experience of consciousness is just what it feels like to be a sufficiently complex brain mechanism. The “hard” problem doesn’t need any quotes around it.

        So it seems that a type-A materialist should see consciousness as an objective thing. (Certainly it seems a computationalist should — how can HW+SW be created without a precise definition?) We ought to be able to eventually say with confidence, for example, whether an apparently comatose person is conscious. And be able to identify the level of that activity in animal brains.

        Meanwhile, I’m the one who goes so far as to be open to the idea humans have some sort of ineffable metaphysical soul. Or, more reasonably, that physicalism has aspects that are forever ineffable — that some things are always beyond us.

        But if not, then under physicalism, I do believe consciousness to be an objective phenomenon we can discover.

        It just seems a little backwards to me sometimes. 😀

        I do take your point that a lot of it is what people say or how they see it or what they will agree to, but that’s exactly my point about subjectivity. It makes discussion such a challenge.

      • SelfAwarePatterns

        Your description of Type-A materialism, assuming we’re talking about Chalmers’ categories, might more closely align with Type-B materialism.

        The type-A materialist denies that there is a “hard problem” distinct from the “easy” problems; the type-B materialist accepts (explicitly or implicitly) that there is a distinct problem, but argues that it can be accommodated within a materialist framework all the same.

        …The type-A materialist, more precisely, denies that there is any phenomenon that needs explaining, over and above explaining the various functions: once we have explained how the functions are performed, we have thereby explained everything. Sometimes type-A materialism is expressed by denying that consciousness exists; more often, it is expressed by claiming that consciousness may exist, but only if the term “consciousness” is defined as something like “reportability”, or some other functional capacity. Either way, it is asserted that there is no interesting fact about the mind, conceptually distinct from the functional facts, that needs to be accommodated in our theories. Once we have explained how the functions are performed, that is that.

        …Type-A materialism offers a clean and consistent way to be a materialist, but the cost is that it seems not to take consciousness seriously. Type-B materialism tries to get the best of both worlds. The type-B materialist accepts that there is a phenomenon that needs to be accounted for, conceptually distinct from the performance of functions, but holds that the phenomenon can still be explained within a materialist framework.

        http://consc.net/papers/moving.html

        I’ll say this for Chalmers, he is pretty good at describing viewpoints he disagrees with. I’m not wild about the comment that we don’t take consciousness seriously, since it begs the question, but otherwise it’s hard to improve on.

        All of which is to say, from your perspective, you might be giving me too much credit. 🙂

      • Wyrd Smythe

        ROFL!

        It’s been a while since I read that paper, but it sounds like type-A denies subjective experience, that it’s the territory Chalmers carves out for Illusionists.

        Do you disagree subjective experience is real and requires explaining? (I believe you have some panpsychism sympathies? Is it that everything has subjective experience?)

        Kind of a funny phrasing: Is the subjective objective?

        (I do generally like Chalmers a lot!)

      • SelfAwarePatterns

        I think subjective experience is real and being explained by the research on what Chalmers calls the easy problems. Chalmers would probably lump me in with the Type-As using what he considers a deflated definition. Although it’s worth noting that I agree with the illusionists ontologically, just not with their terminology.

        My only sympathies for panpsychism is that, if I were troubled by the hard problem, I could see its appeal.

      • Wyrd Smythe

        “I think subjective experience is real and being explained by the research on what Chalmers calls the easy problems.”

        What explanation is there for subjective experience? I’m under the impression it’s still a big mystery as to how it occurs in an information processing system.

        “Although it’s worth noting that I agree with the illusionists ontologically,”

        I have to admit, I keep forgetting your commitment to reductive functionalism. From my perspective, it’s something you’re missing. To me, the view doesn’t seem alternative so much as incomplete. I think the reality is richer than the view permits.

        (Their terminology is definitely unfortunate.)

      • SelfAwarePatterns

        When it comes to subjective experience, a lot depends on accepting it as a complex composite phenomenon that can be broken into its constituents. Many outright refuse to do that. It closes off any real chance of an explanation.

        Consider Chalmers’ easy problems:

        the ability to discriminate, categorize, and react to environmental stimuli;
        the integration of information by a cognitive system;
        the reportability of mental states;
        the ability of a system to access its own internal states;
        the focus of attention;
        the deliberate control of behavior;
        the difference between wakefulness and sleep.

        Let’s add in prediction of automatic dispositions and ongoing interoceptive monitoring of the body. There are almost certainly others we could add, and we can further divide the items on Chalmers’ list into additional sub-capabilities, but they would all be functional components.

        Consider the relation of each of these to subjective experience. What would it mean if any one of them were removed, or altered? What would it mean if all of them were removed?

        Now, considering all of their contributions, what is left of subjective experience to explain? If you list actual experiences, consider if those experiences could be reduced to the components above. If not, what would be missing?

      • Wyrd Smythe

        The bottom line is that subjective experience is not something we can explain. 😉

        There is nothing in the composite, nor the constituents, that even suggests there is something it is like to be a mechanism. Subjective experience, for now, is entirely outside the framework of physics.

        Which is exactly why Chalmers can define it as the “Hard Problem” without immediate contradiction. Because it involves something that seems (but may well not be) outside of physics.

        There does, to me, seem a contradiction between the idea that consciousness is so complex that we so far can’t define it and the idea that there is no “Hard Problem” — as we’ve said before, the subjective nature of consciousness is one of the confounding issues.

        “Consider the relation of each of these to subjective experience. What would it mean if any one of them were removed, or altered? What would it mean if all of them were removed?”

        No doubt the subjective experience would be altered. You can add altering one’s blood chemistry (through drugs or alcohol). I don’t understand what that has to do with anything.

        “Now, considering all of their contributions, what is left of subjective experience to explain?”

        The Hard Problem: How and why it occurs.

        Again, nothing in our physics suggests or hints at the idea of a physical system having subjective experience. (Which we agree is real — is objective. Which, from a physicalist point of view, means we need new physics. That’s the physicalist version of the Hard Problem; we need new physics. It’s almost like needing SUSY.)

        “…what would be missing?”

        Why there is something it is like to be those systems.

        The way I see it, materialism begs the question here. It assumes the subjective mind is explained by physics as we know it because materialism requires that be the case. There is a commitment to a metaphysics involved. (You’re up front about that commitment, but I perceive it as giving you a bit of tunnel vision sometimes.)

      • SelfAwarePatterns

        I don’t see breaking the problem into manageable components as any metaphysical commitment. It’s what we do with any other type of problem. If it had failed, we might have been faced with something fundamental, but I don’t see that as the case.

        I do see the refusal to even attempt examining the components as most definitely a metaphysical commitment, one that begs the question in favor of dualism.

        But this is the old disagreement. I would have been shocked if we’d resolved it this time. 🙂

      • Wyrd Smythe

        “I don’t see breaking the problem into manageable components as any metaphysical commitment.”

        This unfairly conflates two separate things I said.

        At the top of my reply, I wrote: “There is nothing in the composite, nor the constituents, that even suggests there is something it is like to be a mechanism”

        Which was in response to: “When it comes to subjective experience, a lot depends on accepting it as a complex composite phenomenon that can be broken into its constituents.”

        So I’m disagreeing that it depends on whether subjective experience can be broken down into components. Firstly, because I don’t see that it affects the argument there is no physics to account for subjective experience (which surely a material reductionist requires). Secondly, because a common view of subjective experience itself is that it is irreducible and incorrigible.

        The first point, it seems to me, suffices. Even if one can break subjective experience into constituent parts, does it show how phenomenal experience arises? As far as I know, the answer is no.

        At the bottom of my reply, I wrote: “The way I see it, materialism begs the question here. It assumes the subjective mind is explained by physics as we know it because materialism requires that be the case. There is a commitment to a metaphysics involved.”

        Specifically, the metaphysics of assuming type-A materialism is a correct view of reality. This is a completely different statement from my reply about composites and constituents.

        “It’s what we do with any other type of problem. If it had failed,…”

        You know, I’m sure, that I agree completely. I’m all about analysis; I have never said, nor indicated, otherwise.

        But (and this seems to be fact to me) breaking down the problem has failed… so far. We do not have an explanation for why there is something it is like to be a mechanism.

        I think we agree the answer almost certainly lies in what happens in a physical system with the right properties (neural network, composition, behaviors, etc). The difference — where this sub-thread started, really — is that (I think) you’re more committed to that being the case than I am. I’m more open to it being physically unsolvable or, not impossibly, involving metaphysics of some kind.

        “I do see the refusal to even attempt examining the components as most definitely a metaphysical commitment, one that begs the question in favor of dualism.”

        Such a putative refusal (which, again, never came from me) could be on any number of grounds, but likely many of them would be metaphysical. “Do not tamper with things beyond yer ken!” That wouldn’t have to beg dualism, but, again, probably many of them would.

        But it’s irrelevant, since no one here is advocating that. 🙂

        “But this is the old disagreement. I would have been shocked if we’d resolved it this time.”

        I sometimes think it’s that we never really clarify the actual points of disagreement. In places where we really understand the issues but see them differently, we have clarified them and moved on. In other places, it seems hard to get to the heart of the matter. Those are the ones we seem to keep returning to.

        I’m fine with “agreeing to disagree” but I like to be clear on exactly what I’m disagreeing about!

      • SelfAwarePatterns

        Well, I certainly wasn’t trying to unfairly conflate things. And I’m totally on board with clarifying our disagreement.

        Maybe it’s in what we think subjective experience actually is. For me, the topics of the “easy” problems are the components of subjective experience, of “something it is like”. I can’t see any reason why any aspect of subjective experience that anyone can name, can’t be handled by one of those functional components, or ones like them.

        Which is why I usually ask, what’s missing? I see repeats of “subjective experience”, “what it’s like”, “phenomenality”, etc, as just reiterating the overall problem, not addressing the question about missing aspects.

        It’s why I asked you to consider what effect removing each of those components would have on subjective experience, up to and including removing all of them. If we remove all of them, do we still have experience? I don’t think we do. But if we do, what is left?

        I’m trying to think if there’s any other way to clarify this. But maybe I should just stop here and hand it over to you, since I may be completely off base on what you’re thinking. Or if I’m just completely missing something, maybe you can tell me what it is.

      • Wyrd Smythe

        Sign me up! (As an aside, I imagine a TV show, which I’d call Consensus, that would have two panels of people, the panels representing two sides of an issue. The point of the show would be drawing out the specific points of disagreement and, in particular, agreement. As I’ve said many times, I have an abiding faith in the dialectic. 🙂 )

        “Maybe it’s in what we think subjective experience actually is.”

        Maybe so. For many years I’ve been meaning to write a post about our “inner voice” — recently I’ve seen attention to the idea in articles about how some people don’t have one (or say they don’t). I have a very strong inner voice, and ever since college I’ve pondered it.

        Anyway, one starting point might be asking if you have an inner voice. (My guess is yes.) That inner voice comes from the inner subject. (One reason I haven’t posted about this is needing to look into bicameral mind theories. That inner voice was at first thought to be the voice of the gods. If dissociated, it may be what people hear when they hear voices. It’s a fascinating topic.)

        It’s Descartes’ famous statement. It’s the one true fact we all know for certain. Our subjective identity, that unified experience of me.

        “Which is why I usually ask, what’s missing?”

        I agree a complex phenomenon is comprised of functional parts. I agree that, from the outside, an observer could analyze human behavior as a set of functions that could represent that behavior.

        In a sense, this is Chalmers’ zombie argument. Human behavior can be replicated by a set of functions designed to generate certain behaviors. (What might be a point of discussion is that such a set would be based on observation, analysis, and deliberate design, in contrast with human brains, which evolved.)

        But is human behavior on just a functional basis coherent? The zombie argument asks if something, phenomenal experience, is missing. Kind of the same question you’re asking.

        A question I ask about zombie world is: Why do they have art? It could be that very high-level programming just requires them to do so, but that seems to defeat the argument. What low-level functions cause a drive to create and appreciate art (or music)? That seems a hard question to answer.

        More to the point, while you don’t like the answer, it is true there is something it is like to be a human. There is, specifically, something it is like to be you, as there is with me.

        You say this is reiterating the problem, and perhaps that’s true. It’s pointing out the Hard Problem because the Hard Problem is what we have right now. There is a mystery about how an information processing system has subjective experience. No one can say what’s missing because we don’t know. That’s what makes it the Hard Problem — where does subjective experience come from?

        “It’s why I asked you to consider what effect removing each of those components would have on subjective experience,…”

        That’s actually a really interesting proposition, and I’m going to give it a whack. (Gimme a minute. 🙂 )

  • Wyrd Smythe

    “Consider Chalmers’ easy problems:”

    With the caveat that many of them are hard… just not Hard. 😉

    You asked:

    “Consider the relation of each of these to subjective experience. What would it mean if any one of them were removed, or altered? What would it mean if all of them were removed?”

    If all — meaning all, not just the ones listed — functionality of the brain were removed, of course there would be no consciousness. Removing different parts of course removes or alters some part of consciousness.

    You also asked:

    “Now, considering all of their contributions, what is left of subjective experience to explain? If you list actual experiences, consider if those experiences could be reduced to the components above. If not, what would be missing?”

    The first question is easily answered: why any or all of these feel like something to us. You say that just restates the question, and perhaps it does. The question is all we have right now.

    We can mix and match functions, but that never explains why there is something it is like to be a human with all those functions.

    In terms of the specific functions:

    “the ability to discriminate, categorize, and react to environmental stimuli;”

    At what level? My computer can do that for certain definitions of those terms. Judges do it on a much higher level. The first case is objective; the second is subjective. Therefore subjectivity is orthogonal to this functionality.

    I suspect one could have subjective experience without these functions.

    “the integration of information by a cognitive system;”

    Isn’t that very vague? What cognitive systems lack subjective experience? What kind of integration of information? That, again, is a spectrum from simple computer tasks to complex human ones.

    I suppose one could have subjective experience without integration or cognition.

    “the reportability of mental states;”

    Doesn’t this assume subjective mental states exist? Otherwise, how could they be reported? What exactly is a mental state, anyway? A neural correlate or a subjective experience?

    Certainly one could have subjective experience without being able to report on it. On a simple level, there are profound experiences one can have where words fail. For me, skydiving, seeing Saturn through a telescope, being deeply in love, certain music, seeing the Grand Canyon, and a number of others I could name are like that. Profound subjective experiences I have no idea how to report.

    “the ability of a system to access its own internal states;”

    My car and my computer do that depending on what is meant by internal states. To the extent this means mental states, it repeats the previous point, so same answer.

    If it means, my knee is bugging me, then it’s a lot like my car detecting low tire pressure in the accessing internal states sense. What differs is my subjective experience, which links knee pain to people I’ve known who needed knees replaced and a lot of other concerns.

    “the focus of attention;”

    My subjective attention? Or just task switching like a computer?

    “the deliberate control of behavior;”

    Doesn’t this also assume subjective experience? What is doing the deliberating?

    “the difference between wakefulness and sleep.”

    Isn’t sleep a thing conscious animals do? Isn’t it thought to be part of processing all the things the animal experienced? Or, in dreams, having imaginary subjective experiences?

    Or just “sleep” like my computer can do?

    “Let’s add in prediction of automatic dispositions and ongoing interoceptive monitoring of the body.”

    I’m not sure how those factor into subjectivity. I’m not sure how any of these really factor into subjectivity. Some don’t seem terribly relevant to it, others seem more the result of it.

    And I don’t see how any of it addresses (let alone answers) the question: why is there something it is like to be a human. (Or, presumably, a dog or chimpanzee or parrot or elephant or dolphin or whale or, for that matter, bat.)

    As a material reductionist, don’t you feel there should be physics for that?

    • SelfAwarePatterns

      I definitely have an inner voice. If someone said they didn’t, I would wonder if they understood the question. I suppose if someone grew up with a language disorder, they might not have one. But we all seem to practice what we’re going to say to ourselves, which with time evolves into our own inner voice.

      Interestingly, I took speed reading lessons many years ago. One of the speed reading techniques is breaking that inner voice, so that what you read doesn’t go through it, and you take it in faster. That’s when I decided speed reading isn’t for me. I like my inner voice. It’s an old friend. I didn’t want to do anything to damage it.

      I think traditional philosophical zombies are only coherent if both dualism and epiphenomenalism are true. Behavioral zombies are more plausible, but I don’t see how we could ever rule out that what we think is a zombie isn’t actually conscious. Overall, I think the zombie concept is fatally anchored in dualistic intuitions. Do away with those intuitions, and zombies disappear as a plausible or coherent concept.

      I do think you ask good questions about them. Why indeed would they have art? Or talk about consciousness?

      “We can mix and match functions, but that never explains why there is something it is like to be a human with all those functions.”

      Why not? What about “something it is like” is not explained by them?

      So, if you couldn’t discriminate, categorize, and react to stimuli, then could you have the experience of Saturn you mentioned? How would you know which is the planet, the rings, or in general what you were looking at? And how could you react with any emotion to it if the react to stimuli functionality is absent?

      “I suppose one could have subjective experience without integration or cognition.”

      I take integration to be things like integrating information across sensory modalities, memory, affective reactions, and imagination. Again, consider the examples you listed without these capabilities. Seeing the Grand Canyon and understanding what you’re seeing, seems impossible without it.

      On reportability, the way Chalmers words it does assume mental states. So let’s just change it to the reportability of introspected brain states.

      “What differs is my subjective experience, which links knee pain to people I’ve known who needed knees replaced and a lot of other concerns.”

      Right, but we accounted for that above with integration. Is there something missing from that accounting?

      “My subjective attention? Or just task switching like a computer?”

      The multilevel concentration of resources on certain stimuli, actions, or concepts. I imagine when you were looking at the Grand Canyon, there were other stimuli coming in, but your focus was on the canyon. Now try to imagine that experience without attention.

      “Doesn’t this also assume subjective experience? What is doing the deliberating?”

      It definitely assumes all the other functionality. Imagine it happening without any of those other things.

      “I’m not sure how those factor into subjectivity. I’m not sure how any of these really factor into subjectivity.”

      I’ve tried to answer that in my responses above.

      “As a material reductionist, don’t you feel there should be physics for that?”

      Well, I think the physics for the above functional systems are discoverable and are being discovered. And I take them to be components of “something it is like”, subjective experience, phenomenal experience, or whichever synonymous phrase we might want to use. They are the meat of it. They are the solution.

      You said you wanted clarification on the disagreement. Is it a little clearer now?

      • Wyrd Smythe

        “I definitely have an inner voice. If someone said they didn’t, I would wonder if they understood the question.”

        I know the feeling! Apparently that’s the common reaction. Yet there appear to be people who really don’t. Or (and this is my suspicion) they haven’t learned to hear it (yet).

        Interesting about the speed reading. Something I noticed a long time ago is that, if I actually do hear the words in my head, my reading is negatively impacted. Certainly slower, but comprehension seems to go away — the voice is too distracting. At the same time, I definitely “hear” what I’m reading, so it seems there is a “thinking” voice and a “reading” voice for me, maybe.

        “I think traditional philosophical zombies are only coherent if both dualism and epiphenomenalism are true.”

        I guess I don’t really understand that, but any kind of zombies, to me, begs the question by assuming such are possible. They’re just too artificial for me to really even think about.

        My attitude is: Show me a zombie world that actually exists, and then we’ll have something to talk about. 🙂

        “Why not? What about ‘something it is like’ is not explained by them?”

        That really is the heart of it. I would ask what is explained by them. None of the systems listed themselves have subjective experience. Why would an aggregate of those systems have it? What comes from the aggregate not found in the pieces?

        Isn’t there something of a combination problem here? The pieces don’t seem to have subjective experience, so why does the whole?

        There is also the point that subjective experience can consist of, or depend on, various functions, but those may be just its accessible aspects. The functions listed are certainly part of subjective experience, no question about that.

        But I don’t see that they comprise it, and they certainly don’t explain it.

        “Well, I think the physics for the above functional systems are discoverable and are being discovered.”

        Which means we don’t know what they are right now. And you are expressing your belief in what the solution will turn out to be.

        All a-okay, and you might be right. My original point was just that subjective experience is currently a mystery. And that you have a commitment to an outcome.

        “You said you wanted clarification on the disagreement. Is it a little clearer now?”

        Maybe. Is it fair to say the claim is that these functions are sufficient for a system to have subjective experience? Not report it — have it. (In other words, for the claim to be correct. That’s the big problem with zombies — they have to make that claim, but it’s a lie.) That any system implementing these functions, and these functions alone, is necessarily conscious and having subjective experience?

        In other words, functionality = subjectivity?

      • SelfAwarePatterns

        The Windows kernel doesn’t have Windows, which it wouldn’t, being part of Windows. Likewise, I don’t think we should expect the systems that provide the underlying functionality of subjective experience to have subjective experience.

        My only commitment is to following the evidence where it leads.

        I think I’ve made it clear that I think it’s all functionality. I’ve also noted there may be additional functional components not listed, although I think Chalmers’ list captures a good portion of it.

      • Wyrd Smythe

        “Likewise, I don’t think we should expect the systems that provide the underlying functionality of subjective experience to have subjective experience.”

        That’s fair. The question then is how subjective experience arises in the aggregate. I believe we’re in agreement the answer is: We don’t know (yet). Yes?

        Are we in agreement no aggregate system (we know of) other than brains has subjective experience?

        “My only commitment is to following the evidence where it leads.”

        But your very next sentence is:

        “I think I’ve made it clear that I think it’s all functionality.”

        That’s the commitment.

      • SelfAwarePatterns

        On how the functionality provides subjective experience, I fear I might sound like a broken record here, but I discussed above the effects of the presence or absence of functional capabilities.

        There is definitely still an enormous amount to learn, but simple statements that “we don’t know yet” imply we’re completely ignorant, and I think that’s misleading. There is ongoing progress.

        No other system currently has the functionality of an organic brain.

        Currently, we only have evidence for functionality, and I only see functional problems to be resolved. Which is why I think it’s all functionality. If evidence pointing somewhere else is discovered and widely validated, my thinking will change.

      • Wyrd Smythe

        “On how the functionality provides subjective experience, I fear I might sound like a broken record here,”

        This is where the debate normally stalls because you stand pat on a list of functions and I stand pat on subjective experience. Let’s see if we can get past it.

        FWIW, we would agree if the functionality in question is that of the brain itself. Clearly whatever the brain does produces subjective experience. I know brains produce consciousness and subjectivity. I suspect (but don’t know) a physical device built along the same principles and at the same scale would do likewise. I’m skeptical (as we’ve often discussed) that any other system will (but I could be wrong).

        We part ways with regard to high-level functions interacting to produce the same effect. My problem is not seeing subjective experience in these high-level functions or their aggregates. I fully acknowledge the premise it might work that way, but I just can’t see it as more than a belief.

        Is it fair to say, then, that you believe a set of black boxes that provide the functionality, regardless of how they provide it, when connected, would be conscious and have subjective experience? (Assume some black boxes are I/O systems.)
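
        Schematically, I mean something like this (a deliberately empty sketch; only the wiring and the functional roles are specified, and how any box works inside is left completely open):

            # "Black box" wiring: each box is only required to provide its
            # function; how it does so is left entirely unspecified.
            class BlackBox:
                def process(self, signal):
                    raise NotImplementedError("any implementation at all will do")

            perception = BlackBox()   # I/O: takes in stimuli
            memory = BlackBox()       # integrates with stored information
            attention = BlackBox()    # concentrates resources on the signal
            action = BlackBox()       # I/O: produces behavior

            def system(stimulus):
                # The claim at issue: connect boxes providing the right
                # functionality, however realized, and the whole would be
                # conscious and have subjective experience.
                return action.process(attention.process(
                    memory.process(perception.process(stimulus))))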

        “…but simple statements that we don’t know yet imply we’re completely ignorant,…”

        I stand by my “simple” statement that we don’t know yet. Because we don’t know yet. (I can’t control what you think I’m implying, but I wish you’d try to take my words at face value. I do take care to say what I mean and mean what I say, no more, no less.)

        “Currently, we only have evidence for functionality,”

        What about the evidence of your every waking moment? Do you feel like a zombie?

      • SelfAwarePatterns

        “My problem is not seeing subjective experience in these high-level functions or their aggregates.”

        What would it look like to see subjective experience in them? What evidence would you have to see?

        As I noted early on in this discussion, a lot depends on being willing to dissect subjective experience, to look at the effect the presence or absence of various functional capabilities have on it. We can keep circling around this veil, but without piercing it, progress seems very difficult.

        In principle, functionality is always multi-realizable. (Although not necessarily with the same trade-offs in size, performance, and energy efficiency.)

        As far as I can see, the evidence of my every waking moment is evidence of functionality, of my system’s level of alertness, taking in information from the environment, integrating it with memories and proclivities, simulating possible actions, selecting actions, self monitoring, etc.

        Would a zombie ever know it was a zombie?

      • Wyrd Smythe

        Wouldn’t it be awesome if future generations looked back at our debates like people today do Einstein and Bohr? 😉

        “What would it look like to see subjective experience in them? What evidence would you have to see?”

        To some extent, since I believe either new physics, or at least new understanding, may be required, if I could answer that effectively I could probably solve the problem. (Quantum physics is kind of in the same pickle. If we knew where to look for the next advance, that alone might be the answer. Like realizing the keys you can’t find are in the other room, and then you find them immediately.)

        What I can say is I want a set of physical principles that explains what it means for a system to have subjective experience. (Ha! I’m kind of asking for what you were asking for regarding moral calculus: specific principles. If, as I think is the case with morality, there are no specific principles, my search would be doomed. But I think morality is a diffuse social construct, whereas I think a conscious mind is a physically real something that can be fully defined, so I don’t guess the search is futile. Heh, but I suppose it could be a Hard Problem… [G,D&R])

        “As I noted early on in this discussion, a lot depends on being willing to dissect subjective experience, to look at the effect the presence or absence of various functional capabilities have on it.”

        Which you know I’m totally on board with. My problem is that the premise: “Add together all the necessary functions and subjectivity arises” seems like a magic spell to me. As I’ve said, I see it as an unproven belief at this point.

        I’m seeking what underlies those functions and integrates them such that subjectivity emerges. (I think the functions are emergent, too. It’s [system]→[functions]→[mind].)

        With any other system, from HVAC to cars to computers to human heart-lung, we can break them down into separate functions that comprise those systems. Those building blocks work together according to understood physical principles to make the system do what it does. We understand the principles of heating, cooling, engines, computation, circulation, and respiration.

        To me, the function list of subjectivity is a list of elephant parts with an assumption that putting them together results in an elephant. I think we don’t really understand elephant systems. (Yet. 🙂 )

        “In principle, functionality is always multi-realizable.”

        This is one of our bones of contention. I think there are important constraints and nuances.

        Abstract systems are easily multi-realizable. Text and mathematics, for instance, can be realized in countless ways. But a car motor, or my old example of a laser, are physical systems with physical requirements. The key distinction to me is that the results of different implementations provide identical value. Any form of readable text gives me the same information. Any (accurate) implementation of a calculator gives me the same calculations.
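
        For instance (just a toy sketch with a made-up example function), two entirely different realizations of the same abstract function deliver exactly the same value:

            # Two unrelated implementations of "sum the integers 1 through n".
            def sum_by_loop(n):
                total = 0
                for i in range(1, n + 1):
                    total += i
                return total

            def sum_by_formula(n):
                # Gauss's closed-form expression; no loop at all.
                return n * (n + 1) // 2

            # Different mechanisms, identical results for every input tried.
            assert all(sum_by_loop(n) == sum_by_formula(n) for n in range(200))

        As far as the abstract function goes, nothing distinguishes them; that’s the sense in which text and arithmetic are easily multi-realizable.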

        But consider a Tesla car. A computer models the system and uses that model to control an electric motor. That makes a Tesla perform extremely well, but one could also create a car with just the electric motor (and simple controls) that would perform the function tolerably well. But it’s not possible to implement that function with just a computer. When it comes to cars, the motor is the key thing. It provides RPMs and torque; the possible realizations of systems that do that are severely constrained. (Likewise there are severe constraints on what can produce laser light.)

        With brains, the premise is that a computer model of a brain or mind could be hooked up to the appropriate I/O devices and function as a conscious being. The premise may prove to be true, but right now it’s an open question.

        As you know, I’m a physical structuralist wrt consciousness, so I do think physically isomorphic alternate realizations ought to be conscious.

        “As far as I can see, the evidence of my every waking moment is evidence of functionality,”

        Who is it that enjoys a good book, tasty meal, or great tune? Who is it that sometimes feels sad or happy? Who is it that is curious about things? Who emerged from all those functions?

        I think it’s possible a reductive functional view might miss something. (If nothing else, there is the problem of knowing when one has captured all the required functions.)

        As an analogy, I think we agree that reducing the brain to just the functions of the network and synapses misses important things. Glial cells and myelin sheathing contribute, and proximity of neurons seems to matter (which means the brain has two networks: synaptic connections and proximity connections). So a true model of the brain has to account for those contributions. It can’t be reduced to just the network functions.
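
        As a deliberately crude toy (the weights, the blend factor, and the proximity term are all invented for illustration, not taken from any real neural model), the point is that a faithful description needs both kinds of connection:

            # Toy "neuron" whose activity depends on two networks: explicit
            # synaptic connections plus a proximity term standing in for
            # non-synaptic, local influence. All numbers are invented.
            def toy_activity(synaptic_inputs, synaptic_weights,
                             nearby_activity, proximity_factor=0.1):
                synaptic = sum(w * x for w, x in zip(synaptic_weights, synaptic_inputs))
                proximity = proximity_factor * (sum(nearby_activity) / len(nearby_activity))
                return synaptic + proximity

            # Reducing the model to the synaptic network alone (proximity_factor=0)
            # gives a different answer, so that reduction loses a real contribution.
            print(toy_activity([1.0, 0.5], [0.8, -0.3], [0.2, 0.4, 0.6]))
            print(toy_activity([1.0, 0.5], [0.8, -0.3], [0.2, 0.4, 0.6], proximity_factor=0.0))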

        IIRC, Chalmers has a good argument grounding subjectivity as distinct from functionality. I’ll try to find it.

        “Would a zombie ever know it was a zombie?”

        Well, it’s a conundrum, isn’t it? If they know nothing, or know they’re zombies, then they’re not really an analogue for us. If they think they are not zombies, their beliefs are false, and they’re not really an analogue for us. It also doesn’t work if we’re zombies (so their beliefs would be true), because then what’s the point of the analogy? It’s just a duplicate world.

        There’s just nowhere to go with zombies, man!

  • Wyrd Smythe

    An analogy that occurs to me is the blind men and the elephant. The various functions listed are the ears, legs, tail, etc. They are all true and necessary parts of the elephant, but they are just manifestations of it. (In this case, behaviors would be included in the traits.)

    Given these manifestations, one can conceive of building a simulated elephant that replicates all the traits. From the outside, the simulation could be indistinguishable from an elephant.

    At least for the observed subset of elephant traits, which could be an issue. What if real elephants act in ways that weren’t observed and therefore weren’t accounted for? Then the simulation would, at some point, fail to act like an elephant.

    The other observation is that elephants “know” they are elephants, but what should a simulation “know” about itself? If it knows it’s an elephant, isn’t it lying? If it knows it’s a simulation, then it’s clearly not an elephant. (This is also an issue with zombies. Do they think they’re having subjective experience?)

    So a functional analysis of consciousness seems to have an issue with observing all the required functions (how could you ever be sure you’d found them all?). It also seems to have a potential issue with its own self-awareness.

    The alternative is to recognize the parts, but understand the elephant as a whole — which is what the hard problem is: the whole elephant.
