The Giant File Room

After a weekend of transistorized baseball, it’s time to get back to wandering through my ponderings on consciousness. I laid down a few cobblestones last week; time to add a few more to the road. Eventually I’ll have something on which I can drive an argument.

There are a number of classic, or at least well-known, arguments for and against computationalism. They variously involve Pixies, different kinds of Zombies, people trapped in different kinds of rooms, and rock walls that compute. (In fact, they compute rooms that trap Pixies. And everything else.)

Today I’m going to ruminate on the world’s most unfortunate file clerk.

In 1980, John Searle posed a classic conundrum for computationalism in his paper “Minds, Brains, and Programs,” which introduced “the Chinese Room.”[1]

His argument involves a clerk in a very large file room.

The clerk’s job involves receiving request messages (on one of those old-fashioned air tube things), processing the request, and sending a reply on its way (again, through a tube).

Very standard bureaucratic task, but there’s a twist: The clerk can’t read the language of the requests or the replies! Everything is in a foreign language the clerk doesn’t know (for example’s sake, say Chinese).

This is where the giant file room comes in. The clerk, who is very fast, is able to match (or index) the symbols on the request to a record in the files. That record contains the reply.

All the clerk does is look up the record, copy the reply, and send it out.
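The clerk’s procedure amounts to a pure table lookup. A minimal sketch, with the request and reply strings (and the fallback from footnote [3]) invented purely for illustration:

```python
# A minimal sketch of the clerk's procedure: pure symbol matching,
# no understanding required. All entries here are illustrative.

# The Designer pre-filled the files with every anticipated request.
files = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "Fine, thanks."
    "今天天气如何？": "晴天。",      # "How's the weather?" -> "Sunny."
}

FALLBACK = "什么？"  # the canned "Say what?" reply for unmatched requests

def clerk(request: str) -> str:
    """Match the incoming symbols to a record and copy out the reply."""
    return files.get(request, FALLBACK)
```

Note that the clerk never interprets the symbols; matching and copying is the entire job, which is the point of the thought experiment.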

§

That’s from the inside. From the outside, it’s a different story.

All the requests the file clerk handles come from Chinese-speaking people making enquiries of the Giant File Room (GFR). As far as they can tell, a person is replying to them: the Grand Friendly Responder.

In particular, note that the replies all make sense relative to their enquiry.

§

This situation can be taken one of two ways:

¶ On the one hand, it’s meant to point out that the operator in the room doesn’t have any understanding of the request or the reply.

The operation is purely mechanical: a given set of symbols indexes (matches) a record somewhere in the file system. No understanding is required.

¶ On the other hand, it can also suggest the human mind is nothing but a lookup system. That all our responses are just indexed by perceived requests. That we are as mechanistic, as robotic, as the Chinese Room.

The problem is the question of where our sense of understanding the question and the answer comes from.

Here’s where illusionism (as I understand it) suggests, yes, that’s exactly what’s going on. What feels like understanding is just what that indexing “feels” like.

§

Many different arguments spring from this, particularly with regard to the intended point that a mechanistic system can appear “conscious” without having any phenomenal experience.

One of the bigger points involves seeing the room as a virtual or composite system that “understands” Chinese.

The idea is the clerk plus the files plus the looking up comprise that system. An analogy compares the clerk to a computer CPU and the files to software (as in an FSM).

Which, fair enough, but where’s the phenomenal experience?

Does any part of the room (or any computer) get annoyed at dumb questions or even just repeated questions? Does Google get impatient when you just can’t manage to phrase your query right?

The room is a behavioral zombie — something that acts human from the outside, but which has none of the human internals, including the autobiographical self.

What if one of the requests asks about phenomenal experience? Does the room lie? If it doesn’t, it gives away the game.

If it does, that contradiction makes it different from a “real” consciousness (it’s lying about something central to personal existence).[2]

Of course, illusionism (again, as I understand it), says we’re lying!

§

I do think the systems approach has a point. We need to consider the room as a whole.

For me that includes consideration of how the room came to exist. Metaphors need to be grounded in some sort of reality if they’re to be taken seriously.

The system I see doesn’t just include the clerk and the file system. It also includes The Designer of the file system. Someone — or some process — had to figure out how to index all those replies.

To the extent the GFR reflects an understanding, it reflects the understanding of The Designer who created it.

It is essentially a very good knowledge-capture or expert system, one we imagine is good enough to answer any reasonable question correctly and naturally. Or say, “I don’t know.”[3]

Note that one difference here is that the room can’t ever figure out the answer to a new question. It can only reply in ignorance.

§

So it’s a poor analogue for consciousness, is my point.

It’s exactly what it appears to be: a mindless lookup system capable of answering a limited set of inputs.

To make it a better analogue, runners would have to update the file system constantly to replicate learning. It might even simulate figuring out if future updates answer previously unanswerable questions.

But again, what’s the designing intent behind those runners and updates?

Something is shaping the room. Creating it, updating it, it all requires intentional design and implementation for the room to work.

The GFR is just a reflection of the conscious intent of The Designer.[4]

§

As an aside: At what point does a reply of “I don’t know” become a giveaway?

There is the obvious problem of saying it too much. Or otherwise pretending to be less than competent. That has been tried as a way to bluff the Turing Test. But if things are in earnest, then no fraud is permitted.

One giveaway might involve questions A, B, and C, where the system can answer the first two but not the third. Yet the third would be a synthesis of A and B.

As a suggestive example, if it knew (A) about sibling relationships and (B) about parent-child relationships, it wouldn’t necessarily know (C) that my parent’s sibling’s child is my cousin.

This particular omission would be glaring on the part of The Designer, but it provides a sense of the kind of thing I mean. There are lots of interlocking concepts in the real world. The file system needs to include them all.
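The gap can be made concrete. A pure lookup table answers A and B but fails on C, while even a trivial rule that composes the two kinds of facts can derive C. The relation names and family facts below are entirely invented for illustration:

```python
# Illustrative only: a lookup table knows facts A and B but cannot
# synthesize C, while a one-line composition rule can.

facts = {
    ("parent", "me"): "mom",          # fact A territory
    ("sibling", "mom"): "aunt",       # fact A territory
    ("child", "aunt"): "cousin-kim",  # fact B territory
}

def lookup(relation: str, person: str):
    # The room's method: answer only what was pre-filed.
    return facts.get((relation, person))

def cousin_of(person: str):
    # Synthesis: compose parent -> sibling -> child. No individual
    # record ever mentions "cousin".
    parent = lookup("parent", person)
    sibling = lookup("sibling", parent) if parent else None
    return lookup("child", sibling) if sibling else None
```

Here `lookup("cousin", "me")` comes up empty, because that question was never pre-filed, while `cousin_of("me")` derives the answer by combining records. The room, as Searle describes it, has only the first capability.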

It’s also possible weird gaps might show up over time.[5]

(As far as the question of determining if some random single system is “conscious,” I think the only answer involves spending time with it.)

§

This still leaves us with the idea of a system, perhaps a more complicated one than that poor file clerk, that does a better job of emulating the Grand Friendly Responder.

What’s involved in such an emulation, and what does it mean in terms of that system being really conscious? (And, therefore, what exactly is this “consciousness” property we’re seeking?)

There is always the additional question of what it means to appear conscious versus actually being conscious. To what extent does that matter?

Given the pervasive presence of ‘thinking’ machines in our lives, younger generations seem less likely to bother with the distinction. Acts like a duck is a duck.

Personally, I am interested in what it means to be (really) conscious.

Stay indexed, my friends!


[1] His argument isn’t new, it just became a meme. The idea has antecedents in older arguments, such as Leibniz’ Mill. (See also the SEP entry.)

[2] Not that we’d necessarily know that, but that wouldn’t matter from an objective point of view.

[3] The clerk has the fallback, when a request doesn’t find a match (because it’s impossible to pre-answer all questions), of a special reply that says, “Say what?”

[4] So when it replies about having phenomenal experience, it could answer (truthfully) on part of the room, “No,” or (equally truthfully) on the part of The Designer, “Yes.”

[5] I had a manager once who was pretty good about answering questions… except when corporate secrets prevented it. It was often possible to discern the outline of what he couldn’t talk about by exploring its boundaries.

In contrast, another manager was more of a sneak. His answers weren’t as clear or concise, so it was harder to see the outline of what he was hiding.


88 responses to “The Giant File Room”

  • SelfAwarePatterns

    At this point, you know my feelings on the Chinese room. My issue with unrealistic thought experiments is that the intuitions they evoke aren’t meaningful.

For the room to work as Searle described it, we would have to wait for possibly long periods of time while the harried individual looks things up. That delay would destroy any intuition that we were having a real conversation with someone. We can fix this by having a computer system either help with or take over the task, but then the intuition of the thought experiment evaporates.

    You can strengthen the scenario by weakening the room’s ability to perform, as you do, having there be questions it can’t answer that a healthy competent human could. But of course, now the room is in danger of flubbing the Turing test.

    But timing issues aside, if the room can reproduce the competent responses of an actual Chinese person, say describing the school they attended growing up, and successfully answering follow up questions about that school, their friends, teachers, etc, all staying consistent, then we have to consider the possibility that within the lookup procedures and files is an entity that thinks it grew up in China.

Aside from Broca’s and Wernicke’s areas, my brain doesn’t understand English. And Broca’s and Wernicke’s areas don’t understand what words like “father”, “mother”, or “football” actually refer to.

    Which is to say, each of us could be considered an English room.

    • Wyrd Smythe

      “At this point, you know my feelings on the Chinese room.”

      That it’s a razor for slicing between existing views and that we’re, as you say, just “English rooms.” (I really wish I had a Spanish annex.)

      “That delay would destroy any intuition that we were having a real conversation with someone.”

      Which is why I framed it as requests and responses to a putative person in a bureaucracy.

      “But of course, now the room is in danger of flubbing the Turing test.”

I’ve never considered that the Turing Test requires a system to answer all questions. No human could pass it, in that case. “I don’t know” has to be a valid answer.

      The question I posed was at what point does that alone become a giveaway.

      “[I]f the room can reproduce the competent responses of an actual Chinese person, […] then we have to consider the possibility that within the lookup procedures and files is an entity that thinks it grew up in China.”

      You disagree with the conclusion of the post, then? That the room is “a mindless lookup system capable of answering a limited set of inputs.”

      If the facts you mentioned about a person’s life were put into a book, would there be an entity within the book?

      “Which is to say, each of us could be considered an English room.”

      We certainly have the lookup capability within us, although it’s not a very good one. Almost as if the clerk were drunk or incompetent. Or that the file system itself had a lot of chaos and distortion.

      To me it’s like how information processing is a necessary part of consciousness, but not the whole. Lookup capabilities also necessary, but not the whole.

      We contain “rooms” but we’re much more. As I mentioned, the room can’t learn or combine facts to figure out a question for which it doesn’t have an answer.

      With regard to Broca’s and Wernicke’s areas, clearly the whole brain, the whole network, is what understands “father” or “mother” or “football,” and that was another point I made — the whole system including The Designer is the room.

      To the extent it can answer any question, it’s because The Designer put it there. The room is just an extension of The Designer, nothing more.

      • SelfAwarePatterns

        I wish I had a Spanish annex myself, and a French one.

        “The question I posed was at what point does that alone become a giveaway.”

        This comes back to my view that there isn’t a fact of the matter. It’s for each interrogator to decide. Given the variance in responses from actual conscious humans, there’s no standard guaranteed to consistently tell a human from a mindless mechanism. (Although it’s striking in practice how quickly we can tell.)

        “If the facts you mentioned about a person’s life were put into a book, would there be an entity within the book?”

        The issue is that the book has no executable component, no active agency. It depends on us actually providing that agency.

        It is interesting that we have the ability to apply a theory of mind to characters described in the book (historical or fictional). We can become happy, sad, fearful, or angry about what happens to those characters, even when we know they’re not real people.

        “As I mentioned, the room can’t learn or combine facts to figure out a question for which it doesn’t have an answer.”

        That seems like a stipulation added to the scenario. Certainly if we weaken the capabilities of the room, then we weaken the case for any kind of conscious embedded entity. Of course, if we add instructions that oblige the clerk to record or edit information that can be accessed in future answers, then we’ve added a learning component.

        “To the extent it can answer any question, it’s because The Designer put it there. The room is just an extension of The Designer, nothing more.”

        Even if we incorporate the instructions above? If so, then how are we not more than an extension of natural selection, our designer?

      • Wyrd Smythe

        “(Although it’s striking in practice how quickly we can tell.)”

        Isn’t it! It speaks again to the vast gap between the one instance of consciousness we know about and everything else intelligent that doesn’t have that something.

        “The issue is that the book has no executable component, no active agency.”

        An interesting distinction. A book that must be referenced, a room (with an operator) that must be queried, and a computer (for speed of lookup). Same information available in all cases.

        You perceive a virtual entity only in the second two? It’s not in the information, it’s in the agency, the responsiveness of the systems?

        “It is interesting that we have the ability to apply a theory of mind to characters described in the book (historical or fictional).”

This gets a little into Hofstadter’s idea about how our existence is smeared and extends to those who know us well enough to predict our responses. We literally exist as small models in the minds of others. And, as you say, in books and shows.

        It’s a pretty disconnected form of existence. We can empathize with historically false characters as easily as genuine ones. It’s really more about how we see ourselves reflected in things (than actual models in inanimate objects).

        The data may be there in the pages, files, or RAM, but as computationalists point out, the data doesn’t mean anything without apprehension and interpretation. (Except the IIT folks are screaming from their table that the data means everything. I guess one just has to pick a table.)

        “That seems like a stipulation added to the scenario.”

How so? (As far as I know, Searle never allowed for synthesis of new answers.)

        “[I]f we add instructions that oblige the clerk to record or edit information that can be accessed in future answers, then we’ve added a learning component.”

        How would that be possible? The clerk doesn’t understand the symbols.

        Imagine the scenario. The clerk can’t match the symbols to any file. One possible response: do nothing. Another, resort to a canned response intended for this scenario. What other options are there?

        The clerk can certainly note the symbols that couldn’t be matched in a logbook. Future fails can be looked up in the book and tick marks added to indicate repeat queries…

        But without some external process to examine the logbook and update the file system, the logbook doesn’t mean anything.

        “If so, then how are we not more than an extension of natural selection, our designer?”

        Oh, I’m sure we are! It’s what we evolved into: a mobile neural net with adequate I/O, an excellent capacity to learn and figure out, and (for some reason) subjective experience. 😀

      • SelfAwarePatterns

        “and everything else intelligent that doesn’t have that something.”

        To me the “something” is being a machine like us. We’re very good at recognizing our own kind.

        “It’s not in the information, it’s in the agency, the responsiveness of the systems?”

        I think it’s in both. We have an action component, so to trigger our intuition of a machine like us, the action component is necessary. But agency without content seems just as problematic.

        “We literally exist as small models in the minds of others.”

        Good points. We never know anyone except through those models, models that can persist after the person is gone, or are of a person that never really existed.

        But here’s the thing. We never know ourselves except through the self model, much of which might refer to things that have never really been there.

“As far as I know, Searle never allowed for synthesis of new answers.”

        I recall Searle being vague on many points, such as what he means by “understand.” He never really goes into detail on just how capable the room is, but the implication is that it can respond in any way a Chinese person can, at least well enough to pass a Turing test. If it can’t, then that just weakens the point of the argument even more.

        “The clerk can’t match the symbols to any file.”

But if the clerk is that incompetent, how can he apply any instructions to the symbols at all? At a minimum, the clerk must be able to map a sequence to the relevant instructions. But having that capability, those instructions can involve recording symbols and editing them. No actual understanding by the clerk of any of the symbols, either the ones being received or the ones being recorded, edited, or recalled, is necessary.

      • Wyrd Smythe

        “We’re very good at recognizing our own kind.”

        Is it no more than the same skill dogs and dolphins have, or does it also involve how loudly higher consciousness attests to itself?

        (I ask because your phrasing hints that you’re downplaying that aspect and framing it as just kind recognizing kind.)

        “We never know ourselves except through the self model, much of which might refer to things that have never really been there.”

        True of all our models, yes?

        As we’ve touched on before, consciousness (and our self model) has the interesting property that we see it from the inside, whereas all our other models we see from the outside. So with models of self versus others, there’s that Johari Window thing going on.

        Our self-model is distinctly different from various other-models we have. That’s why I see Hofstadter’s view as somewhat poetic. (And poignant when you know his wife died young.) Even so, bits of us do in some crude fashion live in the minds of others. It’s kind of a neat idea.

        “[T]he implication is that [the room] can respond in any way a Chinese person can,”

        Right. No one would expect a Chinese (or any) person to know everything. It’s perfectly okay for the room or person to say “I don’t know” without damaging their putative humanity.

        It gets interesting with regard to questions for which it could synthesize or guess at answers. The deep learning neural nets can do that to some extent.

        “But if the clerk is that incompetent, how can he apply any instructions to the symbols at all?”

        I’m not sure you understood what I said. The inability to match isn’t due to incompetence. It’s due to being asked a question for which there isn’t an answer to match.

        Searle is pretty clear the clerk can only match symbols from requests to answers. There is nothing about editing because, how would that be possible? That assumes knowledge of the unanswerable question, but if that knowledge exists, then a file for it would exist and the input symbol could be matched.

        An unanswerable question requires knowledge the system doesn’t have.

      • SelfAwarePatterns

        “Is it no more than the same skill dogs and dolphins have, or does it also involve how loudly higher consciousness attests to itself?”

        I think they’re different degrees of the same capability. Social animals have an innate ability to recognize fellow systems like them. Again, we have to be careful not to privilege the human version more than a difference in intelligence level.

        “True of all our models, yes?”

Definitely. The main thing is to realize the self model is just as subject to having gaps and oversimplifications as the models of external things.

        “As we’ve touched on before, consciousness (and our self model) has the interesting property that we see it from the inside, whereas all our other models we see from the outside.”

I think I agree but the wording here is tripping me up a bit. We have the same access to all models, and they’re all the conscious part of us ever has access to, but some of those models are of external things, and others are of our self.

        ” It’s due to being asked a question for which there isn’t an answer to match.”

        This particular sub-thread started off about whether the room could learn. I can’t see any reason why the room’s procedures can’t include learning algorithms. Or provide for some kind of state between answers. If not, then it’s less capable than my phone.

        Again, you can always stipulate that the room can’t do that. But the room loses ever more of whatever intuitive argumentative force it had.

      • Wyrd Smythe

        “I think they’re different degrees of the same capability.”

        You seem to be dodging the question despite my asking it directly. Per our previous discussion on this, I accept that you don’t find humans particularly special. As you say:

        “Again, we have to be careful not to privilege the human version more than a difference in intelligence level.”

        You don’t. I think it’s a point of view that ignores the evidence.

        As I’ve said many times in these discussions, I absolutely, positively, completely, and utterly, do privilege human intelligence and consciousness as something amazing and very special. As perhaps being the one nondeterministic thing the universe has ever created.

        It makes me sad you don’t see the specialness. 😦

        “I can’t see any reason why the room’s procedures can’t include learning algorithms.”

        Do you mean like, if I pin a location in Maps, the phone “learns” that I want it to remember that location? Searle didn’t seem to account for something like that, a question that asks the room to remember something for a query later.

        That would require three groups of input symbols: one to say “Remember this,” one to provide a retrieval key, and one to provide the thing to remember. But this requires special behavior on the part of those making the requests.

        It’s a complication that I’m not sure changes that much. How one takes the Chinese Room says more about one’s axioms than anything else. I don’t call pinning a location learning (because these remembered facts don’t have any synergy or association), but I can see a point of view that would.

      • SelfAwarePatterns

        “It makes me sad you don’t see the specialness.”

        A lot of people reach similar points in their discussions with me 🙂

Humans are, of course, special to us humans, and we’re undoubtedly the smartest primate. But the notion that we’re special beyond that, that there’s some sharp break between us and the rest of nature, human exceptionalism, is one that science has not been kind to over the centuries.

        “How one takes the Chinese Room says more about one’s axioms than anything else.”

        That’s my beef with most of these philosophical thought experiments. Most of them are circular, only being meaningful for people who already agree with the biases of the author. At best, they make people think, but often they’re just rhetoric for a particular viewpoint.

      • Wyrd Smythe

        “A lot of people reach similar points in their discussions with me”

        😀 Does that signify anything to you? 😀

        As an aside, I always loved the rephrasing of: “If you can keep your head when all about you are losing theirs,… perhaps you’ve misunderstood the situation!”

        “Humans are, of course, special to us humans,…”

        Yes, relativism — a view I explicitly try to avoid. That’s not what I’m trying to discuss, and I’d like to move past the idea.

“But the notion that we’re special beyond that, that there’s some sharp break between us and the rest of nature,…”

        I have presented multiple (objective) examples of how we are quite special. Two more occurred to me: Look at the planet’s night-side. Or look at what’s in orbit around it.

This is the discussion we were having recently. We left off with cyanobacteria, which took hundreds of millions of years to change the atmosphere, versus what humans have done in a few thousand.

        You apparently don’t see that as standing out (loudly). I truly don’t fathom why not.

        “…human exceptionalism, is one that science has not been kind to over the centuries.”

Ah. I wonder if there are two points that might get us on the same page.

        I’m absolutely not talking about human exceptionalism, except for the coincidence that humans happen to have what I am talking about. Which is higher-consciousness exceptionalism.

        It isn’t humans that accomplished that much so much as their higher consciousness.

        The other point is that I think I know what you mean about science and human exceptionalism. That many supposedly exclusive human capabilities have been found in animals. (Yes?)

        For me that’s like how an animated cartoon has characteristics of real life. Of course there are similarities. But that doesn’t conflate them as anywhere near identical.

        So, what I’m saying is that higher consciousness, whatever it is, whatever has it, is loud and obvious and tends to knock the ball out of the park.

        Is that something we still disagree on?

      • SelfAwarePatterns

        “Does that signify anything to you?”

        Since the reason varies, from not being able to convince me to believe in God, to accepting paranormal phenomena, particular theories of consciousness, the mathematical universe hypothesis, and so on, it only tells me that people are annoyed by skepticism.

        “Is that something we still disagree on?”

        I dislike the term “higher consciousness”.

        I can agree that, among the small number of species that appear to have metacognition, ours is more developed, and we currently appear to be the only species with symbolic thought. (Volitional symbolic thought in case James of Seattle sees this comment.)

        We also appear to be a species that has a unique combination of dexterity and intelligence. People often focus on our intelligence, but overlook the importance of the hand. If we go extinct, dolphins won’t build the next civilization, but if we don’t bring them down with us, one of the other great apes might.

      • Wyrd Smythe

        “Since the reason varies,”

        Okay, I had to laugh there, because the thought that went through my head was, “Oh, there you go conflating stuff again.” From my perspective, there’s a significant difference between, say, trying to persuade you about computationalism (which would fall into the class of things you listed, and resistance is expected and fine) versus trying to persuade you about what seem to me to be facts in evidence.

        For instance, that the property of human brains usually labeled “consciousness” (but high intelligence is fine) appears, as far as we can tell, to be an objective fact of reality.

        Or, in this case, that higher consciousness attests to its existence, loudly. That, absent something weird and science fictional, like Wang’s Carpets, the effect of higher consciousness on its environment is unmistakable.

        I’m surprised we can’t agree on those. [shrug]

        “I dislike the term ‘higher consciousness’.”

        Why is that? I don’t mean to imply “highest” — just that what we possess is (much) higher than we see any other examples of. Even if you disagree with the “(much)” aren’t we still significantly higher than any example we know?

        How are you with “higher intelligence”?

      • SelfAwarePatterns

        The word “higher” just strikes me as implying a strict hierarchy of some type. But as we’ve discussed, many animals have cognitive abilities that surpass ours, in certain modalities.

      • Wyrd Smythe

        That animal expertise may actually be a limiting factor. What makes us so effective is that we’re AGIs (Advanced General Intelligences 🙂 ).

        It’s what’s proving so hard to replicate. We’ve been good at machines that surpass most animal expertise for a long time. (Like: Radar, airplanes, submarines, space vehicles, industrial machines, mining machines,…) But something with our AGI? Nothing close, either animal or human-made.

        But, I get it (and surrender), you just don’t see that AGI as anything remarkable.

      • SelfAwarePatterns

        It’s worse than you think Wyrd 🙂

        I actually don’t think the phrase “general intelligence” is accurate. It’s another term we use to privilege our type of intelligence, to say that progress in intelligence inevitably leads to something like us, gene survival machines.

      • Wyrd Smythe

        You’ve denied the objective properties of brains.

        You’ve denied how humans have risen above all other forms of life.

        You’ve denied the idea of higher versus lower consciousness.

        You’ve denied the idea of higher versus lower intelligence.

        Now you’re denying the idea of “general intelligence.”

        I give up. I don’t find all this denial productive. Where does it lead?

        Is there anything you do believe about these matters?

      • JamesOfSeattle

        [a little late to the party, so I’m gonna start from scratch, below]

      • Wyrd Smythe

        Mike tags out; James tags in. 😀

  • Philosopher Eric

    This one isn’t all that difficult from my perspective, though it does get to the same place that you took this post Wyrd. In order for the Chinese room to function “consciously” (from your stringent definition or my far weaker one), it would need to be sentient. With this associated agency it wouldn’t just be a lookup table, but something with personal interests from which to reason about things. A room with a clerk in it isn’t going to feel, and thus isn’t going to function as something which feels like a Chinese person.

    • Wyrd Smythe

      Would you agree with Searle’s point, then, that — by extension — a computer wouldn’t be sentient, either, and hence not conscious?

      • Philosopher Eric

        Yep Wyrd, I’m way on board with you and Searle there!

        One thing to note here is that what we commonly call “computers”, also do more than just compute. My phone outputs screen images for example. These images may reflect its computations, though the screen images themselves are not computations. What good is a computer that doesn’t manifest its computations in various ways?

        If my phone were to output sentience instead of screen images, technically even I wouldn’t say that it’s “conscious” however. Instead I’d say that it creates a conscious entity, or the thing which feels good and bad. Similarly my head isn’t conscious, though apparently it does somehow create a conscious entity, or me.

        I don’t know what it is that my head does in order to produce a sentient being, but there’s no way that a clerk can look things up such that sentience is outputted (let alone lights up a phone screen). Who’s going to say otherwise?

      • Wyrd Smythe

        “One thing to note here is that what we commonly call “computers”, also do more than just compute.”

For instance, they can be used as paperweights, boat anchors, and heat sources! 🙂

        “If my phone were to output sentience instead of screen images,…”

        The phrase “output sentience” throws me a little. As a “computer guy,” for me, the idea of “output” specifically refers to (essentially) physical outputs, not behaviors.

        From what you go on to say, that your phone isn’t sentient, but produces sentience, and that, likewise, your head also isn’t sentient but produces it, I take your meaning.

        I’d agree. The sentience, intelligence, consciousness, isn’t in the thing itself, it’s in the behavior of the thing. These things are emergent properties of systems.

        “…there’s no way that a clerk can look things up such that sentience is outputted…”

        In this case the word “output” raises what I’m more and more considering as one of the central, key points.

        Is our sentience, intelligence, consciousness, in the outputs we systems create, or is it in the behavior of our systems?

If it's in the outputs, then any system that can generate the right outputs is necessarily conscious. Those outputs should faithfully replicate sentience, intelligence, and consciousness. That is what computationalists hold.

        But if they arise from the innate physical behavior of the system, which physicalism actually suggests, then alternate ways of producing those outputs cannot produce consciousness.

        And here’s why it’s good to distinguish those ideas. It’s probably possible to simulate sentience and intelligence. Video games, to an extent, already crudely simulate sentience and intelligence. So does other software.

        But consciousness seems a harder proposition.

  • Philosopher Eric

    Mike,
I agree with you that the human has the same sorts of things going on as many other forms of life, so I tend to not call it inordinately “special”. I’d say the human is special given its language and resulting intelligence, and that the trappings of civilization have recently made it extremely powerful, though I still consider it to be a standard causal product of nature. (My position here might make Wyrd just as sad as yours, but then at least I’ve just agreed with him that a room with a clerk will never be useful to call “conscious”, and the same for anything simulated by a computer. Yes I’ll accept his anti-computationalism in this capacity.)

    But let’s now see if (or how) you’re going to make me sad. It’s not the human that I consider inordinately special, but rather a trait which it shares with many forms of life, or sentience. I consider this to be the “fuel” which drives the conscious form of function, as well as what defines the value of existing as anything over a given period of time.

    What is the value of your existence to you over some given period? I’d assess this as the aggregate score of your sentience (which is adding the positive moments while deducting the negative) over that period. What is the value of your existence to me over a given period? However you affect my sentience over that period. What is your value to America? How you thusly affect America’s aggregate sentience score. This formula can effectively be repeated for anything at all. I consider sentience to be far and away the most special element to existence — value in itself!
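    To put the formula in the simplest possible terms, here is a toy sketch; the signed numbers standing in for positive and negative moments are of course arbitrary illustrations, not anything measurable:

    ```python
    # Toy illustration of the aggregate sentience score: positive moments
    # add to the total, negative moments deduct from it. The numbers and
    # the signed-moment representation are illustrative assumptions only.

    def aggregate_score(moments):
        """Sum signed sentience 'moments' over some defined period."""
        return sum(moments)

    week = [+3, -1, +2, -4, +5]   # one subject's moments over a period
    print(aggregate_score(week))  # 5
    ```

    The same sum works for any defined subject (a person, America) over any defined period, which is all the formula claims.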

    So what do you think? Are there complications that I’m overlooking? Why aren’t things, at least conceptually, this simple? In other words, how are you now going to make me sad? 🙂

    • SelfAwarePatterns

      Eric,
      I think sentience is just the communication of programmatic responses to situations. In the brain, it’s the lower level machinery reacting reflexively, but with a circuit to planning functionality that allows the reflex to be allowed or inhibited. (Most of the connections from the cortex to the brainstem are inhibitory in nature.)

      So there’s no reason in principle it couldn’t be incorporated into the procedures of a Chinese room, or any technological system. In other words, there’s no reason in principle that the room couldn’t be angry, fearful, or happy. (Unless we just stipulate that those procedures aren’t part of the room.)

      On complications, I think you’re overlooking just how complex our sentient feels are. We’re a wide collection of impulses that are often contradictory. Indeed, I think a major cause for the evolution of reasoning faculties in animals was to resolve those contradictions, to decide which impulse should be allowed, the one to go after food, or the one to flee from the predator near that food.

      This is before getting into whether we should value short term or long term sentience, and what should be the appropriate balance between them, particularly when there is uncertainty associated with our predictions of that longer term sentience.

        The idea that all of this can be reduced to a single metric strikes me as highly dubious. You might come up with a formula, but why should anyone view it as more authoritative than any of the other cultural systems developed throughout history?

      • Philosopher Eric

        Mike,
        One issue is that when I spoke of value above, I actually meant this both conceptually and ontologically, not practically and epistemologically. It’s not like I’ve got all sorts of working specifics here. I’m the architect rather than the engineer. This is broad deduction rather than narrow induction. So let’s try this again.

        We presume that nothing without life harbors any value to itself. Furthermore we presume that more primitive varieties of life exist this way as well. At some point something came along that could feel good and bad however, or harbored sentience. This is where “value” emerged.

        It’s from this premise alone that I’m able to conceptually define the parameters of value for anything at all. No need to get into diverging cultural values between humans. That instead concerns epistemology. If you’ll grant that value does exist (which shouldn’t be a problem since I know that you feel things like “pain”), then the following framework cannot be false.

        The main element to grasp here is perfect subjectivity. Each positive moment of feeling is “good” in this regard for it, and each negative moment of feeling is “bad” in this regard for it. Thus an associated “score” must ontologically exist, though you’d simply feel the moment rather than have such empirical information. Theoretically the compiled scores of any number of moments represent how positive to negative existence is for the subject over an associated period. If existence happens to feel both good and bad concurrently, then that’s simply the way things are — don’t trouble yourself about divvying this up. Here we’re going conceptual rather than actual.

        This model scales up to define the welfare of any number of beings over defined frames of time. What’s best for a specific personal or social subject, by definition, will be what provides it with the highest score over its associated period. If something produces a more positive result now rather than later, then it’s better for the subject now rather than the subject later. Thus this isn’t about resolving conflicts between the welfares of different things, but rather effective descriptions of how value works. This isn’t a “cultural system”.

        In the end it doesn’t actually matter that you believe “…sentience is just the communication of programmatic responses to situations”. Thus a room could be sentient by means of an associated computational lookup function. Furthermore a simulation of your life could then be sentient by means of a standard computer. If true then this sentience would still constitute value for a defined entity that experiences it, or would remain under the framework that I’ve just presented.

        While you may see sentience as computation, also realize that I see it more like the LED screen on a phone. Electrical phone screens are certainly not the sorts of thing which some “Chinese room” could operate by means of encoded messages! My current belief is that non-conscious brains somehow produce sentience to thus facilitate agency, or the conscious form of function.

      • SelfAwarePatterns

        Eric,
        I don’t deny value at all, but I think your conception that it didn’t start until sentience is wrong. Biological value predates sentience. Remember that video I shared the other day of a single celled organism trying to survive? Life, all life, tries to maximize its biological value, to enable its genes to survive. Sentient value is an elaboration of that, a mechanism in service of it rather than the originator of it.

        Of course, sentient value, once it exists, exists separate and apart from biological value. That’s why so many of us struggle to eat healthy diets, and why birth control is a thing.

        But more broadly, of course it’s better if more people feel better than feel bad. I don’t know too many people who would argue with that. Attempting to quantify it, particularly just in principle, seems very similar to hedonistic utilitarianism, although I know you see your ideas as different than that. But I’m still not clear exactly how.

        On the LED screen on a phone, there are numerous ways to look at it. One is in terms of the physics of the screen, which are obviously only going to be replicated by another phone with another screen. The other is to look at the information the screen conveys. Certainly the room can’t convey the same phenomenal output the screen can, just as it can’t produce body language, etc. But the final way to look at it is in terms of whatever agency is being communicated. The original Turing test, to eliminate bias, was envisioned in terms of a teletype machine, where the interrogator doesn’t know whether they’re talking with another human in different room, or a machine. It’s in that last sense that we have to assess the Chinese room.

      • Wyrd Smythe

        “It’s in that last sense that we have to assess the Chinese room.”

        Hence the design of the experiment (communicating by messages).

        There’s no reason the room couldn’t return a picture (or several). It can return whatever it has in its files that the input symbols match.

      • Philosopher Eric

        Mike,
        As I recall there was at least one other occasion where you dodged my “sentience/value” association by submitting a “life/value” association. (Not that I think you’d quite say that you were dodging! I don’t consider this conscious evasion, but still…) Back then I recall associating your response with the ideas of our good friend Ed Gibney over at the Evolutionary Philosophy blog. Of course he very much equates life with value.

        As I’m defining “value” however (and thus per my EP1 it’s your obligation to accept it in the quest to grasp what I’m saying), life doesn’t harbor it in itself. Sure to us it may have looked like the amoeba in that video was “trying to survive”. That’s what we’d tend to imagine of ourselves under such circumstances. Similarly a robot might seem to us like it truly wants to clean the carpet or whatever (or at least if our idiot robots weren’t so crappy!).

        Upon reflection however we presume that none of these things have any actual agency. Regardless of human perceptions (or the teleonomy illusion) there shouldn’t be anything that it’s like to exist as amoebas, vacuum bots, viruses, and so on. This is to say that existence should be personally “valueless” to them as I’m defining the term.

        I’ve proposed value/sentience as something which a machine might produce if it’s structured in the proper way, just as my phone produces screen images. While the images on the screen may reflect computation, I’m saying that a screen image itself is produced by more than computation. Here the computer is designed so that its information is able to animate something which does something other than compute — a machine that lights up its pixels as instructed. By extension the computations in my head do not produce sentience itself, but rather are rigged up to something which produces sentience, and does so as instructed. So just as my phone screen is animated by a computer, my sentience is animated by what neuroscientists in general refer to as a neural-based computer.

        You’ve instead proposed that computation alone is able to produce sentience. Thus a Chinese room or my phone would in principle be able to produce something which feels horrible to wonderful if the proper computations were to occur in these machines. (In the future I’ll need to keep your position here in mind in order to better grasp your ideas.)

        I’ve now come to a realization that I think furthers the theme that Wyrd has been presenting in these posts (though I expect heavy scrutiny from him about this as well!). I’ve decided that it’s useful to define the “compute” term such that any functional output that isn’t simply recycled into more computation should be termed something other than “computation”. In effect computers do nothing functional beyond themselves, without being rigged up to other instruments (like switches to machines).

      • Wyrd Smythe

        I’ll have to look at the first three ¶s later, but I have no real argument with the last three.

        As you know, I think “compute” has a perfectly good definition, but whatever. You’re definitely on point recognizing that abstract computation is only meaningful in terms of other abstract computation.

      • SelfAwarePatterns

        Eric,
        I actually got the biological value idea from Antonio Damasio. I do think the idea has a lot of…value, which I believe you are hasty in dismissing.

        Regarding my obligations under your EP1, I don’t perceive that there is a lot of daylight between what I called sentient-value above and Eric-value. But maybe I’m missing something?

        Or are you saying your EP1 forces me to forego mention of any other concept called “value”? I would have thought it forced you to consider Damasio’s concept, and then consider what the differences and similarities are between Eric-value and biological-value.

        The reason I think this is important is that sentience didn’t magically just appear at some point in evolution. It has components, components that evolved at differing times. (I know you disagree with this, which is why I keep pointing this line of reasoning out to you.)

        On screens, certainly every computational system that has any effects on the environment has an I/O system. Nothing about the chip in your phone is meaningful without its I/O systems (screens, etc). But what about a brain is meaningful without its sense organs, neuromuscular junctions, or hormonal glands, without its body?

        Now, the embodied cognition folks do say that the body is crucial for sentience. I can see where they’re coming from, but then the question is, would a virtual body suffice? In considering that question, also consider that our brain builds a body image map. It’s through that image map that we feel our body. In other words, we already work with a virtual body in our brain. Does it matter where the data for the image map comes from? If so, why and how?

      • Philosopher Eric

        Mike,
        It’s often hard to keep track of the moving themes to our extended conversations. Recaps can thus be helpful. To me the following seems about right, though otherwise let me know:

        I began by agreeing with you that the human isn’t inordinately special. (Yes we’ve recently become inordinately powerful, though I like to keep the two separate.) From there I proposed something that I do consider inordinately special, or sentience. I equated this with the “value” of existing. Furthermore I proposed this stuff as the “fuel” which drives the conscious form of function. (Note that “conscious” here is from a lower order definition that I personally consider useful, not to take anything away from the higher order definition that Wyrd has presented in these posts).

        You then suggested that sentience isn’t actually all that special either since you consider it computational in nature. This is to say that any technological system could in principle produce it through the proper algorithms, or even a clerk in a file room that goes through the proper look up routine might produce something that’s “angry”, “fearful”, “happy” or whatever. Furthermore you mentioned a complexity to sentience that you didn’t think I was accounting for involving subject value inconsistencies and cultural beliefs.

        I responded by clarifying that I was speaking in terms of broad architectural deduction rather than narrow engineering induction, and so was indeed able to provide a concise metric of value that’s coherent. Furthermore I discussed the virtues of perfect subjectivity, or something which I think solves all value conflicts between different subjects. (This is to say that each defined entity has its own associated value metric.) I also noted that this framework applies even if you’re right that sentience may be produced purely by means of computational operations.

        This is where you stated that you thought my conception of value was “wrong”, and justified this by bringing in the notion of “biological value”. So in my last reply I explained that given my first principle of epistemology you’re not allowed to bring in another definition for a term that I’ve already presented to thus claim that my definition is wrong. Apparently I didn’t state this succinctly enough given your last reply.

        Yes you’re entirely correct that under this principle it’s my obligation to accept any definition for value that you propose. Furthermore I would like to get into this matter with you since I do realize that there were dynamics before the emergence of sentience that also need sorting. If we’re going to use the same term in two separate ways however then that will need to wait.

        I see from your last reply that my proposed improvement to the “computer” term wasn’t quite broad enough. Not only should any output of a computer that isn’t just recycled back into itself be defined as something other than computation, but any input should be considered this way as well. As I understand it the Chinese room has one form of input, written messages in, and one form of output, written messages out. So these two elements of the system aren’t computational given my stipulation, though the look up procedure by which inputs are converted into outputs is indeed computational in nature.

        Back to business then. I suspect that sentience exists as an output of a machine somewhat like my phone screen image exists as an output of a machine. Thus sentience cannot be computation in itself, even though I do consider it to be animated by computation just as surely as my screen image is animated by computation. Under this scenario it’s not possible for sentience to exist virtually, since here no machine exists to produce it.

        Conversely you suspect that sentience doesn’t require any dedicated machine to produce, but rather can exist through pure computation. If so then you’re right — a virtual entity or a Chinese room could feel happy or whatever given associated computation. Agreed?

        Though my own interpretation is relatively boring when compared with yours, there is one funky matter associated with mine that I think you’ll appreciate. We’re obviously able to use the computation of my phone to not only animate its screen, but to animate another screen given a wireless or corded port. So….

        (Wyrd’s going to squeal about this one given how ridiculous, and I must agree with him that it’s pathetically ridiculous. Nevertheless….)

        In a conceptual sense the information that a central nervous system produces could be piped to a separate body for output function as well.

        Finally I shouldn’t fail to mention my original point. It’s that I consider sentience supremely special. This should be the case whether dedicated machines are required to produce it, or even pure computation. For anything without sentience existence should be perfectly valueless to itself, and for anything with sentience value should be composed of an aggregate score of its positive minus negative examples over a defined period of time. This is my central thesis.

      • Wyrd Smythe

        “In a conceptual sense the information that a central nervous system produces could be piped to a separate body for output function as well.”

        Mind-swapping is a much explored idea in science fiction! 🙂

      • SelfAwarePatterns

        Eric,
        “Specialness”, like “consciousness”, is ultimately in the eye of the beholder. Sentience is certainly special to us sentient systems in the sense that we recognize the common ways we and other systems work, an empathy mechanism we have as a social species.

        But if you’re saying there’s something special about it in absolute terms, then I’d ask what you think makes it special? Are the laws of physics inside a brain any different than the laws anywhere else? If an alien species, instead of sentience, had mentience, and insisted that mentience was what was special, is there anything we could show them to prove we were right, or that they could show to prove they were?

        On sentience being a non-computational output, are you saying it’s something other than nerve firings or hormone transmission? If so, I’d ask what scientific evidence you can point to for that? Or are you saying it is nerve firings but emergent in some sense? Again, I’d ask what evidence you could point to, and why the same thing couldn’t emerge in a technological system?

        You can use your architect card to speculate endlessly, but getting anyone to spend time on your architecture requires connecting it to what is actually known about the brain.

        “In a conceptual sense the information that a central nervous system produces could be piped to a separate body for output function as well.”

        I actually don’t find this ridiculous. Obviously it’s not anything we can currently do, but it’s a common scenario in science fiction. John Scalzi’s ‘Lock In’ novels are formulated on exactly this concept.

      • Philosopher Eric

        Mike,
        Is there something “special” about sentience in absolute terms? Yes like all terms, it’s in the eye of the beholder. But some classifications of things seem to warrant such an epistemic distinction more than others do. Math people classify prime numbers as special for their properties. Biologists classify life as special in the sense that it’s the sort of physics which they’re trained to study. Wyrd classifies the human as special given its capabilities versus all other animals. And I’m classifying sentience as special in the sense that it’s the feature that regulates all that’s “valuable” to anything throughout all of existence. (I can go further into the value term if you’re not entirely sure what I mean, though as mentioned this is not the same as Damasio’s definition.) So on to your specific questions.

        I consider it possible for a machine, biological or not, to causally produce a punishment/reward dynamic, or value, for something other than that machine to experience. I call the created entity “conscious”, whether functional or not. This sentient machine could be purely computational, and thus as you suspect it could be produced by a Chinese room, though I suspect not. To produce something that feels good/bad it may be that more involved physics is required.

        Yes I do suspect that neurons and hormones are heavily involved to produce sentience in us and other forms of life, but whatever about that. Does sentience exist for things like flies or garden snails? Given their biology and behavior, in some capacity I suspect so. How about amoeba or plants? Given their biology and behavior I suspect not. Few argue with me about such failure (beyond the panpsychist fad).

        Yes the theory is that sentience emerges from certain causal processes. What evidence do I have about this? Well I’m sentient and so have a first hand account. Apparently all respectable associated scientists believe that sentience emerges from various undetermined physical processes in my brain, so I’ll go with the experts here.

        As for an alien species, if they tell us that they have “mentience” rather than “sentience” then we’ll ask them “What’s the difference?” If we decide that existence can be horrible or wonderful for them phenomenally then we’ll simply call that “sentience”. If they instead function as “physical zombies”, then this would refute a good bit of my own ideas about why phenomenal awareness has emerged.

        I do not present any ideas which contradict what’s known about the brain that I know of. Hopefully what I’ve said here helps square us up. Furthermore unlike panpsychists I present quite normal definitions for the terms that I use. There is nothing that it’s like to exist as something that’s not sentient, but there is for something which is sentient. Thus what I’m talking about may be referred to as “basic consciousness”.

        On piping output from a central nervous system to a similarly wired system, as I said, under my ideas it’s conceptually possible. Thus if generally accepted my ideas leave plenty of room for sci-fi fun that’s actually based upon science (not that sci-fi needs to be so founded).

        The “ridiculous” accusation comes in when one rationally assesses the limits of the human potential to create versus the potential of evolution to create. Evolution needn’t understand anything in order to do what it does. This gets to Wyrd’s conversation with James. Evolution does not exist as an agent. Often people seem not to grasp the liability of agency in this regard. While we’re tasked with figuring things out, “the blind watchmaker” needn’t ever figure anything out to do what it does. It took billions of years and a vast planet of trial and error to create us and the rest of our ecosystem. Conversely we’re able to do a bit of medicine and engineering, though do not have billions of years and the micro and macro tools that evolution has. But at least evolution should never write any good science fiction! 🙂

    • Wyrd Smythe

      “I agree with you that the human has the same sorts of things going on as many other forms of life, so I tend to not call it inordinately ‘special’.”

      Seriously, I’d like you both to go up to the ISS, look down at the night-side of the planet, look at all the junk in orbit, consider that we did all that in a blink of an eye, and tell me how special humans are not.

      Egalitarianism is a fine thing, but it can go too far.

      “My position here might make Wyrd just as sad as yours,”

      If you mean about physicalism, I’m fine with that. It’s the inability to find foundational ground that’s depressing. Disagreeing on basic axioms makes it hard to talk about what lies beyond them.

  • JamesOfSeattle

    Okay, here we go …

    First, I’m wondering if Wyrd’s concept of the room includes ongoing conversations which would necessarily require some form of memory, or alternatively, the library of Babel. Consider the question “How long did it take?”. Consider the possible prior questions 1. “Did you make your breakfast today?”, and 2. “Did you graduate from college?” In order for the room to be pure lookup, for any given response the operator would have to find an entry for the entire previous conversation plus the new input. Let me know if I should explain the Library of Babel.
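    To make the point concrete, a pure-lookup room handling an ongoing conversation would have to key its table on the entire transcript, not just the latest input. Everything in this sketch (the table entries, the tuple-of-utterances key) is illustrative, not anything from Searle's paper:

    ```python
    # Hypothetical sketch of a pure-lookup room whose keys must include
    # the entire prior conversation, not just the latest utterance.

    def make_room(table):
        """Return a reply function closed over a lookup table keyed on
        the full transcript (a tuple of utterances so far)."""
        transcript = []

        def reply(utterance):
            transcript.append(utterance)
            answer = table.get(tuple(transcript), "???")
            transcript.append(answer)
            return answer

        return reply

    # Two different histories give "How long did it take?" different
    # answers, so each possible conversation prefix needs its own entry.
    table = {
        ("Did you make your breakfast today?",): "Yes.",
        ("Did you make your breakfast today?", "Yes.",
         "How long did it take?"): "About ten minutes.",
        ("Did you graduate from college?",): "Yes.",
        ("Did you graduate from college?", "Yes.",
         "How long did it take?"): "Four years.",
    }

    room_a = make_room(table)
    room_b = make_room(table)
    room_a("Did you make your breakfast today?")
    room_b("Did you graduate from college?")
    print(room_a("How long did it take?"))  # About ten minutes.
    print(room_b("How long did it take?"))  # Four years.
    ```

    Either the room carries some such memory, or the table must enumerate every possible conversation prefix up front (the Library of Babel option).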

    Second, you can consider how the room responds to this question: “If we’re shaking hands, whose hand are you holding?” (This was one of Luciano Floridi’s questions when he was judging a Turing test.)

    Third, I think Wyrd is conflating intelligence and consciousness. I absolutely think there is something that makes humans special, and which Wyrd is calling higher consciousness. But I think what makes us special is a particular cognitive ability, which is the ability to 1. Create internal Mechanisms which 2. can (volitionally? Hi Mike) generate symbols for arbitrary concepts, and 3. generate other mechanisms that can interpret those symbols. The arbitrary concepts are key, and are what other animals cannot handle anywhere near the scale we can. It is that cognitive ability which allows language, culture, etc. And it’s true that it is an ability which requires consciousness, but in my understanding every cognitive ability, including looking up values in a table, requires consciousness.

    The Chinese Room does not have this cognitive ability as it cannot generate internal mechanisms. It can only use the ones already generated. But from my standpoint, those mechanisms have consciousness, and so the room has a mind, but a very simple, Mike would say reflexive, I think, mind. It cannot think about its responses, because it doesn’t have any such cognitive abilities. But there’s no reason to think you couldn’t add them.

    *

    • Wyrd Smythe

      “First, I’m wondering if Wyrd’s concept of the room includes ongoing conversations which would necessarily require some form of memory,”

      Searle’s conception, if I understand it, leans more towards Library of Babel. The idea being there would be a response to the question, “How long did it take?”

      An obvious way around it would be to refuse to engage in the “pronoun game,” and then the reply would be, “How long did what take?”

      “Let me know if I should explain the Library of Babel.”

      Not necessary. 🙂

      “Second, you can consider how the room responds to this question: ‘If we’re shaking hands, whose hand are you holding?'”

      The obvious answer is, “Yours.” Am I missing something there?

      “Third, I think Wyrd is conflating intelligence and consciousness. I absolutely think there is something that makes humans special, and which Wyrd is calling higher consciousness.”

      I’ll plead guilty as charged. As you go on to say, “[E]very cognitive ability, […] requires consciousness.”

      So they’re easy to conflate. 🙂

      “The Chinese Room does not have this cognitive ability as it cannot generate internal mechanisms.”

      That’s kind of where Mike and I left off. If the users were willing to take part in its training by providing the tuples of symbols I mentioned (LearningCode, QueryKey, QueryResult) then the room potentially could learn.

      The first symbol group, the LearningCode, links to the learning instructions for the room. The QueryKey is the symbol group future queries would use. It’s the key the operator matches when processing requests. Finally is the result that query should return.

      So it’s really a mechanism for extending the file system. And, per Searle’s intent, requires no understanding on the part of the operator.

      But it does demonstrate that learning is possible without understanding on the part of the system directly involved, though clearly there is understanding somewhere — as I’ve said, in The Designer.
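      A rough sketch of how that extension mechanism might look; the `#LEARN#` marker, the `|` separator, and the function names are placeholders of my own, not anything from Searle:

      ```python
      # Sketch of the (LearningCode, QueryKey, QueryResult) idea: the
      # operator blindly extends the file system whenever the first symbol
      # group matches the learning code. All names here are illustrative.

      LEARNING_CODE = "#LEARN#"  # an arbitrary symbol group the operator matches

      def make_operator(files):
          def handle(request):
              parts = request.split("|")
              if parts[0] == LEARNING_CODE:
                  # Learning request: file the new key/result pair.
                  # The operator matches symbols; it never interprets them.
                  _, query_key, query_result = parts
                  files[query_key] = query_result
                  return ""  # nothing to send back
              # Ordinary request: look up the matching record, copy its reply.
              return files.get(parts[0], "")
          return handle

      files = {}
      room = make_operator(files)
      room("#LEARN#|你好|你好！")  # teach the room a new entry
      print(room("你好"))          # 你好！
      ```

      The operator's job never changes: match symbols, copy, send. Only the files grow.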

      • JamesOfSeattle

        I think there are simpler ways to allow for learning and memory without damaging Searle’s point. And given that the Room is supposedly able to pass the Turing test, I expect such methods would be necessary, because the alternative, the library of Babel, would very quickly run out of space/material (as in, not enough matter in the universe) for the stored responses for any reasonably long conversation. And punting on pronouns (How long did what take?) I don’t think is a real option.

        I addressed Searle’s semantics/syntax conclusion below in reply to another thread, but I want to address “understanding” here. What does “understanding” require? That’s where the handshake question comes in. [And no reason you would know this, because I pretty much just threw it out there.] In order to answer that question reasonably, the system would have to 1. have an understanding of the concept of a handshake, and that “shaking hands” refers to that concept, as opposed to, say, shaking our hands to get water off them, 2. be able to handle counterfactuals (“if we were shaking hands”).

        So if the Chinese Room answered that question with “yours”, I would attribute understanding to it, regardless of the internal mechanism (including library of Babel). Likewise if it answered “That question makes no sense because I’m a room and don’t have hands”.

      • Wyrd Smythe

        “I think there are simpler ways to allow for learning and memory without damaging Searle’s point.”

        Such as?

        “I expect such methods would be necessary, because the alternative, the library of Babel, would very quickly run out of space/material…”

        As with the concept of a Turing Machine, thought scenarios like this often assume physical resources don’t matter. It would clearly be impossible to physically implement Searle’s Room as described!

        “And punting on pronouns (How long did what take?) I don’t think is a real option.”

        Why not?

        “That’s where the handshake question comes in.”

        The point of The Room is that no understanding is required. The original question was: “If we’re shaking hands, whose hand are you holding?”

        From The Room’s point of view, that question is just a string of symbols the operator matches to a reply, “Yours!” Which is a perfectly valid answer.

        The implication is The Designer anticipated the question and provided a ready answer for it. That’s the implication behind the entire Room — it contains an answer for every reasonable question.

        Searle’s point is: Looking up an answer is not how we answer that question. We do it by analyzing the content. That’s why computers struggle so with a statement like: “Time flies like an arrow. Fruit flies like a banana.” That one does require understanding.

        The counterfactuals, as far as a lookup response, just don’t matter. They absolutely do in parsing the semantics of the question.

        “So if the Chinese Room answered that question with “yours”, I would attribute understanding to it,”

        Yet that understanding would clearly not be there. (It is there in the mind of The Designer, if that’s what you mean.)

      • Wyrd Smythe

        p.s. I don’t know if you’re a baseball fan. If so, sorry about the Mariners.

      • JamesOfSeattle

        Me: “So if the Chinese Room answered that question with “yours”, I would attribute understanding to it,”

        Wyrd: “Yet that understanding would clearly not be there. (It is there in the mind of The Designer, if that’s what you mean.)”

        I say no, the understanding is clearly there. The mind of the designer may be long gone. To understand the semantics you need to reference that designer. But the system understands. The machinery needed to give appropriate responses is there. The system gives the appropriate responses. The system understands what a handshake is, and if you inquire further will give appropriate responses. What else could you mean by “understanding”?

      • Wyrd Smythe

        “I say no, the understanding is clearly there. The mind of the designer may be long gone.”

        But the designer’s mind is reified in the design of the room.

        “What else could you mean by ‘understanding’?”

        The ability to combine known facts into new conclusions based on the understanding of those facts. The room can’t do that because it has no understanding.

        As I pointed out in the post, one way we might catch the room out is if it knows A and B but can’t combine them into an obvious C.

      • JamesOfSeattle

        “But the designer’s mind is reified in the design of the room.”

        Pretty sure the designer did not design Searle and his brain, but never mind. And no, the designer’s mind is not reified in the design of the room, unless you think the room can do everything the designer can, the way the designer does it.

        “The ability to combine known facts into new conclusions based on the understanding of those facts.”

        Okay, this is a capability which you have excluded from the room a priori. That just makes the conclusion uninteresting. Kinda like saying a car has a combustion engine, but an EV doesn’t have a combustion engine, so it’s not a car.

      • Wyrd Smythe

        “Pretty sure the designer did not design Searle and his brain, but never mind.”

        Searle is the designer of the room. Or, more abstractly, what I call The Designer is whatever consciousness arranged for the room to exist. The someone, or something, that implemented the file system.

        “And no, the designer’s mind is not reified in the design of the room, unless you think the room can do everything the designer can, the way the designer does it.”

        The Designer’s intent and knowledge are reified in the file system. That doesn’t mean everything. It means what the room needs to be the room.

        If not from The Designer, where do you think the semantic content came from?

        “Okay, this is a capability which you have excluded from the room a priori.”

        Where in Searle’s intent do you find any dispensation for it?

      • JamesOfSeattle

        “If not from The Designer, where do you think the semantic content came from?”

        Pretty sure the Designer did not invent Chinese. [doffing glib hat]

        “Where in Searle’s intent do you find any dispensation for [allowing other internal processes like working memory]”

        Searle’s intent was to show computers cannot have understanding/consciousness because all they have is syntax. He would say that if you add cognitive abilities by simply adding more syntax, that is not enough. But I believe the features you require could be added by simply adding more syntax. Therefore if we added those abilities to the room, you would (have to) say it has understanding, but Searle would disagree. You’d be right, though.

      • Wyrd Smythe

        “Pretty sure the Designer did not invent Chinese.”

        I have no idea what that means. My question is: Where does the room’s semantic content come from?

        “But I believe features you require could be added by simply adding more syntax.”

        Yes, and I challenged you to say how it was possible, since you didn’t seem to like my idea.

        “Therefore if we added those abilities to the room, you would (have to) say it has understanding”

        You don’t seem to understand my point of view, because I would never find understanding in the room.

  • keithnoback

    I’m confused. Does syntax generate semantic content, or is content just borrowed?
    ‘Cause I think that was the whole of the original point…

    • Wyrd Smythe

      That basically reflects my point of view (to the extent I find what is essentially a form of science fiction useful at all).

      The general response is along the lines that semantics is an illusion. Under computationalism, all semantics eventually reduces to syntax.

      • JamesOfSeattle

        My understanding of Searle’s main point is that you cannot get semantics from syntax, which is correct. However, that does not mean that semantics are not there in the syntax. You just can’t get the semantics by looking at the syntax alone.

        Semantics requires cooperation between what creates the symbol and the mechanism (syntax) that interprets the symbol. But that cooperation happens before the interpretation happens. Semantics isn’t an illusion, it’s an abstraction, a pattern, and you can’t explain that pattern without referencing the causal history of the interpreting mechanism. In the case of the Chinese Room, the operator is only a small part of the mechanism and plays almost no role in the coordination that happened to generate the semantics.

        But the semantics are there, and so, the experience.

      • Wyrd Smythe

        “However, that does not mean that semantics are not there in the syntax. You just can’t get the semantics by looking at the syntax alone.”

        The way you wrote that sounds contradictory: if we can’t find semantics in the syntax alone, then the semantics are not in the syntax — they are external.

        As computationalists constantly point out, data (syntax) doesn’t mean anything without external interpretation (external semantics).

        But, you go on to say that, so that’s okay. 🙂

        “In the case of the Chinese Room, the operator is only a small part of the mechanism and plays almost no role in the coordination that happened to generate the semantics.”

        Exactly. All the semantics lies with The Designer. Any reasonable implementation of The Room is just a knowledge system. A theoretical implementation of The Room, to meet Searle’s intent, is pretty much science fiction. As you point out, it requires something along the lines of the Library of Babel.

        To that extent Searle’s point is, I think, made. Whatever we are, we’re more than just a lookup system.

  • JamesOfSeattle

    “Exactly. All the semantics lies with The Designer. ”

    Not what I’m saying. The semantics is a pattern spread through the designer and the mechanism. You can’t have one without the other.

    And of course we’re not a lookup system, but that doesn’t mean the lookup system has any less understanding, or consciousness.

    • Wyrd Smythe

      “The semantics is a pattern spread through the designer and the mechanism. You can’t have one without the other.”

      Yes. We’re saying the same thing on this point.

      “And of course we’re not a lookup system, but that doesn’t mean the lookup system has any less understanding, or consciousness.”

      I would say it does. From what I’ve overheard between you and Mike, I take it you would define my thermostat as “understanding” it’s gotten too cold and being “conscious” that it needs to turn on the furnace?

      If so, it’s not a view I share.

      • JamesOfSeattle

        I know you require certain cognitive abilities for consciousness, and understanding, and that’s fine. But Searle is trying to say computers cannot have consciousness or understanding because they cannot have semantics, because all they have is syntax. And I would say that syntax is all any of us have. But we have, and computers have, semantics because of our syntax.

      • Wyrd Smythe

        “But Searle is trying to say computers cannot have consciousness or understanding because they cannot have semantics, because all they have is syntax.”

        Yes, James, I know.

        “I would say that syntax is all any of us have. But we have, and computers have, semantics because of our syntax.”

        You seem to be assuming your conclusion while at the same time contradicting yourself, so I clearly don’t follow your meaning.

        You assert “syntax is all any of us have.” I disagree, but no matter, it’s your view. But then you say we have semantics anyway because of our syntax? How does that follow?

        In turn, your assertion the room understands is unfounded and explicitly contrary to Searle’s point. (FWIW, I absolutely agree (or think, if you disagree) that understanding is not found in syntax.)

      • JamesOfSeattle

        Wyrd, how would you determine if something has understanding?

      • Wyrd Smythe

        Well, I think we agree (?) that “understanding” (i.e. semantics) involves a set of symbols linked to a meaningful idea.

        By “meaningful” I mean the symbols connect to a specific idea, and that idea links to many other ideas — hence the meaning. When we say something is “rich in meaning” we mean that it links to many different ideas.

        So, to determine understanding, for one thing, I look for those associated ideas when the system in question processes various symbol sets.

        There is also the ability to disambiguate fuzzy input. Consider (again) the phrase: “Time flies like an arrow; fruit flies like a banana.”

        (The phrase actually comes from the early days of AI research, and it highlights the problems of getting machines to understand. It’s all about syntax versus semantics.)

        The words “flies” and “like” link to multiple concepts, which makes the sentence fuzzy. (“Time,” “arrow,” “fruit flies,” and “banana” are all nouns, so no problem there.)

        It requires understanding of the world to disambiguate the sentence.

        We need to understand that the incidental construction, “time flies,” doesn’t refer to a real thing, even though syntactically it’s fine. There are horse flies, bottle flies, blow flies, black flies, sand flies, and (of course) fruit flies.

        We need to understand the metaphor about “time flying” in the first place to understand it can fly like an arrow. We also need the connection that arrows fly fast, setting off the word “like” as a comparison. Only then can the system arrive at: {noun:Time} {verb:flies} {relation:similar-to} {noun:arrow}.

        The second part is much easier: {noun:fruit flies} {relation:enjoy} {noun:banana}

        Unless it’s been confused by the first part and thinks all fruit flies (through the air) the same way bananas do. (Which would be logical in the sense that they would all follow a parabola, or logical in the sense none of them fly for lack of wings. Logical, but understanding tells us otherwise.)
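        The ambiguity can be made explicit by listing the competing frames for each clause. Both parses of each clause are syntactically fine; nothing in the syntax alone picks the right one (the frame labels are my own shorthand):

```python
# Each clause admits two syntactically valid parses; only world
# knowledge (semantics) selects the intended one.

clauses = {
    "Time flies like an arrow": [
        "{noun:Time} {verb:flies} {relation:similar-to} {noun:arrow}",  # intended
        "{noun:time flies} {relation:enjoy} {noun:arrow}",              # insects fond of arrows
    ],
    "Fruit flies like a banana": [
        "{noun:fruit flies} {relation:enjoy} {noun:banana}",                # intended
        "{noun:Fruit} {verb:flies} {relation:similar-to} {noun:banana}",    # fruit on a parabola
    ],
}

for clause, parses in clauses.items():
    print(clause)
    for parse in parses:
        print("   ", parse)
```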

        And finally there’s the ability to generate new facts from the ones known. That requires understanding the known facts and how they relate.

        I’ve spent most of my life studying one thing or another, and there are a lot of areas for which I have a deep understanding. (Demonstrated through interaction with others sharing the depth of understanding.) One thing that’s very clear is when we’re talking to someone who lacks that depth of experience. Their lack of real understanding is apparent.

        Consider any topic you know well (that 10,000 hours thing) and consider how quickly you can tell if someone shares your understanding or not. It comes out fairly quickly in how ideas link (or not) and how they generate new ideas and thoughts (or not).

        In a way, that’s why I (and anyone with the math experience) don’t experience that frisson with regard to the rope around the Earth. There’s an initial moment of “Hmm…” followed by connecting certain facts. That demonstrates understanding the situation.

        Part of it, the reason for the initial “Hmm,” is a deeper kind of understanding of humanity that involves the background question: “Why is this question being asked in the first place?”

        It’s like the dream scenario thing. That the question is being asked (usually in a kind of wink-wink way) is a clue to be careful about a quick answer. 😀

  • JamesOfSeattle

    By the way, Searle’s argument is based on the intuition that a lookup system could not possibly have understanding. (Without defining what understanding requires). Remind you of any recent discussions?

    • Wyrd Smythe

      I don’t like guessing games (because I’m bad at them). What do you mean?

      • JamesOfSeattle

        Sorry about that. I was referring to the discussion of a rope around the earth and the effect of adding one yard to it. Many of us [yourself excluded] have the intuition that adding a yard to such a rope could not possibly have a noticeable effect on the radius.

        Even very strong intuitions can be wrong.
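        For anyone who wants the arithmetic behind the rope puzzle: since circumference is C = 2πr, any change in circumference changes the radius by ΔC/(2π), regardless of the sphere’s size. A two-line check:

```python
# Add one yard to a rope snug around the Earth (or any sphere):
# the rope rises by delta_C / (2 * pi), independent of the radius.
import math

delta_C = 1.0  # yards added to the circumference
delta_r = delta_C / (2 * math.pi)
print(round(delta_r, 3))  # ~0.159 yards, roughly 5.7 inches
```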

      • Wyrd Smythe

        “Even very strong intuitions can be wrong.”

        Ah. And whose strong intuitions are wrong, yours or mine? 😀

        (I always thought: if everyone is too biased to see clearly, why should I trust the word of the person telling me everyone is too biased to see clearly? Maybe, by their own testimony, they’re too biased to see clearly. Subconscious transference is very common.)

        My takeaway, as you point out, is a little different. 😉

        Uninformed intuitions, no matter how strong, are still uninformed.

        Strong informed intuitions are a whole other matter. As one poster there pointed out, the word of an experienced bridge engineer carries a lot of weight.

        For me, there is also that the idea of “intuition” seems to deny the role of rational analysis and logic in working through a scenario. “Intuition” is often associated with emotion or gut instinct. Rational thought applies logic and experience.

        Here’s an example: dream scenarios in movies or TV shows. They are often filmed to make you intuit that something real — but weird — is happening to the character.

        But if you have experience with watching shows, that weirdness signifies something else: “Oh, dream sequence, here we go.”

        Experience, training, and logic matter!

        Your point, if I read you correctly, is that the common (strong) intuition “a lookup system could not possibly have understanding,” is wrong.

        And that is based on your personal view that my thermostat “experiences” temperature and “understands” to tell the HVAC to do something.

        Am I stating that fairly?

        I’m sorry, but I can only see that as metaphorical or poetic, not physical. Experience is subjective, and that requires certain (very complex) machinery that is not at all in evidence in a thermostat. Or a laptop. Or any machine we’ve made so far.

        Computationalists want to extend the definition of “computer” and “computation” to suit their needs, and I see you wanting to extend the definition of “experience” and “understand” to suit your needs.

        And you’re all perfectly free to do that, but I’m not getting on the bus. 😛

      • JamesOfSeattle

        Experience is subjective, and that requires certain (very complex) machinery

        But would you say that if the machinery is there, the experience is there? Because Searle is saying that if the machinery is not biological, there is no experience because there is no semantics.

        Searle is saying the semantics come from the wetware somehow, but he doesn’t say how. I’m saying the semantics comes from the coordination between what creates the symbol (Chinese symbols on paper here) and what creates the interpreting mechanism to generate a response associated with the meaning of the symbol. Semantics is not a pattern you find in the designer, or in the mechanism. It’s a pattern you find in the combination (and the causal history of those things, which in this case includes the development of the Chinese symbols themselves). But if this is correct, then there is a pattern of semantics in the Chinese Room.

      • Wyrd Smythe

        “But would you say that if the machinery is there, the experience is there?”

        If the machinery for subjective experience is there, then yes. So far, the only case we know of the right machinery involves animal brains.

        The whole discussion involves whether it’s possible in machines.

        “I’m saying the semantics comes from the coordination between what creates the symbol…”

        I’m not following your point. Can you be specific with an example?

        The Chinese language really has nothing to do with this. The point of that was the idea that the operator can’t understand the symbols. They could easily be binary numbers.

        I keep thinking we agree the semantics is in the design of the system (and comes from the designer), but then you say no, so I really don’t follow.

        If you mean the original semantics come from the physical history of the people involved, well, yes, of course. And?

        “But if this is correct, then there is a pattern of semantics in the Chinese Room.”

        Yes. Put there by the design of the room. Embodied in the file system. From the designer who got it from their causal history.

        Yes? No?

      • JamesOfSeattle

        I can answer your questions, but I don’t think it is necessary to make the point. You just said/wrote this:

        [Me]“But if this is correct, then there is a pattern of semantics in the Chinese Room.”
        [You] Yes. Put there by the design of the room. Embodied in the file system. From the designer who got it from their causal history.

        Searle would say no, there is no semantics put there by the designer. He would say there is only syntax there.

      • Wyrd Smythe

        Only syntax in the operation of the room, yes, but he would certainly agree there is semantics in the design of the room.

        The whole point is the room stands a chance of fooling us because it appears to have semantics. Yet its operation does not.

        Searle’s point is that semantics can’t come from the room. Thus, if semantic content appears, it has to come from somewhere else. He never got into the design of the room; it’s just a given to make the point about lookup systems.

        *I* reject the room on ontological grounds and because its appearance of semantics obviously comes from its designer. Just as with any knowledge-based system, which is exactly what the room is.

      • JamesOfSeattle

        “The whole point is the room stands a chance of fooling us because it appears to have semantics. Yet its operation does not.”

        This is where we differ, and I think it’s because we have different understandings of how semantics works. I would say the room does not have the appearance of semantics, it has the semantics. The part of semantics that relates to the room is there in the operation. The part of semantics that relates to the designer is/was in the designer. The part of the semantics that relates to the input is there in the person putting Chinese characters on the piece of paper.

        Admittedly, the room cannot be a semantics designer because it cannot create new mechanisms, but that’s just a limitation in the design of the room. There’s no reason a computer cannot be a semantics designer. Learning is one example of such design. Creating a language understood by another computer is another. Both have been done.

      • Wyrd Smythe

        “This is where we differ, and I think it’s because we have different understandings of how semantics works.”

        Okay. I’ve explained what I think it is. What do you think it is, and how does that differ?

        “The part of semantics that relates to the room is there in the operation. The part of semantics that relates to the designer is/was in the designer.”

        Ah, well, we certainly do disagree here. Nothing can have semantics on its own. It can only have semantics in virtue of its design.

        “Admittedly, the room cannot be a semantics designer because it cannot create new mechanisms,…”

        It’s one of the fundamental reasons the room fails to be conscious.

        “…but that’s just a limitation in the design of the room.”

        I’ve asked for an example of how the room could be extended.

        “Creating a language understood by another computer is another.”

        I can assure you that language was created by a human being. The Designer.

      • JamesOfSeattle

        “I’ve asked for an example of how the room could be extended.”

        The room could have one or more chalkboards. A looked-up instruction in the notebook could say, “Write these characters on chalkboard 1, these characters on chalkboard 2, and write these characters on the output.” Another instruction might say, “If this character appears on chalkboard 1, then respond with these characters; otherwise respond with these other characters.”
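        A rough sketch of that extension, with the chalkboards as numbered working-memory slots and the two instruction shapes described above (all symbol strings invented for illustration):

```python
# Chalkboards give the room working memory. Looked-up instructions
# either write characters to a board or branch on a board's contents.
# The operator executes these blindly, understanding nothing.

chalkboards = {1: "", 2: ""}

def run_instruction(instr):
    kind = instr[0]
    if kind == "write":
        # ("write", board, chars): put these characters on that board.
        _, board, chars = instr
        chalkboards[board] = chars
        return None
    if kind == "branch":
        # ("branch", board, char, reply_yes, reply_no):
        # if char appears on the board, respond one way, else the other.
        _, board, char, reply_yes, reply_no = instr
        return reply_yes if char in chalkboards[board] else reply_no

run_instruction(("write", 1, "ABC"))
print(run_instruction(("branch", 1, "B", "yes-reply", "no-reply")))  # yes-reply
```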

        “[inability to create new mechanisms is] one of the fundamental reasons the room fails to be conscious.”

        This statement implies that anyone who loses the ability to learn is rendered unconscious. What am I missing?

        “I can assure you that language was created by a human being. The Designer.”

        An Artificial Intelligence Developed Its Own Non-Human Language

        Finally, I’m sorry I didn’t get a clear, concise understanding of what you think semantics is. Would you mind trying again?

      • Wyrd Smythe

        “The room could have one or more chalkboard.”

        That is essentially the same thing I proposed that you said you didn’t like. The only difference is that I had the new replies added to the file system, not chalkboards.

        “This statement implies that anyone who loses the ability to learn is rendered unconscious. What am I missing?”

        The fact that it doesn’t imply anything of the kind.

        What it does say is that a system that never had a learning capacity can’t be conscious.

        Regarding your link, did you notice the subtitle? “When Facebook designed chatbots…”

        Those were designed by a human being. I say again, computers do not originate anything. (I was a software designer for 40 years. I should know.)

        “Finallly, I’m sorry I didn’t get a clear, concise understanding of what you think semantics is.”

        From my last reply to you on the thread above this one:

        Well, I think we agree (?) that “understanding” (i.e. semantics) involves a set of symbols linked to a meaningful idea.

        By “meaningful” I mean the symbols connect to a specific idea, and that idea links to many other ideas — hence the meaning. When we say something is “rich in meaning” we mean that it links to many different ideas.

      • JamesOfSeattle

        [regarding the chalkboard, my bad. I assumed messing with the file system had to be done at design time as opposed to run time]

        “What it does say is that a system that never had a learning capacity can’t be conscious.”

        So you’re saying that whether a system is conscious now depends on its history? I’m pretty sure that’s a (very) minority view.

        “Those were designed by a human being. I say again, computers do not originate anything. (I was a software designer for 40 years. I should know.)”

        That sounds a lot like “your brain does not originate anything because it was designed by your genes, or designed by natural selection.” It’s possible to design something that then designs other things. AlphaGo designed/originated strategies. Those Facebook computers designed/originated a language. Natural selection designed your brain.

        “[semantics] involves a set of symbols linked to a meaningful idea … “

        I translate what you wrote, substituting “meaningful” as described to remove circularity, as “semantics involves a set of symbols linked to an idea which is linked to many ideas”.

        Can the set of symbols be just one symbol?

        How many is many? More = richer, but how many is rich enough?

        In theory, any idea can be linked to any other idea, which means every idea can be linked to every other idea. How do you determine whether one idea is linked to another, or whether a symbol is linked to an idea?

        Here’s how I answer these questions:

        A symbol is linked to an idea if it was created for the purpose of being interpreted as the idea. Interpretation is the process of using the symbol as input and generating an output which is valuable relative to the idea. This process generally requires coordination between that which creates the symbol and that which creates the mechanism.

        All of that is there in the Chinese room scenario. The room scenario involves symbols that are connected to ideas which are connected to other ideas. What’s missing?

      • Wyrd Smythe

        “So you’re saying that whether a system is conscious now depends on its history?”

        You’re arguing in bad faith here, James. I’m sure you know I didn’t mean that.

        “that sounds a lot like ‘your brain does not originate anything because it was designed by your genes, or designed by natural selection.'”

        I can’t account for why you think it sounds that way.

        Again: All computer systems were designed by intelligent minds. All operations they do are because of what was designed into them.

        “Natural selection designed your brain.”

        Unless you’re a creationist or a believer in Intelligent Design, no it didn’t. Not in the sense of “design” we mean when we talk about designing machines. The brain is the end result of eons of evolutionary tinkering.

        “…as described to remove circularity…”

        What “circularity”?

        “Can the set of symbols be just one symbol?”

        Obviously.

        “How many is many?”

        Doesn’t matter, it’s not the point, enough to do what needs to be done.

        “In theory, any idea can be linked to any other idea, which means every idea can be linked to every other idea.”

        If you’re going to treat them like featureless blocks, sure. But ideas are usually linked to each other because of reasonable connections due to the features of the idea. That said, what’s reasonable for one may not be for another.

        “How do you determine whether one idea is linked to another, or whether a symbol is linked to an idea?”

        It’s part of the training of the system. For example, as an infant, you’re shown red things and told they are “red” (the symbol for the idea of red). Because many different things are red, and because there are many kinds of red, over time the symbol “red” becomes linked to many ideas that, one way or another, embody redness. The semantics behind the symbol “red” become rich with multiple meanings.

        “The room scenario involves symbols that are connected to ideas which are connected to other ideas. What’s missing?”

        I explained what’s missing in the post and in multiple replies to you. At this point you either just don’t get it or you don’t accept it. I don’t know what else to say.

      • JamesOfSeattle

        you: “What it does say is that a system that never had a learning capacity can’t be conscious.”

        Me: “So you’re saying that whether a system is conscious now depends on its history?”

        You: “You’re arguing in bad faith here, James”

        I don’t think I’m arguing in bad faith. I’m trying to pick out what is required for consciousness, and in fact causal history is involved, but the history of learning capacity is not involved.

        ——

        Me: “Natural selection designed your brain.”

        You: “Not in the sense of “design” we mean when we talk about designing machines. “

        This is an important difference. There is, in fact, a sense in which natural selection designed your brain, and it is the same sense in which a computer can design a new language, and it is the same sense in which this new language has semantics, and it is the same sense in which the Chinese room has understanding.

      • Wyrd Smythe

        “I don’t think I’m arguing in bad faith.”

        You don’t appear to be making any attempt to understand what I’m saying.

        “[T]he history of learning capacity is not involved.”

        Which is not what I said. For instance, there is no “history” of flying capacity for my car. There is, however, a complete lack of any ability to fly.

        I’ll say it again: Machines do not learn (or have semantics), except as designed in by some other intelligent system. Ultimately there is only one intelligent system we know of, so all machines, and all their capabilities, come from us.

        “There is, in fact, a sense in which natural selection designed your brain,…”

        The same sense in which a watershed “designs” a river system.

        “…and it is the same sense in which a computer can design a new language,…”

        Because that capability was designed into it by a human being.

        “…and it is the same sense in which this new language has semantics,…”

        All of which came from human beings.

        “…and it is the same sense in which the Chinese room has understanding.”

        Because it was designed by a human being.

      • JamesOfSeattle

        Let me try again.

        Me: “Natural selection designed your brain.”

        You: “Not in the sense of “design” we mean when we talk about designing machines.”

        So, two senses of the idea of design:

        Sense A — the sense you mean above, the sense that requires a human

        Sense B — the sense I mean when I say natural selection designed your brain, and possibly the sense in which a watershed designs a river system (I’d have to think about that)

        So …

        Natural selection designed your brain (sense B), and a computer can design a language (sense B), and the Chinese room understands Chinese (sense B).

        So if the Chinese room understands Chinese, why do we care how it got there?

        *

      • Wyrd Smythe

        “Sense A — the sense you mean above, the sense that requires a human”

        Yes. (But only because humans are the only intelligent designers we know of.)

        “Sense B — the sense I mean when I say natural selection designed your brain, and possibly the sense in which a watershed designs a river system (I’d have to think about that)”

        Also yes. Natural forces over a very long time creating an end result.

        “Natural selection designed your brain (sense B), and a computer can design a language (sense B), and the Chinese room understands Chinese (sense B).”

        I have said repeatedly that, in my view, you are wrong about computers designing languages. Those were human-designed algorithms in action. You are likewise wrong that the Chinese Room “understands” anything.

        I’ve explained how I see it several times. If you believe otherwise, that’s your choice.

      • JamesOfSeattle

        And when a human designs a language, those are natural selection algorithms in action. What difference does it make?

        And I suggest (with some trepidation) that the sense in which the Chinese Room doesn’t understand is not a sense that is useful.

        If I can ask the Chinese room how to make a grilled cheese sandwich, and it responds just like a person who understands how to make one, and can answer follow-up questions (how hot should the grill be? what kind of cheese is best?), that is the sense I mean, and really the only sense I care about when asking if there is understanding.

        *

      • Wyrd Smythe

        “And when a human designs a language, those are natural selection algorithms in action. What difference does it make?”

        In all cases, the source is human intelligence.

        “If I can ask the Chinese room how to make a grilled cheese sandwich,”

        You are free to see it any way you like! Mike, no doubt, would agree. That’s the as if approach. The room appears as if it understood. But, of course, under the hood, it seems it does not — or at the least there seems no understanding in any of the parts.

        You and Mike can absolutely see the room as having semantics, of itself.

        My interests lie elsewhere. I want to know what’s under the hood. And I see any semantics as having been put there by “The Designer.”

        I see our minds as having evolved over eons in a natural selection process imposed by physical reality. Could an artificial mind be evolved that way? Perhaps, but we’re not anywhere close to being able to pull it off.

      • Philosopher Eric

        Let me give this a try Wyrd (though obviously correct me as you see fit).

        James,
        Natural selection doesn’t “design” anything because, as the term is commonly used, there is no “designer” here. This is to say that there is no purpose-driven entity that’s thinking. There is no teleology. There is, however, an illusion of teleology. This illusion is commonly referred to as “teleonomy”. It’s what Richard Dawkins called “the blind watchmaker.” That seems to be what you’re talking about.

        Still you can say “I don’t care. People seem designed by nature to me!” That’s fine, but note that he’s not going to follow along with your convention. And he’s certainly not going to say that a Chinese room “understands” just because you say it can function like a human functions.

      • Wyrd Smythe

        Correct on all points, Eric.

      • JamesOfSeattle

        The problem here is that teleonomy is not an illusion, it’s a real thing. And in fact understanding teleonomy is necessary to understand consciousness. It is necessary to understand semantics. It explains how you get semantics inside your brain. Teleology is just second order teleonomy. Teleology explains how you get semantics between brains.

        What sets humans apart from animals is their vast capacity to generate semantics within their brains, but you need teleonomy to get there. Teleology is just the next higher order step you can take once you have that internal capacity for semantics. But teleology is just a reiteration of teleonomy.

        And teleonomy is purpose driven. But it’s more general purpose than a specified human purpose. Richard Dawkins uses the term archeo-purpose.

        Wyrd said “The room appears as if it understood. But, of course, under the hood, it seems it does not — or at the least there seems no understanding in any of the parts.”

        Exactly! There is no understanding in any of the parts! The understanding is in how the system works. Same as in your brain. There is no understanding in any of the parts. Understanding is in how the parts work together.

        *

      • Wyrd Smythe

        “The problem here is that teleonomy is not an illusion, it’s a real thing.”

        I’ll go along with that, but I think our understandings of it differ.

        “And in fact understanding teleonomy is necessary to understand consciousness. It is necessary to understand semantics.”

        For instance, I don’t agree with that. But I would say evolution is.

        “Teleology is just second order teleonomy.”

        By which I take you to mean (human-based) teleology comes after (evolution-based) teleonomy, which clearly it has to, but the concepts are a bit Yin-Yang — very different. There is no “just” about it.

        “Teleology explains how you get semantics between brains.”

        That’s debatable, depending on what you mean by semantics (which, remember, is a very general term with many applications). Dogs take meaning (semantics) from the behavior (symbols) of other dogs — tail wagging, for example. Do you feel teleology was involved in that case? Isn’t it teleonomy as a result of evolution?

        I’m not entirely sure I even see human language as “designed” more than “evolved” — it’s certainly not like Klingon or Esperanto — those clearly have a teleology.

        “And teleonomy is purpose driven.”

        This is exactly why I’m not sure how much I like the term, teleonomy.

        It does have some controversy associated with it. It’s mainly a term invented for biologists to be able to refer to goals without having to invoke teleology — which evolution denies.

        The Wiki page for it has a useful example that refers to a bird species migrating in the fall:

        “…in order to escape the inclemency of the weather…”

        Versus:

        “…and thereby escapes the inclemency of the weather…”

        Careful biologists would avoid the former due to the implications of teleology. But teleonomy allows them to talk about purpose in evolved species.

        My concern is that it actually makes it easy to conflate evolution and teleology in the unwary who miss an important distinction: Teleonomy is about the appearance of purpose in evolved organisms as the result of evolution. But as the Wiki article points out, “the process of evolution itself is necessarily non-teleonomic.”

        So teleonomy is not, itself, purpose-driven. It is about purpose arising in evolved organisms.

        “Understanding is in how the parts work together.”

        Yes. I’ve never denied semantics exist in the operation of the room. That’s almost axiomatic. The point we’ve been discussing is where those semantics come from.

        I say “The Designer” whereas you seem to disagree.

        I went back over this thread to see if you ever defined semantics. It seems to amount to:

        “Semantics requires cooperation between what creates the symbol and the mechanism (syntax) that interprets the symbol. But that cooperation happens before the interpretation happens. Semantics isn’t an illusion, it’s an abstraction, a pattern, and you can’t explain that pattern without referencing the causal history of the interpreting mechanism.”

        But what exactly is semantics to you? How about some concrete examples?

      • Wyrd Smythe

        [tap, tap] Is this thing on? Can anyone hear me? [Walks away muttering…]

      • JamesOfSeattle

        [experiencing low-ish bandwidth, stay tuned]

  • keithnoback

    This discussion always ends up stuck in the same roundabout.
    Searle has proposed several variants which eliminate the room itself in hopes of clarifying things.
    It always helped me to go back to Propositional Logic 101, Lesson 1 where you learn that logically valid, yet false, statements are possible.
    That’s all.

  • JamesOfSeattle

    [okay, re-starting here so the reply button isn’t so far away]

    What exactly is semantics to me? Semantics is a pattern discernible in certain processes. So any process can be described in the form:

    Input —> [mechanism] —> Output

    A semantic process is one where the input was created for the “purpose” of representing a pattern, and the mechanism was created for the “purpose” of recognizing the input and generating an output which is “valuable” with respect to the meaning (the pattern represented) of the input.

    “Purpose” here refers to the teleonomic concept. I haven’t explored teleonomy deeply yet, so my description may need some work, but here goes nothin’: Some physical systems/circumstances tend to change the world in such a way that it moves toward a particular state. When the world deviates from that state, the system tends to push it back toward that state. The generic term for this is, I think, homeostasis. In chaos theory I believe it is called an attractor state. Any such system can be said to exemplify the teleonomic “purpose” of directing the world toward that state. The state in question can be called the (teleonomic) “goal”.

    Natural Selection is a generic description for a particular set of systems that work toward their teleonomic goals in a particular way, namely through variation and selection. Evolution is the result of one such system.

    An example of teleonomic purpose is chemotaxis, in which bacteria move toward a food source. So,

    Input (nutrient concentration) —> [mechanism] —> Output (motion up gradient)

    Here the Mechanism was created by a Natural Selection system for the purpose of moving the bacterium toward food. Note: this is not a semantic process as described above because the input was not created for a purpose.

    Let’s consider a case where the input is created for a teleonomic purpose: neurons. Let’s consider a simplified system: a cone cell in the retina generates neurotransmitter 1 in response to red photons. A neuron responds to that neurotransmitter by generating neurotransmitter 2 deep in the brain. So we have:

    neurotransmitter 1 —> [neuron] —> neurotransmitter 2

    Neurotransmitter 1 is a symbol representing that the red photon was absorbed. It was created for the purpose of representing that event. Likewise, neurotransmitter 2 is a symbol for that same red photon event, and thus it is a valuable response relative to the meaning of the input. So this process counts as a semantic event, albeit a degenerate one. Usually when we talk about semantics we consider a mechanism that can take more than one input and generate an output relevant to the meaning of the particular input, but in the neuron case just described there would be only one possible output, and one meaning, thus, a degenerate case, but still a case of a semantic process.
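    The two worked examples above can be sketched as a toy classifier. This is my own illustration, not anything from the thread: the two boolean “purpose” flags stand in for the teleonomic conditions in the definition (was the input created to represent something, and was the mechanism created to recognize it and respond usefully?).

    ```python
    # Toy sketch of the input -> [mechanism] -> output scheme described above.
    # The boolean flags are stand-ins for the teleonomic "purpose" conditions;
    # the process names are illustrative labels only.

    from dataclasses import dataclass

    @dataclass
    class Process:
        name: str
        input_created_for_purpose: bool      # input made to represent a pattern?
        mechanism_created_for_purpose: bool  # mechanism made to recognize/respond?

    def is_semantic(p: Process) -> bool:
        # On this definition, a process is semantic only if BOTH the input
        # and the mechanism were created for a (teleonomic) purpose.
        return p.input_created_for_purpose and p.mechanism_created_for_purpose

    chemotaxis = Process(
        "nutrient gradient -> [bacterium] -> motion up gradient",
        input_created_for_purpose=False,     # the gradient just exists
        mechanism_created_for_purpose=True,  # shaped by natural selection
    )
    neuron = Process(
        "neurotransmitter 1 -> [neuron] -> neurotransmitter 2",
        input_created_for_purpose=True,      # created to represent the red-photon event
        mechanism_created_for_purpose=True,
    )

    print(is_semantic(chemotaxis))  # False
    print(is_semantic(neuron))      # True
    ```

    The sketch only encodes the classification, of course; the argument in the thread is precisely over whether those “purpose” flags can be set without reference to a designer.
    
    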

    Any problems with the above?

    *

    • Wyrd Smythe

      “What exactly is semantics to me? Semantics is a pattern discernible in certain processes.”

      Well, at the very least, we have quite differing views of semantics.

      A “pattern”? Any pattern? In a “process”? So no semantics in a book?

      “Some physical systems/circumstances tend to change the world in such a way that it moves toward a particular state.”

      That’s the first time I’ve heard of attractors as tending to “change the world” but I’m familiar with dynamical systems with attractors in the phase space.

      “Usually when we talk about semantics we consider a mechanism that can take more than one input and generate an output relevant to the meaning of the particular input…”

      That explains why you see the GFR as having semantics of itself.

      To me, you’ve just described a basic IPO mechanism, and I see no semantics in that without reference to what designed the IPO module.

      On the one hand you say the bacterium does not have semantics “because the input was not created for a purpose.” But the neurotransmitters in the visual system do because they were created for a purpose?

      What “purpose” that isn’t equally true for the bacteria? Didn’t evolution provide that purpose in both cases?

      Where did the “purpose” and “semantics” in the GFR come from? How did they get there?

And what do you think?
