In the last post I explored how algorithms are defined and what I think is — or is not — an algorithm. The dividing line for me has mainly to do with the requirement for an ordered list of instructions and an execution engine. Physical mechanisms, from what I can see, don’t have those.
For me, the behavior of machines is only metaphorically algorithmic. Living things are biological machines, so this applies to them, too. I would not be inclined to view my kidneys, liver, or heart as embodied algorithms (their behavior can be described by algorithms, though).
Of course, this also applies to the brain and, therefore, the mind.
That said, minds make the situation complex on multiple counts. For one, minds are the only topic we study while also having direct subjective experience. We know what consciousness feels like from the inside.
There is also the fact that brains are, by far, the most complex natural mechanisms we've encountered. We simply don't understand how they work in purely biological and mechanical terms.
But that is a nut we are cracking — one we will eventually open. We may run into any number of physical limits (resolution, bandwidth, data size, computational intractability, even Gödel) that prevent technology, but we ought to be able to grasp the science.
At the least we’ll have an understanding about why any forever locked doors are forever locked (because Turing or Gödel locked them and threw away the key before we got there).
Given the actual hard problem (how phenomenal experience arises in a biological machine when physics has no account for it), plus the uniqueness of the brain as an information processing system, it’s possible some kind of naturalistic dualism turns out to be true.
Dualist only in the sense that it might be surprising to discover that solid materials arranged the right way, and stimulated by energy, can emit coherent light. That seems almost… magical. But it happens.
Or that different materials arranged a different right way, and stimulated by energy, can emit coherent thought. That, too, seems almost magical (but it happens).
[It’s only our commitment to naturalism that forces us to insist it can’t possibly be magic. Maybe someday science throws up its hands and says, “We just can’t figure it out. It’s effectively magic to us!”]
§ §
Anyway, to the topic at hand: The Mind Algorithm.
To review, the question arises from a list presented in my Structure vs Function post. The list divided (numerical) mind simulations into three categories:
- A physical simulation of the brain.
- A functional emulation of the brain.
- A unified mind algorithm.
The first two options have foreseeable paths that we’re already setting forth on. The first involves physics simulations at some level of granularity. The second involves algorithmic replacements for brain functions.
In the post I said the last was something of a Holy Grail and chimera (remember, the Holy Grail didn’t exist although many sought it).
The challenge involves the possible existence of a "mind algorithm," so what would such a thing entail?
§
One approach is to view physical machines as Turing Machines.
Machines typically have a closed set of states they cycle through, which is exactly why algorithms are good for describing them.
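To make that concrete, here's a minimal sketch (a made-up two-state turnstile, nothing more) of why a machine with a small, closed set of states invites an algorithmic description: you can simply enumerate every state and transition.

```python
# A minimal sketch, using a hypothetical turnstile: with a closed set of states,
# the whole machine can be written down as an exhaustive lookup table.
TRANSITIONS = {
    ("locked",   "coin"): "unlocked",
    ("locked",   "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def step(state, event):
    """Advance the machine by one event."""
    return TRANSITIONS[(state, event)]

state = "locked"
for event in ["push", "coin", "push"]:
    state = step(state, event)   # locked -> locked -> unlocked -> locked
```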
So it’s very tempting to see them as reified algorithms, and — when it comes to human-made machines — I can see a justification for the view. As I wrote last time, one can view the blueprint or design as the “program” and the machine itself as the engine. (I also said I think that’s more a metaphor than a reality.)
In any event, accepting the view, and seeing biology as a machine, gets us to algorithms reified in biological things.
A problem I see is that it's one thing to look at a clock and pick apart the causal machine and implied algorithm; the machines in biology are very, very tiny. In some cases, molecular.
I’m not certain the resulting organ (the brain) is a “machine” in the usual sense (of a fully deterministic mechanism).
More to the point, I’m therefore not certain we can view the brain, or what it does, as algorithmic. Its complexity may transcend such a description.
§
Which, again, is not to say algorithms can’t describe these systems to some degree of precision. They absolutely can.
The issue is whether a physical process is, itself, an algorithm (in more than a metaphorical sense). At a very fine grain (how information in DNA becomes proteins, for example), things do look very mechanical and algorithmic.
Above that level, however, things become increasingly analog and less algorithmic. I’m not sure having some properties of algorithms at low levels is a strong argument that a system is algorithmic.
There is another constraint I couldn’t get to last time. It involves the instruction set, which is the list of things the engine knows how to do.
Obviously an algorithm can’t have instructions that can’t be performed — that would break the algorithm. So it is limited to the instruction set known to a given engine.
It’s possible to view a simple machine, with a closed set of states, as an algorithm because we sense the instruction set for it is manageable. An instruction set for a putative mind algorithm seems a formidable challenge. Such a thing might be almost infinite.
This is also why the simple “machines” of biology appear algorithmic — we sense that manageable instruction set.
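As a rough illustration (every name here is invented for the sketch), an engine is defined by what it knows how to do; hand it an instruction outside that set and the algorithm simply can't run.

```python
# A hedged sketch, not any real machine: the engine's instruction set is the
# closed list of things it knows how to perform.
KNOWN_INSTRUCTIONS = {
    "move_right": lambda state: state + 1,
    "move_left":  lambda state: state - 1,
    "reset":      lambda state: 0,
}

def execute(algorithm, state=0):
    for instruction in algorithm:
        if instruction not in KNOWN_INSTRUCTIONS:
            raise ValueError(f"engine has no instruction named '{instruction}'")
        state = KNOWN_INSTRUCTIONS[instruction](state)
    return state

execute(["move_right", "move_right", "reset"])      # fine; the engine knows all three
# execute(["move_right", "have_an_original_idea"])  # would raise: not in the set
```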
§
One more consideration I’ll throw into the mix: Algorithms, at least the obvious algorithms, are — it seems to me — distinctly human inventions.
They’re very early members of the information age (which combined technology and mathematics). As such, I’m not sure nature deals in algorithms, as tempting as it may be to see them there.
There is an interesting question about where algorithms come from. Can the thing that invented the idea be, itself, an algorithm?
Certainly algorithms can be created (by humans!) to generate other algorithms, but can an algorithm come up with a totally new idea (not some mashup of previous ideas)?
One characteristic of many mathematical operations is that they produce new values, but not new kinds of numbers. When you add two integers, you always get an integer. The formal expression is that "integers are closed under addition."
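In symbols, the closure property reads like this (and, for contrast, integers are not closed under division, which is one way new kinds of numbers get forced on us):

```latex
\forall\, a, b \in \mathbb{Z}:\quad a + b \in \mathbb{Z}
\qquad\text{but}\qquad
1 \div 2 = \tfrac{1}{2} \notin \mathbb{Z}
```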
Algorithms seem, generally speaking, to be closed under their instruction sets.
They aren’t original thinkers. I like to think I am.
§
One final note: Al Gore and algorithms … these things are not the same. From what I’ve seen Al Gore has no rhythm at all.
Stay mindful, my friends!
∇
January 9th, 2020 at 7:52 am
The question of algorithms seems similar to the question of mathematics and platonism. Do we discover them, build them based on patterns we see out in the world? Or do we create them whole cloth (nominalism)? Or do they come from some platonic realm? I’m a mathematical empiricist (as in the ultimate axioms of mathematics are empirical), and my gut reaction is the same for algorithms.
January 9th, 2020 at 8:35 am
“The question of algorithms seems similar to the question of mathematics and platonism.”
One point I’m making in this post is that I see algorithms as a construction of intelligence, so while I see basic mathematics as potentially Platonic, higher math constructs are more questionable, and algorithms are a very high-level math concept.
(If we’re talking Church-Turing algorithms, then we are saying algorithms are strictly mathematical as illustrated by lambda calculus.)
I’ve used that Kronecker quote (about God making the integers) a lot because I think there’s some truth to it. I’m not sure the line is between the integral types and the reals (although there is an argument for it), but I do draw the line at, for example, math involving different uncountable types.
And likewise am inclined to draw the line at algorithms, but that’s just my take.
“I’m a mathematical empiricist (as in the ultimate axioms of mathematics are empirical), and my gut reaction is the same for algorithms.”
For years I’ve had a note on my idea board that reads: “Reals are Platonic. Naturals are Aristotelean.” Which is to say that integers are empirical, but the reals are more idealistic.
I've never posted directly about the topic (having mixed feelings myself), and lately I'm taken with the idea of ontological anti-realism, which suggests there simply is no fact of the matter. That makes it even harder to decide what I think.
It is said most mathematicians are Platonists (perhaps secretly). Math is too eerie. The more you know about it the worse that feeling gets. It gets hard not to see it as a discovery.
January 9th, 2020 at 1:57 pm
On integers, I’m currently poking around in Morris Kline’s Mathematics for the Nonmathematician. He discusses how upset the Pythagoreans, who based their entire philosophy and religion around whole numbers, were when irrational numbers were discovered. (Supposedly the guy who discovered them did so at sea, and was thrown overboard for discovering something so upsetting.)
And not just them. Apparently everyone resisted considering irrational numbers to be numbers, for millennia, until the scientific revolution, and its need for mathematical robustness, forced the issue. It makes me feel better for thinking there's something not right about irrational numbers, despite just about everyone who knows math insisting to me that there's no problem.
Kline, incidentally, was either an empiricist or nominalist. So apparently platonism, while popular among mathematicians, isn’t universal.
January 9th, 2020 at 2:23 pm
“Apparently everyone resisted considering irrational numbers to be numbers, for millennia…”
Yeah, the simple damned square root of two threw everyone for a loop. Not to mention pi, which is just the ratio of a circle's circumference to its diameter. That such simple concepts led to Devil Numbers was really hard to swallow.
“It makes me feel better for thinking there’s something not right about irrational numbers,”
Pardon the pun, but in a very real way, the irrationals are the real numbers. (The rationals are also real, of course, but they don’t have to be — they can always be expressed as p/q.)
And what you’re talking about here is exactly what I meant in my previous comment about the reals being Platonic (which is to say idealistic). They introduce all sorts of Zeno’s Paradox like problems.
The confounding thing is that the square root of two and pi (and many others) are so simple as to seem empirical. BTW, FWIW, the problems of the continuum are known to at least some.
(As an aside, “empirical” is an interesting term to me here. Our empirical experience with objects leads to making up math, or our empirical experience with objects leads us to discover the Platonic realm. I can see that either way. And nominalism would almost seem to exclude any math at all! Call me old-fashioned, but I think I’ll stick with Platonic and Aristotelian. 😛 )
January 9th, 2020 at 3:51 pm
That continuum post looks interesting. I’ve always taken Zeno’s paradox to show that space must ultimately be quantized at some level. But I’ve also heard there are solutions that don’t involve that. (Although the one or two I tried to follow were opaque to me.)
My view is closer to Aristotle's, although if I remember correctly, there are some skanky things in Aristotelian realism that a modern empiricist is leery of, such as the claim that red actually exists in the world. (Not that we can really blame Aristotle for not figuring that out in the 4th century BC.)
January 9th, 2020 at 4:27 pm
“But I’ve also heard there are solutions that don’t involve that.”
There’s an old math joke: An infinite number of mathematicians walk into a bar. The first orders a beer, the second orders half a beer, the third a quarter beer, then an eighth, and so on, each ordering half what the previous one did. The bartender says, “You guys are all nuts!” and pours two beers. (I referenced that joke in my Cantor’s Diagonal post. Real numbers are… “interesting.” 😮 )
Zeno’s Paradox can be resolved (without quantizing) by “taking the limit” of the series 1/2 + 1/4 + 1/8 + … When you do that, you find the answer converges on the value 1.0. (But be careful. Similar logic can lead one to thinking 1/0 is infinity. It’s not. It’s an undefined operation. It’s just not allowed. Equating it to infinity breaks things.)
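For reference, the limit works out like this: each partial sum falls short of 1 by exactly the size of the last term, and that shortfall goes to zero.

```latex
\sum_{n=1}^{N} \frac{1}{2^n} \;=\; 1 - \frac{1}{2^N}
\qquad\Longrightarrow\qquad
\sum_{n=1}^{\infty} \frac{1}{2^n} \;=\; \lim_{N \to \infty}\left(1 - \frac{1}{2^N}\right) \;=\; 1
```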
A similar example is that 1.999… and 2.0 are just two ways of spelling the exact same real number, 2.0. (A lot of math students lose their shit over that one. Some never recover.)
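The standard algebraic argument, for what it's worth:

```latex
x = 1.999\ldots \;\Rightarrow\; 10x = 19.999\ldots \;\Rightarrow\; 10x - x = 18 \;\Rightarrow\; x = 2
```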
“…such that red actually exists in the world.”
That's one of those tree falling in the forest questions. Depends on how you define sound (or red). Air (or other mechanical) vibrations in the 20 Hz to 20,000 Hz range certainly exist even if there are no ears to hear them. Likewise, photons with wavelengths in the roughly 620-750 nm (red) range also exist, even if nothing "sees" them.
I’m a philosophical realist, so I consider the vibrations and photons real and primary, and the hearing and seeing of them secondary effects.
January 9th, 2020 at 4:34 pm
“My view is closer to Aristotle’s,”
Which actually is the post-relevant topic I meant to react to. (I love math and easily get lost talking about it.)
Doesn’t that create some tension with the idea that our brains could be natural algorithms?
If algorithms are math (definitely true) and math is something we make up — a game of symbols — (and it’s entirely possible this is true), then how can the brain (or any natural thing) be an implementation of an algorithm?
(From what you say, I’d judge we’re both on the same page regarding algorithms as artificial constructions? As such, there’s nothing odd about using algorithms to describe brains (or any natural thing). But can any natural thing be reified math?)
January 9th, 2020 at 8:18 pm
I’m not entirely sure if I’m following the logic, so sorry if this reply misses the mark.
Remember that I’m a mathematical empiricist, so saying an algorithm is an artificial construction isn’t to say, at least for me, that it’s created whole cloth without reference to things out in the world.
Evolution produces things that effectively function as though they were designed. It’s possible for us to study the result and then reverse engineer that design, which to me is all an algorithm is, at least at a certain level of abstraction. Then, as we can with any design, we can use it to build a new physical system that implements the same functionality, or if the functionality falls within algorithmic boundaries, configure our general algorithm engines to implement it.
To me, the only question here is whether a mind falls outside of that boundary. I've read a lot of cognitive neuroscience in the last few years, and haven't seen anything to make me think it does. I'm open to the possibility that it might, but there needs to be some plausible reason.
Although as I’ve noted before, performance, capacity, and power consumption issues might be serious constraints with traditional hardware architecture. They may not prevent it in principle, but might in practice.
January 9th, 2020 at 9:35 pm
“I’m not entirely sure if I’m following the logic, so sorry if this reply misses the mark.”
I wrote a reply (that, of course, got long), but I think I need to ask some questions first…
“Remember that I’m a mathematical empiricist,…”
Remember I struggle with exactly what "empirical" means in terms of mathematics… hence my questions.
I’ve argued that math can be a priori. Our empirical experience with math on top of that is, I think, what makes it so eerie. It seems like something we invent, an intellectual toy, but then it turns out to describe reality in some astonishingly useful way.
The thing is, if math is empirical (and I know you’re an instrumentalist), then are you saying we discover it in the world? That it’s out there to find?
To me, that’s closer to the Platonic view, that math exists without us and we discover it. (My understanding is that it’s not clear Plato really thought a perfect realm of forms existed in any concrete way. He may have meant it in a conceptual way. For me the contrast is between whether math is discovered or invented.)
Anyway, if you do, in fact, believe math is out there to be discovered empirically, it would make sense to think a mind algorithm might also exist to be found.
I had taken you for more of an Aristotelian (which would still be compatible with instrumentalism), in which case algorithms are inventions of ours and not likely to be found in nature.
January 10th, 2020 at 7:18 am
I took Aristotle to hold to mathematical realism (Aristotelian realism), but maybe it’s best to leave him out of it. I’d have to go bone up on his positions in order to compare where I match and don’t match up.
When I say "empiricism" in this context, I'm really using it as shorthand to just say we don't make it up disconnected from the wider reality. In truth, I think we have some innate intuitions about quantities, but we also discover things empirically. I know pi can be derived deductively, but that's not how the first people to make use of that ratio discovered it. They discovered an approximation of it by measuring circle-like shapes.
Of course, we then turn around and deductively work out all kinds of things, some of which match reality, and some of which don’t. We do this in more than just math, but in all things. But in most things, when we construct a model that doesn’t match reality, we either refer to it as a false theory, fiction, or something along those lines. In math when it happens, we call it pure abstraction.
You noted my scientific instrumentalism, but I think your interpretation of it reminds me why I hate taking on any "isms". People always project commitments on me I'm not comfortable with. My instrumentalism doesn't prevent me from having theories about reality, and one of those theories is that mathematics is founded on relations we observe, or have innately evolved to know about, in the world. That's all I really need to explain the "unreasonable effectiveness" of mathematics.
So yes, we create mathematics, including algorithms, but our creations, if they’re going to be useful, are constrained by empirical reality.
January 10th, 2020 at 8:30 am
Let me ask you a question I should have asked up front: What is your definition of an algorithm?
January 10th, 2020 at 9:48 am
I’m fine with the standard definitions, such as an ordered procedure for accomplishing a task.
January 10th, 2020 at 11:11 am
Okay, great! And that includes the notion of an instruction set and an engine capable of reading, understanding, and performing that instruction set, I assume? (Whether that be a recipe and a chef or a set of 808x code and an Intel chip.)
Other than DNA/RNA, which is literally code patterns being expressed into objects — a good analogy might be those early programmable looms — do you see anything else in nature that clearly uses this pattern?
(To be specific, of a set of code and something that reads, understands, and performs that code.)
January 10th, 2020 at 1:09 pm
I think your stipulations narrow the scope to a specific type of algorithm. But in my mind, it really doesn’t matter. I’ll accept them. What’s important is whether an algorithm is possible that is functionally equivalent to a particular system.
January 10th, 2020 at 1:50 pm
What type of algorithms don’t follow those stipulations? I’m under the impression those are generic attributes.
January 10th, 2020 at 1:53 pm
(We’re on the same page in terms of modeling functionality. The point we’re debating is whether a natural system is an algorithm.)
January 10th, 2020 at 2:35 pm
Obviously I think what neural networks do is algorithmic.
Above, I allowed that algorithms, as human notation, are created by us, but they describe something a physical system does, either prescriptively or descriptively.
January 10th, 2020 at 10:16 pm
“Obviously I think what neural networks do is algorithmic.”
Right. I’m just trying to understand why.
It's an abiding belief you have, which seems in contrast to your skeptical nature. In all our conversations over the years, the only justification for the belief seems to be analogies with logic gates and memory. I am frankly mystified that such an abiding belief rests on that. (Of course, maybe, as you say, I'm missing something. Always a possibility.)
You go for what I’m calling “strong” computationalism, the view that everything is a computation. I can see why someone would commit to “weak” computationalism — which I define as the view that computation can emulate brain function — but I truly don’t understand the justification for the “strong” version.
So I keep worrying at it, like a dog with a bone.
In any event, the question here is, if we agree an algorithm is a description of how to do something, how is any natural process like that? Where is that description? What reads it?
The theoretical metaphysical question aside, I quite agree the practical question is whether algorithms can provide the functionality in question. That’s an open question that’s gonna matter.
January 10th, 2020 at 9:56 am
Some minor notes.
Algorithms are ordered perhaps in memory but in execution they could vary based on input. However, they would generally (assuming no random jumps) execute in the same order with the same input.
However, I would think a "Mind Algorithm" might be self-modifying, so any order could be disrupted based upon how modifications get applied. By self-modifying I mean, in computer terms, the actual insertion or deletion of instructions on the fly. This is likely closer to what brains and neurons do.
January 10th, 2020 at 11:01 am
“Algorithms are ordered perhaps in memory but in execution they could vary based on input.”
Absolutely. That’s the whole point of branching — code has multiple pathways through it, and the order steps are executed in is a very different thing from what we mean in saying an algorithm is an ordered set of states. In a loop, for example, the same instruction(s) can be executed repeatedly until some condition is met.
When we say an algorithm is an ordered set of states, we mean the algorithm itself — the list of instructions — has a required ordering. An instruction can't say to branch to step #42 without the algorithm being in an order (and therefore having internal "addresses" of some kind, such as "step #42").
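A quick sketch of the distinction (the instruction names here are invented for illustration): the listing below has a fixed order with addressable steps, yet the trace of executed steps loops and branches depending on the input.

```python
# A rough sketch: the program is an ordered list with addressable steps,
# but the order of *execution* depends on the input.
program = [
    ("set", "count", 0),      # step 0
    ("test_ge", "count", 4),  # step 1: if count >= limit, branch to step 4
    ("inc", "count"),         # step 2
    ("jump", 1),              # step 3: loop back to the test at step 1
    ("halt",),                # step 4
]

def run(program, limit):
    env, pc, trace = {}, 0, []
    while True:
        op = program[pc]
        trace.append(pc)
        kind = op[0]
        if kind == "set":
            env[op[1]] = op[2]; pc += 1
        elif kind == "test_ge":
            pc = op[2] if env[op[1]] >= limit else pc + 1
        elif kind == "inc":
            env[op[1]] += 1; pc += 1
        elif kind == "jump":
            pc = op[1]
        elif kind == "halt":
            return trace

print(run(program, 2))   # [0, 1, 2, 3, 1, 2, 3, 1, 4]: same listing, different path per input
```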
I’ve posted a lot about “system states” (essentially the algorithm steps) versus “states of the system” (that algorithm being executed). A lot of it was reaction to the Chalmers notion of a CSA. One place to check it out is this post here.
“However, I would think a ‘Mind Algorithm’ might be self-modifying…”
I think (assuming one exists) it would have to be (I agree with your definition), and that would indeed change the ordering of that algorithm. (Just a little compared to how big it must be overall.)
“This is likely more closely to what brains and neurons do.”
Where do you see brains inserting or deleting instructions? Mike and I have touched on how brains do physically rewire themselves, but that's not their primary mode of operation. Mostly they change through synapse potential changes — do you see those as inserting and deleting?
(FWIW, per the discussion Mike and I are having, I would tend not to see them that way, but more analogous with changes in memory content. But it’s all a little metaphorical in any case.)
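To sketch the distinction I'm drawing (all the names are made up): changing a synaptic weight is like updating data the program reads, whereas true self-modification rewrites the instruction list itself.

```python
# A rough, hedged sketch of the two kinds of change being contrasted here.
weights = {"synapse_42": 0.3}                       # "memory content"
program = ["read_inputs", "weigh_inputs", "fire_if_over_threshold"]

weights["synapse_42"] = 0.7            # learning as a data change; the instruction
                                       # list is untouched
program.insert(2, "inhibit_neighbor")  # self-modification proper: the algorithm
                                       # itself gains a new step on the fly
```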
January 10th, 2020 at 11:18 am
Learning. Cells that fire together wire together.
Whether that is exactly equivalent to a self-modifying algorithm could be debated but it is self-modifying.
January 10th, 2020 at 11:25 am
Agreed! (That said, whether it is “exactly equivalent” is exactly the debate topic here. I’m arguing it’s strictly metaphorical and can’t be taken literally. 😀 )
January 10th, 2020 at 11:36 am
Agree it’s metaphorical.
January 10th, 2020 at 11:50 am
BTW, did you guys see this?
https://science.sciencemag.org/content/367/6473/83
I don’t think it has much to do with consciousness per se even though some of the popular articles have tried to draw that inference.
January 10th, 2020 at 11:53 am
I didn’t see it. Neurophysiology is way more Mike’s area of interest and expertise than mine. I’m a software guy. 😀
January 10th, 2020 at 1:11 pm
I did see the news stories, but haven’t gone through the paper itself. The ability of a neuron to do XOR operations is interesting. The conjecture that it’s unique to humans seems dubious.
January 18th, 2020 at 2:26 pm
Yeah, XOR does seem a fairly simple concept at root: this or that, but not both.
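The whole truth table fits in a few lines. The interesting part, and presumably why the paper made news, is that XOR isn't linearly separable, so a single classic threshold unit can't compute it on its own.

```python
# XOR in full: true exactly when the two inputs differ.
for a in (False, True):
    for b in (False, True):
        print(a, b, a != b)   # for booleans, (a != b) is XOR
```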
I saw an article about this in Quanta today (the article itself was published on the 14th).
It does speak to the complexity of an individual neuron and its synapses. An algorithm to simulate (just) its function would not be trivial by a long stretch.
An algorithm to simulate the system might need to have at least chemical resolution. It seems clear at this point a neuron network alone isn’t sufficient (but may be necessary). The system is more nuanced than that. If quantum effects turn out to be significant — not necessarily in consciousness, per se, but in the functioning of the brain itself — then a system simulation might need to be quantum.
(Which requires simulating part of the universe inside the universe — a trick with limits. Which also limits the simulation hypothesis: it’s not likely we’re a VR inside a VR inside a VR etc. That we can potentially simulate reality with quantum computers might be seen as evidence we’re not a VR. Or at least not deep inside nested ones.)
January 18th, 2020 at 4:36 pm
Yeah, if we're talking mind uploading, simulating the brain's detailed operations seems problematic. Doing it perfectly probably requires going down to the molecular level. (I still haven't seen any evidence that quantum physics is part of it.) It may still be possible to model emergent rules of what happens at the cellular level, but as we've discussed before, even that would be problematic.
I think any successful upload would in fact have to be a port, with some reconstruction necessary for the new substrate. But that requires a far more detailed understanding of what’s happening than we have right now, or are likely to have within the next few decades.
But again, it remains an engineering problem that may be solvable in centuries. If it can happen in nature with modest energy (12 watts), it can be done with technology. Eventually.
January 18th, 2020 at 5:19 pm
“Doing it perfectly probably requires going down to the molecular level.”
When you consider stuff like receptors outside the actual synapse — which turn out to be significant in pain mediation (IIRC) — it seems stuff is going on at a molecular grain. Which means the whole system, functional or physics sim, is large.
[Now I want to know how many molecules are in a typical human brain. (Sounds like a Randall Munroe problem.)]
Wrt the quantum physics… here’s a question: how important would QP be in a simulation of a plant using photosynthesis? Would something important be missed if a molecular-level sim just assumed certain photosynthetic functions? Assume the point of the sim is to fully replicate all plant processes.
As you say, pragmatically speaking, molecular or quantum might not make much difference if both are out of reach computationally. Such simulations, even if possible, might be extremely expensive and might not operate in real time. (Egan often has characters living at speeds other than real time — usually slower, due to computation costs. Stephenson, in Fall, sped time up or slowed it down in the VR depending on what his plot needed. It allowed a jump into a far VR future when some real world characters were still around outside. In both cases, speed depended on available computation power.)
“I think any successful upload would in fact have to be a port,”
Ha! At the least, an interesting sci-fi plot. Elsewhere I mentioned Piers Anthony and body-switching (Chalker, too). Those can be viewed as ports, and story elements often involve characters learning how their new body operates and behaves. (Chalker, in particular, seems to believe in biological destiny — thought follows form.)
But a sci-fi story about how different it is to be re-born as an AI… That's fertile ground. (Not unplowed, though. I have memories of stories about people "waking up" as AIs, although I can't name one. McCaffrey's Ship series, maybe?)
“But again, it remains an engineering problem that may be solvable in centuries. If it can happen in nature with modest energy (12 watts), it can be done with technology.”
With the caveat it may require structural isomorphism, I totally agree. No question that’s an engineering problem we’ll eventually solve. We agree on believing that will work. I just wish I could live long enough to see them throw the switch.
As you know, I am skeptical when it comes to a radically different structural organization operating according to a different set of rules.
January 18th, 2020 at 6:19 pm
“how important would QP be in a simulation of a plant using photosynthesis?”
Obviously the quantum walk would have to be accounted for, particularly if “the point of the sim is to fully replicate all plant processes”.
Star Trek DS9 once had an episode where a character who was slowly dying gradually had pieces of his brain replaced to keep him going. He started getting robotic toward the end when they decided it was time to stop and let him die.
Greg Egan goes through the process of an AI character being born in one of his stories. John Scalzi has a similar sequence for one of his ghost brigade soldiers. Iain Banks has a character wake up post death in a simulation (with the word "simulation" showing up in her peripheral vision).
And of course, Westworld played with it. (Not to mention Black Mirror.)
January 18th, 2020 at 6:45 pm
“Star Trek DS9 once had an episode where a character who was slowly dying gradually had pieces of his brain replaced to keep him going.”
Sounds like a script writer knew about David Chalmers’ “fading qualia” argument. 🙂
AI point of view does seem a lot less common than ‘former human in a machine’ stories. James P. Hogan did a story about humanity’s first AI. We built it in space, just in case, and it still nearly wiped us out. Until it realized we were intelligent beings.
Part of that is told from the evolving AI’s point of view; part of it is told from the increasingly desperate humans’ point of view. If I’m not conflating it with some other story, it does begin with the creation of the AI — almost HAL’s origin story.
January 18th, 2020 at 7:00 pm
“Sounds like a script writer knew about David Chalmers’ “fading qualia” argument.”
Interestingly, that episode came out early in the same year that Chalmers’ paper on fading qualia came out. https://en.wikipedia.org/wiki/Life_Support_(Star_Trek:_Deep_Space_Nine)
Hmmm.
The only Hogan I remember reading was his gentle giant series. I recall it being pretty good, but can’t remember much about it. (Except for a character making what I took at the time to be an anti-religion statement, which my secluded southern upbringing didn’t expose me to often.)
January 18th, 2020 at 7:18 pm
Wouldn’t it be funny if Chalmers got the idea from DS9? 😀
Hogan, like Orson Scott Card, has problematic personal views, but I always enjoyed his work. As I’ve mentioned, I’m a sucker for hard SF. (The AI novel of his I was thinking of is The Two Faces of Tomorrow.)
Uncomfortable opinions aside, he was firmly science-minded, so it’s not surprising his characters would express anti-religion opinions.
January 18th, 2020 at 7:49 pm
On Chalmers, he might have been inspired by that episode, but as I think about it, I'm pretty sure his position is opposite of the show's message. Star Trek, of course, often takes a human exceptionalist stance, so it shows the character slowly losing his humanity (or Bajoran-ness) as pieces of his brain are replaced. If I recall correctly, Chalmers' point is that he shouldn't have noticed any difference.
I’ve learned to ignore the odious opinions of authors, actors, or directors whose work I like, at least unless they turn their work into a platform for shoving those views at me. (A line Card eventually did cross unfortunately.)
January 18th, 2020 at 8:53 pm
Chalmers certainly raises the question of what ought to happen. If there is a difference, why, and at what point? And what happens if you switch back and forth? (Assuming the experiment is even possible. 😉 )
The Trek spirit lives on, of Kirk and his theme, “Yeah,… we humans… sometimes suck, but we’re… still better… than you… because we’re human!” (His shirt was probably ripped at that point.)
I generally agree about authors. Art should stand on its own. That said, there is some tension with the idea that knowing the history, context, and thinking of the author informs the work. A paradox in that, in some sense, both those things are true.
January 19th, 2020 at 8:58 am
“If there is a difference, why, and at what point?”
The other question is whether it’s a difference the system itself can notice. For example, the internal experience of the new system might be radically different from the old one, but the new system will remember its experiences as the old one in terms of its new experience.
If zombies are possible, the new system might be a zombie, but it wouldn't register that anything was different from the previous system. Even with a partial change, the parts that are zombied wouldn't notice, and the parts that are original, assuming the proper signals are still being received, wouldn't either.
All of which I guess was Chalmers’ point.
January 19th, 2020 at 10:23 am
“The other question is whether it’s a difference the system itself can notice.”
It’s the same question! Would you notice?
I believe we share a cynicism regarding thought experiments like these. They assume capabilities not in evidence, and which may not be possible. Formidable practical considerations aside, what does interfacing a computerized neuron with an organic brain involve? Obviously a need to connect to the synapses of lots of other neurons (both receiving and sending).
What about when replacing multiple neurons and two replacements connect? Do you go biological at the synapse? Or is the network increasingly fully artificial (as nodes connect), going biological only when it has to?
The thing about the dancing/fading qualia argument for me is that it preserves structure, so I'm okay with the notion that biological neurons can be replaced by Positronic neurons. The signal structure is preserved, and so is synapse and neuron behavior within that structure.
All the argument says to me is that a Positronic Brain ought to work.
“…the new system will remember its experiences as the old one in terms of its new experience.”
Does that have to be the case? Could memories of an experience involve some different neurons than the one involved in having the experience?
Say a brain is having experience A (seeing a shade of red, or whatever). Switching the neuron(s) affects perception such that now it experiences A*. The memory of having experience A seems (at least potentially) different from the memory of having A*.
Assuming there’s a difference you can notice at all between A and A*. The experiment suggests you can’t, which rather moots the question.
January 19th, 2020 at 11:13 am
Yeah, the neuron by neuron replacement thought experiment is silly. It might be more plausible on a more macroscopic scale, if we can ever get brain implants working, but as usual, that messes with the intuitions of the thought experiment.
“Does that have to be the case? Could memories of an experience involve some different neurons than the one involved in having the experience?”
The brain uses the same circuits for immediate perception of an image, remembering that image, and imagining that image. So, if the copied mind works like the brain, they would be one and the same.
But it certainly doesn’t have to be like that. Once we’ve copied a mind, we can do all sorts of things. A copied mind might have human level perception, but with the ability to switch at will to a wide variety of machine level modes, which might be deficient in some ways but superior in others, such as seeing lots of extra colors, in the infrared, ultraviolet, etc.
But this question presupposes that we can exactly reproduce the experience of the original, and would be able to tell whether or not we had succeeded. So the "human mode" might seem to the system like what it remembered being human was like, but I don't know if there would ever be any way to know for sure, at least unless we had reproduced things down to the protein, or even molecular, level.
January 19th, 2020 at 11:36 am
“The brain uses the same circuits for immediate perception of an image, remembering that image, and imagining that image.”
I know much of the same circuitry is used. Are you saying there is no difference anywhere between those three? I would have thought there were at least some differences.
“But this question presupposes that we can exactly reproduce the experience of the original, and would be able to tell whether or not we had succeeded.”
Yeah, that’s a whole topic on its own. The presumption begs the question, but even so, I think if we ever managed a system that seemed conscious under our Rich Turing Test, such a system would have to be conscious.
I think consciousness is “loud” and announces itself — I don’t think that can be faked. Its apparent presence, I think, signals its actual presence.
Put it this way: When an AI can jam a solo like Gilmour (or any other talented free-style musician), I would be very hard-pressed to deny it was conscious.
January 19th, 2020 at 11:48 am
“Are you saying there is no difference anywhere between those three? I would have thought there were at least some differences.”
Oh, there are differences. The higher order circuitry of course would be very different. My brain is in a very different state remembering the sight of someone than it is actually seeing them in the moment. And the immediate perception is far more vivid, since there's a data stream coming in to error-correct against.
I think we could establish that the copied mind was conscious more easily than we could establish that its experience was the same as the original. But if a copy of Gilmour can jam like Gilmour, I’m not sure anyone will care, except for philosophers.
January 19th, 2020 at 12:02 pm
“The higher order circuitry of course would be very different.”
So then, assuming A and A* are somehow different experiences, it’s at least possible the system would notice a difference?
“I think we could establish that the copied mind was conscious more easily than we could establish that its experience was the same as the original.”
I agree. As you know, I see mind uploading or copying as the least likely, or at least last step in our abilities with minds. It requires the deepest understanding and all the technology.
I very much think we’ll see new-born AI long before we’ll see uploaded or copied minds. For one thing, new-born AI in Positronic form seems foreseeable to me. Duplicating a living brain in Positronic form requires the Positronic knowledge and technology, plus the ability to copy (rather than let it form connections on its own as new-born brains do).
Likewise, a VR AI (if possible) also seems easier with new-born than existing on the same counts. That extra step of getting our pattern into the machine is a killer!
January 19th, 2020 at 12:33 pm
“So then, assuming A and A* are somehow different experiences, it’s at least possible the system would notice a difference?”
It depends on what you mean by A and A*. If A is the memory and A* a new sensory experience, then obviously it could notice any differences.
But if A* is a memory of A, then how could the system ever notice the difference? And if, like the brain, the circuitry it uses to perceive, say, red, is the same as the circuitry it uses to remember red, then even if its experience of red has changed radically, it won’t notice the difference.
January 19th, 2020 at 1:04 pm
“It depends on what you mean by A and A*.”
Right, so what I mean is A is an experience without the artificial neuron(s) and A* is that same experience with the artificial neuron(s). It sounds like where those neurons are matters.
I took Chalmers’ experiment to imply the location is in the active perceptual circuitry. If the artificial neuron is switched in and out, we’d notice an immediate perceptual difference, if the artificial neuron acted differently. Chalmers says it doesn’t, so we’re not supposed to notice a difference.
The putative difference would be between our (accurate) memory of A in contrast with our suddenly altered immediate perception of A*. That’s just how I read it.
But if the neuron was in the shared circuitry… hmmm, is memory holographic enough that you might spot a difference? The change would no longer be consistent with the distributed memory? It’d be like a flaw in an optical hologram — I’m not sure what happens there. Is the flaw error-corrected away — overruled by the distributed information? Or does it create a small aberration in the image?
If all the right neurons were switched out (again assuming there is a factual difference between A and A*), the memory would be the memory. Yeah, how would you know otherwise? You’d need a meta-memory of the memory. (Which is what got me wondering if the holographic nature of memory might act as a meta-memory.)