Model Minds

Last week we took a look at a simple computer software model of a human brain. (We discovered that it was big, requiring dozens of petabytes!) One goal of such models is replicating consciousness — a human mind. That can involve creating a (potentially superior) new mind or uploading an existing human mind (a very different goal).
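
As a sanity check on that figure, here’s a back-of-the-envelope sketch in Python. The neuron and synapse counts are common textbook estimates; the bytes-per-synapse figure is purely my assumption for illustration:

    # Rough storage estimate for a structural brain model (illustrative only).
    NEURONS = 86e9             # ~86 billion neurons (common textbook estimate)
    SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron (rough average)
    BYTES_PER_SYNAPSE = 32     # assumed: target ID, weight, and state per synapse

    total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
    print(f"{total_bytes / 1e15:.1f} PB")  # prints "27.5 PB" (dozens of petabytes)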

Now that we’ve explored the basics of calculation, code (software), computers, and (computer software) models, we’re ready to explore what’s involved in attempting to model a (human) mind.

I’m dividing the possibilities into four basic levels.

We’ll start with (and quickly leave behind) level #4. This level takes a thing we know is true — the fact of human consciousness — and makes an assumption that it’s the only thing that can be true (because, so far, that’s the case).

Maybe minds come from god. Maybe there is some non-god form of dualism that’s true. Maybe this is a virtual reality and the “rules” only allow minds in living human bodies.

Maybe it’s a Tegmarkian universe, but the math only works in living people.

This level assumes there is some metaphysics — or possibly just physics — that restricts consciousness to “people” (where “people” includes intelligent aliens; potentially even elephants or other animals).

In particular, no non-living machine (of any kind) can be conscious, so AI will never accomplish that goal (let alone uploading).

Since that’s the end of the line on this one, we’ll leave it there.

§

Level #3 also takes a true fact — that biological human brains result in consciousness — and assumes it’s the only thing that can be true (because, again, so far that has been the case).

Since the one example of consciousness we have is organic and biological, this level assumes those are necessary conditions (we don’t know they aren’t).

Time may also be a factor. Maybe it turns out that minds require lots of real-time experience to function properly.

Note that from this level on, we’re assuming physicalism (that this physical universe is the only universe — no metaphysics). That means consciousness is strictly “something that brains do.” Given this, replicating a mind is an engineering problem.

But here we assume the physics requires a living, organic, squishy biological brain that is grown and trained. It might be a matter of how bio-electronics works compared to metallic electronics. Maybe electrons flowing through wires just doesn’t cut it.

It’s hard to see exactly why a brain has to be squishy, but maybe the only way to get the necessary brain cell density, or the right micro-structures, or the massive number of interconnections, amounts to growing them (like crystals or a plant).

Or constructing them with nano-machines. The distinction between biology and machinery can get fuzzy at this level. It sort of boils down to proportions of carbon, hydrogen, oxygen, and metals.

We’re assuming here that those proportions matter somehow.

At this point, I want to introduce two important criteria involving the consciousness we’re trying to replicate.

The first is that we humans experience qualia. There is “something it is like” to look at the color red. The question here is: What are the correlates in our model of the experience of redness? We should be able to identify that in any model we construct.

The second is that we humans have a sense of the «I» — the self narrator of our personal lifetime movie. Our identity as self-aware beings is based on the «I».

As Descartes said, “«I» think, therefore «I» am.”

With regard to qualia and organic brains, we are beginning to identify correlates between subjective experience and brain function.

In some cases, we’re able to say a great deal about what a subject is “thinking” based on what we see their brain doing.

For now, an understanding of the «I» eludes us rather completely. There are theories, but they amount mostly to best guesses — hypotheses worth investigating.

If we can clone or grow a brain, we may be able to create a new mind. Yet this isn’t far off from what parents do when they create children. And it’s possible the physics really does require both biology and time to create a functioning brain.

It seems unlikely uploading would be possible at this level. But artificial new minds — perhaps exceptionally powerful ones — might be possible. It may amount, essentially, to making a child very fast and with very fine control over the resulting brain.

§

Level #2 focuses on the fact that brains are highly connected networks but ignores any requirement for biology or density or other specific physical requirement. It assumes consciousness lies in the complexity of the network, not in any specific physical aspect of it.

Any sufficiently complex network resembling a human connectome and capable of processing information like a brain should give rise to consciousness.

As with the previous level, it’s possible brains require real-time training to function.

However, the mechanical nature of this level allows the possibility that, even so, such “real-time” training could be accomplished in very short actual time.

It may be that the complexity of the network prohibits uploading or downloading and that only (perhaps very fast) “experience” can program it.

For the first time, we’re stepping away from having working examples that tell us “this much, at least, is possible.” We have no complex, non-organic, intelligent networks in our experience. We assume that in replicating the structure of the brain, we also replicate its function.

The key question is whether a brain machine that works closely enough to a human brain will produce consciousness. Given the assumption of physicalism, it’s hard to see why it wouldn’t, so despite the lack of working examples, there is a good chance this level could work.

New minds, and super new minds, seem possible here. It’s a matter of building the right network. (The positronic brain of Isaac Asimov!)

The mechanical nature of the network should make it easy to interface to more standard forms of digital information.

Uploading an existing mind might be possible, although there are some significant engineering challenges:

First, we need a sufficiently accurate scan of the network of an existing mind. Second, we need a way to apply that scanned network to a physical one.

Perhaps, once the scan is accomplished, a mind is produced (a nice verb would be “woven”) from that scan. Formidable, but potentially possible.

§

Level #1 assumes that mind is an algorithmic process, that it has a Turing Machine that represents it. This is pure assumption, with no working evidence — or even a model — that suggests it’s possible.

At this level, there are two basic options: A software model can attempt to replicate the complexity of the physical brain.

Or it can model consciousness functionally in a way that is not related to the brain’s structure.

When we created our crude brain model, we took the first option.

No one really knows enough at this time to do much with the latter option. We have no working model of consciousness.

We do have systems that act (somewhat) as if they were conscious — or at least intelligent. They can pass a limited Turing Test, so to speak. Software programs such as Siri and Watson are steps in that direction.

But the Holy Grail of AI is replicating true human consciousness. Something that can say, with the same authority we can, “«I» think, therefore «I» am!”

And obviously this is necessary if we are to upload ourselves, something that would seem to be almost trivial at this level.

§

So those are the four possible levels I see. The middle two are grounded in the one example we have (us); the first and last are speculation.

(The first is religion or metaphysics, which is outside the scope of these posts.)

The last one is based on the key assumption that mind is algorithmic. If it is, the rest follows.

[For the record, I see this level as being nearly as much wishful thinking as the first one about humans having souls. There is literally not one shred of hard evidence supporting this.]

Levels #3 and #2 aren’t very different; they differ only on the matter of biology versus machine. The former is obviously true in us. For the latter, I’m hard-pressed to come up with a good scientific reason a machine brain wouldn’t work.

The key point here is the very large gap between levels #2 and #1.

As we cross that line, the model changes from a recognizable physical replica to a simulation calculated from code and data, with no physical correspondence to the brain or its function.

A software model, even one that seeks to replicate the physical structure of the brain, is still a dance of numbers. Binary bits flowing in and out of the CPU. Simple math and logic implemented in metal and silicon.
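
To make that concrete, here’s a minimal sketch (in Python, with made-up numbers) of what one step of such a simulation boils down to. However faithfully the model mirrors the brain’s structure, each simulated neuron reduces to a few multiplications, additions, and a comparison:

    # One simulated "neuron" update: nothing but arithmetic.
    inputs  = [0.5, 0.1, 0.9]   # made-up upstream activations
    weights = [0.4, -0.6, 0.2]  # made-up synaptic weights

    total = sum(i * w for i, w in zip(inputs, weights))  # weighted sum: 0.32
    fired = total > 0.25        # crude threshold "firing" decision
    print(fired)                # True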

It requires a great deal of faith to believe that this can result in subjective qualia and something with an «I».

That faith may someday turn out to be justified.

But I’m extremely skeptical it will. In the next couple of posts, now that I’ve laid all the groundwork, I’ll try to explain exactly why.

17 responses to “Model Minds”

  • SelfAwarePatterns

    Wyrd, I like this breakdown. I think it clarifies some things.

    For copying a human mind, I think Level #2 is all we’ll actually need.

    Level #1 strikes me as discussing a modern engineering paradigm, as opposed to a fundamental aspect of nature. The key thing here is that engineering paradigms can be expanded, altered, improved to handle things they can’t currently handle. Even mathematics can be expanded if need be (see Newton and calculus).

    An early 18th-century futurist, looking at the state of aerodynamics in his time, might have concluded (correctly) that the then-current paradigms would never provide controlled heavier-than-air flight. No one knew then what needed to be added, but it would have been a logical error for that futurist to conclude that no future paradigm would ever accomplish it.

    If Level #2 is possible, and I agree that there doesn’t seem to be any fundamental obstacle to it, but Level #1 isn’t possible, then what prevents us from improving our technology until we can accomplish everything we wanted to with Level #2?

    • Wyrd Smythe

      “I like this breakdown. I think it clarifies some things.”

      Thanks! It helped me break down the requirements in terms of what we can see and what we can’t yet.

      (Sort of the infamous ‘things we know we don’t know and things we don’t know we don’t know’ — although, to be honest, despite all the mocking, I knew exactly what he meant, and it was entirely coherent. The mockery of it was, I thought, a good example of all-agenda zero-analysis thinking. But that’s a whole other discussion! 🙂 )

      “For copying a human mind, I think Level #2 is all we’ll actually need.”

      Let me ask: When you say “copying” do you mean accomplishing machine consciousness (i.e. copying nature’s process of conscious bio-machines) or copying an existing human into a network of some kind?

      The second one ought to be feasible — presuming level #2 does work — but the engineering challenges are quite formidable. (I think there’s enough meat there for a post speculating on that engineering. I have a page of notes, which usually means enough for a post.)

      “Level #1 strikes me as discussing a modern engineering paradigm, as opposed to a fundamental aspect of nature.”

      I know what you mean here, but I’d quibble on a point or two. This series of posts is an effort to show (or at least touch on) the computer science that applies here, and in that sense there are some fundamental aspects of nature involved.

      At least as we understand them, which is, I think, your main point. Our paradigms are certainly subject to change. (Sadly, it’s been a while since that happened in a big way. Quarks in the 1960s; maybe some astrophysics things since. Unlike the Spanish Inquisition, we expected the Higgs.)

      [As a total aside, I’ve read blogs discussing a paper defining a table-top experiment that might prove gravity is quantum (something we thought required a Planck-level experiment). Failing to prove it isn’t definitive — like failing to find super-symmetrical particles hasn’t killed that theory — but proving it would be. I’ve been holding out a(n increasingly) faint hope for an Einstein-like smooth spacetime underlying quantized matter-energy. This experiment would, if successful, dash that last faint hope. OTOH, Luboš Motl’s blunt post exclaims that such hope is pure ignorance and folly (gravity’s quantum nature is necessarily inherent)! 😀 ]

      One thing I’d quibble on is flight. Since ancient times, we’ve known birds — which are heavier than air — fly, so we’ve had that example. (And, as you know, many early attempts had flapping wings! 🙂 )

      OTOH, there are no known examples of machine networks (level #2), let alone algorithmic consciousness. The jump from biological network we know exists and works to machine network that’s functionally the same isn’t a big one. At least we have the example of the biological network.

      The jump to software consciousness is without precedent. (I honestly believe that a belief in Kurzweil’s singularity is very nearly on par with a Christian belief in heaven. Neither one is based on any real facts. I say “very nearly” because there is at least some science on Kurzweil’s side.)

      “[If] Level #1 isn’t possible, then what prevents us from improving our technology until we can accomplish everything we wanted to with Level #2?”

      If “everything we wanted” is replicating human consciousness in a machine, then I think nothing. We’d have a Star Trek world with self-aware machine people, like Cmdr. Data, but not conscious software (e.g. the ship’s computer wasn’t self-aware).

      UNLESS, of course, you can simply ask the computer to create a self-aware software entity with a simple command. Apparently if you ask it exactly right, it can create a real nemesis. 😀

      It never really made sense that Data couldn’t be duplicated (one can’t help but wonder about the cloning possibilities of transporter and replicator technology, but that whole area is one big paradoxical slops bucket they stepped squarely into). But maybe positronic brains are difficult to create, even in that era.

      They’ll have it down to brains you can buy at Radio Shack® in another 500 years.

      If the goal is for us to live in the networks, that might be trickier, but it does seem like an engineering problem. The gotcha might be, sure, possible in principle, but good luck actually pulling it off practically.

      • SelfAwarePatterns

        “When you say “copying” do you mean accomplishing machine consciousness (i.e. copying nature’s process of conscious bio-machines) or copying an existing human into a network of some kind?”

        Both. I see the second as a special, and admittedly more difficult case of the first.

        “This series of posts is an effort to show (or at least touch on) the computer science that applies here, and in that sense there are some fundamental aspects of nature involved.”

        I guess it depends on how much of a basic science we consider computer science to be. I think the trend of computer science departments being moved into engineering colleges, along with the rising popularity of the label “software engineer” is indicative that a lot of people are concluding that it’s an applied science (aka engineering).

        I’m agnostic on whether GR or QM is more fundamental. Both have mountains of experimental evidence behind them. I wouldn’t be surprised if both eventually have to be modified.

        On flight, someone in the 18th century might have wondered if flight was something only birds could ever do. That’s obviously wrong now, but how different is it from wondering if consciousness is something that only brains can do?

        “The jump to software consciousness is without precedent.”

        Again, I think it’s a mistake to look at just the software here. I think it’s better to look at the whole hardware+software system. Some parts might require hardware (for instance, software can’t make sounds without the appropriate speaker hardware).

        We know that a software mind will need to be hooked to input-output hardware, at the least, and the modern versions of that hardware may be utterly insufficient to give a human mind (a sensory processing system) what it needs. Even if we’re just putting a connectome in a virtual environment, we’ll need to understand the peripheral nervous system, and its complex interface to the central nervous system, far better than we currently do.

        Totally agree on Kurzweil singularity “rapture of the nerds” thinking. Our generation will be lucky to just get improved life extension therapies.

        On Star Trek, yeah, technology on the show exists principally for plot reasons, with non-plot complications from it either ignored or hand waved away. BTW, did you hear that there’s a new Star Trek series on the way?

      • Wyrd Smythe

        “I see the second as a special, and admittedly more difficult case of the first.”

        It certainly includes all the problems of the former, and comes with its own set of problems (scanning, transfer), so totally!

        “I guess it depends on how much of a basic science we consider computer science to be.”

        I think you might be conflating computer engineering with computer science. The latter predates actual computers by many decades (if not centuries) and — as a branch of mathematics — is about as pure a science as there is.

        “I think the trend of computer science departments being moved into engineering colleges, along with the rising popularity of the label “software engineer” is indicative that a lot of people are concluding that it’s an applied science (aka engineering).”

        Isn’t an applied science still a science?

        Back in the 1970s, when I took my CS classes, they were indeed under the auspices of the Engineering department. That’s not uncommon; most people who study computers have engineering — not research — as their goal. It takes a university serious about turning out researchers to have a real CS program.

        As for “software engineer” (which has also been around a long time), I’ve resisted that label all my career (preferring “software designer”, “bit wrangler”, or “trainer of silicon life forms”). I think it disrespects real engineers, who have been through certified training and taken tests to demonstrate their ability, to apply that label to a software maker (especially some of them 😮 ). (Here’s my post about “Software Engineer” on my other blog.)

        We’re still a ways away from computer programming being true engineering!

        “[S]omeone in the 18th century might have wondered if flight was something only birds could ever do.”

        Absolutely. When trains were new, people did wonder if humans were capable of going that fast without damage. People wondered the same thing about being in orbit.

        The point is, if someone had posited, “Nothing heavier than air can fly!” there was a clear counter-example. Therefore it’s possible to say, “Humans can’t possibly fly — look, no flying humans!” But you can’t say that nothing heavier than air can fly. It’s not physically ruled out.

        At level #2, we at least have the correlation of a complex network. We know a human brain is one (let’s call that a bird that flies). The question is, can a machine brain work like a human brain (can a machine act like a bird), and it’s quite possible it can.

        Until we solved the problem of flight, we didn’t know for sure we could do what birds do, but it turned out we could. (And I think we agree level #2 does show promise.)

        But level #1 is unprecedented. There are no “birds” at all at this level. (But it appears we agree about this, too.)

        “I think it’s better to look at the whole hardware+software system.”

        But that means mind is not (just) an algorithm and that level #1 isn’t possible.

        It has nothing to do with the software needing a physical instance to run or needing to be connected to the real world to be useful. It has to do with what algorithms can do.

        “BTW, did you hear that there’s a new Star Trek series on the way?”

        You know,… I think I’m over Star Trek. I like to say that J.J. Abrams “killed” Trek, but the truth is that it was moribund long before Abrams nailed the coffin shut.

        I did pick up The Martian at Target the other day. Looking forward to reading that!

      • SelfAwarePatterns

        On computer science and computer engineering, I’m not conflating them. That said, I think the separation is a pragmatic one, that they’re two halves of a symbiotic system.

        Is software engineering grounded in mathematics? Sure. As is all engineering. But, as I noted above, even mathematics gets expanded to describe things that it couldn’t previously describe.

        Would you think that Level #2 could be possible, but impossible to model mathematically? If not, then why couldn’t mathematics, as well as any technology built with it, expand to encompass it? (Assuming of course that it can’t already describe it effectively; I tend to think it can.)

        On Star Trek, I know what you mean. While I didn’t mind the new movies as much as you did, it is kind of tedious seeing the future as we imagined it in 1966 continually dwelled on. I’m more excited by The Expanse starting in December on SyFy.

        I think you’ll enjoy The Martian.

      • Wyrd Smythe

        “[Computer science and computer engineering are] two halves of a symbiotic system.”

        No, I’m sorry, they’re really not. CS predates physical computers and is the science of abstract calculation and algorithms. It’s concerned with things like the Halting Problem, the Traveling Salesman, different kinds of sorting and searching strategies, encryption, minimum theoretical run times, P versus NP, and other topics like that.

        There’s a very famous quote (even considered a rallying cry because so many people don’t get this): “Computer Science is no more about computers than astronomy is about telescopes.” It’s attributed to Edsger Dijkstra (one of the “names” in CS) circa 1970.

        (I always wanted to add, “Which is to say that both are a little bit about those things, but they’re the tools, not the point.” There are branches of both fields that do concern themselves with the tools.)

        It’s claimed that Dijkstra said it, although the phrase has never appeared in his writing (it does sound like something he’d say — Dijkstra is the guy who wrote the famous 1968 letter, “Go To Statement Considered Harmful”). The known written source (who says he got it from Dijkstra) is Hal Abelson in Structure and Interpretation of Computer Programs (1985):

        “[Computer science] is not really about computers — and it’s not about computers in the same sense that physics is not really about particle accelerators, and biology is not about microscopes and Petri dishes…and geometry isn’t really about using surveying instruments. Now the reason that we think computer science is about computers is pretty much the same reason that the Egyptians thought geometry was about surveying instruments: when some field is just getting started and you don’t really understand it very well, it’s very easy to confuse the essence of what you’re doing with the tools that you use.”

        “Is software engineering grounded in mathematics? Sure. As is all engineering.”

        Yes, agreed; all engineering is. But computer science is a branch of mathematics.

        You could conceivably spend a rich career as a computer scientist without ever touching or seeing a computer (early ones did exactly that). But computer engineering? Not so much.

        “Would you think that Level #2 could be possible, but impossible to model mathematically?”

        No, I’m pretty sure there is a mathematical model for any real world physical process.

        “If not, then why couldn’t mathematics, as well as any technology built with it, expand to encompass it?”

        Because a software (mathematical) model can tell us a lot about a thing, but it’s not the thing. The software model of a falling object doesn’t hurt your toe.

      • SelfAwarePatterns

        “Because a software (mathematical) model can tell us a lot about a thing, but it’s not the thing. The software model of a falling object doesn’t hurt your toe.”

        I once had a box of software cartridges fall on my toe. Hurt like hell 🙂

        This seems like the crux of our disagreement. I know you see it as a crucial point, but to me, to use your own words, it’s trivially true and irrelevant. The question as I see it is, is it functionally the same, processing the same information and producing the same interactions with its environment? I can’t see any fundamental barrier to it effectively being so.

        As I’ve noted before, it will never be a perfect match, but it could be an effective one. Whether or not that functionally equivalent system is the same person will always be a philosophical matter.

      • Wyrd Smythe

        “I once had a box of software cartridges fall on my toe. Hurt like hell”

        Exactly! Had you modeled that event, it wouldn’t have.

        “The question as I see it is, is it functionally the same, processing the same information and producing the same interactions with its environment?”

        That’s a different question from the one I’m discussing here. As you say, it’s essentially a philosophical one.

        You’re talking about something that could pass an effective Turing Test. I’m talking about whether the system would actually be conscious.

        But I’m confused, because I thought it turned out we were on the same page after all. You’ve referred to needing special hardware, and agreed with me about the Singularity, so I thought we’d agreed mind didn’t seem likely to be algorithmic?

        And we agree that a physical network has a very good shot of working (that is, of being conscious).

        Could it be that we just think we disagree because you’re looking at this more from a practical application standpoint whereas I’m talking about the underlying theory and abstractions?

      • SelfAwarePatterns

        Yeah, I wasn’t too good at modeling anything in those days. (It happened c. 1980.)

        On agreement, I think you’re right. We agree that the mind is a collection of physical processes. (At least as far as can be determined.) And that those processes are generally not digital but analog.

        I think we do disagree on whether the term “computation” applies to analog computing, but that’s a matter of definition that I don’t have strong feelings about. Its only real effect is whether the phrase “computational theory of mind” includes analog processes.

        For me, the practical application is ultimately what I’m interested in. You and Disagreeable Me seem more interested in the theoretical principles. For you guys it’s the destination. For me, it’s relevant, but not the final word.

      • Wyrd Smythe

        “I think we do disagree on whether the term “computation” applies to analog computing,…”

        The terms ‘compute’ and ‘calculate’ are vague enough that saying it one way or the other does leave out important details. The words aren’t important so long as the distinction is clear between digital and analog information processing (to use another vague term).

        Analog systems leverage continuous physical properties (such as electrical resistance) to model another system. And there are absolutely analog neural nets. (There seems to be a growing view that a neural net has to be analog, has to have continuous inputs.)

        A digital system processes discrete symbols in a series of steps (and this is the usual sense of calculation or computation, at least within CS).

        “Its only real effect is whether the phrase ‘computational theory of mind’ includes analog processes.”

        The term Computational Theory of Mind (CTM) is an umbrella term that covers all theories of mechanistic minds, so it definitely includes analog processes.

        The term Classical Computational Theory of Mind (CCTM) claims that mind is algorithmic in the strictly digital sense, that mind is a Turing Machine.

        FWIW, the Stanford Encyclopedia of Philosophy has a pretty good article on The Computational Theory of Mind.

        “For you guys it’s the destination. For me, it’s relevant, but not the final word.”

        I can’t speak for DM, but my interests certainly don’t end with self-aware AI! Applications interest me, too. On the low end (Siri, Watson), it’s really interesting programming! On the high end, it raises major philosophical questions about our own consciousness.

        But here and now the topic is replicating consciousness in a machine with a side dish of uploading existing minds. 😀

      • Wyrd Smythe

        I forgot to mention something. FWIW, the word “calculate” comes from the Latin for “pebble” or “gravel” (calculus) because those were used as some of the very first calculators. Hence the generally discrete nature of calculation.

      • Wyrd Smythe

        (I don’t know how you follow comment conversations, so if you’re not seeing new comments outside this thread, I’ve added a new one that might help get us on the same page. See below.)

  • Wyrd Smythe

    @SelfAwarePatterns: Here’s a metaphor that might put our views in perspective.

    Consider a (very accurate) slide rule and a calculator.

    You’re seeing that both of those can return an (approximately) identical answer, and therefore both are functionally identical as far as answers go.

    I’m seeing that those answers come about through very different processes and looking at those processes. In particular, I’m noting the correlation between the slide rule and the thing it models versus the complete lack of correlation in how the calculator works.
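
    To make the metaphor concrete, here’s a toy sketch (the numbers are arbitrary) of the two processes side by side. The slide rule multiplies by physically adding log-scaled lengths, a continuous operation; a digital calculator grinds through discrete steps on binary digits:

        import math

        x, y = 6.0, 7.0

        # Slide rule: multiplication as continuous addition of log-scaled lengths.
        slide_rule = math.exp(math.log(x) + math.log(y))  # ~42.0

        # Calculator: multiplication as discrete shift-and-add on binary integers.
        def shift_and_add(a, b):
            result = 0
            while b:
                if b & 1:      # low bit of b set: add the shifted a
                    result += a
                a <<= 1        # move a one binary place left
                b >>= 1        # move to the next bit of b
            return result

        digital = shift_and_add(6, 7)  # 42

        print(slide_rule, digital)  # same answer, very different processes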

    Here’s the key question: Is consciousness the answer or the process?

    If consciousness is the answer to the calculation, then a software simulation of the process that provides that answer should work.

    If consciousness arises from the process, then the differences between those processes may be hugely significant.

    I strongly suspect that the process is crucial. This series of posts is my attempt to explain why.

    • SelfAwarePatterns

      Hmmm. I think consciousness is part of the process, but that process is all information processing. Even if part of the physical processing of that information involves electrical interference or other weird things, to me that would just be part of the physical architecture.

      This is one reason why, assuming a rejuvenated Moore’s Law doesn’t rescue us, I think we will have to understand consciousness to succeed. We will likely only be able to do it with functionally equivalent processes. If we don’t understand what generates awareness, we’ll never be able to know for sure if the resulting system is conscious or only acting that way.

      Many people despair that we will ever understand consciousness. I’m actually pretty optimistic that we will, although it may not be for a long time.

      • Wyrd Smythe

        “…to me that would just be part of the physical architecture.”

        Yes, absolutely. What’s crucial here is that what emerges from that physical architecture supervenes on it. Without that physical architecture, it doesn’t emerge.

        “We will likely only be able to do it with functionally equivalent processes.”

        You don’t think we’ll ever build a brain machine? See, I think that’s the easy one, and the most likely to be accomplished first.

        I agree we will likely finally understand consciousness eventually. It would not surprise me at all if along with that understanding comes the realization that implementing consciousness as an algorithm was always a non-starter.

      • SelfAwarePatterns

        “You don’t think we’ll ever build a brain machine?”

        In principle, yes, I think we could conceivably build a brain machine understanding just the low level functionality. In practice, I suspect that, unless the brain machine were a total duplicate of the biological brain (putting us back in Level #3 territory), we’d find that the differences had all kinds of unexpected consequences.

        An understanding of the upper level architecture would let us know which changes we could get away with. Of course, if we didn’t already have that understanding, the effort to build such a brain machine would likely teach us a great deal.

      • Wyrd Smythe

        “I suspect that, unless the brain machine were a total duplicate of the biological brain (putting us back in Level #3 territory), we’d find that the differences had all kinds of unexpected consequences.”

        Level #3 requires biology, so that’s definitely shoving the bar upwards!

        My best guess is that a software neural net will be the first accomplishment. Projects are already underway, and (software) neural networks are a major focus of AI research. We know they can accomplish some things. (Google famously found its net recognizing images of cats! 🙂 )

        Mostly we just need a big enough system. IIRC, Google’s NN, which was big for the current level of tech but very tiny compared to a brain, used 16,000 CPU cores!

        A software neural net operates at Level #1, so if such a network does turn out to be conscious, that’ll be huge! (It’ll mean my skepticism was misplaced, for one thing!) Regardless, as you say, we’ll learn from all this.

        As for physical systems, maybe nano-technology will help us build physical networks (biological or otherwise).
