We started with mathematical expressions, abstract algorithms, and the idea of code — a list of instruction steps in some code language. We touched on how all algorithms have an abstract state diagram (a flowchart) representing them. Then we looked briefly at the stored-program physical machines that execute code.
Before we go on to characterize the complexity of a computer, I want to take a look — very broadly — at how the computer operates overall. Specifically, we'll look at another Yin-Yang pair: the computer's operating system versus its applications.
This has a passing relevance to the computer’s complexity.
We’ll start with a distinction I haven’t really mentioned yet: Hardware versus Software. It’s yet another Yin-Yang pair, and like many such pairs, it’s both simple and not as simple as it seems.
Hardware, generally speaking, is anything you can actually hold in your hand. (Or on a forklift, if necessary.) Software, generally speaking, is data, pure information.

Hardware…
But wait! Suppose you burn your software on a CD that you can hold in your hand. Isn’t that hardware now?
In a sense yes, which is why the distinction isn’t as simple as physical object versus abstract information.
And yet it is exactly that simple.
The CD is a piece of hardware, true enough, but the pattern of bits burned into it only represents the software. The actual software is an abstract state diagram — a process.
What's more, even if every physical copy of the software were destroyed, if it had no physical representation at all, not even a PowerPoint flowchart, the abstract state diagram, the algorithm, would still exist.[1]
So hardware is the metal and plastic and wood (and ceramic and cloth and whatever). Software is abstract information. (Obviously, software has to be reified, made concrete, in some fashion for us to use it.)

…Software.
Another way to look at this is that a hardware thing cannot be perfectly duplicated — no real world object can. But a copy of software is a perfect copy, truly identical in every way that matters.[2]
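(If you like seeing such things made concrete, here's a minimal sketch in Python: copy a file byte for byte and check that the copy hashes identically. The filenames and contents are just placeholders; the point is that "identical in every way that matters" is literally checkable.)

```python
# Minimal sketch: a byte-for-byte copy of a file is bit-identical to
# the original, and a hash can confirm it. (Filenames are placeholders.)

import hashlib, shutil

with open("original.bin", "wb") as f:          # some stand-in "software"
    f.write(bytes(range(256)) * 4)

shutil.copyfile("original.bin", "copy.bin")    # the duplication

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

print(sha256_of("original.bin") == sha256_of("copy.bin"))   # True: a perfect copy
```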
We looked at the basic architecture of modern computers' hardware. Now we'll look at the basic architecture of their software.
It starts with a fundamental division between system code (often called the operating system) and user code (often called applications). For many users, the hardware and the operating system comprise “the computer” to which they add their desired apps.
As we explore these, we can make an analogy with people. The computer hardware, obviously, compares to the human body, but, as it turns out, the software is a little harder to fit to the model.[3]
A computer’s operating system (O/S) is a suite of software that controls the computer itself. In particular, the O/S is usually responsible for managing the file system, I/O (keyboards, monitors, etc), and the execution of all software (including itself).
There are usually four major parts to any O/S.[4]

The starting line (of code)…
Firstly, there is usually some sort of ROM chip on the motherboard containing extremely low-level system code, called the BIOS (Basic Input/Output System).
The code here usually does just two things: it provides, as the name says, basic I/O functions appropriate to that physical system, and it provides just enough code to boot the computer when it starts up.
CPU chips are designed to start executing code from a predefined place when they first start (when power is first applied or when a hard reset occurs).
The BIOS chip is set up so the system boot code is the first thing the CPU executes.
That boot code usually is just enough to check the physical system for gross problems.[5] After that, it looks for the O/S loader, which might be in several places. The boot sector of the main hard drive is usually the first place it looks, but systems can be set to (for example) load from a CD-ROM drive — even an inserted memory stick — first.
The O/S loader code is the second major part. As with the system boot code, this is a small chunk of code (it has to fit in a disk boot sector) just smart enough to begin loading the operating system itself.
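(For those who like such things, here's a toy sketch of that chain of hand-offs. It's Python standing in for what is really firmware and assembly, so treat it purely as an illustration of the order of events, not as real boot code.)

```python
# Toy sketch of the boot hand-off order described above. Real boot
# code is firmware and assembly, not Python; this just shows who
# calls whom, and in what order.

def bios_power_on_self_test():
    print("BIOS: basic I/O up, hardware looks sane")     # gross-problem check

def bios_find_loader(boot_order):
    for device in boot_order:                  # e.g. try CD, then USB, then disk
        print(f"BIOS: checking {device} for a boot sector...")
        if device == "hard drive":
            return os_loader                   # found one; hand off
    raise RuntimeError("No bootable device")

def os_loader():
    print("Loader: tiny code in the boot sector, loading the O/S proper")
    operating_system()

def operating_system():
    print("O/S: configuring devices, starting services, ready for applications")

bios_power_on_self_test()
bios_find_loader(["CD-ROM", "memory stick", "hard drive"])()
```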

Slow, buggy, and bloated!
The O/S, the third (and by far largest) major part, takes its sweet time booting: checking out the system, retrieving registry keys, configuring I/O ports, establishing internet connectivity, and doing lots of other mysterious operating-system-y things while we stare at the screen.
One of the key things it does is install and start various services (aka drivers aka daemons), which are the fourth major part of the O/S.
Services are part of the O/S in that they help control the machine and provide functionality, but they're distinct in that they often come from other parties. For example, a printer manufacturer will provide services that allow full use of all their printer's features.
One of the biggest things an O/S provides is the virtual machine that user code — applications — runs on. Computer hardware has a lot of variation, and the O/S provides a layer of abstraction that insulates application developers from having to write code that knows about all that variation.
Application developers write for the operating system, and the operating system worries about the details of the local machine. And even the O/S leaves some details to the BIOS code that came with the motherboard!
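(A trivial illustration of that insulation, as a sketch: the application asks the O/S to write a file, and it neither knows nor cares which disk controller, filesystem, or driver actually does the work.)

```python
# Trivial example of the abstraction layer: the application asks the
# operating system to write a file. Which disk controller, filesystem,
# or driver handles it is the O/S's problem, not the application's.

with open("notes.txt", "w") as f:      # a request to the O/S, not to the hardware
    f.write("The app never touches the disk hardware directly.\n")

print(open("notes.txt").read())
```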

An app for that.
Computer applications are pretty much anything the user dreams up, from games to work tools. There are apps for science and mathematics as well as apps for music (writing and playing) and all forms of art (creating and enjoying).
Comparing this setup to humans is tricky. We're born with some instincts and tendencies. (We're social animals and thinkers; we like music and laughter.)
On the other hand, we're born fairly tabula rasa. We can't walk, swim, talk, or run for years. Becoming a useful member of society typically takes decades (well, two at least) and a large investment in training. (It is said that it takes 10,000 hours to become truly proficient in anything.)
A great deal of what makes a person who they are is learned. At the same time, any parent will tell you children have distinct personalities from very early, and most people’s core personality doesn’t change much over their lifetime (absent major psychological or neurological trauma).
The nature or nurture debate is an old one, and a difficult one to solve (the two are deeply interconnected in normal people). I don’t particularly want to get into it here.

(Only) So many wires!
One thing we can say is that, just as the hardware of a computer constrains what it can do and how it does it, so too does our physical brain constrain how and what our minds can do.
I don’t know to what extent you can really compare a new-born infant to a computer with no software loaded (just the BIOS), but it seems not too far off the mark.
Infants have some very basic “I/O” ability, but they certainly aren’t ready to run any “applications” (it was a few years before even Mozart kicked it into gear).
We spend years “installing” their “operating system” so that they can begin the even longer period of learning useful applications. In theory, our early years in school teach us basics and, importantly, how to learn new things. At some point we begin learning things — skills — that interest us.
So perhaps the analogy is from infant to raw system with just BIOS. Not very useful, but filled with potential. Childhood is the process of installing the O/S, and once that begins to take, the process of installing applications begins.

It takes a really long time to install and configure a professional pitching app.
Instincts and (inborn) talents perhaps have some correlation with the hardware architecture of a system. Certain physical designs lend themselves more to some tasks than others. Someone with a “powerful” mind might be likened to a 64-bit processor or one with a high clock speed.
In closing, I can’t resist a comment about how operating systems have become defining features in the PC world. The Apple-Windows divide, for example, is pretty much a religious one.[6] Certainly an image thing — remember those commercials with the PC guy and the Apple guy?
One of the funniest bits I’ve ever read (to this day) about the operating system wars is by Neal Stephenson. It’s chapter two of his neat little non-fiction book, In the Beginning Was the Command Line. I highly recommend reading it to anyone in the computer field.[7]
Next time we’ll begin characterizing the complexity of a computer and, after that, the complexity of the human brain! Our ultimate goal is trying to figure out the requirements involved in creating a working software model of the human mind.
[1] Physicists believe information cannot be destroyed (which makes black holes a problem, but that’s a whole other topic).
[2] Assuming we don’t screw up the copy and introduce errors. A canonical example involves a photograph and a paper filled with numbers, both of which are faded, yellowed, creased, coffee-ring stained, and otherwise mildly damaged and aged.
Modern analog image processing techniques can do a lot with the photo to create a very good copy, but it will never be perfect. But the sheet of numbers can be copied to a pristine sheet and the result is, in every way, as perfect as the original was.
[3] The problem, put as a question, is: If we compare the physical brain to a CPU — an engine for running software — then where is the software it runs? The answer seems to be: wired directly into the construction of the brain.
This leads to a more serious question: To what extent does this construction work like a Turing Machine (whose program, incidentally, is also wired directly into its construction)? Any hope of understanding the putative software of the human mind depends on understanding this.
[4] A peculiarity is why we put a slash in “O/S” and “I/O” but not most other acronyms. Is it just because “OS” and “IO” look too much like real words, or is it just to make them more computer-y?
[5] Which leads to one of my favorite error messages of all time: Keyboard not attached! Press F1 to continue.
[6] Those of us in the Unix world saw both of those somewhat as pretenders and didn’t care.
[7] It is freely, and I believe legally, available online. It’s worth grabbing just for the second chapter (MGBs, Tanks, and Batmobiles).
October 27th, 2015 at 11:52 am
I think you're right that speaking of software in the brain is problematic. Brain functionality doesn't have any inherent distinction between software and hardware. That distinction is an engineering innovation that allowed us to construct general-purpose digital computing devices, which the brain is not. It's more like an application appliance, specifically a gene-propagating appliance, with lots of subsystems, and is mostly analog rather than digital.
It probably won’t surprise you that I disagree with the tabula rasa (“blank slate”) statement. The data from behavior genetics studies show pretty clearly how much influence genetics have on who we are. Of course, genetic determinism is just as false as tabula rasa. The evidence is that it takes both nature AND nurture.
I also think you’re underestimating just how much pre-wired functionality newborns have. Much of it won’t be expressed until their brains develop further, but consider what they have right out of the womb. They can see and hear, requiring extremely sophisticated functionality. They pretty much immediately recognize the importance of human faces. They can interpret patterns of sensory inputs, determine that the pattern is not optimal for their wellbeing, and communicate it by crying. They know to suck when a breast or nursing bottle nipple is near their mouth. Studies have shown that they have many intuitive understandings about the world. These might seem like incredibly basic things to us, but try programming a robot to do them.
On the functionality that won't show up until later, most newborn animals come into the world able to navigate it to some degree. Humans (and other primates to a lesser degree) are unusual in being utterly helpless at birth. It's thought that the size of the human baby's head in relation to the size of the mother's birth canal led, evolutionarily, to them being born early in their development, requiring more development outside of the womb, leading to the blank slate sentiment.
I think calling acquired skills “applications” is the wrong comparison. I think it’s more like patterns we’ve recognized and confirmed after receiving sufficient quantities of related sensory inputs (perhaps 10,000 hours, although I’ve always found that a vast oversimplification since some people learn far faster than others). Although, admittedly, “programming” and data in the brain often doesn’t have a clear distinction either.
Yikes this is a long comment. Sorry! It grew in the telling.
October 27th, 2015 at 4:31 pm
Don’t worry about the length. (I’m the last guy who can complain about that! 😀 ) It just means a longer reply, and I’ll break this into distinct threads to make it easier (??) to manage.
“I think you’re right that speaking of software in the brain is problematic.”
The thing is, if it turns out to be too true, if there’s too much difference, then software consciousness might not be possible. What if mind is not (just) information processing? What if there is no TM that represents consciousness?
“Brain functionality doesn’t have any inherent distinction between software and hardware.”
Yet that distinction is fundamental to how we define computing. If we cannot separate the brain 'hardware' from the brain 'software', then we have no hope of running such software on a different kind of hardware.
Turing equivalence requires we actually have codified software that runs on a defined hardware. That requires there really is such a thing as brain ‘software’ we can re-express in machine form.
“That distinction is an engineering innovation…”
Yes, and more than that. It’s fundamental to the idea of Turing equivalence. A lambda calculus reduces an algorithm to a form of math. A TM reduces it to a state graph. Both are abstractions of the fundamental concept of process or code (a subset of information).
One way to see this is in how computer science predates actual computers by quite a bit. (A mathematical proof, for example, generally is an algorithm.)
“It’s more like a application appliance, […] and is mostly analog rather than digital.”
I’d agree with that. The question I’m asking is to what extent a software model of that can recapitulate its function.
We can agree that a physical model — a literal brain machine — seems like it would work. If nature can make biological brain machines, and assuming physicalism, then it’s hard to see why a human-made brain machine wouldn’t work. It seems like just an engineering problem.
But a software model is a different kettle of fish. For one thing, it requires software, which we agree is (for now) a problem. Where is the brain software?
October 27th, 2015 at 6:37 pm
The usual upload idea is to posit running the uploaded brain in an emulation layer. I can run old Atari 2600 programs on modern computers in such an emulator. Typically the emulator consists of a virtual machine and the original software. The idea is that, in the case of the brain, rather than a separate VM and software, there would just be the combined virtual system, meaning the lack of software / hardware divide becomes irrelevant.
Now, the issue that I relayed to Disagreeable Me last week is that the host must have far more processing power than the old system, particularly if their architectures are radically different (as they most definitely are between brains and modern digital computers). If Moore’s Law peters out too early, the idea of the emulation layer might become untenable. The uploaded mind would just run too slow to be useful.
I think when Moore’s Law does run out, it’ll probably spur a lot more architectural experimentation than we’ve seen over the last several decades. Many of the new architectures might be substantially closer to those of brains, which might make the emulation idea easier.
But on the algorithm front, maybe our common ground is this. The copied mind will inevitably be different from the original in at least minor ways. I don't see any way around that, at least unless someone invents a Star Trek-like transporter that can make a molecule-by-molecule copy.
If the copied mind is running on digital hardware, that copy will always be an approximation of the original analog system. Which again, will always mean an imperfect copy. But I think perfection is a false standard.
What will be necessary is making an effective copy, one that convinces friends, family, lovers, etc, as well as the new mind itself, that this is the same person as the original. That will be very difficult, but I can’t see any law of physics that would ultimately prevent it.
No matter how it's accomplished, there will always be people who never accept the copied mind as the original. The earliest copies may be disturbingly different, which might throw fuel on that fire. But over time, I'd expect the copies to get better. Whether the copied mind actually *is* the original will be a never-ending philosophical conundrum. Of course, once a mind has made the transition, every copy after that should be exact.
October 27th, 2015 at 8:34 pm
“I can run old Atari 2600 programs on modern computers in such an emulator.”
There is a crucial difference. The Atari 2600 is a Turing equivalent device, so emulating it on another Turing equivalent device is a well-understood translation problem.
Running a human mind on a computer — any computer we can imagine today — requires that mind be expressed as an algorithm in a Turing complete language and run on a Turing equivalent machine.
“…the host must have far more processing power than the old system…”
Well… to function in real-time, certainly. Turing equivalence means that any machine that is Turing complete can do anything any other TC machine can do. Your Atari 2600 can simulate a Cray supercomputer.
Not very quickly, of course. 🙂
That’s the thing about algorithmic consciousness. If it’s possible at all, given enough memory, your Atari 2600 could run the algorithm. The speed of consciousness is not a relevant factor (except as it affects communication).
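(If a sketch helps, here's roughly what I mean, in toy form: a few lines of general-purpose code that can step through any Turing machine's transition table you hand it. The particular machine below is trivial and made up; the point is that the host doesn't care what the table describes, only that it has enough tape and time.)

```python
# Toy illustration: a general-purpose host stepping an arbitrary
# Turing-machine transition table. Any Turing-complete host can run
# any such table, given enough tape (memory); only the speed differs.

def run_tm(rules, tape, state="start", head=0, max_steps=10_000):
    """rules: (state, symbol) -> (new_state, write_symbol, move)"""
    tape = dict(enumerate(tape))            # sparse, effectively unbounded tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")        # "_" is the blank symbol
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A tiny made-up machine that appends one X to a run of Xs (pure
# make-work, but it shows the mechanism).
rules = {
    ("start", "X"): ("start", "X", "R"),    # skip existing Xs
    ("start", "_"): ("halt",  "X", "R"),    # write one more, halt
}
print(run_tm(rules, "XXX"))                 # XXXX
```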
“The uploaded mind would just run too slow to be useful.”
I’ve read SF that involved either extremely slow consciousness or some kind of time-shifting scenario that made communication super slow (I don’t mean turn-around time, I… m..e..a..n… s..l..o..w…) Imagine building a super-intelligent super-AI but it took months for a short conversation!
If it were smart enough, it might still be worthwhile. Or maybe just interesting. Slow AI minds would at least still have some value in research.
“I think when Moore’s Law does run out, it’ll probably spur a lot more architectural experimentation than we’ve seen over the last several decades.”
You’re likely right. I’ve read articles (a while ago) about early research into data storage and computation with bacteria and DNA. Expensive and difficult, but once a real need exists…
“The copied mind will inevitably be different from the original in at least minor ways.”
Do you mean copying the structure of the brain into a digital form that can be run as a software model (which is exactly what I'll be considering after the next couple of posts)?
That assumes that running a software model of the brain’s structure results in consciousness, and that’s the big assumption I keep pointing at.
Either a software mind is a functional model (bearing no resemblance to the brain) or an attempt to recapitulate the brain’s structure in software. I can see only very weak grounds for assuming the latter results in consciousness. (The former requires we fully understand consciousness enough to re-implement it functionally. And no doubt we’ll eventually achieve something that at least seems conscious.)
“Of course, once a mind has made the transition, every copy after that should be exact.”
Indeed. But the assumption it can make the transition at all is a big one.
October 27th, 2015 at 4:33 pm
“It probably won’t surprise you that I disagree with the tabula rasa (“blank slate”) statement.”
I know you don’t, but be fair: I did say “fairly tabula rasa.” 😀
“The evidence is that it takes both nature AND nurture.”
Indeed, and it may be that we’ll see this more eye-to-eye if I explain some details about what I mean here. Firstly, the nature or nurture question mainly addresses personal differences. As I commented in the post, parents know that children have personalities from a very early age. I don’t think we have much argument in this area.
“I also think you’re underestimating just how much pre-wired functionality newborns have.”
It’s more that I’m discounting it because animals have most of the same faculties. I’m more interested in the things that set us apart, that make us human, and those things tend to be learned things.
You are absolutely right that humans are born with — at the least — a BIOS (and don’t underestimate modern BIOS). Given that, for example, they can’t talk or manage their bowels, it’s hard to grant they have much of an operating system.
“These might seem like incredibly basic things to us, but try programming a robot to do them.”
Hell, just try programming a robot to be a decent housefly. 😀
Again, you’re absolutely right. Writing software to do complex tasks is extremely difficult.
“…leading to the blank slate sentiment.”
I know about primate neoteny. (As an aside, another theory I’ve read is that, after nine months, the mother’s system is so strained the energy trade-offs become counter-productive, so the evolutionary balance falls at nine months. Another is that being born so tabula rasa — and soaking up the current social and cultural context around you — is an evolutionary advantage in a social, cultural creature.)
In my case the sentiment is based on the relative weights I put on the basic human animal programming that's wired in (which is not inconsiderable) compared to what we learn over the course of a lifetime.
Perhaps it’s more clear if I put it this way: The worst and best people I’ve known were not that way because of how they were born, but because of what they became as humans.
I don’t disagree with you about inborn instincts. I just think the acquired ones are much more important.
October 27th, 2015 at 4:48 pm
“I think calling acquired skills “applications” is the wrong comparison.”
That sounds interesting enough to be a separate thread. Go on…
“I think it’s more like patterns we’ve recognized and confirmed after receiving sufficient quantities of related sensory inputs…”
I’m not sure I follow exactly what that means enough to comment. We can agree (I think) that anything we learn is a memory. Repeated experiences usually make memories stronger. Are we saying the same thing?
“Although, admittedly, “programming” and data in the brain often doesn’t have a clear distinction either.”
There is some analogue to them, though. Synaptic strengths change, through long-term potentiation (LTP), when we learn. (It's actually kind of weird to think that every memory you keep is due to a physical change in your brain! If you remember Lord of the Rings, it's because a bunch of synapses have been trained with that information.)
I am being somewhat metaphoric in calling learned skills applications. Mainly I’m trying to differentiate learned skills from innate abilities. The analogy I’m making is that a computer with just an O/S isn’t very useful. Neither is a human who has never learned anything.
Applications make a computer useful. Learning makes humans useful.
Perhaps, if I follow, you’re drawing an analogy with a software-implemented neural net learning a skill. All the software is in place. The “education” of the neural net changes the weighting of connections — provides data — that represent learning. The software exists and is “trained.”
I’m not sure how much that distinction really changes anything. An untrained neural net isn’t useful, a trained one is, so that really just changes the boundary from between O/S and application to O/S+app and trained app. That’s fine.
There is also that, once trained, that neural net and its weightings can be saved as a new application that can be installed on a blank O/S, so it’s sort of all the same thing.
Metaphorically, an application is a skill (however acquired), and the system is an innate ability. That's how I'm using them here, at least.
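(For what it's worth, here's a minimal sketch of the trained-weights idea: a single artificial "neuron" where the code never changes and all the learning lives in a handful of numbers that could be saved and reloaded like any other data. The task and numbers are arbitrary; it's only an illustration.)

```python
# Minimal sketch: one artificial "neuron" learning logical OR. The
# code (the "system") never changes; only the weights (the result of
# "training") do, and those weights are just data that could be saved
# and installed somewhere else.

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(25):                        # the "education"
    for inputs, target in examples:
        error = target - predict(weights, bias, inputs)
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print(weights, bias)                                        # the learned "skill"
print([predict(weights, bias, x) for x, _ in examples])     # [0, 1, 1, 1]
```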
October 27th, 2015 at 6:52 pm
Yeah, I think you're right. We could get mired in a definitional debate on "applications" and "programming", but it probably wouldn't be productive. The line between data and programming is too blurry, especially in evolved systems, where the distinction is one that only we humans make. In reality, it's more a layered thing, with the outermost layers most modifiable and the innermost layers most hard-coded and consistent.
Two thoughts on my memory of Lord of the Rings being physically implemented in synapses. First, and I know you know this, but when computers hold things in memory, it’s also a physical event. (It’s much more obvious if your computer is implemented using mechanical switches.)
Second, I read something the other day that the timing of neuron firing might also participate in information encoding. In other words, synapses might not be the only mechanism. (I tweeted an article from someone who believed it is *only* neurons, but he dissed his fellow neuroscientists, something I’ve learned to recognize as a quack flag, although the effect here was mild.) In case you’re interested:
October 27th, 2015 at 8:50 pm
“First, and I know you know this, but when computers hold things in memory, it’s also a physical event.”
Indeed, and it’s a much more specific event. Any (reasonable) computer can hold the entire text of LotR perfectly in memory. And one can point to the physical location of each word (each letter, each bit of each letter).
We remember it had Hobbits. And a Wizard. And some guys with swords. Our memory of it is hard to localize, plus it’s highly connected symbolically with other memories. (For example, for me, it has high school connections because that’s when I first glommed on to it.)
“I read something the other day that the timing of neuron firing might also participate in information encoding.”
Definitely. I've mentioned this to you before in connection with my mind-as-laser analogy. That neurons encode information in their rate of firing is a fairly big bullet point in my ultimate argument that mind might only arise from physical brain machines, not software ones.
When I talk about possible temporal components of brain and mind, this is exactly the sort of thing I’m talking about. What if a software model of that firing doesn’t do what a physical model does?
October 28th, 2015 at 8:57 am
Wyrd, I’m going to consolidate responses here because I perceive things are converging.
On Turing completeness, neuron firing thresholds, and all the rest, I just don’t see the stark difficulties you seem to perceive here. To me, these might present engineering problems, but not fundamental barriers. To be sure, these engineering problems shouldn’t be underestimated, but they don’t seem like, say, the speed of light barrier which nothing in nature has been observed to overcome.
But I think the core difference between us in this area is that I think that consciousness is information. Everything I’ve read about neuroscience reinforces that view. I’m not aware of any scientific evidence that implies that it’s anything more than that. I know many non-neuroscientists (theoretical physicists, psychologists, philosophers, mystics, etc) have a diversity of theories that say otherwise, but the lack of evidence, as well as the fact that virtually no neuroscientist finds them credible, disinclines me to spend time on them.
If I understand you correctly, you think it’s plausible, even likely, that consciousness is more than information. I know you have reasons for reaching that conclusion. Maybe a productive direction would be if you discussed those reasons?
October 28th, 2015 at 10:02 am
“To me, these might present engineering problems, but not fundamental barriers.”
Are you certain we understand the theoretical and engineering details enough to be convinced of that? That seems to imply we could actually do it, but my understanding is that no one knows how.
“But I think the core difference between us in this area is that I think that consciousness is information.”
Sort of… yeah, but it's more subtle than that.
I'm very skeptical consciousness is strictly algorithmic. It may be a form of information that arises (only) from the configuration and functioning of a physical brain (organic or otherwise).
A software model of those physical processes may not replicate them in a way that gives rise to that same type of information. I’m skeptical it can.
“If I understand you correctly, you think it’s plausible, even likely, that consciousness is more than information.”
My argument is about whether a software model of mind gives rise to consciousness like a physical brain (organic or otherwise) does.
As I’ve asked before: Can you name any other physical real-world process for which a software model gives rise to the same physical phenomena?
Software models of airplanes can’t fly. A software model of a laser doesn’t lase. A software model of a radio doesn’t emit RF. You can’t walk over a software model of a bridge.
Software models don’t act like the physical things they model — calculation does not give rise to physical properties.
Assuming that a calculation gives rise to consciousness and experience assumes facts not in evidence. We have no working example we can point to and say: “There! See? Clearly just a calculation and clearly consciousness comes from it.”
Further, this requires that the brain be some form of Turing Machine. Except it doesn’t really look anything like one. So assuming consciousness arises from a calculation is an assumption (and I think a big one).
“Maybe a productive direction would be if you discussed those reasons?”
I’ve been discussing those reasons with you in pretty much every discussion we’ve had on the topic for a while now. This whole series of posts is an attempt to lay it out in great and gory detail.
October 28th, 2015 at 11:40 am
I think this is circling back to the same argument. Software can’t do anything without its hardware. The real question then is, can the software+hardware system, in its totality a physical system, do what another physical system is currently doing? How close does the new physical system have to be to the old one to accomplish the same thing?
I don’t think it’s controversial to say that our current hardware can’t do it. We’ll have to develop new hardware, possibly new software paradigms. These are engineering problems. Do we currently know how to do it? No, because we don’t understand brains well enough yet. But history hasn’t been kind to those who assume we can never solve an engineering problem.
On aspects of consciousness that haven't been ruled out, there is always an infinite array of possibilities for what might exist beyond the evidence, possibilities that the existing evidence doesn't outright contradict. Per history, the chance that the possible but unevidenced thing we crave or fear falls into the tiny sliver of things that will end up being real is remote.
October 28th, 2015 at 1:53 pm
“Software can’t do anything without its hardware.”
Absolutely true, but it misses a point. [Bold emphasis mine above and below.]
“The real question then is, can the software+hardware system, in its totality a physical system, do what another physical system is currently doing?”
Yes! That is exactly the question! What’s missed, I think, is what is meant by “do”.
A model that simulates an airplane doesn’t fly. Even a model that simulates flying that model of an airplane doesn’t fly. But if you connect the latter usefully to hardware that does fly (that has innate properties capable of flight), then the software model can fly.
But it’s the innate properties of the hardware that allow that flight.
“We’ll have to develop new hardware,”
Unless you just mean "higher performance," you're converging on my point. Computation is hardware-agnostic! (As described in earlier posts in this series. If we disagree, we should go back to those to discuss it.)
If mind is just software, then C-T says it is necessarily true that any (Turing equivalent) machine can run it. (Your Atari could run the mind algorithm given the right programming and access to the model.)
But if it takes special hardware, then mind is not (purely) algorithmic.
In the past, you’ve said something along those lines yourself, so it’s like we do agree mind isn’t entirely algorithmic. There is a line from “new hardware” through Church-Turing to “not an algorithm.”
Per the flight model, it’s saying a software model could operate hardware with the innate property of consciousness, but the consciousness is clearly in the (mystery) hardware, not the software.
October 28th, 2015 at 4:04 pm
Wyrd, would you consider an analog computer to be doing computation? How about an analog neural net? Just curious.
We might be in agreement about the brain (if not our terms), since everything I read about it indicates that it's not a digital processor (at least in most of its functionality), but an analog one (more a massively parallel cluster of them), with smoothly varying synapse strengths and neural potentials, in contrast to the binary states of transistors, switches, etc.
A digital computer can approximate the workings of an analog one. Its effectiveness in doing so will depend on the resolution of that approximation, which I perceive to be a factor of its capacities and performance. But it will always be an approximation. Manufacturing variances will ensure that one analog processor is never perfectly identical to another. Most of the time those variances aren't significant, but they'll always exist.
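(A quick sketch of what I mean by the resolution of that approximation: the same smoothly varying value quantized at a few different bit depths. The numbers are arbitrary; it only shows that more bits shrink, but never quite eliminate, the error.)

```python
# Throwaway sketch of the resolution point: a smoothly varying
# "analog" value approximated at different bit depths. The digital
# version is always an approximation; more bits just shrink the error.

import math

def quantize(x, bits):
    levels = 2 ** bits
    return round(x * (levels - 1)) / (levels - 1)   # snap to the nearest level

analog = (math.sin(1.3) + 1) / 2          # some value in [0, 1]
for bits in (3, 8, 16):
    approx = quantize(analog, bits)
    print(f"{bits:2d} bits: {approx:.10f}  error: {abs(analog - approx):.2e}")
```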
If human consciousness is intolerant of those variances, then that could be an obstacle, but given all the natural variances both between humans and for the same human over time, and the fact that brains often continue working more or less effectively after minor damage (such as the permanent effects of drug use), I tend to think the mind isn’t that fragile.
Of course, if the hardware needs to be an antenna for an immaterial soul, then all bets are off. I think you know my thoughts on that 🙂
October 28th, 2015 at 4:56 pm
“Wyrd, would you consider an analog computer to be doing computation? How about an analog neural net?”
No, and no. They are not calculators. The analog computers I'm aware of are more like electronic slide rules. Electronic potentials stand in for — and are direct analogues of — whatever is being "calculated". (I got into more detail about this in the Calculated Math post.)
An actual, physical, neural net, in my mind, has a far greater chance of working than a software model. As I’ve said several times, assuming physicalism, there’s no reason why a physical brain machine (one that replicates the interconnectivity of the brain physically) shouldn’t work.
“A digital computer can approximate the workings of an analog one.”
Yes, but once again we hit what exactly is meant by “the workings.”
A digital computer can certainly simulate what an analog computer is doing. It can simulate the activity of neurons and so forth. It can produce data that match (more or less) the data produced by an analog process.
But to illustrate the point, let’s say something about the way current flows through your physical analog computer causes it to hum a little. The digital simulation won’t hum.
If it's a really good simulation, there may be a point in it where the flow of the numbers is such that you can point to that data and say, "Ah, ha! That's the hum!" But the data doesn't actually make a hum.
A digital simulation does not work the same way as the analog situation it’s simulating, so expecting the same byproducts to arise from that operation doesn’t make sense.
“I tend to think the mind isn’t that fragile.”
I agree with that pretty much, but consider how powerful the effect of very small amounts of certain chemicals can be. Even our own natural chemical balance affects our mood, perceptions, and thinking. A good knock to the head can cause unconsciousness or brain damage.
Perhaps mind, overall, is robust, but its exact expression depends greatly on small things. My dad has Alzheimer’s and my mom has said she detected changes in his personality decades before the disease became apparent.
October 28th, 2015 at 6:25 pm
It's sounding like we're closer on this than we might have thought. I think this is one reason why I usually resist the term "computation" for the brain, preferring "information processing", since, at least in my mind, it encompasses non-calculator processes. (Of course, a certain point of view insists that everything in the universe is calculation. I'm sympathetic to this view, but ultimately agnostic on whether it's true.)
On the hum analogy, I would see that as effectively just another part of the hardware architecture, like a sort of vibratory communication bus, whose effects on the rest of the system, and eventual outputs, would have to be modeled. It might raise the difficulty, but for a model already needing to take into account the effects of dopamine, serotonin, oxytocin, endorphins, etc, it doesn’t seem insurmountable.
Sorry to hear about your dad. Alzheimer’s is a cruel disease. I’ve had friends and relatives who had strokes. Even in cases where they seemed to make a complete recovery, their personalities were sometimes noticeably different.
But if you ever look at physical comparisons between healthy brains and brains suffering from Alzheimer’s or post-stroke damage, the actual damage is pretty stark. After seeing it, I find it actually kind of amazing that someone with that much damage can function at all.
October 29th, 2015 at 11:09 am
“It’s sounding like we’re closer on this than we might have thought.”
Sure! Other than our views on the probabilities of these things happening, there’s no reason we should disagree on the underlying computer science involved!
“…preferring “information processing”, since, at least in my mind, it encompasses non-calculator processes.”
I can see that. Do you end up having to define the term ‘information processing’ much? Seems like it’s a broad enough idea for people to see it differently. For instance, does a tree process information? You can certainly make an argument it does. (I suppose one can also make an argument that’s a silly way to look at trees, but whatever.)
“Of course, a certain point of view insists that everything in the universe is calculation.”
Yeah. And if you go so far as to take Tegmark’s view, then the mind obviously has to be computation because everything is! Like you, I’m pretty agnostic on that.
“On the hum analogy,…”
Right, and as I said, the model might indeed account for the hum, but it won’t make an audible hum like the physical unit does. (It’s just another version of the laser analogy, really. A software model does not give rise to the same physical phenomena that a physical process does.)
“Alzheimer’s is a cruel disease. I’ve had friends and relatives who had strokes.”
Yeah, my mom had several. A lot of rehab restored her speech pretty well (though it never got back to perfect). Sometimes vocabulary was a problem. Fortunately, no real change to personality. She did end up in a wheelchair for anything other than short distances.
“But if you ever look at physical comparisons between healthy brains and brains suffering from Alzheimer’s or post-stroke damage, the actual damage is pretty stark.”
In strokes, definitely, but I'm not sure about Alzheimer's. In the early stages, it's very hard to distinguish from normal aging or other issues. It takes specific cognitive testing, medical history, and even PET scans to rule out other causes. Alzheimer's starts at a very low chemical level, and its early effects are subtle.
Advanced stages are a different story when the destruction of neurons becomes more obvious. It can be diagnosed positively postmortem by an analysis of brain tissue.
For years, during my weekly phone call to the folks, I'd have essentially an identical conversation with my dad, who, I could tell (despite how Alzheimer's patients cover for themselves), often had no idea who he was talking to. Or even where I was. He'd ask something like, "How's the weather where you are?"
Even when I would tell him I lived in Minnesota, I could tell that had no meaning for him, despite that he’d spent much of his life living in the Midwest. Cruel disease, indeed.
But it is often surprising how much damage the brain can take and still function. All those strokes must have done a lot of damage in my mom’s brain, but she retrained other parts of her brain to take over.
OTOH, a few micrograms of certain chemicals can change our mood, consciousness, personality, and perceptions. It’s like evolution gave us a lot of protection from physical damage, but the brain is still a complex machine with some significant vulnerabilities!
October 28th, 2015 at 10:22 am
It occurs to me there’s a key point where I’m not sure where you stand. If you’d be so inclined and kind, jump to the bottom for a question and new thread.
October 28th, 2015 at 10:24 am
@SelfAwarePatterns: Question: When you talk about the mind as software, are you envisioning a replica of the physical brain implemented as a software model, OR a functional model with no required resemblance to the physical brain?
October 28th, 2015 at 11:52 am
I think both are possibilities. The replica path seems like it would require an enormous amount of processing power that we may never have. (If it requires fidelity to the atomic or subatomic level, it might be impossible even in principle, although I think that’s unlikely.) Regardless, the copy would always be an approximation of the original, although perhaps an effective one.
The functional model requires a lot of confident knowledge about how the mind works, which many people think we might never be able to acquire. Personally, I think we can acquire it, although it will probably take centuries.
Most people who talk about uploads talk in terms of the first, but if I had to place bets, I'd say the functional model is more likely to be how it ultimately happens. But it will make some people even more suspicious that the copy is not the original person, no matter how convincing it might be.
October 28th, 2015 at 2:16 pm
“The replica path seems like it would require an enormous amount of processing power that we may never have.”
Yeah, that’s definitely a consideration. (But if mind really is information flow, it can flow successfully on less powerful machines. It just takes longer.)
I agree it seems dubious we’d need to model it on a quantum or atomic level. It is possible neuron cellular micro-structures matter in some way. As we’ve touched on, they’re more than simple on-off switches! How much more… [shrug]
(Since we can at least think about the requirements here, this is the approach I’ll be considering in the next posts.)
“Regardless, the copy…”
Certainly at a quantum level! (I've pondered the idea of some sort of hi-rez MRI "photo" of a brain where a quick scan could result in a crapload of data. Maybe it takes hours, or days, to pull the connectome (or whatever) from that data, but it would be a snapshot (in theory) of the person's mind at that moment.)
Mostly here I just wanted to say that implementing mind in software and uploading existing human minds to software are two distinct problems. The latter depends on the former, but they do have separate issues. (For example, as you say, making a good copy.)
Uploading is a harder problem merely in virtue of encompassing the software problem plus the problem of copying and transferring that copy.
“The functional model requires a lot of confident knowledge about how the mind works, which many people think we might never be able to acquire. Personally, I think we can acquire it, although it will probably take centuries.”
Agreed on all points, really. (For me there’s a huge IF floating in the background, but otherwise… 🙂 )
(I’d only add that a functional model goes further in assuming mind is algorithmic. It might be. I’m not saying it’s not possible.)