Transcendental Territory

Last time we considered the possibility that human consciousness somehow supervenes on the physical brain, that it only emerges under specific physical conditions. Perhaps, like laser light and microwaves, it requires the right equipment.

We also touched on how the Church-Turing thesis implies that, if human consciousness can be implemented with software, then the mind is necessarily an algorithm — an abstract mathematical object. But the human mind is presumed to be a natural physical object (or at least to emerge from one).

This time we’ll consider the effect of transcendence on all this.

This is not the religious kind of transcendence, this is the mathematical kind. A special property that π and e and many other numbers have.[1]

We start by considering three Yin-Yang pairs with regard to numbers.

§

The first is the finite versus the (countably) infinite.

On the one side, precise numbers that match their quantities. “Dave donated a dozen dimes!” “Twenty-two teens turned twenty.” “Only ate one waffle.”

On the other side, a row of three dots (…) or a lazy eight (∞). “The road goes on forever.” “My curiosity is endless.” “Close your eyes and count to infinity.”

But the thing about countable things is that they’re countable.

More to the point, computers can count things really good.[2]

§

So the second Yin-Yang pair is the countable versus the uncountable — the discrete versus the continuous.

On the one side, everything from the first pair, the countable things, even infinite ones. Here numbers are cardinals; they stand for quantities of discrete objects.

On the other side, the real numbers, the smooth and continuous. These numbers are magnitudes; they stand for points along a number continuum. They are a different kind of number.[3]

Calculating with real numbers offers some challenges, especially with regard to chaos. Calculation necessarily rounds off real numbers, so there is a loss in absolute precision.

We’re only at the second level, and already calculation is in trouble. To the extent calculation involves discrete symbols (i.e. digital calculation), we can’t calculate with real number values, only their approximations.
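
To make the rounding problem concrete, here’s a rough Python sketch (nothing rigorous, just an illustration): iterate the logistic map, a textbook chaotic system, from two starting values that differ only out at the fifteenth decimal place. That round-off-sized difference is all chaos needs.

r = 4.0
x = 0.3
y = 0.3 + 1e-15          # differs from x only at the edge of double precision

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)

print(abs(x - y))        # the two trajectories are now completely unrelated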

§

The final Yin-Yang pair is the algebraic numbers versus the transcendental numbers.

Again, on the one side, everything so far (including the complex numbers). Sane numbers tamed with algebra.

On the other side, wild mysterious numbers with some vaguely magical properties.

Firstly, their decimal expansions never settle into a repeating pattern (unlike those of rational numbers, which eventually repeat).

Secondly, there is no algebraic equation (no polynomial with integer coefficients) that has them as a root, unlike the algebraic numbers.

That latter property results in the “game” demonstrated in this YouTube video. It’s way worth watching (seriously, please do watch it, at least the first half):

I knew about algebraic roots, but I never realized they implied the game here.[4] It’s a neat way to look at it, and it got me thinking about transcendental numbers. (Which invokes Euler’s Identity, hence the Beautiful Math post.)
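
If you skipped the video, here’s the gist as a rough Python sketch (floating-point fuzz aside): an algebraic number can be driven to zero using nothing but addition, subtraction, multiplication, division, and whole-number powers, while π provably cannot be.

import math

x = math.sqrt(2)              # algebraic: a root of x^2 - 2 = 0
print(x * x - 2)              # ~4.4e-16, i.e. zero up to round-off

phi = (1 + math.sqrt(5)) / 2  # the golden ratio, a root of x^2 - x - 1 = 0
print(phi * phi - phi - 1)    # ~0 again

# For pi there is no such finite recipe: no integer-coefficient polynomial
# has pi as a root, so no combination of these moves ever lands on zero.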

One can definitely make a case that God invented the integers, that the countable correlates with physical reality whereas the real numbers (let alone the transcendentals) are abstract inventions.

The problem is that God presumably invented circles, too, and little old π is one of those things you notice if you look at circles. It’s just the ratio of the circumference to the diameter.[5]

There is something magical about the transcendentals that sets them apart. The name is certainly evocative.

§

The question I want to ask is whether consciousness could be, in some sense, transcendental (and just to reiterate, I don’t mean that in the spiritual sense).

If so, does that present a problem with regard to an algorithmic theory of mind?

(The presumption being that transcendental calculation is somehow a problem, so there really are two points to demonstrate here.[6])

§

There is a Yin-Yang situation regarding numbers like π. On the one hand, no perfect circles exist, so π never actually occurs anywhere physically. On the other hand, its true value underlies every circle and sinuous process!

Look at it this way: All the inaccurate real-world instances that involve π are inaccurate in their own way.

The average of all those inaccuracies converges on the true value (proof it does lurk beneath all physical circles).

While no individual circle is transcendental, circles certainly are.
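
As a toy illustration (with made-up Gaussian measurement error, so purely a sketch): pretend each sloppy measurement of circumference over diameter is the true ratio plus some random error. No single measurement equals π, but the average of a lot of them closes in on it.

import math
import random

random.seed(1)
n = 1_000_000
total = 0.0
for _ in range(n):
    # each "circle" is measured badly in its own way
    total += math.pi + random.gauss(0, 0.05)

print(total / n)     # within about 0.0001 of pi
print(math.pi)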

So why might a human brain be a transcendental process?

The answer partly may lie in the sheer complexity and scale of the brain.

Not only are the parts complex, there are hundreds of trillions of them! Perhaps transcendence emerges from the multitude just as it does with circles.

Synapses are hugely complicated in their own right (and amazing). Is it possible that a full model of a synapse is complex enough to be subject to chaos?

That’s not at all a stretch.

If so, that means each synapse is just a little unpredictable (mathematically).

The synapse knows what it’s doing, but for us to determine that precisely may be effectively impossible (like the three-body problem or weather prediction).

The network of the brain is also highly complex and vast. Neurons all operate in parallel and talk to each other in variable frequency pulse trains. It’s even easier to imagine that chaos plays a role here. (It’s harder to imagine it wouldn’t!)

So it’s possible the parts transcend calculation and even more possible the whole network does.

§

The obvious question is: A CPU has many billions of transistors; why can’t that multitude be transcendent?

Another form of the question is: A software model can include trillions or quadrillions of (virtual) parts; why isn’t that multitude transcendent?

The answer is that, potentially, it could if those parts, or the network of those parts, had the same indeterminacy as does the operation of the human brain.

But so far computer technology works very hard to remove all indeterminacy from all levels of computer operation! It’s considered noise that degrades the system.

There is research into the idea of introducing noise or uncertainty into algorithms, and it’s possible that may bear fruit some day.[7] (It’s not the same as “fuzzy logic” which is just logic over a value range.)

As it stands now, algorithms are fully deterministic at all levels. There is nothing that’s allowed to transcend.
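
A trivial sketch of what I mean by fully deterministic (a made-up example, obviously): run the same purely algorithmic computation twice and the results match bit for bit, every time, on every machine.

import hashlib

def compute(seed, steps):
    h = hashlib.sha256(str(seed).encode())
    for _ in range(steps):
        h = hashlib.sha256(h.digest())
    return h.digest()

print(compute(42, 10_000) == compute(42, 10_000))   # True, always; nothing drifts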

Keep in mind that if hardware is the only possible source of indeterminacy or transcendence (as is the case in the physical world brains inhabit), then consciousness is not algorithmic.

It can’t be if it supervenes on hardware!

In order for consciousness to be strictly algorithmic, any indeterminacy or transcendence must come from the software steps. And as we’ve seen, those amount to: Input numbers; Do math on numbers; Output numbers.

Where is the transcendence?

§

I’ve been wondering if Turing’s Halting Problem or Gödel’s Incompleteness Theorems might play any role in this. It’s possible to read their conclusions as addressing transcendental territory.[8]

In the Turing case, no algorithm can transcend the algorithmic context such that it can solve the halting problem.

In the Gödel case, no axiomatic arithmetic system can transcend its context such that all true statements in the system can be proved in the system.

Either way, there’s chaos theory telling us that some calculable systems are so sensitive to initial conditions that any rounding off of real numbers degrades the calculation.

This all seems to suggest (to me, anyway) that real world processes, while wildly mathematically “inaccurate” on their own account, converge on mathematically ineffable transcendence given sufficiently large numbers.

Think of it as actually doing quadrillions of steps in an infinite mathematical series. How close to its true transcendental value would π be after 500 trillion steps?
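
For a rough feel, here’s a sketch using the slow-but-simple Leibniz series (pi/4 = 1 - 1/3 + 1/5 - ...; much faster series exist, this is just an illustration). The error after N terms is roughly 1/(2N), so even 500 trillion steps of this particular series would pin π down to only about fifteen decimal places.

import math

def leibniz_pi(n_terms):
    total = 0.0
    sign = 1.0
    for k in range(n_terms):
        total += sign / (2 * k + 1)
        sign = -sign
    return 4 * total

for n in (1_000, 100_000, 10_000_000):
    approx = leibniz_pi(n)
    print(n, approx, abs(approx - math.pi))   # the error shrinks like 1/n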


[1] This all started out as a flight of fancy, but the more I think about it, the more it fits.

[2] Even if it takes them forever, just like it would you.

[3] Transfinite mathematics involves multiple levels of infinity, but I think the countable versus uncountable one is the foundation. (I’m not convinced the others exist meaningfully.)

[4] Which, if you didn’t watch the video, is that you can reduce any algebraic number to zero using only addition (and subtraction), multiplication (and division), and exponentiation.

(The video at the bottom has some extra bits they didn’t include in the main video. It’s the same one linked to at the end of the main one.)

[5] It’s when you look closely at π that you realize how weird it is. See the Pi Day post for how far down the rabbit hole that goes!

[6] AKA: “theses” 🙂

[7] One problem is that truly random numbers can’t be calculated at all; an algorithm can only produce pseudo-random sequences. It takes some real-world source (semiconductor noise is a good one) for true randomness.

(The difficulty of calculating random numbers is just another illustration of the limits of discrete math and algorithmic processes.)
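
A quick sketch of the point: a pseudo-random generator given the same seed reproduces exactly the same “random” stream, while something like Python’s os.urandom has to reach outside the algorithm to the operating system’s entropy pool, which is ultimately fed by physical noise.

import os
import random

random.seed(123)
a = [random.random() for _ in range(3)]
random.seed(123)
b = [random.random() for _ in range(3)]
print(a == b)          # True: the "random" stream is fully deterministic

print(os.urandom(8))   # different every run; entropy gathered from the real world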

[8] Cantor is clearly addressing the countable-uncountable divide, and it’s possible Turing and Gödel are as well.

36 responses to “Transcendental Territory”

  • SelfAwarePatterns

    I always knew that Pi and e were trouble, from the minute I laid eyes on them. 🙂

    Carl Sagan reportedly had a message from a deistic god buried extremely far into the digits of Pi in his book Contact (which sounds much better than the movie).

    On determinism, neuroscientist Michael Gazzaniga, in his book ‘Who’s In Charge?’ points out that there is substantial evidence that the brain is mostly deterministic. He notes that this makes sense when you think about, evolutionarily, what the brain is for, which is to make movement decisions for an organism based on sensory inputs. Rampant indeterminism would destroy any evolutionary advantage in that function.

    Is the brain *fully* deterministic? No one knows. In truth, due to chaos theory dynamics, it may never be known. The question is whether a fully deterministic system could approximate its workings. Again, no one knows for sure, but the fact that the brain is at least mostly deterministic gives me hope. Ultimately, the only way we’ll know for sure is if someone succeeds, or after the brain has thoroughly been mapped and understood, fails anyway.

    • Wyrd Smythe

      “Carl Sagan reportedly had a message from a deistic god buried extremely far into the digits of Pi in his book Contact (which sounds much better than the movie).”

      Yes, a raster pattern of a circle buried deep in the digits of pi. When Ellie meets the aliens they tell her that even more complex messages are buried in other transcendental numbers.

      I wrote about this last Pi Day in Here Today; Pi Tomorrow and quoted the relevant passage from the book. (I like both the book and the movie.)

      The funny thing is that Sagan was right. Sort of. Transcendental numbers can have the numerical quality of being “normal.” Pi hasn’t been proven normal, but its digits pass every statistical test for normality so far. If it is normal, then somewhere in the string every possible finite sequence occurs. So there very likely is a raster pattern of a circle in the digits of pi.

      There’s also every GIF, JPEG, PNG, and every other image format, image ever created or potential. Also all the images of just random gibberish. And every novel, magazine or other form of printed material ever. In every language. In every variation of typos and whatnot. And every audio file. And so on.

      {{I recently went and grabbed a 10-million digit file of π so I could play around with digit distributions. I just started mucking about with it, but check this out:

      0:   999440 (0.099944, +0.000056)
      1:   999333 (0.099933, +0.000067)
      2:  1000306 (0.100031, -0.000031)
      3:   999964 (0.099996, +0.000004)
      4:  1001093 (0.100109, -0.000109)
      5:  1000466 (0.100047, -0.000047)
      6:   999337 (0.099934, +0.000066)
      7:  1000207 (0.100021, -0.000021)
      8:   999814 (0.099981, +0.000019)
      9:  1000040 (0.100004, -0.000004)

      This is a digit frequency histogram. The first number is the digit. The second is the number of times it appeared in the 10-million-digit string. The third number is the fraction of the total; we’d expect 0.1 (10%) if π is normal (and that’s about what we got). The fourth number is the difference from 0.1 — not much! It’d be interesting to get a lot more digits and see if the differences approach zero.}}
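
      For anyone who wants to play along, the script is nothing fancy; something like this does it (the filename is just whatever digits file you happen to have):

      from collections import Counter

      with open("pi-10million.txt") as f:          # hypothetical filename
          digits = [c for c in f.read() if c.isdigit()]

      counts = Counter(digits)
      total = len(digits)
      for d in "0123456789":
          freq = counts[d] / total
          print(f"{d}: {counts[d]:8d} ({freq:.6f}, {0.1 - freq:+.6f})")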

      “Rampant indeterminism would destroy any evolutionary advantage in that function.”

      Makes sense. (A world where that wasn’t true sounds like something Greg Egan would write. He likes to turn things on their ear. He has one, which I’ve not read, where SR works the other way around. The faster you go, the more the external world slows down. A civilization is threatened with a supernova, so they send out a fast ship of scientists to make a long loop through space (that takes the scientists generations) so they can solve the problem and return “shortly after they left” comparatively speaking.)

      Absolutely agree with your last paragraph.

      The point of this post, really, is that question about whether the brain is fully deterministic. My guess is that it’s not, although that would seem to require quantum behavior. Chaotic behavior is deterministic, but utterly unpredictable. Chaos destroys calculation, so it’s really hard for software to be chaotic, but physical systems can be.

      (As I pointed out, we work really hard to keep it out of computers!)

      While we disagree about the likelihood of hard AI, perhaps you can at least appreciate why I think it’s such a big leap from what we know is possible.

      This post, and the previous one, are the point I’ve been headed to all along.

      • SelfAwarePatterns

        Sometimes I wonder, if the stuff of spacetime is quantum in nature, as periodically gets pondered in science articles, whether that means we’d eventually hit the end of Pi digits. Or if this is just a case where our mathematics, built upon observed patterns at the level of reality we live in, is just different than the fundamental layers.

        I can appreciate incredulity toward hard AI, and I’ve written myself several times that I think we’ll have to understand human minds in order to accomplish it. (We don’t need that understanding to have very intelligent systems, just for ones we’d consider “conscious”.) But to me the possibility logically follows from what we currently know.

        Now, it’s possible that something we *don’t* currently know will prevent it, but until / unless we encounter that something, I think regarding it as impossible is unjustified. But I’m an empiricist, so I fully admit that we won’t know for sure until either someone accomplishes it, or demonstrates that it’s impossible in principle.

      • Wyrd Smythe

        “Sometimes I wonder, if the stuff of spacetime is quantum in nature, as periodically gets pondered in science articles, whether that means we’d eventually hit the end of Pi digits.”

        Mathematically speaking, no. Pi goes on forever. It’s possible to actually derive this from the properties of a circle, which is why the ancient Greeks knew something was very weird about π. (As the guy in the video mentions, people died over this stuff! It was that weird and offensive to consider.)

        In any physical world we can imagine, Planck level is a limit, so, yeah, there is some ultimate precision of π on that basis.

        {We believe it’s impossible to inspect the world below the Planck level. It takes energy to look at small things (hence CERN), and if you use enough energy to look as small as sub-Planck, that much energy in that small a space creates a black hole and whisks away anything you could see. Kinda defeats the purpose! 😀 }

        {{I think I mentioned my hope (wish) that spacetime be Einstein-smooth. It seems definite that matter-energy are lumpy, but I hold out a faint hope spacetime isn’t. I know a theoretical physicist who calls that hope “idiotic.” He’s probably right, but the jury is still out for the moment. 🙂 }}

        “But I’m an empiricist, so I fully admit that we won’t know for sure until either someone accomplishes it, or demonstrates that it’s impossible in principle.”

        Very much likewise!

        From where I sit, there seem strong (but not definitive) arguments against software AI, and I’ve tried to lay those out in these posts.

        Equally, from where I sit, I see little that argues against a physical (non-biological) network that replicates the brain’s structure.

        Really, the whole point is the gap I see in those two. We have a long, long way to go to establish clear connections between calculation and consciousness.

  • Wyrd Smythe

    An interesting aspect occurred to me about how computers work very hard to remove possible sources of transcendence. They are engineered to treat one entire voltage range as logical one and another voltage range as logical zero. There’s usually some forbidden territory in between, where the system may treat the signal arbitrarily.

    But as part of their processing, computers throw away vast amounts of tiny “irrelevant noise” that would degrade their behavior. It would make them inaccurate.

    At the very least, this is hugely different from how human brains work. Neurons communicate analog signals with timed pulses, and analog systems are capable of transcendent behavior, especially ones with 500 trillion “moving” parts.

  • Steve Morris

    Interesting post, Wyrd. Lots of IFs though. The brain as a chaotic system? Possibly. No reason to assume that it is, or that this rules out modelling its behaviour though.

    Are you familiar with the use of cellular automata to model turbulence in fluids? Simple rules can give rise to complex non-linear behaviour even in digital systems, i.e. digital computers can model chaotic behaviour of non-linear systems despite the fact that they are not transcendental.

    Engineers are smart. Unless the science says categorically “no” I wouldn’t rule anything out.

    • Wyrd Smythe

      “The brain as a chaotic system? Possibly. No reason to assume that it is, or that this rules out modelling its behaviour though.”

      Given that the brain is a complex analog physical system with lots of parts, I think the odds of it being chaotic are closer to “probably” than “possibly” but it does remain to be seen.

      “[D]igital computers can model chaotic behaviour of non-linear systems despite the fact that they are not transcendental.”

      Indeed. The Mandelbrot is an example of a simple chaotic system easily modeled with an algorithm. You can zoom in on sections of the Mandelbrot to any precision you’re willing to calculate.

      What’s not clear is how you can model chaotic analog systems with infinite precision, and chaos theory (as I understand it; it’s always possible I don’t) says a digital model will always diverge. Improved precision just delays that divergence.

      Fundamentally, there is an inescapable difference between analog and digital. The latter can get awfully close, but never quite get there. (My premise is that this matters.) At some point you’re down to Planck level, and it’s all “digital.” But now Heisenberg is a problem. 🙂
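
      Here’s a rough sketch of what I mean by precision only delaying things (toy numbers, using Python’s decimal module): iterate the logistic map at a couple of working precisions and see when each run peels away from a much higher-precision reference.

      from decimal import Decimal, getcontext

      def trajectory(precision, steps):
          getcontext().prec = precision
          x = Decimal("0.3")
          out = []
          for _ in range(steps):
              x = 4 * x * (1 - x)
              out.append(x)
          return out

      steps = 400
      reference = trajectory(200, steps)
      for prec in (20, 40):
          run = trajectory(prec, steps)
          bad = next(i for i, (a, b) in enumerate(zip(run, reference))
                     if abs(a - b) > Decimal("0.001"))
          print(prec, "digits: diverges from the reference near step", bad)
      # roughly doubling the precision roughly doubles how long the runs agree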

      “Unless the science says categorically “no” I wouldn’t rule anything out.”

      Absolutely! (Although, honestly, I have finally ruled out all possibility that I’ll marry Lucy Liu or Lisa Edelstein.)

  • Steve Morris

    Mike, “Sometimes I wonder, if the stuff of spacetime is quantum in nature, as periodically gets pondered in science articles, whether that means we’d eventually hit the end of Pi digits.”
    I think that if that happened, then the circumference of a circle would become scale-dependent at the Planck scale. It’s rather like the way the length of a coastline is scale-dependent and the question “how long is the coastline of this island?” is meaningless.
    In any case, pi has a definite value:
    https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80

  • Steve Morris

    Wyrd, intriguing thought if the mind really isn’t a Turing machine. Does that imply that it is capable of doing things that a Turing machine can’t, such as solve the halting problem?

    • Wyrd Smythe

      I’ve wondered that, and I think it might imply that, yes. There’s a potential tie-in to Gödel as well in that Incompleteness suggests there are intuitive truths impossible to prove formally. It’s possible a mind can intuit halting — intuition has been shown to be surprisingly accurate, at least in some cases.

      (Wielding Gödel philosophically is questionable. It applies strictly to axiomatic arithmetic systems, but if mind is algorithmic then it’s mathematical, so Gödel might have some bearing.)

      The Traveling Salesman problem is NP-hard and thus thought to be intractable to digital calculation (it’s doable in principle; the algorithm is known, it just takes longer than the age of the universe for large inputs). Yet it seems that bees may solve the problem naturally.

      The brain may turn out to be a kind of analog computer (think of the dynamic motion equations you solve playing, say, racquetball), but like most analog computers it doesn’t work by crunching numbers.

      • Steve Morris

        Interesting thought. I actually have a half-finished sci-fi novel that’s a mash-up of religion, an ancient secret society, and some Turing-related ideas. It’s like Dan Brown meets The Matrix! Your series of posts relates directly to the underlying premise of the novel, although in my novel I have turned a lot of ideas inside out and upside down. I think it’s still scientifically rigorous, though. May even finish writing it one day.

      • Wyrd Smythe

        Cool, go for it! (Does that mean I’ll get passing mention in the credits? 🙂 )

      • Steve Morris

        I could name one of the characters Wyrd 🙂

      • Wyrd Smythe

        Or even just Smythe would do. [Back in my Special Relativity series, I floated an idea about how FTL “ansibles” might work (without obviously violating Einstein… there might still be something lurking… it’s still not clear to me that FTL communication between points in the same reference frame has to be ruled out… I understand it’s supposed to be ruled out, but I’m not sure I understand exactly why.) Anyway, if anyone wanted to use that idea, I just asked they call it “Smythe Waves”… 🙂 ]

  • Disagreeable Me (@Disagreeable_I)

    Hi Wyrd,

    As stated on Self Aware Patterns, I’m not really seeing an argument here.

    There are transcendental numbers, where transcendental just means that they cannot be written as an algebraic expression.

    There are uncomputable numbers, which are a subset of the transcendental numbers (not including e and Pi, by the way, which are transcendental but still computable), which are just numbers for which there exists no algorithm to enumerate the digits to whatever precision we might desire. Any number with randomly chosen digits is such a number. It’s tricky to really define what specific uncomputable numbers might be, because a definition might be tantamount to an algorithm. One example is the so-called Chaitin’s constant (more a family of constants), which seems to have a clear definition but the definition is not particularly useful because using it to find the value of the constant would require being able to solve the halting problem.

    So, yes, there are some pretty weird numbers. I wouldn’t go so far as to call them even vaguely magical.

    I’m not seeing the connection to the mind and consciousness, though. It seems to me that you’re conflating transcendental in the sense of non-algebraic with transcendental in the sense of, I don’t know, generic mysteriousness, mysticism, pre-rational intuition and that sort of stuff. That’s a clear equivocation to me. These are entirely different uses of the term and it is not at all legitimate to draw the inferences you are making in my view. You might as well conclude from the existence of irrational numbers that we are all crazy.

    You’re also drawing in other ideas which I don’t see as particularly related to transcendental numbers, like chaos and complexity.

    You may have something, I don’t know, but you haven’t really joined the dots for me and so I’m afraid it comes across to me like a mish-mash of disparate ideas that don’t really belong together. Perhaps you’re aiming more for poetry than argument and I’m missing the point.

    • Wyrd Smythe

      Hello DM-

      Welcome to my blog… 🙂

      “It seems to me that you’re conflating transcendental in the sense of non-algebraic with transcendental in the sense of, I don’t know, generic mysteriousness, mysticism, pre-rational intuition and that sort of stuff.”

      Yet I said, “This is not the religious kind of transcendence, this is the mathematical kind.” And later, “(and just to reiterate, I don’t mean that in the spiritual sense)”

      I’m talking strictly about the impossibility of calculating with such numbers in any system that processes discrete symbols.

      “There are transcendental numbers, where transcendental just means that they cannot be written as an algebraic expression.”

      Exactly so. They require infinite series which is the same as saying infinite calculation. How can you calculate with numbers that go on forever? You can’t.

      “I wouldn’t go so far as to call them even vaguely magical.”

      Okay, your call. I’m not the one who named them “transcendental,” though. The mathematicians who discovered them, and who were very excited by them, did. (Did you note the enthusiasm Mr. Pampena showed in the video over these numbers?) That they’re so weird is what makes them a little magical (obviously a metaphor, since none of us believes in magic).

      (As I’ve said, the ungrounded, unfounded, unsupported-by-any-shred-of-evidence belief that “information processing” will give rise to self-awareness seems more like a belief in the magic of numbers.)

      “These are entirely different uses of the term and it is not at all legitimate to draw the inferences you are making in my view.”

      Except that I’m not using it in that sense at all, so you’ve apparently missed what I am getting at. In a word: incalculability.

      There are things you simply cannot calculate with a system that processes discrete symbols. The analog and discrete worlds are different. The break from countable to uncountable numbers is bad enough. Chaos enters the picture at that point. The break between algebraic numbers and non-algebraic numbers is even starker. Now we’re dealing with numbers we can’t even write down!

      “You’re also drawing in other ideas which I don’t see as particularly related to transcendental numbers, like chaos and complexity.”

      Yes. Other aspects that also support my argument and make this one stronger.

      “[Y]ou haven’t really joined the dots for me”

      Fair enough. We see the world differently, so naturally we have a different sense of it. 🙂

      • Disagreeable Me (@Disagreeable_I)

        Hi Wyrd,

        You say you’re not using it in the religious sense, but the leap you’re making seems to belie that. But, OK, you say this is about calculability so let’s continue…

        > I’m talking strictly about the impossibility of calculating with such numbers

        Don’t you run into the same problem with your everyday rational numbers? Most of these cannot be exactly represented in binary. The best you can do is to model the fraction explicitly (e.g. recording numerators and denominators) and defer actually rendering this into a single value. But you can do that with pi and e also. You can record your values in terms of pi or e.

        So, again, I don’t think the existence of transcendental numbers has any relevance here.

        > How can you calculate with numbers that go on forever? You can’t.

        You can do so by getting answers to whatever precision you require, as well as having an algorithm that will continue to fetch more precise digits as you need them. I think where we differ is that I don’t think you ever really need infinite precision for any purpose. Having an algorithm to get as many digits you want is in all cases sufficient.

        > The analog and discrete worlds are different.

        Not that different in this respect! You couldn’t calculate with this stuff with analog systems either, because analog systems are inherently imprecise. Due to the impossibility of measuring any quantity to infinite precision, you’ll get no more accurate a reproduction with an analog computation than you will with a discrete one. Indeed you’ll probably be less accurate. Try to find a precise value of pi with a compass and a tape measure and you’ll see what I mean — you will get much better results with a discrete algorithm.

        I would also hazard that due to quantum uncertainty it is incorrect to assume that there actually is a precise value to measure, in many cases.

        > Now we’re dealing with numbers we can’t even write down!

        But we can! There’s more than one way to represent a number. Apparently, you’re not very concerned with the inability of a decimal representation to precisely capture a number like one third, perhaps because we can also represent it as 1 over 3. But we can also represent pi and e in ways other than decimal expansion. One way to do so is with infinite sum notation.

        And of course you’ve also got the symbols themselves. Nothing wrong with writing down e as “e”. Conceptually it’s no different than writing 1 as “1”. Would you say we can’t write down 1?

      • Wyrd Smythe

        “Don’t you run into the same problem with your everyday rational numbers?”

        To some extent, yes, of course. As you say they can be represented as “a/b” and it is possible to design machines that work with algebraic symbols.

        So, yes, absolutely! There are limitations with what can be calculated with numbers. That’s the point!

        “But you can do that with pi and e also.”

        No, there is no algebraic formula for pi or e.

        “I think where we differ is that I don’t think you ever really need infinite precision for any purpose.”

        We know that’s a mathematically false assertion.

        “You couldn’t calculate with this stuff with analog systems either, because analog systems are inherently imprecise.”

        In so far as my thesis is that “calculation is limited” I agree, of course. Indeed, you cannot “calculate with this stuff with analog systems either.” That’s the point. Calculation is limited.

        Essentially you’re pointing out that there are problems calculating with non-transcendental numbers, let alone transcendental ones. This is absolutely true.

      • Disagreeable Me (@Disagreeable_I)

        Right, so calculating with all kinds of numbers leads to situations where absolute precision in decimal or binary representations is not possible. Transcendental numbers are not particularly special in this regard, so I don’t see what they have to do with anything.

        Neither do I see what this limitation of decimal/binary representation has to do with consciousness. Consciousness must be robust with respect to small disturbances. It cannot possibly rely on absolutely precise state because absolutely precise state would be disturbed by environmental interactions.

        > We know that’s a mathematically false assertion.

        Well, no, because “purpose” isn’t really a mathematical concept. I’m just saying there is no reason you would ever need an absolutely precise binary or decimal representation of a number. Or at least I can’t think of one. When modelling real systems uncertainty of measurement is of far greater concern than anything to do with transcendental numbers anyway.

      • Wyrd Smythe

        “Transcendental numbers are not particularly special in this regard, so I don’t see what they have to do with anything.”

        It strikes me that they’re a little extra special. They were named transcendental because they seem to go beyond the normal algebraic numbers we use to describe most of reality. And yet we find them lurking everywhere.

        “Neither do I see what this limitation of decimal/binary representation has to do with consciousness.”

        Then you don’t see it. All I can say is that I see the limitations of calculation as being a potential limit with regard to calculation of self-aware consciousness.

        “Consciousness must be robust with respect to small disturbances.”

        Indeed. In fact, it may even supervene on them. Very subtle effects in analog systems can turn out to matter.

        Nearly all physical systems share a fundamental property of “least action.” Undisturbed soap bubbles are spheres; water seeks a level; light refracts.

        We have a hard time calculating the three-body problem. Calculation gives us approximations that eventually turn out to be wrong (because of chaos). But the physical natural system operates through least action and the whole solar system of myriad objects solves the N-body problem perfectly.

        Nature follows its own physical laws down to the quantum level, so it effectively solves “unsolvable” (i.e. incalculable) math problems through the agency of physical properties.

        “I’m just saying there is no reason you would ever need an absolutely precise binary or decimal representation of a number.”

        I think we’d love to be able to predict the weather with precision! But it’s not clear that’s possible, even in principle, with a discrete calculation.

        I understand what you’re saying. I’m saying we don’t know, and it’s possible discrete calculation won’t work in calculating mind. (Assuming minds are at least as complex as weather.)

  • Wyrd Smythe

    Note to self: There seems to me no requirement that a Tegmarkian has to also believe in a computational theory of mind. There is plenty that is both mathematical and incalculable, so a belief in an underlying mathematical foundation need not imply mind is software running on some form of Turing Machine.

    A real-time analog network of mathematical relationships can be both purely mathematical and not possible to calculate, even in principle.

  • Philosopher Eric

    Hello Wyrd,

    Given your excellent commentary over at Mike’s, I’ve taken the suggestion that you left for James of Seattle to check out this series of posts. (https://selfawarepatterns.com/2019/01/06/is-the-singularity-right-around-the-corner/#comment-25641 ) Very impressive! My sense is that we see things quite similarly in this regard. And indeed, I’m not entirely convinced that Mike, Steve, or DM see things all that differently — that is if variable term usages could somehow be accounted for. My own epistemology formally obligates the reader to accept the definitions of the writer in the attempt to comprehend what’s being proposed. So perhaps I’m a bit more flexible? Or hopefully at least for sensible proposals. And indeed, late 2015 was a while back. If any of them continue to believe that they dispute the message of these posts, I’d appreciate hearing about it. (I chose this particular one since they’re each on the comment board and so should be notified.)

    If you recall I did stop by your blog after that “Trump as president” hoax thing turned out to not be a hoax. From there apparently my attention was diverted to other sources of education. Hopefully today I’m a bit more prepared for discussions with you than I was back then however.

    Let’s begin this with a question. Of course we refer to our various varieties of “glorified pocket calculators” as “computers”. These are teleologically fabricated, or designed and built by us to serve us. But what exists in nature that may effectively be analogized with our computers? And note that I do appreciate the response of “nothing”. That’s surely the case when defined strictly. But at least when not so strictly defined, I’m able to come up with some analogies which seem quite effective. (I consider analogies to exist as essentially the only medium by which the human makes sense of things, and thus these associations are not made idly.) Mike is quite aware of my “four forms of computer” discussion, though given this series of posts I’d love your thoughts about what else in nature might effectively be said to “compute”?

    Let me also mention what I seek beyond your general insights regarding these matters. Apparently it’s possible for complex ideas to be grasped in a “lecture level” capacity, where they’re understood as spoken and might even be recalled later. But they might also be grasped in a far more advanced “practical level” capacity. I believe that to gain such an understanding for anything reasonably complex, one must test an initial conception of an idea against situations where it may practically be implemented. For example math and physics problems effectively refine vague lecture level understandings of math and physics students by practically showing what a given concept both does and does not mean in quite specific ways that go beyond what lectures are able to provide.

    If I’m able to interest you, my hope is that you’ll use your general grasp of my ideas to “solve” various specific problems from this perspective, and so gain a practical grasp of how my ideas work. Such an understanding could be demonstrated by predicting the sorts of things that I’d say, for example, about a given blog article. Then once mastered you should be able to assess the strengths and weaknesses of my various ideas in general. Essentially I’d like to know where improvements are still needed. And indeed, perhaps you have some original models that I could try to gain a practical understanding of?

    For the moment however, what in nature might effectively be analogized with our computers?

    • Wyrd Smythe

      Welcome back, Eric.

      “For the moment however, what in nature might effectively be analogized with our computers?”

      For me this is something of a “tree falls in the forest” question in that the answer comes mainly from precisely defining the question.

      To be clear, I take your question to ask what in nature acts like our computers. The other way around is common: computers model nature in many ways; weather models, for instance.

      In terms of digital, stored-program computers, I don’t think we see the exact Von Neumann architecture in nature, but even so it can depend on how broadly we define that architecture.

      One could draw parallels in how DNA expresses genes (or other biochemical processes). DNA could be seen as a stored program of sorts. A transcription enzyme could be seen as a “CPU” executing its instructions.

      The mechanism is chemical and analog, not electronic and digital, but one could make the argument bio-chemistry “computes.”

      Mike and I had a discussion recently about the distinction between (what I call) calculation versus evaluation. He sees them more under an umbrella of computation.

      (I wrote a couple of posts detailing the distinction I saw. If interested, see: Calculated Math and Coded Math.)

      What we generally do not see is any sense of nature performing an algorithm such that it could perform a different algorithm. We do see things proceed according to their physics, which to me seems a different thing.

      DNA transcription always works the same way. Chemistry is an entirely determined physical process. I don’t label that as “computing,” but others do. It is a matter of definition.

      My problem with the broad umbrella is that it’s too broad. Concepts such as “computing” or “information processing” or even just “process” are so general one has to ask what isn’t a process, or information, or computing.

      So to make sense of it all, I use more restrictive definitions and generally avoid the general terms entirely.

      “Evaluation” is a physical process, common in nature; “calculation” is a multi-step algorithm performed by some engine according to some program. I think that seeing it in nature requires, perhaps, a bit of poetry.

      • Philosopher Eric

        Wyrd,
        I appreciate how committed you are to definition. Yes the term “sound” can be defined in a number of ways, and therefore asking if a falling tree makes sound is not a complete question until the term becomes defined. Furthermore from my own epistemology the reader is obligated to use the writer’s explicit and implicit definitions in the attempt to understand. Thus if you were to tell me that vibrating particles constitute “sound”, I’d be obligated to accept this to assess your general point. In the end if you’re unable to say anything that I consider sensible about that however, then I might judge you poorly. For the most part I consider it useful to define “sound” as something phenomenal, which is to say, an input to the conscious variety of function. Here the tree by itself cannot produce sound. But if you were to tell me about a machine which produces “sonic waves” in order to help process certain materials by means of vibration energy, well that process surely wouldn’t depend upon anything phenomenal. Thus I could endorse such a definition for “sound”.

        I’m pleased that you’ve gotten into what I currently consider reality’s first form of “computer”, which is to say genetic material. Yes this refers to something that’s entirely chemical and analog. I can’t think of any “recipes” before the function of genetic material that may reasonably be termed computational. Indeed, from this perspective I’d say that we could build an effective definition for the long troubling term “life”. (Would you say that “recipe” is a better way to term genetic function than the “algorithm” term which I’ve been using? Or perhaps you have another suggestion?)

        Then the second form of computer that I see isn’t quite as controversial. This is the neuron-based central organism processor. Apparently neuroscientists have observed the essential building blocks for all computation in these biological machines, which is to say “and”, “or”, and “not” gates.

        So if we include the technological computers that we build, we’re now up to three of the four forms of computer that I’ve found useful to distinguish. The final one is consciousness itself. Apparently it’s possible for a non-conscious central organism processor (or “brain”), to produce a punishment/ reward dynamic for something other than it to experience. I define the sentience here as “consciousness”.

        Still I don’t consider a functional conscious form of computer to exist at this point. Here sentience will merely be an epiphenomenal trait that’s carried along with organism function in general. But I believe that this dynamic was able to evolve to become the purpose driven form of computer by which you’re able to read these words, as well as experience existence in general. Here consciousness exists as an output of a non-conscious brain. The processor (“thought”) interprets conscious inputs (senses, valence, and memory) and runs scenarios about how to make itself feel better by means of muscle function output.

        And why was standard non-conscious central organism processing insufficient? I suspect that under more open environments there were often too many potential contingencies to effectively program for. Note how much trouble our non-conscious robots have trying to function under more open environments. Apparently evolution was able to get around this difficulty by fabricating good to bad personal experiences, and thus purpose driven forms of life.

        There’s an excellent chance that I’m going too deep too fast here however…

      • Wyrd Smythe

        I’m having some difficulty following you, so at this point I mainly have questions…

        “…reality’s first form of ‘computer’, which is to say genetic material.”

        I think we are in sync here, although I do feel there is some poetry in calling the processes associated with DNA “computing.” (One of my questions is, how do you define the term?)

        That said, there is a distinct stored program and execution engine aspect to DNA function that I find fascinating.

        (And an evolutionary leap that I think begs for explanation. Big part of what’s so fascinating. How the hell did RNA evolve? I’ve yet to hear a reasonable account. Mostly a lot of hand-waving about “self-replicating clays.” But last I heard they’ve yet to find a pathway for natural synthesis of one of the four nucleobases required.)

        If you wish to classify the operation of DNA, RNA, and related enzymes, as a form of computing, especially for purposes of discussion, I’m clear on how and why.

        “Would you say that “recipe” is a better way to term genetic function than the “algorithm” term…”

        I don’t think I would. Recipe is such a general word that using it is, I feel, a recipe for misunderstanding.

        A recipe often has stronger focus on the ingredients than the process — quite the opposite of an algorithm, where the concept of “ingredients” may not even apply.

        They do have in common (usually) a set of steps to their process, but “algorithm” far more carries the necessary meanings of computation (the requirements for saving state, run-time selection, and recursion).

        (Incidentally, it’s the difficulty in applying those three requirements to DNA that causes me to say calling it “computation” is a bit poetic, because I define computation in terms of TMs or lambda calculus.)

        “Then the second form of computer that I see isn’t quite as controversial. This is the neuron based central organism processor.”

        Just so I’m clear, we’re talking about brains here, yes? This might be more controversial than you expected. As I’ve said in this post series, I’m in the camp thinking, “The brain is nothing like a computer.” It’s kind of the point of the series. 😀

        For purposes of discussion, I’ll accept that you see brains as another natural machine you classify as “Computer, Type II.”

        And certainly I’m fine with Type III, the metal and plastic machines we make and call computers.

        I do have questions: Do you mean only “digital” versions of these machines (i.e. discrete symbol processors), or are analog computers included under this umbrella? Does the scope of “digital” include an abacus as well as the latest super-computer? If I execute a computer algorithm on paper, is a computer still involved? Is hardware separate from software?

        “The final one is consciousness itself.”

        I’m afraid I find myself lost from about this point on, because I don’t understand what you’re saying. Are you positing consciousness as distinct from brain operation? Are you positing dualism?

        Or are you saying consciousness is just what being inside a working brain “feels” like? (I stopped contributing weeks ago, but am still reading a 600+ (and still growing) comment thread mainly debating the claim that the “hard problem of consciousness” isn’t hard at all, but just an engineering problem. Some of what you wrote seems reminiscent of that, so I ask.)

        Whereas I can see how one might classify DNA, brains, and computer hardware, as three types of “computer,” I cannot understand how consciousness itself can be. I don’t follow the logic. (As you say, “too deep too fast.”) I’m not even sure I’m understanding exactly what you’re saying Computer, Type IV, is.

        Assuming I do follow, four types, DNA, brains, human-made machines, and consciousness, what about them? Those four things all seem quite different to me.

      • Philosopher Eric

        It’s certainly understandable that you’d have difficulties following everything here Wyrd, given that some radical ideas are included. So thanks for your on-point questions. And you do seem to understand a good bit more than most do initially. Of course you’re aware that some go straight into challenges before grasping what they supposedly dispute. To me it seems far more productive to ask questions.

        One of my questions is, how do you define [a computer]?

        I’ve found this useful to define as something which takes input information (possibly chemical), processes it by means of logical operations, and thus produces output function. So when fruit is added to a juicer to produce juice, does this qualify? Well, to me logic-based nuances aren’t sufficiently reflected here. I don’t see “If… then…” sorts of steps where, for example, peaches might be treated far differently from carrots given their separate natures.

        Conversely when a substance enters a cell and interacts with genetic material in a way that outputs associated novel proteins and such, to me it does seem that logical steps must be occurring based upon the nature of what’s inputted to the system. Beyond the “factory” component to this, a “stored program” dynamic seems apparent (as you’ve mentioned). (I like “algorithm” here much better than “recipe” as well.) Before genetic material, can you think of anything in nature that functioned so dynamically? And if “computer” seems too poetic for you personally to endorse in general, can you think of a more appropriate analogy?

        (The evolution of genetic material is something that impresses me as well. But here’s a bit of logic that satisfies me somewhat at least: If an organic material were to naturally replicate itself somehow when exposed to the proper conditions, then this stuff would obviously become less rare as future iterations also replicate themselves. And if there were any subtle changes to such replication over time (as we’d expect), then the changes which hinder future replication should tend to die while the ones that promote such replication should tend to become more prominent. Thus here we have “evolution”. But as bizarre as it might be to us that such a process would end up producing organisms which harbor genetic material, shouldn’t things have gone something like this? What alternative might there be, or at least presuming naturalism?)

        Then regarding the second computer, or the central organism processor which is commonly referred to as “brain” I think it’s best to begin basic — the human brain should be far too evolved to provide us with a good place to start. I don’t know if you recall Mike’s first series of posts on Feinberg and Mallatt’s “The Ancient Origins of Consciousness” (https://selfawarepatterns.com/2016/09/12/what-counts-as-consciousness/ ), but I was fortunate enough to meet him while he was doing these posts. They’ve helped fill some blanks regarding my own theory.

        Consider life before the Cambrian Explosion. Single cellular organisms would do what they do based upon the nature of what their genetic material and general circumstances built them to be. So here such organisms would have some central direction.

        Apparently it was adaptive for multicellular organisms to evolve as well. So in multicellular life which harbor countless individually governed cells that each play their own body part roles, notice that there isn’t yet a central organism processor. No “brains”.

        Now imagine the evolution of a “nerve”, which is to say something that incites unique output function when it detects something that it’s set up to detect. (Of course the organism would already be functional based upon the genetics of individual cells, though now we move into instruction concerning the whole structure.) Then once various nerves evolved to provide their information to a single location rather than directly incite individual output sources of function, it’s here that I think the potential emerged for algorithmic processing of input information to produce “computation” as I’m defining the term. Thus information from all sorts of nerves could be factored together, whether to regulate mechanisms in the body like a heart, or to help decide a direction to move in next. Note that plants don’t have central organism processors and yet function brilliantly for what they do. For organisms in more “open” environments however, I suspect that central organism processing was adaptive. And this is non-conscious function just as our robots aren’t conscious.

        On your point that your posts argue that it’s not useful to define brains as computers, my agreement was given the way that you’ve defined the term. Brains most certainly aren’t “Turing Machines”. But then I hope for the same concession regarding definition for my own arguments. I seek useful analogies, and because I consider them essential to build understandings in general. Why do educated people seem so much more educable? Perhaps because education brings more potential to learn through analogies.

        Since this one is going long I won’t continue on to consciousness in general. We can work on that sort of thing at a more appropriate time. But I will at least provide some abbreviated explanations for your remaining questions.

        Do you mean only “digital” versions of these machines (i.e. discrete symbol processors), or are analog computers included under this umbrella?

        Analog computers are most definitely included. I don’t consider evolution to use “symbols” at all, and therefore I don’t consider it to create any digital forms of life.

        Does the scope of “digital” include an abacus as well as the latest super-computer?

        No I wouldn’t call the abacus digital. And could a human using one function as “computer”? Or the human that writes out computer operations on paper? Well not given those tools specifically, though hopefully we’ll get into the conscious form of function soon enough.

        Is hardware separate from software?

        To me that seems like a useful distinction. And I don’t consider anything non-conscious to “write software” (not to mention the rarity of this sort of activity among conscious life).

        Regarding consciousness, I’m certainly no dualist. I really should set the foundation here before saying too much about what I mean by the term however. And indeed, even from my own definition I’m far less certain that “consciousness” qualifies as a computer. But I do have an extensive model for you to consider regarding such function once we’re ready.

      • Wyrd Smythe

        “It’s certainly understandable that you’d have difficulties following everything here Wyrd, given that there are some radical ideas are included.”

        No, that’s not it. I eat radical ideas for breakfast. 🙂

        “I’ve found this useful to define as something which takes input information (possibly chemical), processes it by means of logical operations, and thus produces output function.”

        That is so general you may, indeed, have to consider the juicer a computer!

        The only distinction between the processes going on in the juicer and those going on with gene expression is the degree of complexity. If you consider the juicer at the same low-level we’re considering genetic machinery, we find similar (albeit simpler) chemical interactions.

        “I don’t see ‘If… then…’ sorts of steps where, for examples, peaches might be treated far differently from carrots given their separate natures. “

        The formal CS term for “If-then-else” constructs is “selection.” The formal definition of “computing” also requires the ability to save state and to either recurse or iterate (or at least the dreaded GOTO).

        To me, in DNA, these seem metaphorical, at best (not existing at worst 🙂 ), but how I see it isn’t the point. I accept for this conversation that you do.

        “…to me it does seem that logical steps must be occurring based upon the nature of what’s inputted to the system.”

        It depends on what’s meant by “logical steps” I guess. If I electrolyse water, is getting oxygen and hydrogen the result of logical steps? Are chemical reactions “logical steps?”

        A “logical step” can mean, informally, “doing the obvious thing required by the circumstances,” or it can have a more precise mathematical definition. For example: A AND B = X, where X is true or false depending on the truth of A and B.

        “And if “computer” seems too poetic for you personally to endorse in general, can you think of a more appropriate analogy?”

        “Machine.” It’s not an analogy, it’s a descriptive term. These things are machines.

        I believe you (and Mike, I think) would define any machine as a computer. (Whereas I don’t.)

        The argument for is that any machine can be said to follow a “program” (implemented by the machine’s hardware) and to execute “logical steps” of that program. One can view the hardware as the stored program, the CPU, and even the system state and iteration, rolled up in one.

        The argument against is the high degree of metaphor involved and the lack of distinction between the “computing” parts. Essentially, under such a broad definition, everything becomes a computer, and the term loses its descriptive power.

        FWIW, I greatly favor strong distinctions and definitions for words because I so appreciate their descriptive power. If I speak of algorithms and computers, no one has to ask me what I really mean. What I mean is what is formally meant by those terms.

        “(…If an organic material were to naturally replicate itself somehow when exposed to the proper conditions,…)”

        (The problem lies in the vagueness of “naturally replicate itself,” which assumes the thing we’re trying to get to. Plus, there seems to be no natural path for synthesis of one of the four necessary nucleobases (the G, I think). The other three can all occur in the organic soup of early Earth sparked by lightning.)

        “Now imagine the evolution of a ‘nerve’,…”

        You don’t need to sell me on the evolution of organic brains. 😀

        We have, as examples, everything from the brainless jellyfish with the most rudimentary of nervous systems, to worms, to bugs, to various mammals, and to us. Much of that evolutionary path is visible in creatures today.

        While I won’t agree the brain is a computer, as with DNA, it’s definitely a unique and interesting machine. I have no problem with that classification.

        “Why do educated people seem so much more educable? Perhaps because education brings more potential to learn through analogies.”

        You’ve mentioned analogies several times, and I’d like to tender my 1/50 of a buck.

        Analogies are great tools for entry-level knowledge. I’d equate them with what you called ‘lecture level’ understanding. The real understanding, the ‘problem-solving level,’ comes with understanding the details. In the sciences, it often means understanding the math, at least a little. It always means being fully conversant with the details.

        There are myriad reasons educated people can learn. They’re often drawn to knowledge and have a thirst for understanding. They’ve often learned the most important lesson: how to learn. And they’ve gained faith in themselves that they can learn. Many are autodidacts, actively seeking knowledge.

        Analogies are nice, but they’re no substitute for real understanding, and they can often lead you down an inaccurate road. Especially in the sciences, one must be very wary of analogies.

        “Analog computers are most definitely included.”

        But an abacus is not? Why not? (To me, an abacus is far more a “computer” than an analog computer.)

        “I don’t consider evolution to use ‘symbols’ at all, and therefore I don’t consider it to create any digital forms of life.”

        We may have different definitions of “symbol” then. The DNA process (in my view) definitely uses symbols. There are the four nucleobases, symbolized C, G, A, & T, and those combine into three-letter codons that specify the amino acids used to build proteins.

        Unlike digital computers as we know them, there’s a lot of analog stuff going on with the chemistry and various potentials, but the DNA system is definitely symbolic processing. That’s part of what makes it so fascinating. There’s a literal code inside each of us that defines our physical being.
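        To make “symbolic processing” concrete, here is a toy sketch (Python, using DNA coding-strand letters and only a handful of codons from the standard table; a deliberate simplification, not real bioinformatics):

            # Toy translation of a DNA fragment: discrete three-base codons (symbols)
            # are looked up to determine amino acids.
            CODON_TABLE = {
                "ATG": "Met",   # start codon
                "TTT": "Phe",
                "GGC": "Gly",
                "TAA": "STOP",
            }

            def translate(dna):
                protein = []
                for i in range(0, len(dna) - 2, 3):      # read in discrete 3-symbol chunks
                    amino = CODON_TABLE.get(dna[i:i+3], "?")
                    if amino == "STOP":
                        break
                    protein.append(amino)
                return protein

            print(translate("ATGTTTGGCTAA"))   # ['Met', 'Phe', 'Gly']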

        I had asked about the distinction between hardware and software:

        “To me that seems like a useful distinction. And I don’t consider anything non-conscious to ‘write software’ (not to mention the rarity of this sort of activity among conscious life).”

        How does this connect with your view that DNA is a computer? Where is its software? Who wrote it?

        If software is so rare, how can there be so many things that are computers?

        “Regarding consciousness, I’m certainly no dualist.”

        Okay. I guess I’ll have to wait to see what you think consciousness is. 🙂

  • Philosopher Eric

    Wyrd,
    Let me ask: why do you think some of your other friends have had difficulty accepting the theme to these posts? I’d enjoy their response to my own assessment of that, but in the end I suspect that they think that you’ve gotten a bit too greedy. Here you’ve taken the “computer” term, and then defined it such that it can essentially only exist as a specific variety of intelligently designed machine.

    Conversely I have no problem accepting the theme to these posts. Why? Well beyond that you didn’t say anything that I consider suspect, this is given the devotion that I have to my first principle of epistemology. It permits the theorist to be perfectly greedy regarding any definition at all. And I can’t blame them (or you) for seeking some kind of “truth” to various humanly fabricated terms. Unfortunately we’ve inherited the convention of asking what is computation, time, life, consciousness, good, and so on. I consider the resulting implicit perspective to exist as academia’s most widespread flaw. Instead I think theorists must formally be afforded the opportunity to construct their terms as they see fit in the quest to convey their positions, whether insightful or idiotic. In order to better found science, I believe that this institution needs to develop various generally accepted principles of metaphysics, epistemology, and axiology.

    (Unfortunately many philosophers today jealously guard their domain as a fundamentally speculative form of contemplation, or even “art”. When someone implies the need for more, I’m sure you’ve heard them shout accusations of “Scientism!”. My four principles of philosophy have nevertheless been developed to potentially found the institution of science more effectively than it is today.)

    I’ve given your “machine” suggestion some thought, and upon reflection I’ll stick with “computer”. The machine term as commonly used simply does not get at what I’m referring to. Originally I think it was meant to describe more complex sorts of things that people create. Here a door is not a machine, nor a person, nor a star. But mechanical typewriters and juicers do represent machines from this perspective. I agree that the function of DNA is far more complex than the juicer, but that’s not my point. In the juicer I don’t see logical steps such that one input substance may be treated quite differently from another in a logic-based capacity. In DNA I do. And I see this as well in the vastly simpler digital timer. Thus I’ve come to consider “computer” the better analogy. Of course it’s all physics in the end, though the question here concerns classifications that make sense to the human. For example we need to classify physical dynamics in all sorts of ways. But per my EP1, since it’s my definition, it’s your obligation to grasp and accept whatever distinction I’m making in order to understand the nature of my arguments.

    I see genetic material as reality’s first form of computer. Next (type II) there is the central organism processor, since there exists the potential for logic-based algorithmic functional output when various forms of input information come together in one place. Thus from around the Cambrian Explosion things could be said to function somewhat in the manner that our robots do. Chronologically the conscious form would then be type III. And quite recently there was the emergence of type IV, the technological form, which provides the example this analogy draws upon.

    My own computer definition does not concern “selection”, “save state”, “recurse”, “iterate” or “goto” that I know of, since I don’t have a functional grasp of them. But electrolyzing water does not seem useful to me to refer to as a logical step in itself based upon the nature of what’s inputted. This might however be an output of computer function. Perhaps peach input would incite such treatment while carrot input would not? The “reaction” term may effectively be associated with “output” by definition.

    By the way, since you’ve gotten into “logical steps” I’m curious what you think about how neuroscientists in general seem to have decided that they observe “AND”, “OR”, and “NOT” gates regarding neuron function? (I wonder why Mike, Steve, and DM didn’t get into this? They’d ask far more proficiently and are certainly welcome to join in this capacity or another.) Do you believe that neuroscientists have essentially been seeing what they want to see? Or perhaps you believe that neurons harbor the supposed constituents to all logic based function, but that this doesn’t get close enough to your own definition?

    One thing about my last reply is that it was submitted well after my bedtime (at 1:43 am!). Though by then I couldn’t quite see straight, I didn’t want to wait yet another day. But beyond random mistakes there was one consequential one. I wrote “organic” in a spot where I meant “inorganic”. So for belated repair, before the emergence of genetic material, all that existed here was “inorganic” (regardless of any lightning based reactions and such — I’m defining all that as “inorganic” as well). If anything on this planet were to be copied somehow such that the copy could then be copied, copies that promote this sort of copying would grow more common than other iterations. So the logic here is that this process must have eventually resulted in what we see today as genetic material. It’s an assessment that could be made about the evolution of “life” anywhere. Tautologies can sometimes be helpful.

    Where I used the term “natural”, I should have been more specific. Apparently we don’t yet know each other quite well enough there. A more descriptive term would be “causal”. In this regard I’m as strong a determinist as they come.

    I believe you (and Mike, I think) would define any machine as a computer. (Whereas I don’t.)

    Now there’s a statement that could get you into some trouble! 🙂 No, that’s not the case for me, as I’ve described above, and I’m quite sure Mike would say that the computer is a relatively recent human invention, though we’ve been building machines for quite a while.

    I’m pleased with your distinction between “analogies” and “lecture level understandings”. Exactly. We take an initial perspective of “it’s kind of like…” and then hone it into a working understanding as we continue exploring. Right. And given the significance of this dynamic, the theorist will need to choose his or her analogies wisely.

    I don’t consider an abacus to be a computer in itself because it doesn’t effectively “do” anything. It just sits there. But if you mean that a human can use one to perform computations, well, I do agree with that. I’ve yet to address the conscious form of computer that would carry out such computation, however.

    Regarding symbols, yes there again we’re getting too far ahead. I consider the human to have long ago evolved a symbolic form of conscious processing (or “thought”) which has proven very powerful. Thus we use our symbols to help us grasp things, such as genetic function. Surely evolution itself, however, remains “the blind watchmaker”. Look mom, no symbols. 🙂

    Regarding software, to me that instrument needs to be left entirely for the form of computer which a language-equipped creature builds. As you’ve said, DNA seems to function as a program, even though there’s apparently nothing “soft” about such function. I’d say the same regarding neuronal function. Evolution doesn’t build things and then add software in order to institute various apps in those structures, as we do. Here it seems to be “hardware all the way!”

    Well darn, once again I’ve not gotten into consciousness. Hopefully soon. But I’ll never tell you what consciousness is. That would violate my first principle of epistemology. No such definition should exist. Instead I’ll provide what I consider to be a useful definition. Surely our soft sciences will at least some day develop such an understanding? It may be, however, that philosophy will first need to develop some generally accepted principles from which to better found the institution of science. Of course I’m ready there as well.

    • Wyrd Smythe

      I’m breaking this into two replies. This one touches on topics directly related to your four types of “computer.” The second is various sidebar topics I didn’t think directly related.

      “In DNA I do.”

      What specifically do you perceive DNA does in terms of “input substance” and “logical steps”? I’m asking about your precise understanding of the DNA “computer” (because it will help me understand why you include some things and not others).

      I accept your category “Computer, Type I” (I have from the beginning). I’m exploring your notion of it so that I can fully understand it.

      Likewise, I accept your other three categories, and am exploring what their membership functions are. Three have fairly obvious membership functions: DNA, organic brains, modern computing devices. You have yet to cover your view on consciousness.

      “I don’t consider an abacus to be a computer in itself because it doesn’t effectively ‘do’ anything. It just sits there. But if you mean that a human can use one to perform computations, well, I do agree with that.”

      But doesn’t a computer just sit there unless a human uses it? What about an electronic calculator? What about the Babbage engine?

      You said analog computers are computers per your definition. How about a slide rule?

      “Regarding symbols,…”

      I’m not sure we’re on the same page on what I meant by “symbolic processing.”

      I meant a system that uses discrete symbols, as opposed to some form of analog processing. Discrete symbol processors include digital computers, digital music, abacuses, even thermostats. In contrast are analog computers, such as slide rules, resistive networks, and tubes of liquid.
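      For instance, a thermostat reduces a continuous quantity to one of two discrete symbols. A trivial Python sketch (the 20° setpoint is just an arbitrary illustration):

          def thermostat(temp_c, setpoint=20.0):
              # Continuous temperature in; one of two discrete "symbols" out.
              return "HEAT_ON" if temp_c < setpoint else "HEAT_OFF"

          print(thermostat(18.5))   # HEAT_ON
          print(thermostat(21.0))   # HEAT_OFF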

      The general operation of DNA expressing a gene does use symbols, as I mentioned. The reason I asked about symbols is that the membership function for your “Computer, Type IV” (chronologically) isn’t clear to me. It does include analog computers (including slide rules?), but not a symbolic processor like an abacus.

      The “just sits there” criterion isn’t clear to me; don’t they all? Alternatively, aren’t they all human-made devices that do what they’re designed to do, with human interaction giving that behavior meaning?

    • Wyrd Smythe

      “I suspect that they think that you’ve gotten a bit too greedy.”

      What is “greedy” about formal definitions of terms?

      Talking about calculation in any form is talking about Computer Science, the formal study of calculation. That study predates actual computers. (There’s a common phrase for new CS students: “Computer Science isn’t about computers any more than astronomy is about telescopes.”)

      Of course someone can define terms their own way, but I feel it adds an unnecessary translation layer. The whole point of formal definitions is to enable precise, transparent communication of ideas.

      “Here you’ve taken the “computer” term, and then defined it such that it can essentially only exist as a specific variety of intelligently designed machine.”

      No. Let me be clear about what I mean: A “computer” is something that “calculates.” Modern use implies a device, but the term dates back to the 1600s, when it meant a person who “calculates.” During WWII, Bletchley Park employed many such “computers,” and NASA employed them into at least the 1970s.

      The important definition is “calculate,” and it’s well-defined in computer science. Which, as I said, predates actual computers by centuries.

      “I have no problem accepting the theme to these posts.”

      By “theme” do you mean my conclusions or just the terms of my arguments?

      I believe Mike and others accept the terms; they just don’t agree with the conclusions. Which is entirely fair. And expected; we disagree on a key premise. Naturally we come to different conclusions.

      “It permits the theorist to be perfectly greedy regarding any definition at all.”

      So you’ve mentioned. When it comes to science, I don’t agree. There is a vocabulary, both of words and concepts, well-equipped for technical discussion. It isn’t so much a matter of “truth” but of a common and precise language.

      But as I’ve said all along: I’m willing to try to follow your definitions, so I’d rather just talk about the content of your views. Trust me to keep up or ask questions.

      “Here a door is not a machine, nor a person, nor a star. But mechanical typewriters and juicers do represent machines from this perspective.”

      Okay. (Out of curiosity: How about the hinges and latch on the door? How about the fusion engine in the heart of a star?)

      You don’t think a person can be a “well-oiled comedy machine”? 😀

      Although, seriously, the metaphor “body as machine” is well-established and common for good reason.

      “My own computer definition does not concern ‘selection’, ‘save state’, ‘recurse’, ‘iterate’ or ‘goto’ that I know of, since I don’t have a functional grasp of them.”

      They are fundamental to how calculation is defined. This blog post of mine might help.

      “I’m curious what you think about how neuroscientists in general seem to have decided that they observe ‘AND’, ‘OR’, and ‘NOT’ gates regarding neuron function?”

      Those are fundamental logical operations that can be seen in how neurons sum inputs and fire or don’t fire. Electronic logic gates have few inputs and are strictly binary, whereas neurons have many inputs and analog aspects. (I wrote this post about logic gates if you’re interested.)

      It may not have been mentioned because it’s so basic. It’s what neurons do. Each one is a (super-complex) logic gate (with analog properties).
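      A crude sketch of what I mean (Python; a single McCulloch-Pitts-style threshold unit, nowhere near a real neuron’s complexity):

          def fires(inputs, weights, threshold):
              # Sum the weighted inputs and "fire" if the total reaches the threshold.
              return sum(w * x for w, x in zip(weights, inputs)) >= threshold

          # With suitable weights and thresholds, that one summing rule acts like gates:
          AND = lambda a, b: fires([a, b], [1, 1], 2)
          OR  = lambda a, b: fires([a, b], [1, 1], 1)
          NOT = lambda a:    fires([a], [-1], 0)

          print(AND(1, 1), OR(0, 1), NOT(1))   # True True False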

      “So for belated repair, before the emergence of genetic material, all that existed here was ‘inorganic’…”

      Okay. (For the record, “organic chemistry” is formally defined as the chemistry of carbon compounds. You will need to explain your personal definition when you talk about this.)

      “I’m as strong a determinist as they come.”

      Okay.

      “I’m pleased with your distinction between ‘analogies’ and ‘lecture level understandings’.”

      You understand I saw them as similar, right? And generally inadequate, or entry-level at best, for science and technical discussion?

      “Here it seems to be ‘hardware all the way!'”

      Yes. What I was getting at is that when talking about natural “computers,” if we mean they “calculate” then we mean they have “software” but that it’s embodied in the “hardware” architecture.

      To the extent nature creates anything one can call a “computer” it creates Turing Machines (TM) — mechanisms with a single purpose. Modern electronic computers are Universal Turing Machines (UTM).

      The key distinction is that a UTM loads separate “software” and is general purpose. That’s what we mean by “von Neumann architecture” and “stored program.”

      Note that “calculation” is defined in terms of a TM. The UTM, let alone von Neumann architecture, is just a refinement.
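      As a loose illustration of that distinction (Python, not a formal TM construction): a single-purpose machine has its “program” baked into its structure, while a universal machine takes the program as just more input.

          def doubler(tape):
              # Single-purpose "machine": the rule is part of the hardware (the function body).
              return [2 * cell for cell in tape]

          def universal(program, tape):
              # "Universal" machine: the program arrives as data and is interpreted.
              ops = {"double": lambda c: 2 * c, "inc": lambda c: c + 1}
              for step in program:
                  tape = [ops[step](cell) for cell in tape]
              return tape

          print(doubler([1, 2, 3]))                        # [2, 4, 6]
          print(universal(["double", "inc"], [1, 2, 3]))   # [3, 5, 7]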
