Imagine the watershed for a river. Every drop of water that falls in that area, if it doesn’t evaporate or sink into the ground, eventually makes its way, through creeks, streams, and rivers, to the lake or ocean that is the shed’s final destination. The visual image is somewhat like the veins in a leaf. Or the branches of the leaf’s tree.

In all cases, there is a natural flow through channels sculpted over time by physical forces. Water always flows downhill, and it erodes what it flows past, so gravity, time, and the resistance of rock and dirt sculpt the watershed.

The question is whether the water “computes.”

It can be viewed purely as a matter of definition. Very much like the noisy forest tree, it all depends on what you mean by “sound.”

Or by “computes.”

Computer science offers one definition, a rigorous one: A computation is anything that can be described by a Turing Machine (TM).

Period.

In order to call something a computation, there must be a TM that describes that computation. (Period.)

At a first approximation, under computer science, the following are essentially identical: computation, Turing Machine, Lambda calculus, Finite State Machine (FSM), flowchart, algorithm. (Strictly speaking, an FSM is less powerful than a TM, but they belong to the same formal family.)

(The ringer in that list is the first term. It’s the only one people debate.)

I prefer the CS definition of computing because it removes a lot of ambiguity about what can (in the strictest sense) be said to be a computation. In particular, it removes debate over that first term.

It does beg for a word to mean what rivers do, or what orbiting things do, or even just what thrown balls do. Or airplane wings, or thermometers, or audio amplifiers,…

Unfortunately, “calculate” is just a synonym for “compute.” A word I like better is “evaluate” (but it doesn’t trip off the tongue well).

Physical systems evaluate the conditions of their environment and react in a directly causal proportional way.

Two key words there are direct and proportional.

§

There is a view that broadens the meaning of computation such that (what I’d call) evaluation is included.

On this account, depending on how broad the view is, various things that evaluate and react, or that process inputs into outputs, are computing and are thus computers.

So — by definition — planets compute their orbit, water computes its path to the sea, and a transistor radio computes music from radio waves. A slide rule is also a computer. So is an old-fashioned camera.

Therefore, a full-adder logic circuit computes its output.

§

If one takes a broader view of computation, that conclusion does follow.

If one takes the CS view of computation, it does not.

Simple as that! 😀

Today’s post goes over why CS says not, but ultimately, it comes down to one’s definition of computing.

I favor the CS definition because it removes ambiguity, but also because I see the broadening as motivated by computationalism’s desire to see the brain as a “computer.”

(Because, if the brain is a computer, it shows computers can be conscious.)

I think computationalism can still approach consciousness by simulation, emulation, or replication. None of those require the brain be a computer. They just seek to model a physical system.[1]

A conflict comes from the proposition that, if the mind can be run as an algorithm, it necessarily must be one already. Which suggests the brain has to be some kind of computer.

But the brain doesn’t look or work anything like a computer, so the definition must be broadened.

§

Definitions aside, I want to show you a few interesting things about modeling a full-adder with software.

I think the full-adder turns out to be an interesting example, because physical ones are a reification (an implementation) of an intentional abstract mathematical object.[2]

Because the full-adder is a mathematical abstraction, there are two basic approaches: model the abstraction itself, or model (simulate) a physical reification.

As the examples show, simulating physical reality is always more complicated than modeling an abstraction.

For a sense of what I mean, compare modeling the functionality of a simple four-function calculator to modeling a simulation of all its physical behavior.

§

The full-adder is easier to discuss, and it has crucial differences that, under the CS definition, make the calculator definitely a computer while the full-adder definitely is not.

To understand why, we can look at a physical implementation:

Essentially, the circuit is a network of transistors (inside the logic gates), and current flows not terribly unlike the way water flows downhill. Whether it follows a given path depends on the physical forces in play.

(In a logic circuit, in some cases, beavers have built a dam across some paths and blocked the water. But sometimes water comes from another direction and knocks the dam away.)

Here’s another implementation of the same logic:

This circuit is interesting because it uses just one kind of gate (NOR). It makes the idea of simulating it easier; we just need that one gate.
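In code, that one gate is a one-liner, and because NOR is functionally complete, NOT, OR, and AND (and so the whole full-adder) can be wired from it. A minimal Python sketch of the idea (my own construction, not the diagram’s exact netlist):

```python
def nor(a, b):
    # The single gate type: output is 1 only when both inputs are 0.
    return 0 if (a or b) else 1

# NOR is functionally complete; the other gates fall out of it:
def not_(a):
    return nor(a, a)

def or_(a, b):
    return not_(nor(a, b))

def and_(a, b):
    return nor(not_(a), not_(b))
```

Chaining enough of these reproduces the full-adder logic, which is why simulating the circuit only requires simulating that one gate.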

Both logic circuits implement the logic table shown at top. That table is one way of expressing the full-adder abstract logic. Here’s another:

In symbols: S = A ⊕ B ⊕ Cin, and Cout = (A ∧ B) ∨ (Cin ∧ (A ⊕ B)). Those two logical expressions also express the full-adder abstraction; they are one way to generate the table. (The ⊕ symbol means XOR, the ∧ symbol means AND, and the ∨ symbol means OR.)
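Those expressions translate almost directly into code. A minimal sketch, assuming the standard full-adder expressions S = A ⊕ B ⊕ Cin and Cout = (A ∧ B) ∨ (Cin ∧ (A ⊕ B)) (the variable names are mine):

```python
def full_adder(a, b, cin):
    # In Python, ^ is XOR, & is AND, | is OR (on 0/1 values).
    s = a ^ b ^ cin                   # S = A xor B xor Cin
    cout = (a & b) | (cin & (a ^ b))  # Cout = (A and B) or (Cin and (A xor B))
    return s, cout
```

Evaluating it for all eight input combinations generates exactly the truth table above.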

Logic Circuit #1 (above) directly reifies those two expressions. There is a one-to-one correspondence between the expressions and the circuit.[3]

§

Crucially, current flows like water through the gates (reacting to inputs), and the logical expressions have an immediate value (for any given inputs).

The truth table up top also shows that, for any given inputs, there is an immediate output.

The key point is that no secondary calculations are required. The system reacts directly (and proportionally) to physical forces (in an analog way).

In a sense, “tugging” on an input physically causes an output to “wiggle.”[4]

§

When it comes to modeling a mathematical abstraction, such as a full-adder, we can model the abstraction, or we can model a physical reification.

Generally speaking, a software model of an abstraction will be less complex than a software model of a physical system.

A key reason is that it’s usually easy to model the entire abstraction in a “perfect” way (that is, with full fidelity). With a physical model, we have to choose what level of the physics to model.

The lower the level, the more complex the system (usually), so the crucial trade-off is: How low do we have to go to faithfully emulate the system?

The full-adder is simple enough that we can have a reasonably simple software model for the logic gate circuits above. To replicate the properties of a full-adder, we don’t need to simulate the physics of the circuit so much as its logic.
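For instance, a gate-level model along the lines of Logic Circuit #1 can be this small (a sketch; the gate network here is the standard two-half-adder arrangement, which I’m assuming matches the diagram):

```python
# Model the logic of each gate, not the transistor physics.
def XOR(a, b): return a ^ b
def AND(a, b): return a & b
def OR(a, b):  return a | b

def full_adder_circuit(a, b, cin):
    p = XOR(a, b)     # partial sum from the first half-adder
    s = XOR(p, cin)   # final sum
    g = AND(a, b)     # carry generated directly by a and b
    q = AND(p, cin)   # carry propagated through cin
    return s, OR(g, q)
```

That level of modeling is enough to replicate the full-adder’s properties.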

§

To make this more concrete, I wrote three posts for my computing blog:

The first one features five different software models (code routines) of the full-adder abstraction.

The second and third each feature simple software models of physical full-adders, specifically Logic Circuits #1 and #2 (above).

Note that, because the physical instances reify an abstraction, their design severely constrains their operation. That’s why they operate in “digital” fashion.[5]

This makes them much easier to model than a truly analog system! (In truth, the two examples really model the logic diagrams rather than physical circuits. Modeling the actual electronics is even more complicated.)

§

The main thing to notice is how short the examples in the first post are. These model the abstraction directly.

The first one uses a truth table and just looks up the answer. The second one implements the logical expressions at a code level. The third and fourth model the mathematics of adding bits.
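The posts’ actual routines aren’t reproduced here, but sketches in the same spirit show how short these approaches are (my code, with my own names):

```python
# Model #1 flavor: a truth table; the "computation" is a pure lookup.
# (a, b, cin) -> (sum, cout)
TABLE = {
    (0, 0, 0): (0, 0),  (0, 0, 1): (1, 0),
    (0, 1, 0): (1, 0),  (0, 1, 1): (0, 1),
    (1, 0, 0): (1, 0),  (1, 0, 1): (0, 1),
    (1, 1, 0): (0, 1),  (1, 1, 1): (1, 1),
}

def full_adder_lookup(a, b, cin):
    return TABLE[(a, b, cin)]

# Models #3/#4 flavor: the mathematics of adding bits.
def full_adder_arith(a, b, cin):
    total = a + b + cin
    return total % 2, total // 2   # (sum bit, carry bit)
```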

The fifth one is a little unusual. It implements the state model of the full-adder. The state table makes it a bit bigger than the rest, but the code is still very simple.
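One way to read a “state model” (my reading, which may differ from the post’s actual fifth routine) is as a serial adder: a two-state machine whose state is the carry bit, driven by a transition table:

```python
# (state, (a, b)) -> (next_state, sum_bit); the state is the carry.
STATE_TABLE = {
    (0, (0, 0)): (0, 0),  (0, (0, 1)): (0, 1),
    (0, (1, 0)): (0, 1),  (0, (1, 1)): (1, 0),
    (1, (0, 0)): (0, 1),  (1, (0, 1)): (1, 0),
    (1, (1, 0)): (1, 0),  (1, (1, 1)): (1, 1),
}

def serial_add(a_bits, b_bits):
    # Add two bit sequences, least-significant bit first.
    carry, out = 0, []
    for pair in zip(a_bits, b_bits):
        carry, s = STATE_TABLE[(carry, pair)]
        out.append(s)
    return out, carry
```

The table is bigger than the other models, but the driving code stays trivial.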

Writing about the two gate simulations took a post each because, even modeling the relatively abstract gates, the model is more involved. For one thing, the network of gates has to be modeled in addition to the gates.

Further complicating things, there is a need to model the real-time flow of current (or abstract logic) as the gates evaluate their inputs.

§

On the CS view, a full-adder is not a computation, because, as an abstraction, it evaluates to an immediate answer, and, when reified, has analog physical flow characteristics, not computational ones.

But a calculator is, because no expression evaluates to an immediate answer, because its operation requires steps, and because its operation requires a specifically designed structure (the algorithm).

The objection may be that the full-adder has a bunch of algorithms that implement it, so why isn’t it a computer?

One answer: Because its fundamental definition isn’t algorithmic.

Another answer: Being able to model something with an algorithm doesn’t make the thing being modeled algorithmic. (E.g. Weather.)

§

The real bottom line here is that:

1. On the CS view, a full-adder isn’t a computer.
2. The brain is very similar to a full-adder in operation.

Certainly much more similar than it is to a computer!

Stay computational, my friends!

[1] “If you build it, they will come.” There can be a kind of cargo cult thinking to the idea that a system that appears conscious will be conscious.

[2] As far as I know, there are no full-adders in nature. I’d love to know if I’m wrong about that. (DNA, maybe? Quantum physics?)

[3] The expression for Logic Circuit #2 is, of course, much more complicated:

For the full expression, recursively expand the z, y, and x terms in the bottom two expressions.

[4] Similar to how tweaking a sensory neuron in your toe physically causes a matching neuron in your brain to light up. There are no logical process steps occurring.

[5] (technically: saturated transistor fashion).

The canonical fool on the hill watching the sunset and the rotation of the planet and thinking what he imagines are large thoughts. — Wyrd Smythe

#### 12 responses to “Full-Adder “Computing””

• SelfAwarePatterns

I think repeated use of the word “period” is very convincing, Wyrd. You should use it more often 🙂

The idea of systems evaluating things is interesting. We can design simple evaluation systems (such as the ubiquitous example of a simple thermostat) without being too tempted to see it as computation. But it seems like anytime we need to concentrate a lot of evaluation into a small package, it ends up being a computer.

So yes, brains are evaluation systems, but so, it seems, are computers. Indeed, when I look at the details of how a computer calculates, I see a series of evaluations, evaluations that are engineered to be binary, but still evaluation. Which, since computers are themselves physical systems, makes sense.

Both nervous systems and computers can be seen as concentrated evaluation, although I’ve used the term concentrated causality before. That’s the trouble with alternate descriptions for nervous system activity. It’s very difficult to find an effective one that can’t also be applied to computers.

I don’t think we should find this surprising. After all, computers were developed to help with, and when possible take over, cognitive tasks.

• Wyrd Smythe

“But it seems like anytime we need to concentrate a lot of evaluation into a small package, it ends up being a computer.”

But not all small packages, surely. Photocells and all manner of sensors, including various chemical and molecular sensors, come in tiny packages but aren’t (CS) computers.

We do find computers hugely useful, no question. (I’ve always said digital was one of humanity’s “discovery of fire” level changes.)

“So yes, brains are evaluation systems, but so, it seems, are computers.”

Sure, in a general sense, but not as I’m using the term. I’m distinguishing between algorithmic systems (computers) and evaluation systems (physical, analog, proportionally causal). I’m using a specific meaning of “evaluation,” not a general one. (As in, “He evaluated his chance of hitting the ball.”)

So, no, computers absolutely are not evaluation systems. But the brain is.

“Indeed, when I look at the details of how a computer calculates, I see a series of evaluations,”

Yes, exactly. A series of steps — steps not in any way related to the original analog inputs. A process completely decoupled from the physical causality.

How is that not a very different process?

“After all, computers were developed to help with, and when possible take over, cognitive tasks.”

Well, for a very weak definition of cognitive, maybe.

• SelfAwarePatterns

“How is that not a very different process?”

So, I’ll admit that I probably don’t understand the strictly specific way you’re using “evaluation” here. But each logic gate “evaluates” whether it will propagate its incoming signal(s). As does each neuron. In both cases, signals are selectively propagated, often in recurrent conditions. So we have sequence, selection, and iteration.

Yes, traditional logic gates only operate on a couple of incoming signals, whereas neurons have thousands. But that’s more a limitation of technology, and engineered discreteness. (I know you don’t think analog computation is computation. Sometimes this feels like arguing whether Pluto, a dwarf planet, is a planet, a debate I usually try to avoid.)

What if we qualified the term? Like analog computation, neural computation is different than digital computation? Or is the word “computation” simply too objectionable?

• Wyrd Smythe

“So, I’ll admit that I probably don’t understand the strictly specific way you’re using ‘evaluation’ here.”

Those old “phones” we made as kids, with string and two empty tin cans. Air vibrations from our voice made the can vibrate, which made the string vibrate, which made the other can vibrate, which made the air on the other end vibrate. A chain of proportional physical causality. Analog.

A cell phone converts those air vibrations to numbers, which are passed through various systems in various forms (various protocols) until, ultimately, at some other phone (even played back as a voice mail), the numbers are converted back to air vibrations. A series of numerical steps, no chain of physical causality. Digital.

Or an old record player. The grooves physically move the stylus, which generates voltage in coils, which is amplified and sent to speakers. Again, a chain of proportional physical causality. Analog.

Versus the CD player. A laser head reads binary numbers off the disk, which are processed as numbers until, at the end, a DAC turns the numbers into voltages for the speaker.

Evaluation: proportional physical causal chain. Computation: crunching numbers.

How are those not utterly different?

“But each logic gates ‘evaluates’ whether it will propagate its incoming signal(s).”

Certainly. The point of this article was that logic gates are, as you suggest, analog devices. Highly biased to ensure hysteresis in their operation, but just signal-processing transistors, nevertheless. (So are neurons.)

But in a computer, logic gates are latched by clock signals. All the analog flow behavior is designed out. Signals wait for the clock to latch them into the next stage.

I’m going to talk more about this in tomorrow’s post, but in the simulation of the all-NOR circuit, going from input [0,0,0] to input [0,1,1] (applying the inputs serially) passes through over 100 transitory states before the output [1,0] is stable. A computer waits for that stable output, only caring about the result of the full add.
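The NOR netlist itself is in the linked post, but the settling behavior is easy to sketch with unit-delay gates: every sweep, each gate re-evaluates from the current wire values, and we count sweeps until nothing changes. (This toy uses the XOR/AND/OR circuit, so the counts are much smaller than the all-NOR version’s, but the transitory states are the same effect; the code is mine, not the post’s.)

```python
def settle(wires, gates):
    # One sweep = one unit gate delay: all gates evaluate simultaneously
    # from the current wire values. Repeat until the network is stable.
    sweeps = 0
    while True:
        new = {out: fn(*(wires[i] for i in ins)) for fn, ins, out in gates}
        if all(wires[o] == v for o, v in new.items()):
            return sweeps
        wires.update(new)
        sweeps += 1

# Logic Circuit #1 as a gate network: (function, input wires, output wire).
GATES = [
    (lambda a, b: a ^ b, ("a", "b"),   "p"),
    (lambda a, b: a ^ b, ("p", "cin"), "sum"),
    (lambda a, b: a & b, ("a", "b"),   "g"),
    (lambda a, b: a & b, ("p", "cin"), "q"),
    (lambda a, b: a | b, ("g", "q"),   "cout"),
]

# Settled at [0,0,0], then apply [0,1,1] and watch the wires ripple.
w = dict(a=0, b=0, cin=0, p=0, g=0, q=0, sum=0, cout=0)
w.update(a=0, b=1, cin=1)
sweeps = settle(w, GATES)
print(sweeps, w["sum"], w["cout"])  # transitory sweeps before sum=0, cout=1
```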

But what if consciousness depends on those transitory states? As with the NOR circuit, transitory states would be in play in the brain as signals trickle through. (Which goes back to an issue I raised a while back: How does one even define “brain states” in order to think about an FSM representing the mind?)

Further, more importantly, the numerical processing in the computer has no connection with any analog inputs or outputs the system processes. My pushing a key does not “wiggle” something inside the computer. The computer is repeatedly scanning the keyboard to see if keys are pressed. (Tweak a toe neuron, and it “wiggles” something in the brain directly.)

“Sometimes this feels like arguing whether Pluto, a dwarf planet, is a planet”

The tree in the forest, Pluto the planet, computation. These seem all easily resolved once we define what we’re talking about. (The confusion on these issues is a big part of why I mistrust interpretation as a tool for analysis.)

The tree is easily answered by what you mean by sound. Pluto, per the IAU definition, is clearly a dwarf (planet). In the minds and hearts of many (very much mine), it is now and forever a planet, if for no other reason than it’s grandfathered in. It’s just plain mean to take away its planet status after teaching all those school children “My Very Educated Mother Just Served Us Nine Pizzas.” Now they’ll be forever wondering what Mother served!

(If one supports the IAU definition for Pluto, should one accept the CS definition of computation? Does the expert community directly associated with a topic get to decide? 😉 )

“What if we qualified the term? Like analog computation, neural computation is different than digital computation? Or is the word “computation” simply too objectionable?”

I’m fine with that. I’m a lot less concerned with words (labels) than the concepts behind them. My issue with brain “computation” is the conflation, so distinguishing them is exactly what I’d like.

As we see from neural network behavior, once a system gets seriously complex it can do complex things. I just think it’s important not to conflate what the brain does with what (digital) computers do. They are worlds apart, and it’s only from the outside they seem so similar.

• SelfAwarePatterns

“A series of numerical steps, no chain of physical causality. Digital.”

I’m assuming you don’t mean this literally since a digital system is still a physical one. There is a chain of physical causality going into it, through it, and out of it.

Certainly there is a translation. But there is in nervous systems too. Translations between how sensory organs process information and the axons going to the brain (transduction), and translations between chemical synapses and action potentials. Translations abound in both types of systems.

And even though the brain has analog aspects, it’s also widely regarded as having digital ones. And the signalling operates within pretty narrow limits. It has a definite protocol feel to it, with action potentials firing when the gradient reaches -55 mV, spiking to 40 mV, before settling back to the resting -70 mV.

The brain also appears to operate according to a clock of sorts, although the tempo varies. We call them brain waves, the synchronization of which may be coordinated by the claustrum. Information discrimination in a neuron comes down to whether it’s firing more or less often than the wave.

On Pluto, my attitude is that it is what it is. It’s a very different body from the other things we call a “planet”, but then the rocky planets are very different from the gas giants. Ultimately I tend to think of anything spherical that orbits the sun as at least planet-like, but that also includes Ceres, Eris, and similar objects.

Eric and I had a conversation a while back about the power of prefixing someone else’s definition of a word, so you can accept it for discussion purposes without appearing to endorse it as the one true definition.

• Wyrd Smythe

“I’m assuming you don’t mean this literally since a digital systems is still a physical one.”

I mean it literally, but for some reason you don’t seem to see what I’m pointing at. Please give some thought to those comparative examples I gave.

Are you really arguing the chain of physical causal events — voltages that vary in concert with the sound — from record stylus to speaker is no different from the stages of numerical processing — which has no connection with the sound — between CD and speaker?

Bishop is right (and I think you agree on this point) that computation is observer-relative in what the data means. The truth table was an unfortunate example. A far better one would be looking at a series of bytes as either ASCII or EBCDIC or one of the many different schemes invented for Asian languages. (Those bytes could equally be video, sound, executable code, or garbage, but even being told it’s just text doesn’t help.)

The bit patterns of a computation have no connection to their semantics. In an analog system, the data is the semantics.
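A concrete miniature of that point, using Python’s built-in codecs (cp037 is an EBCDIC code page):

```python
data = bytes([0xC1, 0xC2, 0xC3])

# The same three bytes under two interpretations:
print(data.decode("cp037"))    # EBCDIC reading: ABC
print(data.decode("latin-1"))  # Latin-1 reading: ÁÂÃ

# Nothing in the bytes says which reading is "right"; the semantics
# live entirely in the interpretation brought to them.
```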

“Translations abound in both types of systems.”

I agree. Stylus movement to voltage to speaker-driving current to air vibration. Or pressure to a tactile neuron to the signal along its length to the chain of events triggered in the brain. All these are flow, and flow can have translations.

(I was just watching a PBS thing the other night that mentioned how one of the great inventions was turning back-and-forth piston motion from early steam engines to circular motion that could accomplish so much more — like driving vehicles or industrial engines. That’s another translation.)

Translation is not a determining factor. What is: analog data is semantic (proportionally related to what caused it), while numeric data is not.

“And even though the brain has analog aspects, it’s also widely regarded to also have digital ones too.”

While the reverse, other than I/O, isn’t true for actual computers. They are entirely, and decidedly, digital. (A big problem for further miniaturization is that, at those scales, it’s getting very hard to maintain that digital behavior. Not doing so is catastrophic, of course.)

“The brain also appears to operate according to a clock of sorts, although the tempo varies.”

I think that electrical activity may be important, but it’s not the clock-latching behavior I described. Synchronization of physical systems isn’t uncommon at all. Fireflies synchronize. When a stadium of people start stomping, clapping, or chanting together, that turns out to work by similar principles.

But to mis-quote Obi Wan, “Those aren’t the synchronizing clocks you’re looking for.”

“Ultimately I tend to think of anything spherical that orbits the sun as at least planet-like, but that also includes Ceres, Eris, and similar objects.”

Heh, yeah, that was part of the problem, wasn’t it? We’d end up with a hundred planets, and what acronym would the school children have to learn then?

“Eric and I had a conversation a while back about the power of prefixing someone else’s definition of a word, so you can accept it for discussion purposes without appearing to endorse it as the one true definition.”

Eric and I had a similar conversation; I’m familiar with his view.

• SelfAwarePatterns

I understand your analog points. I hope you understood mine: they don’t apply to the nervous system. The signalling there is not proportional, in the way you discuss, to the stimulus or to the muscle action.

Although in both digital computers and brains, the system can decide to ramp up, with increasing clock rate in a computer, and adrenaline driven increases in firing rates in the brain.

On ASCII and EBCDIC, a neural pattern in the brain, in isolation from any of its connections, has no inherent meaning. In principle, a radically different peripheral nervous and glandular system could give them completely different meanings than what we see in our brain. In both cases, their meaning derives from their relation with their I/O systems and overall environment.

On the problems with digital miniaturization, another issue is power consumption. Until that article on neuromorphic computing, I didn’t realize the advantages an analog-digital hybrid has in that area.

• Wyrd Smythe

“The signalling there is not proportional in the way you discuss to the stimulus, or to the muscle action.”

Ah, man, it’s like you keep focusing on bits of syntax and completely ignoring the semantics of what I’m saying. 😦

Okay, just to make sure I do understand you, I’m hearing you say tweaking a neuron in my toe and wiggling a neuron in my brain isn’t analog because neurons fire or not? (Let’s ignore that firing rate can vary and that some believe even the shape of the pulse carries information.)

Here’s one more analogy to add to the stack: Do you see a difference between my signalling you via a rope between us that I can tug, and my sending you a text message? Flow versus steps.

I’m saying the toe tugs a rope attached to the brain. It doesn’t send it a text message (e.g. a chemical messenger released into the blood stream, possibly with intermediate processing, but eventually letting the brain know it’s got mail).

“…increasing clock rate in a computer…”

Or more gas to an engine. Systems can be sped up. That’s not a strong argument that they work the same.

Again, the point about clocking is the latching behavior.

“On ASCII and EBCDIC, a neural pattern in the brain, in isolation from any of its connections, has no inherent meaning.”

I agree. Data patterns without context are just noise. (Like a ZIP file without its dictionary.)

“In both cases, their meaning derives from their relation with their I/O systems and overall environment.”

Yes, but not entirely external. The data also interacts with itself internally according to system rules. One can make a lot of deductions by watching what the data does.

(Some of the weirder attack methods use really strange ways of doing just that. There are methods of deducing things about databases just by detecting the size of the (encrypted) queries and responses.)

That applies to interpreting the AND table versus interpreting an actual AND gate in some logic circuit. Given some degree of context, even internal context, it’s possible to deduce things about the system. The larger the context, the more can be deduced.

On a brain level, we can see that signals converge on neurons, we can see synapses work as connectors, and that some inhibit while others excite, and we can see that neurons connect to other neurons.

So I think there is meaning within the system to be discerned.

“On the problems with digital miniaturization, another issue is power consumption.”

Especially when you consider that modern chips work by short-circuiting. Needs a whole new approach; spiking seems an aspect of that. (Although, with CMOS gates, there’s no current flow to speak of, so maybe the spiking has other important aspects? Haven’t looked into that much.)

• SelfAwarePatterns

Yeah, I fear we might be looping ever tighter here.

A signal from your toe, as it propagates up the axon to the spinal cord and to the brain, doesn’t vary in intensity. It propagates the way it is going to propagate at the standard speed and intensity it always propagates.

Intensity can be signaled by how often it transmits, but I don’t see why that couldn’t be true in digital systems too. (Indeed, I got hundreds of database error emails this morning from a malfunctioning application at work, obviously signaling that something was intensely wrong.) It can also be signaled by the number of transmitting axons, which again I can see happening in digital systems.

Now, if the toe receives damage, that does cause nociceptor nerves to fire, which run parallel with the somatic ones. Their signals propagate faster and receive priority processing in the CNS, for obvious reasons. But even in their case, the action potentials propagate at the speed and intensity they’re going to propagate, with again, intensity being signaled by the frequency and number of axons.

But the variations in feeling aren’t because the toe tugs harder or softer on the nerve axon.

Interestingly, the signals from the somatic receptors are subject to habituation, which means they gradually fire less frequently for the same repeated stimulus, which is one of the reasons you’re not constantly aware of your clothes touching your skin, or the pressure of your body against the chair. Of course, change the stimulus and the frequency kicks up again.

• Wyrd Smythe

“Yeah, I fear we might be looping ever tighter here.”

My hope ever is: either spiraling in on an understanding of separate axioms or (even better) spiraling in on some kind of consensus or agreement.

“But the variations in feeling aren’t because the toe tugs harder or softer on the nerve axon.”

I agree, without reservation, with every word of your comment! (Except maybe the looping. 🙂 )

The only point I’m making here is this: It’s not about the toe tugging harder or softer. It’s about the toe tugging, period.

I think you got hung up on proportional. In this case it applies to each transformation along the way. Physical pressure causes some direct, immediate response in the neuron. The neuron starts, or changes, its firing. Signals move up the chain (the “rope”). Ultimately a brain neuron “knows” something about that toe neuron.

The toe tugged a rope. The brain neuron wiggled.

Exactly as: The record tugged on the stylus and the speaker wiggled.

Completely different from: Reading a very large set of numbers and processing them in a mathematical way to produce a stream of other numbers that tell a DAC to make a speaker wiggle.

Do you remember what we said about interpretations and the energy it takes to extract them from the object? By trying to argue these things aren’t very, very different, aren’t you expending a lot of energy to support an arguably untenable interpretation?

Seeing them as significantly different seems the lower-energy interpretation to me.

• SelfAwarePatterns

“aren’t you expending a lot of energy to support an arguably untenable interpretation?”

Actually I’ve been thinking the same thing about you, and I haven’t devoted a month to posting about it 🙂

Honestly, computationalism in a broad sense to me is simply the easy interpretation. If I resisted it, I would find that reading neuroscience required endless rationalizations and intellectual back flips.

• Wyrd Smythe

Right, and that’s fine, and I know we’re opposed on computationalism. But that’s not what we’re talking about right now. We’re talking about the difference between digital and analog (and nothing more).

When I talk about a high-energy interpretation, I specifically (and only) mean one that conflates those two. (Because seeing them as similar requires a lot of explanation.) What’s the basis for saying that seeing them as different requires more energy?

This thread is filled with examples I’ve made where the descriptions of two processes are very different. Their differences are apparent. Isn’t “they are different” a low-energy interpretation?

(We’ve already agreed the brain is an analog computer, so this need not be any kind of sticking point in that regard.)