The Age of Fire is a key milestone for a would-be technological civilization. Fire is a dividing line, a technology that vastly amplified our effectiveness. Fire provides heat, light, cooking, defense, fire-hardened wood and clay, and eventually metallurgy.
The Age of the Electron is another key technological milestone. Electricity provides heat and light without fire’s dangers and difficulties; it drives motors and enables long-distance communication. It leads to an incredible array of technologies.
The Age of the Algorithm is just as much of a game-changer.
A primary characteristic of these Ages is the sweeping technological change involved, the broad range of new technologies that arise from a single new tool.
The three I mentioned don’t form an exclusive list by any stretch. We might, for instance, equally consider the Age of the Plow or the Age of Human Flight. I mention the first two only as crucial stepping stones to the third.
Algorithms enable computation, and to some extent what I’m calling the Age of the Algorithm is more properly called the Age of the Computer. But a computer without an algorithm is just a boat anchor, and the focus here is on algorithms.
That said, Age of the Computer may be the more accurate name, since the true dividing line is the electronic machines that perform complex algorithms at high speed and with complete accuracy. In reality, though, the idea of an algorithm predates computers considerably. An algorithm is simply a set of instructions for doing something (long division, for example).
So while it really is the electronic calculators that have changed the game, without algorithms they’re just metal, plastic, and sand. Besides, Age of the Algorithm appeals to my alliterative appreciation.
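To make “a set of instructions” concrete, here’s grade-school long division written out as an explicit procedure. (A minimal Python sketch of my own; the function name and structure are illustrative, and any notation, including pencil and paper, would serve.)

```python
def long_division(dividend, divisor):
    """Grade-school long division: bring down one digit at a time.

    Returns (quotient, remainder) for non-negative integers."""
    if divisor <= 0:
        raise ValueError("divisor must be positive")
    quotient = 0
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)  # bring down the next digit
        q_digit = remainder // divisor           # how many times does it go?
        quotient = quotient * 10 + q_digit
        remainder -= q_digit * divisor           # subtract; carry the rest
    return quotient, remainder

print(long_division(1234, 7))  # (176, 2) because 176 * 7 + 2 == 1234
```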
Two important phrases from the above are “high speed” and “complete accuracy” — these are the real game-changers.
A “computer” used to mean a person who did math — presumably someone who was well-trained and good at it, because it’s easy to make mistakes doing math by hand. The very first computing machines were purpose-built to generate accurate tables for logarithms, trigonometry (sines and cosines), and navigation.
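Babbage’s Difference Engine, for instance, mechanized the method of finite differences: set up the initial values correctly and every further table entry requires nothing but addition. A minimal sketch (illustrative Python of my own; the polynomial, x² + x + 41, is the one Babbage reportedly used to demonstrate the engine):

```python
def difference_table(value, d1, d2, count):
    """Tabulate a quadratic using repeated addition only."""
    for x in range(count):
        print(x, value)
        value += d1   # next table entry
        d1 += d2      # next first difference (the second difference is constant)

# f(x) = x*x + x + 41: f(0) = 41, first difference starts at 2, second is 2.
difference_table(41, 2, 2, 5)   # prints 41, 43, 47, 53, 61
```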
General purpose computing, and the programming languages that enabled it, catapulted these systems into the stratosphere. Our ability to compute complex and elaborate functions, accurately and quickly, is a step into a world as new as the one with electric lights or the one with fire.
As the computing industry matured, another important attribute changed the game: the small size of calculating devices. Digital watches became a thing. Now we carry extremely powerful computers around in our pockets (along with large digital libraries, video cameras, display screens, online access, GPS, and advanced communications).
Quite some distance from the first human-controlled fire.
Back in 2019 I wrote a series of posts about virtual reality. (I’ve written even more extensively about algorithms.) Then 2020 derailed so many things, and that series has been sitting on a siding ever since.
Picking up the thread, a lot centers on this diagram:
It seeks to compare levels of organization of two implementations of consciousness: one we know exists, and one we can currently only speculate about.
I’m referring to the box in the upper right, the one labeled “Mind” — we have no idea whether an algorithm can produce a mind. Put another way, we have no idea if our minds are the consequence of an algorithmic process. The question is one of the more contentious ones among those who study consciousness.
For the record, I’m in the Very Skeptical camp. I’ll need to see it to believe it. Currently I file the idea as non-physical (and possibly wishful thinking).
At issue here is what I perceive as a causal disconnect, due to algorithms, in the Computer level of the right-hand stack. Not everyone agrees there is such a disconnect.
A complicating factor is that we don’t understand exactly how the left-hand stack produces a mind. There are many theories, and even stronger beliefs, but so far no one actually knows.
That lack of knowledge makes it challenging to determine whether a computation can result in a mind. When it comes to computers, there are many things we can simulate, but also many things we cannot. As it’s often put: ‘no matter how good a water simulation is, it still won’t get you wet.’
(I have frequently drawn a parallel with lasers, which we can simulate with great precision, but no such simulation produces laser light. The key question is whether mind emerging from the brain is like laser light emerging from a lasing material.)
I have a sense this will take multiple posts to cover, so here I’d like to focus on the diagram and explain the two stacks and their organization.
The idea is that each layer of one stack has a perceived correlation with the matching layer in the other stack. The left stack organizes a biological being into a hierarchy of levels; the right stack does the same with a computer. Each level represents the whole system, just viewed at a different level of detail.
Each layer emerges from, and supervenes on, the layers below it.
At the very bottom lies basic physics, both quantum and classical. At this level, there is very little difference between the two stacks: physics is physics. I suspect this is where the conflation begins, with the perception that everything is just physics.
True, but using physics to simulate something is different than using physics to build something. (Again, simulating water versus getting wet.)
The second layer, still within the realm of basic physical reality, is what we might think of as the higher level physics necessary for any biology or any electronic device. Chemistry has always been with us. Electronics came into being in the Age of the Electron.
[As an aside to those who hold that quantum mechanics plays no role in the brain, or in biology or botany in general, we often forget that chemistry is quantum. We’re just so used to chemistry that we don’t think of it that way anymore.]
The third layer starts to differentiate basic physics into the building blocks of brains and computers, but note that all biology uses cells, and all electronic devices use circuits. Nothing in this third layer really speaks to brains or computers.
The next layer, neurons on one side and logic gates on the other, is the source of considerable contention. Many see the two as comparable, holding that a neuron is just a kind of logic gate.
I couldn’t disagree more. (I’ve argued extensively against the view.)
To me, conflating them is like conflating classical computing bits with quantum computing qubits. They are alike only in the vaguest sense. Neurons are as far beyond logic gates as qubits are beyond bits.
If one insists on comparing the brain to a computer, a far better parallel would be to see each neuron as a computational device. The brain, then, is a massively linked network of such computational devices. (I don’t have a reference, but I recently saw someone arguing that the brain might be comparable to the entire internet.)
The problem is that neurons, while they do have an “on” and “off” state (firing or not firing), seem to carry considerable analog information in the pulse timing and duty cycle of the “on” state. I’ve even seen claims that the pulse edges — their rise and fall timing — may carry information.
One is, I think, far better off viewing neurons as analog signal-processing nodes (or complex computational devices) than as logic gates. I once read a paper by a neurophysicist who wrote that the synapse (which is just a component of a neuron) is the most complex biological machine in the body.
So no: Neurons are not (at all) like logic gates.
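To put a finer point on it, here’s a toy contrast (illustrative Python of my own devising, emphatically not a biological model): a logic gate is stateless and discrete, while even the crudest “leaky integrate-and-fire” neuron sketch is stateful and sensitive to the timing of its inputs.

```python
def and_gate(a: bool, b: bool) -> bool:
    """A logic gate: stateless and discrete. Output depends only on inputs."""
    return a and b

class LeakyNeuron:
    """A drastically simplified leaky integrate-and-fire neuron.

    Even this toy is stateful and analog: *when* inputs arrive matters,
    not just their values. Real neurons are far more complex still."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # membrane potential (analog state)
        self.threshold = threshold
        self.leak = leak            # decay factor per time step

    def step(self, weighted_input):
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # spike!
        return False

print(and_gate(True, True))  # True, always, no matter when the inputs arrive

# The same total input fires the neuron, or not, depending on timing:
n1 = LeakyNeuron()
print([n1.step(x) for x in [0.4, 0.4, 0.4]])
# [False, False, True] -- closely spaced inputs accumulate and fire

n2 = LeakyNeuron()
print([n2.step(x) for x in [0.4, 0.0, 0.4, 0.0, 0.4]])
# [False, False, False, False, False] -- spread out, the potential leaks away
```

The same 1.2 units of input fire the toy neuron when delivered quickly but not when spread out. Nothing of the kind can be said of an AND gate.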
What probably adds to the conflation is that they do appear at the same organizational level. They are the basic building blocks of a brain, just as logic gates are the basic building blocks of a computer.
But since the thesis here is that brains are not computers, it’s a given that neurons are not logic gates.
Which brings us to the penultimate organizational level, the brain and the computer. Given that computers were sometimes called “thinking machines,” it’s not hard to see why so many conflate them. Our pursuit of Artificial Intelligence only strengthens that belief.
And it is intriguing that our greatest success with AI, such as it is, has been in very crude and reduced approximations of the brain’s neural network. Such networks have shown amazing promise for many kinds of categorization and identification tasks (and more).
But they still aren’t “thinking machines” — not even close. They have no sense, common or otherwise, and are trivially fooled.
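How reduced? In an artificial neural network, the entire “neuron” is a weighted sum pushed through a squashing function. A minimal sketch (the weights here are arbitrary numbers I made up; compare it with the stateful, timing-sensitive toy neuron above):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One ANN unit: weighted sum plus bias, squashed by a sigmoid."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

print(artificial_neuron([0.5, 0.3], [0.8, -0.4], 0.1))  # about 0.594
```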
The top level remains a mystery, both in how mind emerges from brain, and (more so) how and whether it can emerge from computation.
Given that we don’t know how mind emerges from brain, it’s almost silly to even be talking about simulating it with a computer. It’s a case of guesswork, of the blind leading the blind.
As mentioned above, based on past discussions, a key point of contention involves what I perceive as a causal disconnect in the Computer level. (The virtual reality posts I wrote in 2019 were leading up to discussing this point.)
This post is long enough that I’ll only introduce it here.
There is a chain of causality or purpose that extends from the lower levels of the left-hand stack all the way up to the brain (and, one assumes, the mind). The higher the level of organization, the more specific the system is to its purpose. Neurons, with their synapses, dendrites, and axons, are very specific to brain function.
In turn, the brain has a single causal purpose — the analysis and modeling of the body and environment. In more advanced brains, it enables thought and consciousness. It does nothing else. It has no other purpose. (In fact, brains place a considerable cost burden on their owners.)
Computer hardware, however, does any task that can be determined by its software. There is no necessity that it do anything at all (when turned off, it doesn’t). Computers are explicitly general purpose devices. That is a big part of their value.
More to the point, there is a vast causal gap between the operation of the hardware levels and what the software does. A key point in computation is that the hardware doesn’t matter. One can perform any computation with pen and paper. Or an abacus.
The most powerful computers aren’t really doing anything more. They just do it really, really fast. Never forget they are just super calculators.
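A tiny illustration of that hardware independence (a toy construction of my own, not any standard formalism): the “machine” below is nothing but a transition table, and nothing about it cares whether a CPU, an abacus operator, or a patient human with pen and paper carries out the steps.

```python
def run_machine(tape, rules, state="start"):
    """Follow a transition table until the halt state is reached."""
    pos = 0
    while state != "halt":
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += move

# Transition table to increment a binary number (least significant bit first):
rules = {
    ("start", 0): (1, 0, "halt"),   # 0 -> 1, done
    ("start", 1): (0, 1, "start"),  # 1 -> 0, carry rightward
}

tape = [1, 1, 0, 0]   # 3, in least-significant-bit-first binary
run_machine(tape, rules)
print(tape)           # [0, 0, 1, 0], which is 4
```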
Stay algorithmic, my friends!