The notion of emergence — because it is so fundamental — pops up in a lot of physics-related discussions. (Emergence itself emerges!) A couple of years ago I posted about it explicitly (see: What Emerges?), but I’ve also invoked it many times in other posts. It’s the very basic idea that combining parts in a certain way creates something not found in the parts themselves. A canonical example is how a color image emerges from red, green, and blue pixels.
Also often discussed is reductionism, the Yin to the Yang of emergence. One is the opposite of the other. The color image can be reduced to its red, green, and blue pixels. (The camera in your phone does exactly that.)
Recently I’ve been thinking about the asymmetry of those two, particularly with regard to why (in my opinion) determinism must be false.
As far as this blog goes, the road to here starts back in 2014, when I posted Determined Thoughts. That post explored the physicalist notion that reality is fully determined — something quantum physics says is wrong anyway; quantum randomness magnified by chaos should eliminate determinism.
In the post, with regard to what determinism means for free will, I suggested three options:
- We have souls, a spirituality, that lifts us above mere physical existence.
- The mind somehow transcends the machinery of the brain.
- In some fashion, quantum physics plays a role in our consciousness.
At the end I concluded the first is a matter of personal faith (and metaphysical) and the third, from what we can tell, doesn’t seem to be the case. But I felt the second involved some open questions, and it’s that thread I’ve been following since.
Four years later, in 2018, I posted Free Will: Compatibilism, which more explicitly explored the idea of free will. By then I had a formulation about how option two above might be true. I was thinking in terms of our ability to imagine:
Suppose a mind is a very noisy, yet finely balanced, highly complex system with lots of feedback (that keeps it balanced). The noise constantly presents random idea fragments, and those few that resonate with the moment-to-moment state of the mind get amplified, while the rest vanish like virtual particles.
I compared it to picking a single voice out of a crowd. In general, brains seem to have a facility for focusing on something of interest, so this facility may work internally as well as externally.
Early this year I was fascinated by an article about the background noise of our brains (see: Brain Background). I especially like the jazz band metaphor; it seems right on several levels.
A question regarding free will is: If it doesn’t exist, why does it feel like it does? A similar question regarding determinism is why it feels like it’s not true. Our intuition could certainly be false, but maybe not. Maybe reality isn’t fully determined.
If it’s not, then of course brains, the most complicated physical things we know, need not be deterministic, and of course free will is a possibility. The question becomes: How can reality not be deterministic when its physical laws (other than quantum randomness) are all deterministic?
They’re so deterministic they famously — supposedly (but I think it’s a myth) — run backwards just as easily as forwards. (Oh, yeah? Lemme see you unbreak an eggshell. No bullshit about “just the right forces” — how do you seamlessly fuse a broken eggshell?)
Yet according to (what I see as an abstract view of) physics, each moment is fully determined by the histories leading up to that moment. Even granting the randomness of quantum physics, the belief remains that the classical world is fully determined.
I’ve always doubted that, but it has been hard to see exactly why not. When I posted Determined Causality in 2019 I had only the inklings of a mechanism, but since then I’ve seen others with similar inklings.
One idea (see the 2020 posts Rational vs Real and Number Islands) is that the real number system — an uncountable infinity — is an abstraction we made up whereas rational numbers — a countable infinity — might be how Mother Nature actually counts. The rational numbers are granular compared to the continuum of real numbers, and this might better accord with the quantum nature of reality.
As Kronecker said, “God made the integers, all else is the work of man.” (And rational numbers are just an extension of the integers that provides for division.) Is it possible reality isn’t real, but merely rational?
On the other hand, quantum math uses real numbers, and the granularity of rational numbers, while true, is misleading. For any two rational numbers, no matter how close together, there is always another rational number between them. (And, of course, numbers between those, and so on.) At the same time, the set of real numbers is still hugely (uncountably!) larger than the set of rational numbers.
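That density is easy to demonstrate. Here’s a minimal sketch using Python’s standard `fractions` module (the specific numbers are just illustrative):

```python
from fractions import Fraction

# Two rationals an absurdly small distance apart...
a = Fraction(1, 3)
b = a + Fraction(1, 10**100)

# ...still have another rational exactly between them (and so on, forever).
mid = (a + b) / 2
print(a < mid < b)  # True
```

Density notwithstanding, the rationals remain merely countable, while the reals are not.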
Certainly numbers like pi and e make one wonder. They seem very much based on reality, and they’re not only real but transcendental. The real numbers seem… real.
(Even worse, quantum mechanics implies that the complex numbers are real!)
I’ve come to realize that the kind of number doesn’t matter, but its precision does.
Unless reality has literal hidden depths, large-scale determinism over time is impossible. It cannot be the case that conditions at the Big Bang are responsible for which soup you picked for dinner.
The reason is that doing so requires precision beyond physical limits.
Here’s a simple example to start us off. Let’s start with integers and use the number 1024. Let’s further imagine that it specifies something that takes place after 50,000 steps.
As an integer, 1024 is immediately preceded by 1023 and followed by 1025, so we’ll also consider what happens to those two after 50,000 steps. The point of these numbers is that they represent the smallest possible decrease or increase to 1024.
If we assume the 50,000 steps have a multiplicative nature, then:
- 1023 × 50,000 = 51,150,000
- 1024 × 50,000 = 51,200,000
- 1025 × 50,000 = 51,250,000
While the starting positions are as close together as possible — differing by only one — the final positions differ hugely — in fact, by 50,000.
Which makes perfect sense. Multiply a difference of 1 by 50,000 and you get a difference magnified to 50,000.
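The arithmetic is easy to verify, and a quick check (purely illustrative) also shows just how sparse the reachable outcomes are:

```python
STEPS = 50_000

# The smallest possible change to the start (plus or minus 1)
# moves the final position a full 50,000 away.
finals = [n * STEPS for n in (1023, 1024, 1025)]
print(finals)  # [51150000, 51200000, 51250000]

# Between any two adjacent reachable outcomes lie 49,999 integers
# that no starting value can ever produce.
gap = finals[1] - finals[0]
print(gap - 1)  # 49999
```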
More to the point, this means that at the start there is no way to specify nearly all the possible final positions. The full set of possible starting conditions only leads to a very sparse set of final positions among the vastly larger set of possible ones.
Jumping to the real numbers doesn’t help under the presumption that physical objects can’t have properties with infinite precision.
That’s the key assumption I’m making. The various properties of particles and systems are not (cannot be) specified with infinite precision.
This doesn’t seem a big ask. If anything, it seems more the way we’d expect given that nothing in the physical world is infinite. The Planck scale limits of reality give further credence to the idea of physical limits in the small scale. (How infinite the universe itself might be is an open question.)
The Planck Length (1.616×10⁻³⁵ meters) gives us a possible starting point. If one assumes some limit to the precision of any physical value, then, as with the difference of 1 above, that limit is likewise expanded into a sparse set of outcomes.
As a simple example (using one-trillion (one-million-million) steps because quantum interactions are fast and furious):
- 1.616254×10⁻³⁵ × 10¹² = 1.616254×10⁻²³
- 1.616255×10⁻³⁵ × 10¹² = 1.616255×10⁻²³
- 1.616256×10⁻³⁵ × 10¹² = 1.616256×10⁻²³
The point is that, unless the original value had hidden depths of precision, what precision it had was magnified one trillion times. The possible outcomes form a very sparse set among the vastly larger one of all possible configurations.
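The same check works here, using Python’s `Decimal` to sidestep binary floating point. (The seven-digit precision limit is assumed; digits beyond those shown are taken not to exist, which is the whole point.)

```python
from decimal import Decimal

AMPLIFY = Decimal("1E12")  # one trillion steps

# Three adjacent values at an assumed seven-digit precision limit.
for s in ("1.616254E-35", "1.616255E-35", "1.616256E-35"):
    print(Decimal(s) * AMPLIFY)

# The inputs differ only in their last representable digit, so the
# outputs are spaced 1e-29 apart; everything between is unreachable.
```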
It doesn’t matter what the precision limit is, just that it exists (and surely it must). Whatever that limit is, amplifying it means a sparse set of island outcomes in a huge ocean of ones impossible to specify (and thus predict).
There is no way around this I can see.
Since those other outcomes can occur, some other aspect of the mechanism — perhaps quantum randomness — steers the system through the phase space to outcomes that can’t be predicted. Perhaps quantum randomness fills in the digits of precision as states evolve.
Heisenberg Uncertainty might also play a role, both in participating in the precision limit and in filling in digits of precision as systems evolve.
I’ll note that, while chaos is mathematically deterministic, this assumes infinite precision. Mandelbrot zooms, for instance, depend on numbers with arbitrary (and very large) precision.
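A standard illustration of that sensitivity (my sketch, not from any of the posts above) is the logistic map at r = 4, which is fully chaotic. Two starting values differing only in the twelfth decimal place track each other briefly, then diverge completely:

```python
# Logistic map: x -> r * x * (1 - x); fully chaotic at r = 4.
def trajectory(x, n, r=4.0):
    out = [x]
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.123456789012, 80)
b = trajectory(0.123456789013, 80)  # start differs by 1e-12

diffs = [abs(p - q) for p, q in zip(a, b)]
print(max(diffs[:5]))   # still tiny: the orbits agree early on
print(max(diffs[50:]))  # no longer small: all memory of the start is gone
```

Any precision limit on the starting value gets converted by the chaos into total unpredictability.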
Speaking of which, it is certainly the case that any physical computation has limited precision, although (subject to resource constraints) that precision can be arbitrary and as large as desired. If reality is a computation of some kind, one assumes it would also have precision limits. As I mentioned, the Planck scale seems to suggest it.
Tying this back to emergence and reductionism, there is an asymmetry to them in that it’s easy to take something apart (reduce it), but given all the possible combinations of parts, very challenging to deduce a specific outcome.
Knowing an outcome, it’s easy to work back and see how the parts combine in a certain way, but those same parts could be arranged in many other ways. It’s somewhat analogous to needing infinite precision to specify any particular outcome.
Similarly, knowing the present, it’s easy to work back and see how the past leads to it. It may seem it required extraordinary coincidences to create a given moment, but all moments are that way. We can only reduce the present once it has emerged.
Recently, based on a suggestion from Michael, I tried an experiment in reversing time in physics.
I made some entropy simulations for my recent Entropy Isn’t Fundamental! post, and Michael wondered if reversing the particle velocity vectors of the final state, and then running the simulation the same length of time, would run the particles backwards to their starting positions.
I tried it…
…and got better results than I expected. I suspect at least some of the error comes from how I handle collisions, which introduces tiny jumps, and I’m not sure how symmetrical the treatment is in reverse.
That the particles come as close as they do is interesting and makes me wonder if collision handling isn’t the full problem. Here again the precision of the floating point numbers used would definitely have an impact. The final velocity vectors likely don’t have the precision to accurately specify the correct final position.
(Which once again makes me wonder about the supposed conservation of information. I’ve yet to get an answer to what, if any, symmetry principle it’s based on, and I question whether it’s a law at all.)
It would be interesting to pursue, but I’d have to work out the collision handling thing for it to have any value. I will say it’s downright spooky seeing the apparently random movement of particles suddenly start to coalesce and move towards the corner. It’s a bit eerie that came from a set of starting positions four minutes earlier.
I’ll note that the only way to derive those amazing reverse vectors is by running the simulation forwards to some stopping point. There’s no other way to calculate them. They, per the point of this post, cannot be pre-determined.
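Michael’s reversal experiment can be reproduced in miniature. The sketch below is not my particle simulation; it’s a stand-in using the Chirikov standard map, a simple chaotic system whose update rule has an exact algebraic inverse. In ideal arithmetic, running the inverse recovers the start perfectly; in floating point, rounding errors get amplified by the chaos until the “reversed” trajectory loses the starting point entirely:

```python
import math

K = 5.0  # kicking strength; strongly chaotic

def forward(x, p, steps):
    # Chirikov standard map: exactly reversible in ideal arithmetic.
    for _ in range(steps):
        p = p + K * math.sin(x)
        x = (x + p) % (2 * math.pi)
    return x, p

def backward(x, p, steps):
    # The algebraic inverse of the map above.
    for _ in range(steps):
        x = (x - p) % (2 * math.pi)
        p = p - K * math.sin(x)
    return x, p

x0, p0 = 0.5, 0.2

# A short round trip comes back almost perfectly...
x, p = backward(*forward(x0, p0, 5), 5)
print(abs(x - x0))  # tiny

# ...but a long one amplifies float rounding until the start is lost.
x, p = backward(*forward(x0, p0, 100), 100)
print(abs(x - x0))  # no longer small
```

The parallel to the particle simulation is that the reversed state carries only as much precision as the floating point numbers hold, and chaos magnifies that shortfall.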
On a closing note: We have emergence and emergent behavior, but its flip side we call reductionism rather than reduction because the latter could be a way of cooking a sauce. (Or many other things — reduction is one of those words with an absurd number of uses, hence the famous Latin phrase, “Reductio ad absurdum.”)
Likewise determinism, because determine and determined have important day jobs regarding deciding and persisting. (Some people are determined to determine whether determinism is an accurate view. You see the problem.)
Stay precise, my friends! Go forth and spread beauty and light.