The notion of emergence — because it is so fundamental — pops up in a lot of physics-related discussions. (Emergence itself emerges!) A couple of years ago I posted about it explicitly (see: What Emerges?), but I’ve also invoked it many times in other posts. It’s the very basic idea that combining parts in a certain way creates something not found in the parts themselves. A canonical example is how a color image emerges from red, green, and blue pixels.
Also often discussed is reductionism, the Yin to the Yang of emergence. One is the opposite of the other. The color image can be reduced to its red, green, and blue pixels. (The camera in your phone does exactly that.)
Recently I’ve been thinking about the asymmetry of those two, particularly with regard to why (in my opinion) determinism must be false.
As far as this blog goes, the road to here starts back in 2014, when I posted Determined Thoughts. That post explored the physicalist notion that reality is fully determined — something quantum physics says is wrong anyway; quantum randomness magnified by chaos should eliminate determinism.
In the post, with regard to what determinism means for free will, I suggested three options:
- We have souls, a spirituality, that lifts us above mere physical existence.
- The mind somehow transcends the machinery of the brain.
- In some fashion, quantum physics plays a role in our consciousness.
At the end I concluded the first is a matter of personal faith (and metaphysical) and the third, from what we can tell, doesn’t seem to be the case. But I felt the second involved some open questions, and it’s that thread I’ve been following since.
Four years later, in 2018, I posted Free Will: Compatibilism, which more explicitly explored the idea of free will. By then I had a formulation about how option two above might be true. I was thinking in terms of our ability to imagine:
Suppose a mind is a very noisy, yet finely balanced, highly complex system with lots of feedback (that keeps it balanced). The noise constantly presents random idea fragments, and those few that resonate with the moment-to-moment state of the mind get amplified, while the rest vanish like virtual particles.
I compared it to picking a single voice out of a crowd. In general, brains seem to have a facility for focusing on something of interest, so this facility may work internally as well as externally.
Early this year I was fascinated by an article about the background noise of our brains (see: Brain Background). I especially like the jazz band metaphor; it seems right on several levels.
§
A question regarding free will is: If it doesn’t exist, why does it feel like it does? A similar question regarding determinism is why it feels like it’s not true. Our intuition could certainly be false, but maybe not. Maybe reality isn’t fully determined.
If it’s not, then of course brains, the most complicated physical things we know, need not be deterministic, and of course free will is a possibility. The question becomes: How can reality not be deterministic when its physical laws (other than quantum randomness) are all deterministic?
They’re so deterministic they famously — supposedly (but I think it’s a myth) — run backwards just as easily as forwards. (Oh, yeah? Lemme see you unbreak an egg shell. No bullshit about “just the right forces” — how do you seamlessly fuse broken egg shell?)
Yet according to (what I see as an abstract view of) physics, each moment is fully determined by the histories leading up to that moment. Even granting the randomness of quantum physics, the belief remains that the classical world is fully determined.
I’ve always doubted that, but it has been hard to see exactly why not. When I posted Determined Causality in 2019 I had only the inklings of a mechanism, but since then I’ve seen others with similar inklings.
§
One idea (see the 2020 posts Rational vs Real and Number Islands) is that the real number system — an uncountable infinity — is an abstraction we made up whereas rational numbers — a countable infinity — might be how Mother Nature actually counts. The rational numbers are granular compared to the continuum of real numbers, and this might better accord with the quantum nature of reality.
As Kronecker said, “God made the integers, all else is the work of man.” (And rational numbers are just an extension of the integers that provides for division.) Is it possible reality isn’t real, but merely rational?
On the other hand, quantum math uses real numbers, and the granularity of rational numbers, while true, is misleading. For any two rational numbers, no matter how close together, there is always another rational number between them. (And, of course, numbers between those and so on.) At the same time, the set of real numbers is still hugely (uncountably!) larger than the set of rational numbers.
Certainly numbers like pi and e make one wonder. They seem very much based on reality, and they’re not only real but transcendental. The real numbers seem… real.
(Even worse, quantum mechanics implies that the complex numbers are real!)
§ §
I’ve come to realize that the kind of numbers doesn’t matter, but their precision does.
Unless reality has literal hidden depths, large-scale determinism over time is impossible. It cannot be the case that conditions at the Big Bang are responsible for which soup you picked for dinner.
The reason is that doing so requires precision beyond physical limits.
§
Here’s a simple example to start us off. Let’s start with integers and use the number 1024. Let’s further imagine that it specifies something that takes place after 50,000 steps.
As an integer, the numbers 1023 and 1025 respectively immediately precede and follow, so we’ll also consider what happens to them after 50,000 steps. The point of these two numbers is they represent the smallest possible decrease or increase to 1024.
If we assume the 50,000 steps have a multiplicative nature, then:
- 1023 × 50,000 = 51,150,000
- 1024 × 50,000 = 51,200,000
- 1025 × 50,000 = 51,250,000
While the starting positions are as close together as possible — different only by one — the final positions differ hugely — in fact by 50,000.
Which makes perfect sense. Multiply a difference of 1 by 50,000 and you get a difference magnified to 50,000.
More to the point, this means that at the start there is no way to specify nearly all the possible final positions. The full set of possible starting conditions only leads to a very sparse set of final positions among the vastly larger set of possible ones.
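If you want to play with the numbers, here is a minimal Python sketch of the same toy example (the step count and starting values are just the ones above; nothing magic about them):

```python
# Toy model: integer starting values stepped through a multiplicative process.
# Adjacent starting values (differing by the smallest possible amount, 1)
# land on final positions separated by the full step count.

STEPS = 50_000

starts = [1023, 1024, 1025]
finals = [s * STEPS for s in starts]

for s, f in zip(starts, finals):
    print(f"{s} -> {f:,}")

# Only multiples of 50,000 are reachable -- a sparse lattice of outcomes
# among all the integers in that range.
gap = finals[1] - finals[0]
print(f"Gap between adjacent outcomes: {gap:,}")
```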
§
Jumping to the real numbers doesn’t help us, under the presumption that physical objects can’t have properties with infinite precision.
That’s the key assumption I’m making. The various properties of particles and systems are not (cannot be) specified with infinite precision.
This doesn’t seem a big ask. If anything, it seems more the way we’d expect given that nothing in the physical world is infinite. The Planck scale limits of reality give further credence to the idea of physical limits in the small scale. (How infinite the universe itself might be is an open question.)
The Planck Length (1.616×10⁻³⁵ meters) gives us a possible starting point. If one assumes some limit to the precision of any physical value, then, just as with the difference of 1 multiplied above, that limit likewise expands into a sparse set of outcomes.
As a simple example (using one-trillion (one-million-million) steps because quantum interactions are fast and furious):
- 1.616254×10⁻³⁵ × 10¹² = 1.616254×10⁻²³
- 1.616255×10⁻³⁵ × 10¹² = 1.616255×10⁻²³
- 1.616256×10⁻³⁵ × 10¹² = 1.616256×10⁻²³
The point is that, unless the original value had hidden depths of precision, what precision it had was magnified one-trillion times. The possible outcomes form a very sparse set among the vastly larger one of all possible configurations.
It doesn’t matter what the precision limit is, just that it exists (and surely it must). Whatever that limit is, amplifying it means a sparse set of island outcomes in a huge ocean of ones impossible to specify (and thus predict).
There is no way around this I can see.
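To see the idea in miniature, here is a quick sketch using ordinary 64-bit floats purely as a stand-in for whatever precision limit reality might have (that substitution is my assumption, for illustration only):

```python
# 64-bit floats carry roughly 15-16 significant decimal digits.
# Two values that differ only below that limit are literally the same
# value to the system -- there are no hidden depths to draw on.

a = 1.616254e-35
b = 1.616254000000000001e-35   # "hidden depth" beyond the precision limit

print(a == b)        # True -- the difference cannot even be represented
print(a * 1e12)      # 1.616254e-23
print(b * 1e12)      # 1.616254e-23 -- identical outcome

# Whatever precision the starting value does have is simply carried forward;
# outcomes between the representable ones can never be specified.
```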
§
Since those other outcomes can occur, some other aspect of the mechanism — perhaps quantum randomness — steers the system through the phase space to outcomes that can’t be predicted. Perhaps quantum randomness fills in the digits of precision as states evolve.
Heisenberg Uncertainty might also play a role, both in participating in the precision limit and in filling in digits of precision as systems evolve.
I’ll note that, while chaos is mathematically deterministic, this assumes infinite precision. Mandelbrot zooms, for instance, depend on numbers with arbitrary (and very large) precision.
Speaking of which, it is certainly the case that any physical computation has limited precision, although (subject to resource constraints) that precision can be arbitrary and as large as desired. If reality is a computation of some kind, one assumes it would also have precision limits. As I mentioned, the Planck scale seems to suggest it.
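As a toy illustration of both points, here is a logistic-map sketch (the map, its parameter, and the starting values are arbitrary stand-ins for any chaotic system):

```python
# Chaos is deterministic in the math, but tracking it demands ever more
# precision. Two starting points differing by one part in 10^15 -- about
# the limit of a 64-bit float -- bear no resemblance after a few dozen steps.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x1 = 0.123456789
x2 = x1 + 1e-15          # difference near the float precision limit

for _ in range(60):
    x1, x2 = logistic(x1), logistic(x2)

print(x1, x2, abs(x1 - x2))   # the two trajectories have fully diverged
```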
§ §
Tying this back to emergence and reductionism, there is an asymmetry to them in that it’s easy to take something apart (reduce it), but given all the possible combinations of parts, very challenging to deduce a specific outcome.
Knowing an outcome, it’s easy to work back and see how the parts combine in a certain way, but those same parts could be arranged in many other ways. It’s somewhat analogous to needing infinite precision to specify any particular outcome.
Similarly, knowing the present, it’s easy to work back and see how the past leads to it. It may seem it required extraordinary coincidences to create a given moment, but all moments are that way. We can only reduce the present once it has emerged.
§
Recently, based on a suggestion from Michael, I tried an experiment in reversing time in physics.
I made some entropy simulations for my recent Entropy Isn’t Fundamental! post, and Michael wondered if reversing the particle velocity vectors of the final state, and then running the simulation the same length of time, would run the particles backwards to their starting positions.
I tried it…
…and got better results than I expected. I suspect at least some of the error comes from how I handle collisions, which introduces tiny jumps, and I’m not sure how symmetrical the treatment is in reverse.
That the particles come as close as they do is interesting and makes me wonder if collision handling isn’t the full problem. Here again the precision of the floating point numbers used would definitely have an impact. The final velocity vectors likely don’t have the precision to accurately specify the correct final position.
(Which once again makes me wonder about the supposed conservation of information. I’ve yet to get an answer to what, if any, symmetry principle it’s based on, and I question whether it’s a law at all.)
It would be interesting to pursue, but I’d have to work out the collision handling thing for it to have any value. I will say it’s downright spooky seeing the apparently random movement of particles suddenly start to coalesce and move towards the corner. It’s a bit eerie that it all came from a set of starting positions four minutes earlier.
I’ll note that the only way to derive those amazing reverse vectors is by running the simulation forwards to some stopping point. There’s no other way to calculate them. They, per the point of this post, cannot be pre-determined.
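For the curious, the core of such a reversal test is simple. This is only a sketch of the idea, not my actual simulation code, and it deliberately leaves out collisions, which (per the above) are the likely troublemakers:

```python
# Sketch of a time-reversal test for a frictionless particle simulation:
# run forward N steps, flip the velocity vectors, run N more steps, and
# measure how far each particle ends up from its starting position.
# With no collisions, the only error source is floating-point round-off,
# so the return error stays tiny -- collisions are where precision bleeds.

import random

def step(pos, vel, dt=0.001):
    return [p + v * dt for p, v in zip(pos, vel)]

random.seed(1)
pos0 = [random.random() for _ in range(100)]
vel  = [random.uniform(-1.0, 1.0) for _ in range(100)]

pos = pos0[:]
for _ in range(10_000):            # forward
    pos = step(pos, vel)

vel = [-v for v in vel]            # reverse the velocity vectors
for _ in range(10_000):            # "backward"
    pos = step(pos, vel)

err = max(abs(p - q) for p, q in zip(pos, pos0))
print(f"worst-case return error: {err:.3e}")
```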
§ §
On a closing note: We have emergence and emergent behavior, but its flip side we call reductionism rather than reduction because the latter could be a way of cooking a sauce. (Or many other things — reduction is one of those words with an absurd number of uses, hence the famous Greek phrase, “Reductio ad absurdum.”)
Likewise determinism, because determine and determined have important day jobs regarding deciding and persisting. (Some people are determined to determine whether determinism is an accurate view. You see the problem.)
Stay precise, my friends! Go forth and spread beauty and light.
∇
June 5th, 2021 at 3:15 pm
(Yes, I’m kidding! Referring to it as Greek is the giveaway.)
June 5th, 2021 at 7:53 pm
It’s an interesting and perhaps unanswerable question. But my instinct on this topic goes something like this: I don’t think the reality we experience is pre-determined, even if tracks through reality may be deterministic. I’m inclined to believe an odd combo of all three of your possible objections to determinism that goes like this: whenever we observe what happens, a causal structure to produce those events is generally available for inspection, BUT, there may be a great many “threads” through the possible spaces of reality that have a consistent causal structure and the freedom may well be some ability to “switch tracks.” And that means the quantum nature of the universe is related to, or says something important about, the existence of these various tracks even if it doesn’t say anything about mind itself.
That’s all very fun to think about, but perhaps unprovable.
As to the precision issue, it’s a good objection, but only if the universe is a computation. Would an analog universe that wasn’t an abstract computation have this issue?
Regarding the strange brew of quantum indeterminacy plus chaos, I’m not sure if that’s a great answer for breaking determinism. Only because I’ve heard scientists describe QM as deterministic. I think what they mean is the evolution of the wave equation isn’t random–it has a definite causal structure. The hard part (I know you know this!) is that the single reality we observe is one of several permitted possible outcomes, and we have no way to predict which we will observe. If reality is not multiple–meaning things like MWI are wrong and something like collapse theories are correct, then I’d say determinism breaks down in the sense of being able to predict the future from known initial conditions. Because in that case, to your point, small changes due to differing quantum event outcomes would have very large effects over time.
It’s amazing how much is contained in these questions!
June 6th, 2021 at 9:53 am
Regarding the first paragraph, I don’t quite follow. Do you maybe have a “for instance” to help me wrap my mind around it? (I’m really big on examples!)
“As to the precision issue, it’s a good objection, but only if the universe is a computation. Would an analog universe that wasn’t an abstract computation have this issue?”
Yes, I believe so.
There is, firstly, that in many views an analog universe is still a computation — just not an abstract numeric one — not a numeric simulation. I’ve often said that, in some sense, reality computes itself as it goes along. (I can’t recall where I read this, but another phrase I read and liked was that quantum-level interactions are a computation that’s running a classical world simulation.)
More importantly, even if one takes a view that distinguishes digital numeric computing from this physical analog form of “computing” (and in some circumstances I very much do take that view — as so many of my posts attest), there is still the fact that physical properties would have precision limits. If one were in deep space far from any influences, and one tried to aim a projectile, there is some limit to the precision of the forces one could apply to the task.
Classically we care about two things: position and momentum; where something is and where it’s going with how much force. Launching a projectile (from a bullet to a rocket) involves aiming and pushing (we know where it is; we just need to give it good momentum). Aiming boils down to angular resolution — how small of an arc. Very tiny angles magnify over distance, and at some point the angle is just too small (below Planck length and/or subject to Heisenberg Uncertainty). The impulse we apply to the projectile, likewise, at some point submits to systemic limits. There’s just no way for reality to contain the needed precision. (And even the projectile’s known starting position is subject to Heisenberg.)
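To put rough (and purely illustrative) numbers on the aiming part: for small angles the lateral miss is roughly the distance times the angle, so even an absurdly tiny angular error grows linearly with range.

```python
import math

# Lateral miss ~= distance * tan(angle) for a small angular error.
# The angle and distances here are arbitrary illustrative choices.

angle = 1e-30                 # radians (purely illustrative)
light_year = 9.461e15         # meters

for ly in (1, 1_000, 1_000_000):
    miss = ly * light_year * math.tan(angle)
    print(f"{ly:>9,} light-years  ->  miss by ~{miss:.3e} m")
```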
Per Peter Morgan’s comment below, ultimately it may simply be physical limits that matter more than some philosophical notion about how nature counts. It may be that the real numbers don’t apply only because reality is so messy. (That said, I think there is a philosophical debate possible over the uncountable real numbers versus the countable rational numbers.)
“…the single reality we observe is one of several permitted possible outcomes, and we have no way to predict which we will observe.”
As I’m sure you know, we can predict probabilities. While a random unmeasured photon would have, on average, a 50% chance of passing through a polarizing filter, a photon that had been measured has the probability of cos(angle)^2 of going through a filter set at angle to the first measurement.
For example, if the second filter is set the same as the first, an angle of zero, cos(0)^2 is 1.0, and there is a 100% chance the photon will pass through the second filter. If the second filter is set 90° to the first, cos(90)^2 is 0.0, and there is zero chance the photon passes. For angles in between, the probability is cos(angle)^2. But even if angle is only 1° and there is cos(1)^2=0.999695, a 99.9695% chance the photon passes through the second filter, there is still a 0.0305% chance it won’t, and in that sense, absolutely, we cannot predict what will happen.
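Those numbers come straight from cos²(angle); trivial arithmetic, but it shows how quickly certainty erodes with angle:

```python
import math

# Probability that a photon already measured by a first polarizing filter
# passes a second filter set at the given angle to it: cos(angle)^2.

for deg in (0, 1, 30, 45, 60, 89, 90):
    p = math.cos(math.radians(deg)) ** 2
    print(f"{deg:>2} deg  P(pass) = {p:.6f}")
```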
As for something like the MWI, I believe it would also be subject to these precision limits. I see no reason the limits of Planck and Heisenberg wouldn’t also apply.
June 6th, 2021 at 6:38 pm
“Regarding the first paragraph, I don’t quite follow. Do you maybe have a ‘for instance’ to help me wrap my mind around it?”
To say it another way, each “world” in MWI, which would feel like the only world its inhabitants know, could be a “track” through spacetime that is apparently predetermined. But there are many of these “tracks” and so maybe what free will is, is our ability to choose the tracks we inhabit. This creates a scenario in which every choice we might make is causally determined, but also a true choice. If every single conceivable track is equally real and we’re in all of them in every possible way, then maybe this is meaningless, but I don’t think that’s actually the case. It may be that all the tracks exist as potential, but what activates them for us is our choosing.
There’s problems with this in a rigorous sense, and I think the reality of it is beyond our present comprehension, but I think that something along these lines may be part of a complete description of how it all works. The key is just that causal histories and the freedom to choose which soup to enjoy are not mutually exclusive positions. They are only mutually exclusive if reality is singular and fully pre-determined, and I don’t think it is (singular). I also am not convinced reality explores every last possibility. I think it will take a more advanced understanding than we now possess to unravel the details.
The example would be that you have four soups in the pantry. No matter which you choose, there would be a causal history that could be measured describing all of the physical processes that attended this decision. In essence, it would appear as if you couldn’t have chosen any different. But of course, if you’d have picked a different soup, the same would be true for that choice as well.
There are several ways that determinism and free will might interact.
One, determinism is valid, and you could only have picked the soup you picked, period.
Two, due to precision, or noise, or quantum uncertainty, determinism doesn’t strictly hold. But because the brain is a physical system, and emergence doesn’t exert any downward causation on the processes of the brain, you could only have picked the soup you picked, period.
Three, in my example above, determinism holds, or the looser sort of physical causation in [Two] exists, but you can pick whichever soup you want without violating causality, because what we call reality is not singular in nature.
Four, determinism is invalid and what we call precision, or noise, or quantum uncertainty are all “opportunities” for mind to influence the physical chains of causality, and you get to pick the soup you want.
“I’ve often said that, in some sense, reality computes itself as it goes along.”
I like this description, and partly because I think it is pretty close in meaning to the notion that reality doesn’t perform any computations at all. If an apple falls from a tree, I don’t think there is any part of reality calculating the trajectory. As you note, reality in a sense IS the computation. But precision is irrelevant if there’s no math taking place, except in this sense: I think what you’re really saying is that space and time are not continuous. I suppose many people think this is obvious, but I’m not well read enough on the subject to have a strong opinion.
Anyway, assuming space and time are discontinuous, then it is possible there are some angles that simply cannot be achieved. I say possible because there’s assumptions behind that, aren’t there? Doesn’t that conclusion assume the smallest bits of space are inflexible and do not move relative to one another for instance? And then there are interesting questions about how an object traveling through space really travels: does each little pixel of it shift one pixel in the direction of travel? That probably wouldn’t work too well for certain shapes and sizes of “stuff.” All of which requires more thought I think. A quark is to the Planck length what the sun is to a grain of sand, so how it occupies space is an interesting question.
This is over my pay grade. Haha.
June 7th, 2021 at 11:08 am
“But there are many of these ‘tracks’ and so maybe what free will is, is our ability to choose the tracks we inhabit.”
Oh, okay, I follow. I’ve read some science fiction where people can jump the tracks in one way or another. (Neal Stephenson’s The Rise and Fall of D.O.D.O. for instance!) As you say, it doesn’t work if MWI or something like it is correct — we’d already inhabit all possible tracks.
(I’ll be posting about MWI again in the relatively near future, and a key point I want to explore is the notion of parallel tracks versus branching tracks. Does Schrödinger’s Cat start off as a single cat that branches into two, or were there two cats all along? Many cats, actually, since the branch could occur at any moment during the experiment, and I’d assume a cat dead one minute is different from a cat dead ten. If there were always many cats, identical up to the point they diverge, that gets around what I saw as a serious energy issue with MWI.)
“It may be that all the tracks exist as potential, but what activates them for us is our choosing.”
That kind of sounds like picking among possible envisioned futures, which is very much what I think might be going on with (genuine) free will. I don’t attach any particular reality to those futures — to me they’re just possibilities we can imagine.
(As a trivial example: I’ve picked a soup, but upon opening the can find it has gone bad somehow, so the future I picked isn’t what I expected. So often what we imagine doesn’t turn out the way we thought it would. We don’t seem able to easily put ourselves on the tracks we most want. If only!)
“The key is just that causal histories and the freedom to choose which soup to enjoy are not mutually exclusive positions. They are only mutually exclusive if reality is singular and fully pre-determined, and I don’t think it is (singular).”
I’m not clear on what you mean by “singular” unless just that MWI is not the case? I have the impression you’re not sympathetic to MWI, so I can’t quite parse your meaning here.
I think reality is singular in that there is only one we all inhabit, but I don’t think it’s pre-determined. I think reality is generated as time goes along. The «now» is constantly being knit.
“No matter which you choose, there would be a causal history that could be measured describing all of the physical processes that attended this decision.”
Right! We can reduce the decision in retrospect. My question is whether we could deduce its emergence given some set of starting conditions. Forget the Big Bang. Would the conditions of that morning allow prediction of a soup that evening? I’ve tried to meta-watch myself when I’m in the position of picking a soup. What exactly causes the choice?
Of course, part of that causal chain is my thinking about what soup to eat. My brain/mind system is part of that history (including buying the soup in the first place), and it’s the brain’s role in all this that fascinates me.
One thing is that there are macro causal factors and micro causal factors. I only have access to the former. One example: I value variety, so one axis of decision is whether I’ve had that soup recently. I’m more inclined to pick one I haven’t. But it isn’t a hard rule. It feels like I can choose at this level, but it’s the micro level that’s usually meant when denying free will. I have no access to that level.
Yet that level is supposedly physically determined (and therefore so is everything built on it), but that’s what I question.
I agree with your four options and (of my free will) pick the fourth. For the many reasons we’ve discussed here (or that I discussed in the post).
“If an apple falls from a tree, I don’t think there is any part of reality calculating the trajectory.”
This depends on what you mean by “calculate.” (Mike and I went around this topic for a long time talking about computationalism.) In the reality computes itself sense, the mass of the apple, the gravitational constant, air resistance, and other minor forces, all act together to “compute” the apple’s trajectory. This sense of “compute” depends on physical laws acting on physical objects over time. It’s the sense under the umbrella of pancomputationalism, and some pancomputationalists argue that rocks compute or that pixies exist in walls.
Computer science offers another definition of “compute” — essentially it’s what a Turing Machine does (or what lambda calculus does). Philosophers trying to make that definition (even) more abstract over-thought it into pancomputationalism. The kindest thing I can say about that is that it muddies the waters. The CS definition is effective and allows meaningful distinctions. The broad umbrella of pancomputationalism smears out the notion of “computing” into something almost meaningless when “everything is a computation.”
It’s not that it’s wrong in any way, it’s just that it’s trivially true and thus not helpful. I suspect those sympathetic to computationalism like the idea because it makes it easy to claim the brain is a computer (despite the myriad ways it’s nothing like a computer). Then they can draw a line from the brain being a computer to other computers implementing brains or minds. The problem, of course, is that in broadening the definition of “compute,” they can no longer draw that line.
I digress. Point is that people use “compute” in two very different ways. The narrow CS definition that involves numerical models and explicit mathematics (numbers in; numbers out), and the broad pancomputation definition that involves physical law and physical objects. Under the former, as you say, there is no “calculating.” Under the latter there is.
“But precision is irrelevant if there’s no math taking place, except in this sense: I think what you’re really saying is that space and time are not continuous.”
It may amount to that, yes. There’s a bit of nuance to it for me, though…
Firstly, almost as an aside, I’ve long wanted Einstein to have gotten it more right than the quantum crew. I’m askance about aspects of QM, especially that it needs to be interpreted, but I quite like GR. It’s a physical (background!) theory that makes sense. No interpreting needed. But GR assumes smooth spacetime, and the quantum view says everything is quantized. We know energy/matter seems to be, but the jury is out on spacetime. I’ve long hoped it’s smooth, but I’ve read some strong arguments about how it can’t be. Things like the Planck Length and Planck Time might be observation limits or might be actual physical limits; we just don’t know.
Certainly, if spacetime is quantized that’s it, game over for determinism. The numeric examples in the post apply. As you say, some angles cannot be achieved.
I am arguing precision applies even if spacetime is smooth, and I’m suggesting precision (perhaps metaphorically) applies to physical law and physical objects. As angles become vanishingly small, at some point reality doesn’t have the precision to differentiate them. Which absolutely may amount to two ways to say the same thing. Classical and quantum noise in the initial conditions and evolution make this kind of a moot point, anyway, but I like that the precision argument doesn’t depend on the specific physical limits, just the observation that determinism requires infinite precision and nothing is infinite.
(A while back, someone asked me why determinism requires infinite precision, and this comes from notes I made at some point about why I thought so.)
“A quark is to the Planck length what the sun is to a grain of sand, so how it occupies space is an interesting question.”
Size comparisons like this fascinate me. Did you ever see my post Size Matters? In it I investigated a number of such comparisons. For instance, I’d heard both “tree” and “amoeba” compared to the Milky Way galaxy for Planck Length compared to an atom. (But what kind of atom, eh?) Turns out “amoeba” is correct (assuming a hydrogen atom). The Planck Length to a hydrogen atom is like an amoeba to our galaxy.
This one caught my eye in that quarks are thought to be point-like (no known size) so I wondered what size standard applied to the comparison. It led to the interesting fact that sand varies hugely in size, from 2 mm to 0.05 mm! Assuming a grain diameter of 1e-4 meters, and 1.4e9 meters diameter for the Sun, that’s a size ratio of about 1.4e13.
Planck Length is about 1.6e-35 meters, which means quarks here must be in the 1e-22 range — pretty tiny! (For reference, the charge radius of the proton (which is made of quarks) is about 8.4e-16 meters.)
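(Back-of-the-envelope arithmetic, for anyone who wants to check it or plug in a different grain size; the sand diameter is just an assumed value:)

```python
# Rough size comparisons, all values in meters and rounded.

sand   = 1e-4         # assumed grain diameter (sand runs ~5e-5 to 2e-3)
sun    = 1.4e9        # diameter of the Sun
planck = 1.616e-35    # Planck length
proton = 8.4e-16      # proton charge radius, for reference

ratio = sun / sand                # ~1.4e13
quark_scale = planck * ratio      # "quark" size implied by the comparison

print(f"Sun / sand-grain ratio: {ratio:.2e}")
print(f"Implied quark scale:    {quark_scale:.2e} m")
print(f"Proton charge radius:   {proton:.1e} m")
```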
I don’t have much feel for quarks, but amoeba to the galaxy as PL to a hydrogen atom? Wow!!
July 7th, 2021 at 8:48 pm
Sorry for my leave of absence, Wyrd. Work got insanely busy, I took a little time away from computing, and took a road trip of some 1,200 miles each way to visit family and friends in the past several weeks, so I’ve not done much on WordPress of late. I know you’ve probably moved on, but I did want to just answer one question or two you asked.
You write, “I’m not clear on what you mean by ‘singular’ unless just that MWI is not the case? I have the impression you’re not sympathetic to MWI, so I can’t quite parse your meaning here.”
In your subsequent discussion you latched onto my point, so I probably don’t need to say much here. But I was suggesting that for any present, given the multiplicity of “possible paths” we can always reconstruct a deterministic path to it… but that doesn’t mean–to your own point–that we can predict it. So I’m suggesting that some sort of freedom is not incompatible with a view of a deterministic universe that is essentially saying, whenever we look back from a present, it sure as heck APPEARS pre-determined.
If there’s a myriad of plausible futures in any event, but only certain ones are actualized, that is how this might work… I think you fully understood this.
Regarding your notion that even in a continuous/smooth spacetime there are angles too small for reality to differentiate, I’m not sure I agree. Again, if everything must be defined mathematically with infinite precision to come into being… okay. But if reality is not a simulation, there could be an infinite ability to modulate any angle, which over vast distances of space (and time) would lead to different outcomes. It may be true we could not build a machine to give us infinite variation before firing a bullet into space, but I don’t think reality would be the limit. The limit may be the size of the atoms in our machine relative to the level of precision we need…
It’s an interesting thought experiment, but I only see precision as being an issue if reality has a computational basis in which numbers are being crunched somewhere to figure out what happens. As I don’t think that is the case, this doesn’t really bother me… 🙂
Michael
July 7th, 2021 at 10:09 pm
“Sorry for my leave of absence, Wyrd.”
No worries, Amigo, I just figured you were IRL. Hope you enjoyed yourself!
And I totally understand needing to get away. I took a four+ week break over April-May, and all year I’ve felt increasingly less interest in doing “computer stuff” which has reduced my blogging noticeably lately. At this point I’m on track for having one of my lesser years, postings-wise. (I just celebrated 10 years blogging but I had a hard time getting into it. IRL can be a challenge.)
“In your subsequent discussion you latched onto my point,…”
Yeah, it sounds like we’re largely on the same page here. Determined in the sense that everything has an identifiable cause, but not in the sense of being able to determine, even in principle, the future.
“Regarding your notion that even in a continuous/smooth spacetime there are angles too small for reality to differentiate, I’m not sure I agree.”
Okay, so not on the same page, there! 😀
As you say, there are definitely mechanical limits (like atoms), and that confounds any experiment we could do. Atoms are only about 10 orders of magnitude smaller than a meter. The Planck length is 34 orders smaller, and that would be an even harder limit on experimental results. I wouldn’t expect precision issues to be relevant for 100 or more orders, so this is strictly theoretical.
There is also, as Peter Morgan here suggests, that noise may appear at all levels, and this presents yet another limiting factor. All these limits, atoms, Planck, noise, almost certainly make precision issues a moot point.
That said, my contention beyond atoms, Planck, and noise, is that, even with smooth spacetime, although we can write such a thing down and calculate it (as in Mandelbrot deep zooms), reality may not actually allow that these two:
0.0000000000…00000000001
0.0000000000…00000000002
are meaningfully different. For one thing, there may be no way to prepare or embody such a minute difference. Indeed, given the physical limits, this really is a moot point. Fantasy physics! 🙂
At best it puts a bit more oomph into the notion that reality isn’t determined. Even if one could surpass the physical limits, the precision of reality might limit prediction of the future.
June 5th, 2021 at 8:01 pm
Hmmm. The Greek joke went completely over my head. 😗
I don’t know whether reality is deterministic. It does seem that operationally quantum mechanics is indeterministic. Even if that is an illusion due to world splitting, nonlocal pilot waves, or time symmetric processes, it’s still effectively indeterministic for us.
The question is whether neural operations isolate and utilize that indeterminacy. In some ways, it doesn’t matter since it already has plenty of sources of stochasticity that might swamp out any quantum ones. Neural circuitry seems to resist those stochastic influences, making critical pathways have thicker axons to increase signal-to-noise ratio, or making up for it with redundant connections or repeating signals. But it never eliminates it completely, probably because it’s adaptive to be mostly deterministic (you don’t want a random response when an opportunity to eat or mate arises), but not to devote enough resources to be completely so.
I personally don’t think this has any bearing on free will, in the sense that randomness doesn’t seem to provide virtue or guilt. But it seems clear people will be arguing about free will until the last embers of heat death. We may defy entropy and invent a special new universe just so the debate can continue. (I say as someone who does my share of free will debating. )
June 6th, 2021 at 10:19 am
Now I’m curious! Which part went too high? Did you think I thought the Latin reductio ad absurdum actually was Greek? I’m sure you didn’t think I actually thought it stood for “reduction has an absurd number of definitions”! 😀
Yes, MWI or not, our experience of QM involves probabilities, yet many still believe the classical world operates deterministically. To a rough approximation, it definitely does. My thesis here is that physical reality lacks the precision to be fully deterministic over time. In particular, in no way do conditions at the Big Bang determine what soup I pick for dinner. That is due to much more recent conditions impossible — even in principle — to predict 13.8 billion years ago.
To some extent I’m arguing that so-called super-determinism is a non-starter with me. It’s another theoretical physicist’s fantasy idea.
Free will isn’t really the topic of this post in that, if reality isn’t fully deterministic (which is what I’m arguing) then brains, in virtue of their complicated structure, don’t have to be fully deterministic.
That said, in terms of free will, my sense of it is that it doesn’t come from the low-level structure. I agree that’s evolved to be fairly deterministic. As you suggest, the brain’s function needs to be reliable. I think free will comes from the mind not the brain. It’s due to our ability to synthesize imagined futures and then select among them. I believe that process is noisy enough (see: Brain Background) that it might allow choice.
FWIW, my approach to free will turns on the question of what soup I pick for dinner given that I’ve decided to have soup. I have many varieties in my pantry, so it seems like a decision with very low cost or consequence — the ultimate freely made decision. As you say, it’s an endless and impossible debate, but what I want to try to figure out is, if reality could be wound back to that moment, is it possible I’d pick a different soup, or would I always make the same choice?
The meta-question is why it feels so much as if I could indeed pick another soup.
“I personally don’t think this has any bearing on free will, in the sense that randomness doesn’t seem to provide virtue or guilt.”
Exactly! Those are high-level notions of the mind!
June 6th, 2021 at 11:30 am
On reductio ad absurdum, I have to admit I didn’t brain enough to notice you were referring to a Latin phrase as Greek. 😌 (Something with work is stressing me out this weekend, so thinking is hard.)
I can’t say I find the precision argument, in and of itself, compelling. Any event, even if causally determined, wouldn’t be determined by conditions in one spot during the big bang, but by conditions throughout that event’s 13.8 billion year light cone. So your soup choice doesn’t need infinite precision in the early universe, because it has a huge range of causal events that can influence and make it a unique event. (Not that this demonstrates it was determined, just that the particular argument doesn’t convince me.)
But I also don’t find superdeterminism compelling. It seems like a conspiracy theory about the universe being arranged just right on vast scales just to give us correlations we shouldn’t have.
On free will coming from the mind rather than the brain, for me that’s essentially saying it’s coming from what the brain does rather than the brain itself. But then I’m a hopeless reductionist. 🙂
I think it feels like you could have picked another soup because we’re a system constantly simulating options, including what we’d do if we had the choice again. In other words, we’re evolved to think that way. But we can never have the exact same choice again, because, aside from the fact that it will never have the same light cone again, we and our immediate environment are hopelessly contaminated by the results of the first one. So there was never an adaptive benefit for it being natural to think the choice was inevitable. On the other hand, it is adaptive to mull over past decisions so we can learn from them.
June 6th, 2021 at 2:12 pm
Well, the main joke was misusing the phrase reductio ad absurdum. Calling it Greek was just a signal that I knew better. (In a novel I just read, an expert painter who created a forgery in another artist’s style used a shade of yellow (lemon) that was unknown at the time. To another expert it was a clear and intentional flag that the painting wasn’t genuine. No painter capable of the work would make that mistake.)
I don’t know if you watched the video, but it’s exactly the case you’re describing. Lots of particles, each with their own position and velocity vector. The precision argument applies to all of them. It’s simply not possible to specify initial conditions with enough precision to produce a pre-specified definite outcome in the future.
As Peter Morgan points out, just the noise of classical systems perturbs precision, so even if precise enough initial conditions could be set, noise would mess up the trajectories anyway.
Maybe another way to put it is that, firstly, there is a huge amount of noise, quantum and classical, at all points inside that 13.8 billion-year light cone leading to now. Various trajectory adjustments happen all along its length and width. Secondly (I’m arguing), initial conditions don’t have enough precision to allow prediction even absent the noise.
To say free will comes from the mind rather than the brain is simply to say it comes from, as you say, what the brain does rather than what it is. Free will, I think, is a property of the emergent behavior of the system. As you pointed out, abstract notions can be involved in our values and decisions. Free will won’t be found in the parts, but in the system as a whole.
Unfortunately, one reason the free will debate is endless (aside from both sides having good grounds) is that, as you say, our decision points can never be truly re-created to see if choice really is possible. I do think that’s the crux of the question, though. IF reality could be wound back…
I do come down on the side of thinking I could choose another soup, especially given the low-consequence nature of the decision. I think there can be genuine selection among generally equally weighted choices. But as I said in the post, it’s not easy to say exactly how. Obviously we agree our predictive ability is involved.
June 6th, 2021 at 9:08 am
My schtick over the last few years has been to compare classical mechanics with quantum mechanics by adopting Koopman’s Hilbert space formalism for classical mechanics. Since I’m new here as a follower of your blog, I’ll mention the most recent published article, “An algebraic approach to Koopman classical mechanics”, in Annals of Physics 2020 and also findable on arXiv. It’s not perfect —in particular, Section 7.1 is somewhat flawed— so there’s a followup article, “The collapse of a quantum state as a signal analytic sleight of hand”, so far only on arXiv and not yet submitted to a journal.
Which I might not have mentioned as quickly if the question of determinism did not depend, as I think it does, on the status of classical mechanics in our theories and how precisely we can compare it with quantum mechanics. Even for classical mechanics, there is a question of whether at all scales there is always Brownian or other noise bubbling up from smaller scales. That seems to me a more physical way of approaching your discussion of real numbers. If there is always such noise, then our finitely constructed models will always have to be probabilistic-statistical (and we will have to discuss when statistics are compatible or not and when joint probabilities ought to be generated by our theories or not; I’ve also been saying to people that if there is noise all the way down then we have to worry about the axiom of choice, which will make a big mess of some aspects of the mathematics.) To echo the two very interesting comments above, the question also seems more clearly empirically undecidable but also irrelevant if we keep to a purely classical context: if we discover a scale at which there is apparently no noise whatsoever (10⁻¹⁰⁰ meters, perhaps?), we could never be sure whether there might be a much, much smaller scale at which there is noise (10⁻¹⁰⁰⁰ meters!?!), but for the purposes of the scale we can reach with our current experiments it would make no difference.
Since you have invoked Greek, Google translate gave me this, “μείωση σε σύγκριση”, which at the scale of three words is probably a reversible transform.
June 6th, 2021 at 11:09 am
Hello Peter Morgan, welcome to my blog!
I quite agree about noise. In fact, I’m surprised anyone believes in the Newtonian mechanistic determined universe anymore. At the lowest level, Heisenberg and Planck and the probabilistic nature of QM. On top of that, as you say, all the noise of classical mechanics at its lower levels. (And maybe on top of that, that reality simply doesn’t have the precision for large-scale prediction.)
It messing with the axiom of choice is an idea I haven’t heard before. You’re suggesting that, at some point, the noise removes the if not false, then true; if not true, then false assumption? I wonder if we’re not already kind of there in the quantum world with superposition (which I see as one of the big puzzles of QM).
My understanding, FWIW, is that (quantum) reality becomes more and more noisy as the Planck limit approaches. Lots of virtual particle activity. At the Planck length (about 10⁻³⁵ meters), it’s a soup of activity. But it’s not possible to ever see below that because length is thought to have no meaning. Or that the energy required to probe beyond that limit creates a black hole.
Regardless, I’m among those who believe in a Heisenberg Cut, that the classical and quantum worlds have some divide, and I do think length scales are part of the Cut. The extremely small, definitely quantum. The big noisy hot world, definitely classical. The fuzzy dividing line,… an even bigger puzzle, but one I believe we’ll solve. The noise of the classical world, I’m sure, plays a role.
I ran your Greek phrase through Google Translate and got: “reduction compared”
When I try to translate reductio ad absurdum as Latin to English, the translation is: “reductio ad absurdum” 🙂
June 6th, 2021 at 4:07 pm
I typed into Google translate “Reduction to comparison”, so, no, Google translate is not a reversible process even for three words that it generated itself. Haha.
If there’s incompressibly infinite information in the world, which I suppose there would be if there’s noise from smaller scales at *every* scale, then I suppose the axiom of choice is in the game. That given, measure theory, at least, is problematic. It’s not that I can prove anything, but that I think there’s a case to discuss and that most people who declare that determinism is obvious haven’t thought through how they would do the mathematics in detail if we have to work with non-differentiable or generalized function spaces or more elaborate mathematics.
Exactly or even approximately what happens at the Planck scale and other unification scales is a big ?-mark to me. We don’t have good enough theory to make any big claims. Taking the idea of virtual particles seriously also seems to me similar to taking the idea of epicycles seriously: they’re just a way to coordinatize a complicated trajectory and we should be looking for other ways. Equally, metric geometry is only one way to model gravity: there’s also torsion and non-metricity (see “The Geometrical Trinity of Gravity”, which is open access, http://dx.doi.org/10.3390/universe5070173), either of which might play more nicely with QFT or with random fields.
The Koopman approach allows us to reconceptualize the Heisenberg cut, I think. One significant aspect is that because we present both CM and QM in a common mathematical formalism of Hilbert spaces we can distinguish cleanly between quantum noise, which is Poincaré invariant, and thermal noise, which is not. Secondly, we can think of quantum mechanics as an analytic form of a suitably extended classical mechanics, and we can present any system using either the CM or the QM approach. To me, the shared Hilbert space formalism is much clearer than the usual momentum space and Wigner function approach, but of course different people find different ideas more or less to their taste. I think of both in a more-or-less engineering way as probabilistic/statistical signal analysis formalisms, because out of our experiments come lots of signal lines into our computers and we store in computer memory a compressed form of the signals out of them: CM and QM can both be thought of as just Hilbert spaces and fourier and other transformations as models of both the signals and the algorithms we apply to analyze them.
These ideas are still too new to have been tested much, however, and they could easily turn out to be crazy but not crazy enough, paraphrasing Bohr of Bohm’s interpretation.
June 7th, 2021 at 8:51 am
I’ve done that before — translate something in GT and then translate the translation back (to English) — and the results are usually pretty hysterical. Definitely not a reversible process!
Your phrase “incompressibly infinite information” (besides appealing to my love of alliteration) offers a view of what I’m getting at about precision. The current state of the universe — all the information about its current configuration. Can all that information be found in initial conditions at the Big Bang, or does it also represent 13.8 billion years of physical (analog) computation?
I think the latter. I think new information is created along the way. (And because information can be created, I’m not down with its supposed conservation under QM. I’ve begun to suspect unitarity is a myth.) I argue there is no way all the current state information could be compressed into BB conditions.
“Taking the idea of virtual particles seriously also seems to me similar to taking the idea of epicycles seriously”
Could be. I’ve often wondered if we’ve come up with an epicycles theory in QM, if we’ve found a mathematical formalism that comes so close to working that we’ve just fully bought into it. I am something of a quantum reconstructuralist in thinking we might need a different approach.
Virtual particles, as I understand them (which isn’t saying much), are a consequence of Heisenberg and QFT. They’re random fluctuations in the various QFT fields. The notion of two particles, one the anti- of the other, springing into existence and then merging, annihilating, and vanishing, is mythical. (Especially with regard to the whole black hole event horizon thing.)
I can see the attraction of a mathematical model that unifies QM and CM. One question I have is the extent to which your model requires an interpretation to match it to physical reality. One reason I’m skeptical of QM being a fully correct answer is that it requires interpretation. We have math that seems to work (but so did epicycles) but no understanding of what that math means. I’d be very interested in a theory that brings meaning along with it. We really do need some sort of “Copernican revolution” in fundamental science.
June 7th, 2021 at 3:17 pm
I’ll suggest a description of an interacting quantum field: A free field is constructed as a collection of measurements, say φ(x), which in principle we then deform to give us a different field of measurements, ξ(x)=U(x)*φ(x)U(x), where the U(x) are constructed using the free field measurements at different points, φ(y), in such a way that various constraints are satisfied. So the measurement ξ(x) would be a complicated pattern of measurements done together. I’ve given it a different interpretative tilt, in terms of measurements instead of in terms of “particles”, but that mathematics is all standard. It goes off the rails, however, because the conventional constraints/axioms, which have been largely unchanged since the 1950s, are such that there is no well-defined interacting field in 3+1-dimensions: regularization and renormalization try to finesse those constraints, from which various ideas about scaling have emerged.
A measurement, however, is also an opportunity to modulate whatever Poincaré invariant vacuum state we start with, where we can apply ξ(x) as a complicated pattern of measurements, so we get a different modulation than we would get if we applied φ(x): we can think of the deformed modulation either as an interacting particle having been added at x or as a complicated pattern of free particles having been added at x and everywhere in space-time in a way that tries to balance those various constraints. In this new mathematical system, however, we only have the ξ(z) at different points z to measure what modulation(s) have been applied to the vacuum state, we can’t measure and modulate using the φ-measurements. The ξ(z) are often called “dressed” particle operators, but perhaps it’s obvious I would prefer to call them “dressed measurements”.
I’ve been working on trying to see past the conventional axioms for over 20 years. It’s a total mess, but exactly how to clear up the mess is certainly tricky! I tried to write something about that, but for now it gets far too technical and, more to the point, too idiosyncratic and perhaps too wrong. There are quite a few technical details I’ve omitted from the above account, but I hope it nonetheless gives some sense of why I think the idea of a “virtual particle” might be unhelpful.
I gave a talk at IQOQI in March, “The measurement problem in a signal analysis perspective”, the first 20 minutes of which is non-technical. The other 40 minutes you might or might not find useful. There’s no question this doesn’t rise to the status of being a new interpretation of QM yet, but it suggests a different enough approach to QM and its relationship with CM that I think it’s worthwhile anyone thinking through why it does or does not make sense to them.
June 8th, 2021 at 11:20 am
It’s probably pretty obvious, but I’ll mention that I’m no physicist nor mathematician, but someone with an avid interest in both. I’m a retired software designer with a life-long interest that dates back to before quarks. In my old age I’m trying to learn the math to take this all to the next level. (FWIW, my QM-101 series of posts is my “homework” and reflects my self-study so far.) I mention this because the water is definitely getting over my head here. 🙂
I did grab your paper off arXiv as well as the gravity trinity paper you linked. I’ll chew on them more later, see what sense I can make of them. I do find the math a bit challenging, but it’s a process. I’ve learned a lot in the last couple of years.
Anyway, I was really struck by the idea of “expanding classical mechanics to include noncommutative operators” — in contrast with the common assertion that a key difference between QM and CM is that CM measurements are commutative. The idea, as I understand it, is that CM has no wave-function to collapse, so measurements don’t alter anything.
The idea of noncommutative CM is very interesting. What causes it to be noncommutative? You link measurement to modulation (a new idea for me). Is this modulation what perturbs the system? Are we talking only small scale CM? Our direct large-scale experience seems commutative.
Your preference for “measurement” over “particle” also caught my eye. My crude understanding is that “particles” are wave packets in quantum fields, and those wave packets can be quite spread out. But they interact in specific point-like locations. I’ve always imagined interaction was a measurement that localized position. If you prefer the notion of “dressed measurement” to “dressed particle,” does that mean things are still clothed? I thought the “clothing” was the virtual “particles” so what is it in “dressed measurements”? (Or have I dreadfully misunderstood the whole thing?)
I found your YouTube page, and will check out your videos when I get a chance. (I have kind of a long TODO list.) I very much agree about “comparative QM” — I tend to see all interpretations as partial views of something we’re not yet seeing clearly. Like different witnesses describing events seen through a fog. Both our biases and lack of clarity are a problem. I’m glad to know there are well-trained people trying to break new ground!
June 8th, 2021 at 3:05 pm
I’ll take your “The idea, as I understand it, is that CM has no wave-function to collapse, so measurements don’t alter anything.” and run with it. Not everybody does, but I think of the wave-function as just a tool for modeling the average of the results of measurements that we model using an operator. In classical physics we have the idea of a state that serves the same purpose: state-m and measurement-n models or predicts the average of the results of those measurements MR-mn. A bit fancier than that, we can use a state and an operator to generate a probability distribution for a given state and a given measurement.
We naively expect that if we have two probability distributions then we’ll be able to construct a joint probability distribution, but even in the 19th Century Boole knew that isn’t always possible. Here’s the thing: commuting operators let us generate joint probabilities, but noncommuting operators don’t. On the other side, for a given joint probability we can always find a state and commuting operators that generate it.
But in QM, we use noncommuting operators and collapse of the state to generate joint probabilities. That’s just mathematics. I think we can say that’s “No physics!” Furthermore, because we’ve generated joint probabilities, we could have used two commuting operators to generate them and a different state, with no collapse. In the old way of speaking, that’s classical, even though it’s out of QM. We can also happily use that mathematics in CM, if we find it useful to do so.
Collapse of the wave-function, in this perspective, is just a consequence of using noncommuting operators where it might have been convenient, but we didn’t really have to. There are other circumstances where noncommuting operators are essential, however, when we obtain two probability distributions that don’t have a joint probability distribution, but that’s just the same for CM as it is in QM. I often say that CM without noncommutativity is a straw man.
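To make that concrete, here’s a minimal numpy sketch of my own (a single qubit; nothing taken from AlgKoopman): the naive candidate for a joint probability, Tr(ρPQ), is a genuine joint distribution when the projectors commute, but it isn’t even real-valued when they don’t.

```python
import numpy as np

rho = 0.5 * np.array([[1, -1j], [1j, 1]])        # a qubit state, (|0> + i|1>)/sqrt(2)

Pz = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # sigma_z projectors ("up", "down")
Px = [0.5 * np.array([[1.0, 1.0], [1.0, 1.0]]),  # sigma_x projectors ("right", "left")
      0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])]

def candidate_joint(rho, A, B):
    # The naive "joint probability" p(i, j) = Tr(rho A_i B_j).
    return np.array([[np.trace(rho @ a @ b) for b in B] for a in A])

# sigma_z paired with itself (everything commutes): real, nonnegative, sums to 1.
print(candidate_joint(rho, Pz, Pz))
# sigma_z paired with sigma_x (noncommuting): complex entries, not a probability at all.
print(candidate_joint(rho, Pz, Px))
```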
————————————
You’re right that the clothing of a “dressed particle” can be thought of as made of the virtual particles, but my understanding is that “dressed particles” can’t be undressed. I think the analogy of free particles being like circular orbits is pretty close here: I think we can’t take the epicycles out of the orbit.
I think we can usefully ask about how we make as sure as we can that we can control where events will happen (I won’t say that we try to make particles go where we want them to go, thereby causing events, essentially because talking about particles so often gets us in trouble). We try different kinds of preparation apparatuses and different kinds of measurement apparatuses and we see what events happen and compute statistics of those events. When from those many statistics we have a good idea of the state from those many perspectives, we can try a new measurement apparatus with the many preparation apparatuses and decide how best to model it using a new measurement operator.
In this more-or-less operational approach, there are no particles, just statistics of recorded events. People often don’t like operational approaches because they can’t see what’s happening, but everything is about imagining what would happen if we were to change some detail very slightly. We don’t have to do a new experiment if we move a mirror by a fraction of a micron: we can work out how the statistics would change. We engineer the statistics of events, not particles.
This must be too much!?! As you perhaps can tell, I find it very helpful, for me, to riff somewhere between the idea in my head and the ideas in someone else’s. When you get bored, just say so or go quiet and I’ll go bother someone else or else I’ll happily talk amongst myselves.
BTW, I forwarded your next post to my wife and she’ll look up the Jonathan Gash in the library. She hadn’t heard of him, but she’s something of a fan both of antiquing and of murder mysteries, so it could work well for her.
June 8th, 2021 at 5:03 pm
Well, here you’ve found someone who enjoys chatting at length and is fascinated by QM, so not too much at all. Given the difference in our experience levels, you may end up fielding a lot of questions, though. Certainly if you’re looking for someone with their own (somewhat untutored) ideas, you’re in the right place, pull up a stool.
(I’ve already learned something useful. WordPress supports LaTeX, and some do use it to insert math inline, but in posts for exponents I usually just use the HTML <SUP> element. The problem is that WordPress strips it out of comments. If one knows one’s audience, one can write 6.62e-34 or 10^-12, but otherwise exponents in WP comments have been a sore point. Your first comment here showed me a way around that, at least for digit exponents and signs. I’m sure I’ve seen those Unicode characters, but it never occurred to me I could insert them in comments. It’s an option I’m delighted to have.)
I agree the wave-function is epistemic. (It can be a bit of a red flag to a bull, but I can’t resist writing posts about why the MWI doesn’t make any sense to me.) Your reply kinda answers a question I had about whether your view extended the notion of wave-function to macro objects. Also about the role of collapse. (I am very skeptical the idea of wave-function is meaningful for macro objects.)
I’m a “for instance” kinda guy, so I’d like to try to make my understanding of this concrete with an example. Spin measurements intrigue me because the incompatibility varies depending on the angle. As I understand it, we can think of the state being measured as a superposition of the same-axis and orthogonal-axis eigenstates, with coefficients depending on the angle from 0° to 90°.
In the case that we first measure 0° (say we follow the “up” particles), then 90° (which gives 50/50 statistics “left/right”), and then measure 0° again, we find 50/50 statistics again. The second measurement erases the first. I had always viewed this as distinctive of QM over CM. Under CM rules I’d expect to always get “up” on the third measurement.
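(Just to put numbers on my own understanding, here’s a quick sketch using the usual textbook spin states and the Born rule; nothing specific to your approach:)

```python
import numpy as np

up    = np.array([1, 0], dtype=complex)       # spin-up along the 0-degree (z) axis
down  = np.array([0, 1], dtype=complex)
right = (up + down) / np.sqrt(2)              # spin eigenstates along the 90-degree (x) axis
left  = (up - down) / np.sqrt(2)

def prob(outcome, state):
    return abs(np.vdot(outcome, state)) ** 2  # Born rule

print(prob(right, up), prob(left, up))        # 0.5 0.5 -- the 90-degree measurement
print(prob(up, right), prob(down, right))     # 0.5 0.5 -- measuring 0 degrees again
```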
I don’t know how to apply that to what you said about joint probabilities. As with many, on my mental map, the statistics territory is labeled “There Be Dragons Here!” (Which has been a real impediment to trying to understand either Everett’s or Bell’s famous papers.)
What’s had me pondering virtual particles lately is the g-2 experiment. Firstly, I got to wondering about size scales, how isolated, or not, electrons and their virtual clouds are. It’s complicated because they aren’t particles at all. Secondly, I have strong ontological leanings, and I keep trying to visualize some skeletal idea of what’s “really” going on (if [A] anything can be said to be “really” going on and [B] if human intuition can come anywhere close to it; two big IFs, I know).
To me, electrons are almost like globular clusters — big fuzzy energy disturbances in the quantum electron field. I reason that the energy of that wave-packet couples to other fields perturbing them enough to produce clothing. (And, yeah, no real electron goes naked.)
My ontological side does hope for more than what our instruments tell us, but I was very taken by an example due to Philip Ball. It’s the game of 20 Questions with a twist. The group giving the answers has not decided on an object. The gimmick is that each answer must accord with all previous answers in picking out at least one possible object. Ball notes that the questioner sees each answer taking longer and longer (because the person answering has to find a qualifying object). It’s a metaphor for the notion that reality doesn’t have specifics until we ask it questions.
I hope your wife enjoys Lovejoy. He’s a bit of a piece of work.
June 8th, 2021 at 7:56 pm
”(I am very skeptical the idea of wave-function is meaningful for macro objects.)” The idea of a state for a macro object makes complete sense. What I think is in question is whether we can perform something like a fourier transform measurement in hardware for a person. Section 7.3 of AlgKoopman is kinda whimsical, but it points out that we can perform an “is the cat dead or alive?” measurement, and we can perform a “can the cat be resuscitated/killed?” measurement, but performing both at the same time is tricky. The eigenstates of “is the cat dead or alive?” are clearly dead and alive, but the eigenstates of “can the cat be resuscitated/killed?” are something like half-dead±half-alive, which I think are not nonsense if we think “can the cat be resuscitated/killed?” is not nonsense.
The sequence of slides in the talk at IQOQI is so graphic that I don’t think I can do justice to a conversation about the Bell inequalities until you’ve had a quick look at the first 20 minutes. I found it very helpful to have a close look at what was once the best experiment around and to see how we can talk about it if we focus on signal analysis and events, the selection of subsets of the actually recorded data, and the various algorithms that are applied to compute statistics. Some people have found the slides telling. Also, you can meet Frank, who some people have even quite enjoyed. I emphasize, however, that I personally don’t care much whether we thread faster-than-light communication or superdeterminism or whatever else into our models, because I think our models are massively underdetermined by the experimental data. Given that I think a plethora of models is possible, I’ve felt happy enough not obsessing over one in particular.
Philip Ball’s 20 questions example is one of the best, but it’s not statistical. My view is more that when we set up an experiment that is constructed to produce many millions of events per hour, of course there will be statistics. How those statistics are arrived at may not be accessible, but we can think of it as enough that they are arrived at. Perhaps there’ll be an almost perfect frequency, but the timings of events are more likely to be somewhat random: that still means, however, that there will be an average rate, standard deviation, and higher moments, et cetera. When we change anything about the experiment, there will be a small change of those statistics. We’ll be able to tell, after a few years of data, that someone substituted a titanium bolt for a brass bolt, say, because of slight resonances in the statistics over time. We can perform an equivalent of an MRI of the whole apparatus, but, like an MRI, it takes time to collect enough data and the environment has to be sufficiently controlled. Obviously this is different from what you’ll find in books about QM, because in those there are particles all the time, even though we know for sure that talking about particle properties often doesn’t work at all well; the events that might be naively supposed to be caused one-for-one by particles are barely mentioned!
If we record the signal levels out of dozens of devices, giving us terabytes of data, all that data is jointly collected, effectively as single data points for trillions of measurement operators. We could say that those trillions of operators do not commute and there’s a collapse of the state after every single measurement, but because all those measurements are jointly collected we could also say that the state was effectively a classical state and all the measurement operators commute. The first picture, with collapses, is effectively Heisenberg’s version of the Copenhagen Interpretation, whereas the second picture, without collapse, is effectively Bohr’s version of the Copenhagen Interpretation: recorded results are classical. Single data points for each of trillions of measurement operators don’t give us very good statistics(!), so we have to apply algorithms that consolidate those data points into subensembles that we think are similar enough that we can model every entry in the subensemble as measurement results for the same measurement operator. We have to keep track of whether the selection of a given subensemble is compatible with the selection of all the others or not.
We can certainly fill in between the single-data-point data, satisfying our “strong ontological leanings” and our desire for “trying to visualize some skeletal idea of what’s ‘really’ going on”: we can have models that fill in between the terabytes of recorded data when we think of them as single data points for an equal number of measurement operators. But after we have applied some complicated set of algorithms that consolidate that data into large numbers of samples associated with relatively few measurement operators that don’t commute, we can’t necessarily fill in between the results of those algorithms. Of course models that consistently fill in between single data points for trillions of measurements in the presence of significant noise are wildly underdetermined by the recorded data, so we should be careful not to take any particular model too seriously, but if it helps the imagination enough, it seems to me OK.
It’s perhaps worth saying that I think this places all the noncommutativity in the algorithms we use to analyze the actually recorded signal levels, each of which is effectively a transformation of the terabytes of recorded data. Much of what those algorithms do is effectively nonlocal, and they may subtly encode an a priori ontology — which is OK if so, but it’s good to understand what assumptions have been encoded. I should also mention that I’ve recently come to think that the above account can inform Everett’s relative state interpretation, which I never thought I would hear myself say; however, I think the MWI is just wrong. The idea of worlds splitting, so that measurement results are different in different worlds at later times, is AFAICT contrary to the mathematics of QM when there’s only unitary evolution of the state over time (in which case both worlds would have to give exactly the same result for the same measurement, for every measurement).
June 9th, 2021 at 2:52 pm
I want to chew on this one a bit more, but a couple of quick responses: I do agree some specific cat can have the macro eigenstates |dead〉 or |alive〉, but I’m not sure they are meaningful. Per your reply, perhaps “not useful” is a better term. It just seems that all the participating terms in the wave-function would make it impossibly mixed.
Now I’m not quite sure what to make of eigenstates such as |can-be-killed〉 or |can-be-revived〉. There seems something of a basis problem there. The “can” could lead to so many possible measurements (can be petted, can be found, can be sold,…).
I quite agree MWI is just wrong. My reasons have evolved over time. My main objection used to be about energy. How could reality branch in light of E=mc²? Sean Carroll speaks of energy “thinning out,” which I just can’t buy at all. First there is one cat, now there are two. How is that possible with cats, let alone, as the theory implies, entire universes?
There seems to be another interpretation: that there were always two (infinite, actually) cats, but they were identical cats until one infinite bunch diverged from the other infinite bunch. Other infinite bunches missed the experiment for various reasons, and all sorts of things happened to other infinite bunches. Anyway, there’s no branching of universes, just an infinite number of them, each evolving its own wave-function, coinciding with others until they don’t.
I take your point about unitary evolution. I guess the idea is that tiny differences in earlier conditions cause the wave-function to eventually diverge in its evolution. MWI theories are implicitly deterministic and don’t include free will. The idea, as I understand it, is that each world’s wave-function has different starting conditions. The ones “closest” have nearly identical starting conditions and, presumably, evolve the same way until a certain cat either dies or doesn’t. I suspect MWI and superdeterminism theories go hand-in-hand.
Multiple cats get around the energy issue, and other issues with worlds actually splitting, but open entire cases of cans of worms on their own. My current question: How does physical reality coincide? The canonical answer seems to be “decoherence” but I don’t see how that applies.
Anyway, yeah, MWI is just wrong. (I don’t think superdeterminism is, as they say, even wrong.) 🙂
More later.
June 9th, 2021 at 4:06 pm
The way the math works, if we can perform noncommuting measurements on different subensembles, then with enough measurements we can distinguish between mixtures and superpositions. I don’t put much weight on my discussion of cats, which is why I pass it off as whimsical, but that noncommutativity is what does the real work.
“there were always two (infinite, actually) cats, but they were identical cats until one infinite bunch diverged from the other infinite bunch.” You modify that slightly later on, to say that different cats “have nearly identical starting conditions” (my emphasis).
I think that comes out almost exactly the same as my own position: essentially just a many-worlds interpretation of classical probability and Liouvillian evolution of a classical probability, so that chaos sometimes makes two worlds that are almost but not exactly identical now quite far from identical after even a short time. All worlds that are different in any way whatsoever are different, and it’s the leveraging of whatever that difference is that we eventually notice. As a mostly-empiricist, I don’t think classical many worlds is necessary, but if it makes some people happy then OK. Conservation of energy is OK for the Liouvillian evolution of classical probabilities.
I think superdeterminism is more-or-less coherent, but if noise and the axiom of choice enter into the game then the mathematics is a seriously wild mess.
June 10th, 2021 at 7:22 pm
A quick clarification: by identical cats I mean their histories (down to the particle) are identical up to the point the mechanism kills one of them. The worlds the cats inhabit have nearly identical starting conditions, but some slightest of slight differences causes the radioactive sample to decay in one but not the other. I think that’s a more sensible reading of MWI than the one promoted by Sean Carroll where one cat branches into two. (Or Carroll himself splits when he uses a quantum coin app to tell him whether to jump left or right during his evangelistic talks about MWI.)
What’s vexing is that it seems the wave-function can work that way. (One reason I’m trying to learn the math of QM is to better understand this situation.) In lectures I’ve seen visualizations of a particle tunneling through a barrier. In those, the wave-function, after tunneling, “branches” into descriptions of two probable locations for the particle, one that tunneled, one that bounced.
When a photon’s location is split by a half-silver mirror, is the proper wave-function description of one photon that branches to having two probable locations? Or is it a superposition that describes two paths for the photon from beginning to end, paths that are the same until they diverge at the mirror? Or are those just two options for describing the situation?
Everett’s description, to me, seems to suggest a superposed quantum system interacts with a measuring device causing a superposition of possible measurements, and this spreads to the scientist and Wigner and the world. One cat becomes two. But calling it the “universal wave function” makes me wonder if I’m misreading that.
Thinking about the dual description and the idea that some initial starting condition must differ to create the divergence, I’m not sure, under MWI, that completely identical worlds can’t diverge given the probabilistic nature of QM. (And perhaps CM!) Couldn’t completely identical worlds diverge by collapsing differently? A photon with a 50% chance of passing through a filter does in one (set of) world(s) and not in other set(s)?
June 11th, 2021 at 8:54 am
“What’s vexing is that it seems the wave-function can work that way.” I think the point is that there is noise —something is unaccounted for in a statistical model— but there can nonetheless be some measurement results that are 100% correlated. 100% correlation is just a statistic that can happen, so we have to be able to model it when it does happen, which we can; when it happens in the results of an experiment, then it is notable.
Even more, a particular 100% correlation may only happen if we apply very complicated algorithms to what we might call the raw data of an experiment. I think that as soon as we notice an algorithm that consistently gives us 100% correlations, we look for a hardware way to apply that algorithm —because that’s an important algorithm!— and then that new hardware produces what looks like a qualitatively different kind of raw data, because of that 100% correlation.
That hardware, however, conceals a complete morass of noise. A modern computer achieves a very small level of error by applying error correction at all scales, concealing that noise played a large part in the computer operating at all. It doesn’t achieve 100% correlations for what goes into memory and what comes out, but it gets very close. The engineering needed is remarkable.
From a computing point of view, I suppose a Hilbert space operator just takes an input Turing machine tape, a vector, and produces an output Turing machine tape, in one step. I suppose that any algorithm that can be guaranteed to terminate, software or hardware, can be represented that way. Some of those vectors represent 100% correlations of some measurements with others, but there are other measurements that would be barely correlated at all if we performed them. And thus, I think, the game of measurement and modeling evolves.
Commenting at this lowest level is getting slightly awkward. I have to scroll up a long way to the comment that has a “Reply” button. What to do? Just shutting up for now is an option!
June 12th, 2021 at 11:18 am
I have comment indent level set to three, so WordPress doesn’t offer a Reply link on third-level comments. (A deeper comment level just means things get really narrow.) One trick is to just start a new thread at the bottom, which is what I’ll do here. That way I can merge what’s branched into two sub-threads.
Some start a new comment thread every time they reply. That works fine, especially in cases like this where it’s just us two chickens.
June 8th, 2021 at 11:41 am
After reading the first page or so, would I be anywhere close in saying it looks like extending wave mechanics into the classical regime and Fourier analysis accounts for the noncommutative nature (as I understand it does in QM)? Or am I way off the mark?
June 8th, 2021 at 1:50 pm
That’s not far from it, but there’s a weird mix of lots goin’ on in the mathematics, yet at the same time that lots-goin’-on is a consequence of actually very few assumptions. One of the assumptions in my work is that we have no choice but to work with probabilities and statistics because there is noise. Probability theory, with all measurements compatible, can already be counter-intuitive, but on top of that we add measurement incompatibility, which puts us into a mathematical theory called quantum probability. That’s been around for about 50 years, but the assumption has never shifted from the idea that measurement incompatibility is not classically natural.
As your QM-101 series says, transformations don’t always commute: move and rotate is not the same as rotate and move. It’s always said, however, for the last 90 years and counting, that everything in CM is commutative and that we only have noncommutativity in QM. If we’re talking about probability and statistics, then commutativity goes hand-in-glove with measurement compatibility and no entanglement; noncommutativity goes with measurement incompatibility and entanglement. If we add noncommutativity into CM, then I think a lot becomes subtly less confusing, but we are in a more elaborate mathematics, so it isn’t gonna be easy.
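The move-and-rotate version is easy to see in a couple of lines (just geometry, nothing quantum; my example, not yours):

```python
import numpy as np

def rotate_90(p):                   # rotate a point 90 degrees about the origin
    return np.array([-p[1], p[0]])

def move_right(p):                  # translate by one unit along x
    return p + np.array([1.0, 0.0])

p = np.array([1.0, 0.0])
print(rotate_90(move_right(p)))     # move, then rotate: [0. 2.]
print(move_right(rotate_90(p)))     # rotate, then move: [1. 1.]
```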
One of the most important transformations in QM is the Fourier transform, but it hardly makes an appearance in CM, largely because we don’t run time-series analysis in CM. In contrast, Fourier analysis is used all the time in classical signal analysis, but probability theory makes hardly any appearance, because there’s just the signal. We can lay noncommutativity at the door of the Fourier transform, but it’s only when we talk probability theory as well that we get to a classically natural mathematics that comes close to QM. I’ve seen demolitions of the idea that the Fourier transform in signal analysis —what you can see in the four YouTube videos I link to in AlgKoopman (which is the abbreviation I use for that paper)— is like QM, and I think those demolitions are kinda right, but they’re also too quick.
Where CM becomes very like QM is when we run Fourier analysis on probability distributions. We don’t have to do time-series analysis to screw with our heads; we just have to do probability distribution analysis.
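As a small taste of the flavor I mean (purely my own illustration, not anything from the paper): take a classical probability density, make it narrower, and its Fourier transform gets wider, which already has the shape of an uncertainty trade-off.

```python
import numpy as np

x = np.linspace(-50, 50, 4001)

def spread(weights, grid):
    # Standard deviation of a (normalized) weight function over a grid.
    w = weights / weights.sum()
    mean = (grid * w).sum()
    return np.sqrt(((grid - mean) ** 2 * w).sum())

for sigma in (0.5, 1.0, 2.0):
    density = np.exp(-x**2 / (2 * sigma**2))              # a classical probability density
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(x), x[1] - x[0]))
    phi = np.abs(np.fft.fftshift(np.fft.fft(density)))    # magnitude of its Fourier transform
    print(sigma, spread(density, x), spread(phi**2, k))   # narrower in x means wider in k
```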
June 8th, 2021 at 3:35 pm
Okay, I think I follow. It’s an intriguing notion! I like the idea of a single mathematics, a single physics, that describes reality.
(Do you worry that “more elaborate mathematics” is any kind of red flag? Not that it need be, reality is what it is, and perhaps your work might simplify as it comes together? I’ve noticed that in software development, code size grows with function, up to a point where, suddenly, its essence becomes clear, and then code size often decreases despite added function. BTW: I also appreciate the way you bracket your work with caveats. Most of us internet crackpots are so sure we’re on to something!)
That’s an interesting point about rotation. It’s kind of the canonical example of noncommutative operations, isn’t it? (Or the one about getting dressed. Underwear last doesn’t work so good.) I take it that QM is advertised as noncommutative, in part because of the Fourier duality, but also because measurement “collapses” the wave-function, creating the measurement incompatibility. The canonical example there being spin or polarization measurements.
(With position/momentum it seems more obvious to me a Fourier duality is behind the incompatibility, but spin measurements “feel” different to me; I don’t see the time-frequency duality there. That may be due to my mathematical ignorance, though.)
Now that you say it, it is kind of odd that signal analysis is all about Fourier transforms, but they don’t otherwise show up that much in most CM while they’re fundamental in QM. I guess we just don’t see CM as wave-like. (Ironically, my next post is about the de Broglie wavelength of real world objects.)
I take it from something you mentioned about unitarity in your theory that you don’t share any of my skepticism about the conservation of information? (Not that I’ve met many who do. Roger Penrose seems to, so I don’t feel I’m completely crazy on that point. Seems that it can be created, so why not destroyed?)
June 8th, 2021 at 5:03 pm
The “more elaborate mathematics” was always there in CM, it just was not noted. The extra is just the Poisson bracket, which is what Dirac noticed to have a similar structure to the commutator in Heisenberg’s work. What has been done for the last 90+ years has been to try to force the Poisson bracket and the commutator into correspondence, which definitely doesn’t work. There are theorems. What does work is that we can take the purpose of the Poisson bracket in CM to be to generate transformations like rotations, and then equate those transformations to the transformations of QM. That’s the difference of starting point between what’s called geometric quantization and the Koopman approach.
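For reference, the correspondence Dirac noticed, in its standard textbook form for a single degree of freedom:

```latex
\{A, B\}_{\mathrm{PB}}
  = \frac{\partial A}{\partial q}\frac{\partial B}{\partial p}
  - \frac{\partial A}{\partial p}\frac{\partial B}{\partial q},
\qquad
\{q, p\}_{\mathrm{PB}} = 1
  \quad\longleftrightarrow\quad
[\hat{q}, \hat{p}] = i\hbar,
\qquad
\{A, B\}_{\mathrm{PB}}
  \quad\longleftrightarrow\quad
\tfrac{1}{i\hbar}[\hat{A}, \hat{B}].
```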
I know what you mean about bloat because I was a computer programmer myself until about 1989, when I dropped out. I hope some of the bloat will drop out if/when such ideas become commonplace: philosophy of science sometimes talks about the difference between the context of discovery and the context of explanation, and I think there is also a context of what we might call proliferation, when the subtleties of not overclaiming but still explaining why a new approach has some advantages over older approaches can be somewhat eliminated because that battle, over years or decades, has been fought and won. If hundreds of physicists and engineers put their minds to how to teach some new stuff to their students, there will be a rapid evolution towards different approaches for undergraduates, graduates, and others. New textbooks, yay!
You’re right about spin measurement being different. My take on it is definitely speculative. Suppose you could measure the electromagnetic field at ridiculously small scales (such as 10⁻⁶⁰ meters, say) with such great accuracy that you could tell the difference between an electron and a proton, and even between those and a neutrino and a neutron, et cetera, because of their different electric currents, which cause different patterns in the EM field: in that case, you hardly need measurements of the electron and proton fields, including of their spins. In classical physics, this would just be to eliminate some degrees of freedom from the equations of motion (which would seem too much if we still thought CM and QM are hopelessly different, but perhaps not if we have a clearer path between them.) In signal analysis, this is just to deduce what various measurement results would be —if we were to measure them, even though we didn’t— from the measurement results we actually have: an MRI doesn’t measure spins, it measures the EM field in the ring, then computationally reconstitutes an image of what the EM field would be measured to be in the body, as a density, possibly with different colors for different kinds of resonance. In practice, a model of atoms and molecules as up-or-down or left-or-right spin measurements, et cetera, will often be a useful way to organize a calculation and a display of the information, but in principle I think it is not absolutely necessary that this be how we think about the state and its measurements.
There are other stories that can be told about spin in the world of QFT, which is quite different from the very practically useful but not necessarily fundamental ideas of spin that one finds in the kinds of 2-dimensional Hilbert space models one uses in quantum computation, but that gets to be a very long story indeed.
I’m pretty sure the difference between signal analysis and CM is that CM works with a line working its way through phase space over time. There’s also something like the Heisenberg picture of CM, in which the state doesn’t change at all but what we measure changes over time. A closed system requires perfect isolation of the state from the outside world, whereas signal analysis is specifically about an electromechanical device that responds to its environment so we can decode what is happening in the surroundings of the device. [I’d better emphasize that I almost always think in terms of Hamiltonian CM.] The kind of CM I’m proposing models the outside world as a particular kind of noise, the Gibbs thermal state, which I think puts us more in the signal analysis world.
I should add that there is a huge difference between thermal noise and quantum noise in this perspective, but that the difference can only be discerned when we discuss quantum fields. Crucially, quantum noise is Poincaré invariant, whereas thermal noise is not, but the concept of Poincaré invariance requires at least 1+1-dimensions for us to be able to define it. I take it that a Poincaré invariant noise is perfectly comprehensible for a classical physicist. Just throwing it out, but we can think of quantum computation as noise engineering: I think of that as a higher order mathematics than analog computation with, say, electrical currents in an electronic circuit, for which we try to eliminate noise as much as possible.
If there is noise all the way down, then information is infinite, in which case questions about whether there is more or less information are about differences between infinities. If the universe is truly finite —which we can’t possibly determine from experiment and who knows— then OK, but otherwise, Whoops! If indeed whoops, then information can still be useful if there are natural discretizations available, which atoms can give us in pragmatic ways, but I think in principle it’s problematic as hell. Even if the universe is finite, if that finite size is, for the purposes of our current measurements, hundreds of orders of magnitude away, then we will still have to work with whatever natural discretizations we can find and the concept of information in practice will be fraught. So I think you can think of me as in something like your camp, but perhaps not for the same reasons?
I feel like I’m bouncing off the walls a little here with so many analogies in play, but hey, thanks for asking good questions.
June 8th, 2021 at 5:09 pm
It’s been fun; you’ve given me some tasty food for thought. I gotta go IRL for a while, but I’ll be back tomorrow.
June 9th, 2021 at 2:14 pm
Your comments about students and teaching (and quantum reconstruction in general) remind me somewhat of posts Stacy McGaugh has been writing over on Triton Station with regard to dark matter and MOND. The issue there is a view that DM must exist, it’s just a matter of finding it. That blinds many to the latest evidence and thought.
Education is slow to shift sometimes. (Yet oddly over-reactive in other ways.) I suppose a problem with both education and science is the degree to which politics and social biases apply. Or money. Conservative thinking often opposes both education and science. (But that’s a whole other conversation.)
I couldn’t keep up with your speculation about spin and electric currents. I get the sense you’re saying spin isn’t a physical characteristic of quantum systems, but I’m sure I’m misunderstanding. A lot of my thought is linked to the Stern-Gerlach experiment with silver atoms in a magnetic field. I agree there are multiple ways to formalize that mathematically, but the physical behavior fascinates the ontologist in me.
Therefore, talking about such small size scales struck a chord. I was a fan of string theory when it first took off (I bought into Brian Greene’s hype), but over the years I’ve lost faith in it (and Brian Greene). But I’ve imagined that if something along the lines of string theory was true, that might give “particles” a physical orientation (down at the string theory scale) that shows up as spin.
As I mentioned in the other thread, the thing about spin that impresses me is what happens with multiple measurements, the way each measurement seems to affect — “collapse” — the wave-function. I imagine the magnetic field used in the spin measurement might align that physical orientation. I know Dirac (I think? or was it Pauli?) calculated quantum spin couldn’t be physical because it would require spinning faster than light speed, but was he thinking down at string theory scales?
I do take what I believe to be your point about measurement statistics. The three results we get for each specific particle are assigned probabilities based on myriad similar test results, so there’s an implicit assumption about reality built in. FWIW, I do think we build a reasonable model of reality from repeated tests. (But like I said, maybe I’m not keeping up with your point here.)
I’m perceiving two distinct key ideas here, one about the role of noise, the other unifying CM+QM. A third aspect involves a signal analysis approach. I find the noise ideas attractive, and I’ve mentioned I’m on board with quantum reconstruction. Signal analysis isn’t something I’ve had much exposure to, so that’s the most opaque part for me. After I get a chance to check out the video and paper I might be able to come up with something intelligent to say.
A fellow coder! I went from being a hardware guy to being a software guy. Retired a bit early when The Company stopped taking custom corporate software seriously. I was a tailor and they were increasingly buying off the rack. The “open office” bullshit was the final straw…
June 9th, 2021 at 4:44 pm
“the way each measurement seems to affect — “collapse” — the wave-function”. Since “The collapse of the quantum state as a signal analytic sleight of hand” is not published, it’s only on arXiv, it doesn’t have the same status as AlgKoopman. But if I make a grand claim for it anyway, what it says is that collapse is an artifact of a particular way of modeling an experiment. That’s fine, it works, but there’s another picture in which there’s a different state and different measurements and there are no collapses. I’ve been claiming that this is not unlike the difference between the Heisenberg and Schrödinger pictures, which apply an evolution to the measurements in the first case and to the state in the second case. That’s just mathematics, so I think we can’t easily just dismiss it, but of course we can suggest perhaps quite divergent ways to think about and to use that mathematics. Strangely, I think that mathematics is vaguely implicit in Bohr’s thinking, but it’s also explicitly enough like some mathematics done by Belavkin in 1994 that Richard Gill pointed me to it.
The account I give above for an MRI —as a purely electromagnetic way to detect the spin precession of atoms that are perturbed by a relatively large EM field and also radio frequency perturbations— seems to me in retrospect to be a moderately potent argument that we can talk just about EM measurement. Then we can deduce information about spin properties from those EM field measurements. Of course when I say we measure the EM field, what I really mean is that the EM field induces currents in an electronic circuit, which we then record in computer memory as a number (which, if it’s on a hard disc, is just aligned magnetic spins, which we can detect because of its effect on the magnetic field in the hard drive head, so there’s a lot of chicken and egg in such an account!)
I’ve never been a string theory guy. Whenever I’ve tried to read the papers and textbooks, I’ve always been repelled by how easily they leap into (difficult) mathematics without enough understanding of QM in particular.
I hope you find those 20 minutes of the IQOQI video worthwhile. I will certainly be interested in any response.
June 10th, 2021 at 7:39 pm
I may still be missing the point; I quite agree we can use EM to measure spin. (I think even a Stern-Gerlach device would qualify as such?) What impresses me is what happens when we do multi-stage measurements. Even performing the experiment on a single particle, the third measurement has a 50% chance of being different from the first (identical) measurement.
A question I have about a signal analysis approach to test data is forest and trees. For example, it would be possible to analyze the signal characteristics of an RF signal in myriad ways, none of which need include that the signal content is a sweet jazz sax solo. My dim conception here is of treating test data as signal to be analyzed. Is the sax solo lost at all in this analysis?
June 11th, 2021 at 8:24 am
I think the point is that Hilbert spaces of high enough dimension and operator algebras that act upon them are general enough to model whatever statistics we could possibly calculate for any actually recorded experimental results. Restricting ourselves to only probability densities that admit joint probability densities prevents us from modeling some kinds of statistical analysis. What nature does is amazing and amazingly complicated, so we have to do whatever is necessary to model it and our relationships with it in useful ways. I suppose. Some of what we do might make us think we really understand some aspect of nature, but I find that a few days or years later I start to see that something or a lot is missing and humility comes in with a bang.
I think losing the sax solo in just simple-minded Fourier analysis is absolutely a problem. If our minds can concentrate on perhaps half a dozen things at a time, however, I think it’s good to focus on having those half dozen things be at multiple scales. The overall feel of the sax solo is fine as one level, but I suppose part of what makes a performance really tingle is noticing as well the fingering of a particular sequence, the expression on the soloist’s face, and something of how the rest of the audience is responding; knowing that the wider world is raving about the previous night and knowing something about Jazz as a whole can step beyond the single event. Reducing a forest to just its component trees is not a completely successful strategy, but never taking a more detailed look at the forest leaves us with no appreciation of the trees. “Multi-scale analysis” is about trying to get at different aspects of huge and large and small and tiny.
June 12th, 2021 at 12:25 pm
“What nature does is amazing and amazingly complicated, so we have to do whatever is necessary to model it and our relationships with it in useful ways.”
Totally agree! Tracing back, this sub-thread seems to have started when I distinguished spin measurements from position/momentum measurements, and that came from talking about noncommutative measurements. Which came from how such are an oft stated supposed difference between QM and CM.
At root for me seems the notion that QM measurements alter the thing measured (“collapse” the wave-function), while CM measurements don’t (because there isn’t a meaningful wave-function to collapse). Spin measurement experiments seem to demonstrate this nicely in allowing multiple measurements on a quantum system. (Beam-splitter experiments usually measure just once.)
[For me there is also the question of the ontology behind multi-stage measurements. Is there something like an actual spinning string? Or a non-spinning string with a wave running along it for spin? Does the spin measurement — however it’s done — change some physical aspect of the (in this case) silver atom? The math matches experiment, but what is it describing?]
Returning to the present, as you go on to say, reality has a way of humbling our best ideas. I very much agree science is a contingent process that seeks to converge on an understanding of the patterns we observe. I think we both see QM as not quite ready to come out of the oven.
“If our minds can concentrate on perhaps half a dozen things at a time, however, I think it’s good to focus on having those half dozen things be at multiple scales.”
Totally agree again! People can walk and chew gum at the same time. (While texting!) And, as you go on to say, the more context and background one has, the richer the experience.
“100% correlation is just a statistic that can happen, so we have to be able to model it when it does happen, which we can; when it happens in the results of an experiment, then it is notable.”
I think I’ve lost the thread here, or just am not keeping up. We’d wandered into the MWI, and perhaps it isn’t particularly relevant here.
The thing that impresses me about spin or polarization experiments is the physicality. It isn’t (as at CERN, for instance) a matter of picking out statistical patterns from a very noisy background. One can see non-classical behavior with just three pieces of polarizing filter. (In fact, I think that’s an under-appreciated experiment. To me it’s as mind-blowing as two-slit experiments.)
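(The arithmetic for the three-filter demo is tiny. Assuming unpolarized light in, the first filter passes half, and Malus’s law, cos(θ)^2, gives the rest; the same numbers apply photon by photon as probabilities:)

```python
import numpy as np

def malus(theta_deg):
    # Fraction transmitted by a polarizer at angle theta to the light's current polarization.
    return np.cos(np.radians(theta_deg)) ** 2

first = 0.5                                # a first filter passes half of unpolarized light

crossed  = first * malus(90)               # filters at 0 and 90 degrees: essentially nothing
sandwich = first * malus(45) * malus(45)   # slip a 45-degree filter between them
print(crossed, sandwich)                   # ~0, 0.125
```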
It’s interesting what you go on to say about computer hardware. Indeed. Circuits are designed so transistors are run in saturated mode, either all on or all off. Capacitors dampen voltage fluctuations. At small scales and fast cycle rates, RF coupling is a problem, and at really small size scales, quantum issues arise.
It helps highlight something I’m vaguely trying to get into focus… Computers can be built at larger, more physical scales that effectively banish noise simply due to scale. A mechanical calculator, for instance, is almost physically incapable of accidental error. Per Church-Turing, such machines would be equivalent, in principle, to noisier machines.
With computers it’s the information patterns that matter; that’s what underlies C-T, the information patterns. Now there’s a whole thing about dualism with digital computing, and the real world is different (noise and all), but what I’m fumbling for is the idea that the patterns in our measurement data are telling us (I believe real) things about the physical world.
Our conceptions of reality may be only a kind of wireframe model, but I do think that model reflects something real. The correlations in the data are telling us something.
Great conversation! A lot to think about!
June 12th, 2021 at 8:08 pm
”At root for me seems the notion that QM measurements alter the thing measured (“collapse” the wave-function), while CM measurements don’t (because there isn’t a meaningful wave-function).” Right, but a “state” (a better, more general name than the wave-function, IMO) tells us the results of measurements only relative to a particular choice of operators to describe those measurements. Thus, ρ₁(M₁₁), ρ₁(M₁₂), ρ₁(M₁₃), … can give the same results as ρ₂(M₂₁), ρ₂(M₂₂), ρ₂(M₂₃), …, even though ρ₁ and ρ₂ are different and the measurement operators are different as well. The classic example is the Schrödinger and Heisenberg “pictures” of the unitary dynamical evolution, the first of which changes the state over time while the measurement operators stay the same, the second of which changes the measurement operators over time while the state stays the same.
A collapse picture of measurement dynamics changes the state every time a measurement happens, but leaves the measurement operators unchanged, whereas a no-collapse picture has a single unchanged state but chooses different, mutually commuting measurement operators to represent the same measurements. [This is a new enough realization for me that I’m struggling to give a clear discussion of how the mathematics works; furthermore, it’s not something in the literature as far as I know, except vaguely in a paper by Belavkin from 1994, so I can’t tell you which book to look at. That aside, …] To say the above slightly differently, the mathematics of “collapse” of a state is exactly what is needed for us to be able to construct joint probabilities, but joint probabilities are exactly what we need for us to be able to give a classical description, using a collection of mutually commuting measurement operators.
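Going back to the Schrödinger/Heisenberg example for a moment, that sort of equivalence is easy to check numerically with a throwaway script; there is nothing special about the random choices, it is just the cyclic property of the trace:

```python
import numpy as np
rng = np.random.default_rng(0)

d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U = np.linalg.qr(A)[0]                              # a random unitary
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
rho = np.outer(psi, psi.conj())
rho /= np.trace(rho)                                # a random pure state
M = rng.normal(size=(d, d))
M = M + M.T                                         # a random observable

schrodinger = np.trace(U @ rho @ U.conj().T @ M)    # evolve the state, not the operator
heisenberg  = np.trace(rho @ (U.conj().T @ M @ U))  # evolve the operator, not the state
print(np.allclose(schrodinger, heisenberg))         # True
```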
Suppose we throw two dice and we always get a double: we would say that those results are 100% correlated. We don’t know how it’s done, but we do know how to describe that circumstance. We have a probability distribution p(x,y)=1/6 if x=y, otherwise zero.
Now we throw the same two dice, but we only look at one of them. We have a choice for how we model this experiment: if we see a 5 for the one die, we can “collapse” the probability distribution for the other die to say p(5)=1, otherwise zero; or we can work with the probability distribution for the two dice unchanged, and just note that the result of the first die is 5. These are, I think, different pictures of the same situation, which can be more or less useful in different experiments. It’s not clear that one is right and the other wrong, but I think the empirical content of the different approaches can be made to come out the same when we perform thousands of experiments, which is arguably what matters in physics. Different ideas of what probability is “really” about might prefer one or the other picture, and might have different consequences for what future experiments we think it would be interesting to perform; however, I don’t see a way to justify one or the other picture from experimental results alone.
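In code the two pictures of the dice look like this (a toy, obviously):

```python
import numpy as np

# Two perfectly correlated dice: p(x, y) = 1/6 if x == y, otherwise 0.
p = np.eye(6) / 6

# Collapse picture: die one showed 5, so condition the distribution for die two.
collapsed = p[4] / p[4].sum()            # row for the face "5"
print(collapsed)                         # [0. 0. 0. 0. 1. 0.]

# No-collapse picture: keep the joint distribution unchanged and just note the result.
print(p[4, 4] / p[4].sum())              # P(die two shows 5 | die one showed 5) = 1.0
```

Same predictions, different bookkeeping.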
If for a given preparation for an experiment we repeatedly measure what we call spin up-or-down followed by spin right-or-left, followed by spin up-or-down, we will obtain a given set of correlations and other statistics for that set of three measurements. We can write down a classical state that generates the statistics of those three joint measurement results. We can also write down a different state that changes systematically after the first measurement and again after the second measurement, which gives the same joint statistics. They’re very different theoretical pictures of the same world. There’s a lot to be said in favor of the “the state changes” approach at one level, but when we look at details of the measurement device, in which signal analysis looks at the output of every device picosecond by picosecond and it’s not just about a 0/1 result, I think it can also be helpful to work with a state that models correlations in a relatively simple way instead of with a state that changes.
Does that seem helpful? Thanks for pushing me to think more about spin measurements, in any case!
June 25th, 2021 at 7:48 am
[…] A quick note about the idea from a recent post about how earlier stages of the universe can’t contain enough information to specify everything about the current state. (See: Is Reality Determined?) […]
June 27th, 2021 at 7:48 am
[…] In my view it grounds at a higher level — I think free will emerges from how the brain functions, from our minds. Regardless, it’s a fascinating paper that I’ll post about in the near future. (It aligns with the idea about information I raised in my post Is Reality Determined?) […]
July 2nd, 2021 at 8:50 am
“There is no experiment that proves quantum measurements to be FUNDAMENTALLY probabilistic.”
All experiments demonstrate that it’s effectively probabilistic. The mathematics says it’s theoretically probabilistic. Until there’s a better mathematics, that’s about as fundamental as it gets. Regardless, we currently have no way to predict the outcome of a quantum measurement except probabilistically. Further, both theory and experiment suggest that definite values (eigenvalues) don’t exist until we measure them.
Consider a simple experiment that measures the spin of an electron using Stern-Gerlach devices. The experiment has three stages, each with its own S-G device. Stage 1 and stage 3 measure the Z axis (vertical) while stage 2 measures the X axis (horizontal). We could do similar experiments with photons and polarizing filters.
Stage 1 gives 50/50 results, spin-up/spin-down. We direct the spin-up output into stage 2, which also gives 50/50 results, spin-left/spin-right. We direct the spin-left output into stage 3, which measures vertical again. Note that we’re measuring the particles previously measured as spin-up. But we again find 50/50 results. The stage 2 measurement erased the stage 1 measurement.
But if the particle had definite spin, shouldn’t the stage 3 measurement only produce spin-up results? If stage 2 is skipped, and spin-up electrons are sent to stage 3, it only produces spin-up results, which shows how stage 2 erases the first measurement.
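(If it helps, here’s a quick simulation of the three stages. I’m just sampling with the standard spin-1/2 rule that the probability of agreement between measurements along axes differing by Δθ is cos(Δθ/2)^2; nothing deeper than that:)

```python
import numpy as np
rng = np.random.default_rng(42)

def p_agree(delta_deg):
    # Spin-1/2 rule: probability the next result agrees, for axes differing by delta_deg.
    return np.cos(np.radians(delta_deg) / 2) ** 2

N = 100_000                                            # spin-up electrons leaving stage 1
goes_left = rng.random(N) < p_agree(90)                # stage 2 at 90 degrees: 50/50
up_again = rng.random(goes_left.sum()) < p_agree(90)   # stage 3 back at 0 degrees
print(up_again.mean())                                 # ~0.5, not 1.0: stage 2 erased stage 1

print(p_agree(0))                                      # skip stage 2 and stage 3 agrees every time
```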
“So, you take the view that physics is non-local and relativity should be thrown away, this is what you are saying?”
Not at all. I said explicitly that, while locality is true, what’s called local realism is not in QM. Special Relativity, of course, is also true; entanglement does not violate the notion that information cannot be exchanged outside the light cone. There is no way to leverage entanglement to send information FTL, which means causality is also preserved.
But we are indeed left with the very weird situation that a wave-function can be spread out spatially and changes to that wave-function are monolithic.
“Relativity forbids two space-like events to cause each other.”
Sure, but that’s not what’s happening. It’s not that measuring particle A then causes something in particle B. It’s that particle A and particle B share a wave-function and are two parts of the same single thing.
“This seems to be an unjustified assertion. Where did you prove determinism to be wrong?”
I was referring to tests of Bell’s Inequality. Determinism as a general proposition is an open question. I believe it isn’t true, you obviously believe it is true. Certainly at a gross level reality seems deterministic, but how true that is at lower levels isn’t known. At the lowest level, like it or not, quantum events appear probabilistic.
“Just take a look at the abstract [of Bell’s seminal paper]:”
And Bell immediately continues: “In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty.”
“So, you need hidden variables to restore locality, without them QM is non-local.”
Yes, in this sense QM is non-local, but since it cannot be leveraged causally, reality remains safely local per SR. Hence we say locality is preserved, but local realism is not.
July 5th, 2021 at 10:56 am
If things were as simple as you believe, everyone would be on the same page and there would be no dispute. Since there is dispute, I think things can’t be that simple.
“Yes, the pair behaves as a rigid rod. And rigid rods are forbidden in relativity.”
Agreed, but I think that’s the wrong intuition; there is no rod, there is only the two particles, which are described by a single wave-function. Think of it more, perhaps, as a wormhole linking them. (Wormholes are not forbidden by relativity.) Relativity isn’t violated because there is no travel distance and no travel time.
Further, there is no way to leverage the connection, so there can be no causal violation.
Indeed there are frames in which Alex is seen to measure first and frames in which Blair is seen to measure first. But as these are presumably events with space-like separation, and random to both Alex and Blair — and thus to passing observers — I don’t see a contradiction.
When Alex and Blair measure orthogonal angles, their results are random and not correlated at all, right? Passing observers would see nothing paradoxical about seeing those results occur in any order, agreed? If Alex and Blair measure non-orthogonal angles, they don’t know that, so their results still appear random to them, and thus to passing observers. It’s only in comparing those results that a correlation (cos(θ)^2) appears.
So all passing observers in any frame see is Alex and Blair making measurements and getting random results. Any correlation they note works regardless of who appeared to make the first measurement. Presumably there is some fact of the matter, say it was Alex who measured first in the particle’s frame of reference, and thus Alex who collapsed the wave-function to a known eigenstate. But that is indistinguishable from its appearing that Blair measured first.
“The magnetic field forces the electron’s spin to align with it.”
I agree the measurement changes the spin property, but disagree on the nature of that property. My first question is: What classical thing is spinning? As I understand it, Pauli calculated that anything physical would end up spinning faster than light. Perhaps more importantly, why is spin quantized and fixed? Spin rates cannot be changed. This doesn’t seem like classical behavior.
My second question is, given a second spin measurement at 10° to the first, don’t classical and quantum probabilities disagree? Classically, if 0° is 100% correlation and 90° is 0%, then unless there’s reason for non-linearity, there should be about an 89% correlation in results? But the quantum probability is cos(10°)^2 ≈ 97% correlation.
Classical and quantum probabilities agree at angles of 0°, 45°, and 90°, but differ at other angles. (Unless there’s some reason that classical probability isn’t linear?)
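(Tabulating my naive linear guess against the cos(θ)^2 rule, just arithmetic:)

```python
import numpy as np

for angle in (0, 10, 22.5, 45, 67.5, 80, 90):
    linear  = 1 - angle / 90                    # straight-line interpolation from 100% to 0%
    quantum = np.cos(np.radians(angle)) ** 2    # the QM correlation
    print(f"{angle:5}  linear={linear:.3f}  cos^2={quantum:.3f}")
```

They match at 0°, 45°, and 90°, and the direction of the disagreement flips at 45°.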
So I just don’t think spin is that simple.
July 6th, 2021 at 11:16 am
“It’s not about beliefs here. The argument is simple, it does not contain any advanced math or unclear assumptions. If many physicists are immune to logic it’s their business.”
And on that note I think I’ll recuse myself from further discussion. I think the conversation has gone as far as it’s going to.
July 2nd, 2021 at 9:32 am
“there is no contradiction between the existence of a deterministic world and our inability to perform infinitely precise measurements.” I believe Wyrd is referring to the problems we get if at every scale there are more details at a smaller scale, which causes something like Brownian motion at the larger scale. This gets the math of classical mechanics into deep waters: it’s not necessarily impossible to model such a state of affairs perfectly, and hence deterministically, but I think it requires much more sophisticated mathematics than Newton or Hamilton, and it might not be possible. It seems to me that Brownian or similar motion bubbling up into every scale is not different from fundamental randomness, and there’s no way to determine whether there is or isn’t a scale at which there’s no more bubbling up from smaller scales: I’m not saying there is or is not such randomness, it’s just that I don’t know how to be not a skeptic both about claims that there is and about claims that there is not.
By “nonlocal correlations”, I more mean the kinds of correlations one finds in a classical lattice at thermal equilibrium. Such correlations drop off rapidly with increasing distance, either close to inverse polynomially for a massless interaction or close to exponentially for massive interactions. This is the kind of nonlocality one finds in the vacuum state of a quantum field theory, which is formally Poincaré invariant. This also relates to your comment to Wyrd, that his “view is non-local and violates relativity.” There’s at least one type of nonlocality that can be understood to be not incompatible with relativity, although we can define “relativity” not to allow any nonlocal correlations.
The way I think about experiments that violate Bell-EPR inequalities and other weirdnesses is that we engineer the vacuum state, which already has its nonlocal correlations, to have more pronounced nonlocal correlations in just the right way. In signal-analytic terms, I think of the vacuum state as something that can be modulated. It’s not that easy to do, but it can, apparently, be done.
I think perhaps you’re reading too much into the instrumentalism that you rightly do see? Just because I’m allowing the records of the signals out of the apparatus as the only formal physical aspect doesn’t mean that I’m closed to you or anybody else imagining whatever there is for which we have no “instrumental” records. This is no different from projecting a 3d object onto three or four different 2d sheets and extrapolating from those to what a similar projection would be onto any of the infinite number of 2d sheets for which we haven’t done the measurements. So you’re correct, but also, I think, not. I find the really difficult trick is doing this stuff in a mathematically compelling enough way that physicists can’t ignore it while also not losing track of intuition. Just as tricky is finding worthwhile things to say about the weirdnesses that don’t upend how we naturally think about the world too much. Thanks for your comments here, which I’ll try to use if I can think how to.
July 5th, 2021 at 9:17 am
“we do have a perfectly fine deterministic theory, classical electromagnetism” — but it’s only OK up to a certain point. The theory of matter that we have is an essential addition, and it’s not just about point particles. Whether we like it or not, our better theories do include a probabilistic theory of noise.
I’m afraid I have to translate your comments into my own ways of thinking, which are essentially field-theoretical, with, as I say, a probabilistic model for noise. I’m not saying I’m right to have been thinking that way for the last 20 years and more, but that’s how I’ve been trained and how I’ve trained myself to think. When people say “particle”, “electron”, “proton”, … (which many, many people say, not just you!), I translate that into signal traces on oscilloscopes and sudden transitions of those signals that trigger a record being made of an “event” (almost always by a computer, at MHz rates, with humans involved in the design and building of the experiment, but not at all in its routine running). Although, as I say above, I can see that there’s a “between” that hasn’t been recorded, and we can have an informal conversation about it, there’s a formal physics sense in which I see only the records of events; I do not see particles. I feel more-or-less comfortable with this choice because saying that particles travel from one place to another gets people into such twisty places, such as de Broglie-Bohm-type trajectories in configuration space (which are OK, and I even think in such terms sometimes, but they’re mostly not what I work on).
What I am willing to talk about formally, because it changes what experiments and engineering we try next, is what measurement results we expect we would record if we tried a different experiment: if we moved this or that piece of an apparatus by a few nanometers or by a few kilometers, would the recorded results change by a lot or by a little? Accommodating arbitrary refinements, this gives us a continuum of how recorded results would change as we introduce various changes (perhaps with some discrete changes of response, but we can’t in practice change an experiment continuously, so we can’t ever be sure whether what appears to be a discrete change of response would become continuous if we changed the experiment in a more refined way), so we end up with a field theory of what recorded results we expect for a continuum of different apparatuses.
Because of the math in my various papers (which is mostly just derivative of work by much better mathematicians, but in some cases it’s interesting enough that it has been published: I’m both inside and outside the academic box), my understanding is that quantum probability is a natural extension of classical probability, so it seems to me, as you say, not so weird. I think much of the math is contrary to simple intuition, however, at least to mine, so that explaining in detail why quantum this or that is natural has been for me a labor of many years. Even after so many years, I can see many reasons why my work is not and may never be compelling, for all that it sometimes seems passable work to me. Writing comments that take perhaps an hour on blogs and on Facebook helps refine ideas, but it’s only as grist to the mill of writing over months and years much more considered papers that, hopefully, help physics as a whole move forward by tiny increments.
On the Bell inequalities, I more-or-less agree with this comment of yours above, “The original argument, that there is something unexpected about Bell’s correlations is wrong”, except that I think I would put it less stridently. There’s a 1987 paper by Landau in the literature that points out that everything can be understood to be about measurement incompatibility. You can see the first page of that paper here, https://www.facebook.com/peter.w.morgan.98/posts/10217711122076280, together with some responses. The question came up yesterday here: https://www.facebook.com/per.arve/posts/10220058540630828
July 7th, 2021 at 9:35 pm
ROFL! It really cracks me up how often synchronicity pops up in my life. This video from PBS SpaceTime just dropped, and it couldn’t possibly be more relevant here:
July 9th, 2021 at 8:01 am
“the fact that there are deterministic theories that do not have the problems you suggest seems to imply that one cannot formulate a general argument against determinism in this way.” I agree with that, and indeed the de Broglie-Bohm theory is something of an example, but I think it’s not only what can be constructed that’s at issue. We also have to consider what is tractable and useful and, even more demanding, what we can use for engineering purposes and what gives an effective “intuition” (which I think of as being about imagining what experiment we might try next that might have a useful or interesting result). I think it’s there that deBB falls down: engineers use Hilbert spaces because they play nicely with other engineering tools, whereas the mathematics of deBB doesn’t do enough to pay its way (and, I think, it doesn’t in practice do quite enough for the intuition either; in contrast, the MWI is said to be perhaps the most effective for intuition in the quantum computing game, which I’ve always struggled with but have slightly come to see in the last 6 months).
At the level of signals, and signal/noise, which is where I think the details play out, there is noise in any serious boundary-pushing experiment, and having an effective noise model of some kind is essential. We can use probability densities (which deBB does, so it’s deterministic but also probabilistic), or we can use stochastic methods (Wiener processes, Itô calculus, …), or whatever other formalism is useful. I have a personal preference for what seems to me the relatively clean mathematics of probability, but it’s not hard and fast for me, and I’ve seen people like Christof Wetterich achieve quite compelling results using stochastic methods.
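As a minimal sketch of the stochastic style I mean (an Ornstein-Uhlenbeck process integrated by Euler-Maruyama; every parameter here is illustrative, not taken from any of the papers discussed):

```python
# Euler-Maruyama integration of an Ornstein-Uhlenbeck process,
#   dx = -k·x·dt + σ·dW,
# a standard Wiener-process noise model; the increments dW are Gaussian
# with variance dt.  Parameters are made up for illustration.
import math, random

def ornstein_uhlenbeck(x0=0.0, k=1.0, sigma=0.3, dt=0.01, n=1000, seed=42):
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment
        x += -k * x * dt + sigma * dW        # Euler-Maruyama step
        path.append(x)
    return path

path = ornstein_uhlenbeck()
var = sum(p * p for p in path) / len(path)
print(f"final value: {path[-1]:+.3f}, sample variance ≈ {var:.3f}")
# For comparison, the stationary variance of this process is σ²/(2k) = 0.045.
```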
“In other words, it is assumed that a classical measurement of a classical system always gives you the corresponding property of the system. But why? I am unaware of the existence of any proof that this has to be the case.” This is to me a slight misstatement of the issue, but it’s close. It’s essentially the argument of “An algebraic approach to Koopman classical mechanics” that it is classically natural to use noncommutative operators, and that to deny them sets classical mechanics up as a straw man. Once we admit noncommutative operators into classical mechanics, we also introduce the interpretational problems associated with noncommutativity; however, I personally think they look just different enough to allow us to understand the mathematics more clearly. Unfortunately, it’s probably the case that I’m far enough out on the limb of my mathematical abilities that I can’t explain that understanding very well, and of course I may be wrong to think that I understand anything, as so many people have been before.
Measurement incompatibility (precisely: two observed probability distributions do not admit a joint probability) goes hand in hand with operator noncommutativity, though the correspondence isn’t exact. We can use operators that do not commute, in a sophisticated way (“collapse”), to generate a joint probability (that’s the math in my “sleight of hand” paper, which is arguably standard in the quantum measurement theory literature); and, if the state happens already to be an eigenstate of one of two operators, then we can also construct a joint probability (when there are more than two operators, the conditions get complicated). In the discussion section of the sleight of hand paper, I try to give some sense of why I think any probabilistic description will sometimes need to use noncommutativity, which is often conceptualized in the literature as “contextuality”, but I think there are numerous ways to approach the mathematics of applying multiple arbitrary transforms to recorded experimental data.
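For concreteness, here’s a tiny numerical check of the noncommutativity in question (my own illustration, using the Pauli matrices that model spin measurements along x and z; this is not the math of the “sleight of hand” paper itself):

```python
# The Pauli matrices σx and σz, which model spin measurements along x and z,
# do not commute: the order in which they are applied matters.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sx = [[0, 1], [1, 0]]       # σx
sz = [[1, 0], [0, -1]]      # σz

xz = matmul(sx, sz)
zx = matmul(sz, sx)
print("σx·σz =", xz)        # [[0, -1], [1, 0]]
print("σz·σx =", zx)        # [[0, 1], [-1, 0]]
print("commute?", xz == zx) # False: no single joint assignment reproduces both
```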
I think I can almost agree with most of this last comment of yours, but for wide acceptance I think the first step is making the fine details of the math work out in a principled way, to the satisfaction of the physics community. I think I’m not getting it right enough, but I’m getting feedback (including, but not only, that some of it has been published in a reasonably good physics journal, though not in Nature, …) that makes me hopeful I might be getting it just right enough that someone good can clean it up.
July 29th, 2021 at 11:54 am
Here’s a great video from Numberphile with a number of illustrations of chaos:
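The video does it better, but even a minimal logistic-map sketch (parameters chosen only for illustration) shows the sensitive dependence on initial conditions that chaos is all about:

```python
# Iterate the logistic map x → r·x·(1 - x) in its chaotic regime (r = 3.9)
# from two starting points that differ by only one part in a million.
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.200000, 0.200001
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}:  a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.6f}")

# Within a few dozen iterations the two trajectories bear no resemblance to
# each other, even though they started a millionth apart.
```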
September 6th, 2021 at 8:29 am
[…] Lastly, I’ve begun to wonder if reality not being a continuum — not having infinite precision — doesn’t imply a non-deterministic universe. […]