In the last four posts (Quantum Measurement, Wavefunction Collapse, Quantum Decoherence, and Measurement Specifics), I’ve explored the conundrum of measurement in quantum mechanics. As always, you should read those before you read this.
Those posts covered a lot of ground, so here I want to summarize and wrap things up. The bottom line is that we use objects with classical properties to observe objects with quantum properties. Our (classical) detectors are like mousetraps with hair-triggers, using stored energy to amplify a quantum interaction to classical levels.
Also, I never got around to objective collapse. Or spin experiments.
The basic motivation for objective collapse theories is to preserve the central role of the Schrödinger equation during measurement by extending its mathematics to account for the nonlinear jump of the state vector.
The objective part is that the wavefunction collapses (localizes to position eigenstates) for concrete physical reasons. Factors such as elapsed time, gravity, or the environment are key to why it does so. Large objects collapse, evolve briefly, and re-collapse constantly and rapidly. Quantum systems that interact with large objects collapse quickly under the large object’s influence.
Spontaneous collapse theories are a subset with an emphasis on time. Wavefunctions collapse spontaneously after some period of time (a little bit like a half-life decay). Collapse is also environmentally dependent, so isolated systems decay slowly. (All these theories, of course, must match what we observe experimentally. They need to account for why tiny, isolated systems can be quantum but why large systems can’t.)
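The half-life flavor of these theories can be sketched numerically. In GRW-style spontaneous collapse models, each particle carries a tiny localization rate, so an object of N particles localizes at N times that rate; that single assumption is what lets microscopic, isolated systems stay quantum while macroscopic objects collapse almost instantly. A toy calculation (the rate constant below is the commonly quoted GRW ballpark; everything else is purely illustrative):

```python
import math

# Toy GRW-style spontaneous collapse: every particle independently
# localizes at a tiny rate LAMBDA_0, so an N-particle object localizes
# at rate N * LAMBDA_0 (the more parts, the shorter the "half-life").
LAMBDA_0 = 1e-16  # collapses per second per particle (GRW's ballpark figure)

def mean_collapse_time(n_particles: float) -> float:
    """Expected seconds before any particle in the object localizes."""
    return 1.0 / (n_particles * LAMBDA_0)

print(mean_collapse_time(1))      # ~1e16 s: a lone particle stays quantum
print(mean_collapse_time(1e18))   # ~0.01 s: a dust grain localizes quickly
print(mean_collapse_time(1e23))   # ~1e-7 s: a macroscopic object, instantly
```

One parameter, and the quantum/classical divide falls out of sheer particle count.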
I like the basic idea of objective collapse theories. I think they help explain how the classical world emerges from the quantum one. I lean more towards ideas about object mass and gravity (plus the environment) than towards ideas involving time. I like the Diósi–Penrose model. Its link with gravity (the “missing link” of quantum) intrigues me. (The notion of a wavefunction “half-life” is attractive, though. Uncertainty adds some time fuzziness in any event.)
These models are not without their problems. They are criticized for not conserving energy (because of noise), but better models may fix that. (And I wonder if a gravity-based model might be less prone to noise.) And there is still the issue of the nonlocality of wavefunction collapse.
Bottom line, though, I think the notion of objective collapse is a step in the right direction. (And I think gravity from the mass of objects is an important factor.) It explains our experience of a classical world — reality observes, or at least collapses, itself.
I have long wondered why the measurement problem isn’t just a matter of adding some kind of math to the Schrödinger equation. That seems obvious somehow. The Schrödinger equation describes the evolution of a quantum state like a parabola describes the arc of a baseball. Both describe an uninterrupted process.
The equation for a parabola describes only the flight of a baseball. It doesn’t account for it hitting the ground or a wall (or a bird) — terms have to be added to the parabolic equation to define those interactions. Likewise, it seems the Schrödinger equation needs terms to describe the interaction of a measurement. In fact, applying an operator is the mathematical equivalent of making a measurement, so I’ve never been entirely clear why that doesn’t make measurement a non-problem. At least in terms of the non-linear vector jump.
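Part of the answer to my own question: applying an operator is not quite the same as measuring. The Hamiltonian acting on the state is part of the linear evolution; the measurement “jump” is a separate, non-unitary rule (the projection postulate) bolted on alongside it. In standard textbook notation, the two kinds of change look like this:

```latex
% Continuous, linear, deterministic evolution (the Schrödinger equation):
i\hbar \,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle
  = \hat{H}\,\lvert\psi(t)\rangle

% Discontinuous, probabilistic jump on measuring outcome k
% (the projection postulate, with Born-rule probability):
\lvert\psi\rangle \;\to\;
  \frac{\hat{P}_k\lvert\psi\rangle}{\lVert \hat{P}_k\lvert\psi\rangle \rVert},
\qquad
\Pr(k) = \langle\psi\rvert \hat{P}_k \lvert\psi\rangle
```

The projection step can’t be derived from the first equation, and that gap is exactly the measurement problem.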
Note that, in many kinds of measurement, the quantum system being measured disappears. For example, when an electron absorbs a photon, the photon vanishes. Its wavefunction vector doesn’t just jump — it goes away entirely!
Sometimes I wonder if the measurement problem isn’t something of a misunderstanding. Why would we expect the Schrödinger equation to evolve linearly when its system encounters a much larger object, one presumably governed by its own Schrödinger equation? Why don’t we expect that sudden interaction to have a sudden effect on the wave states? Definitely a huge one on the quantum system (assuming it’s even still around). Possibly an imperceptible one on the large system as far as its quantum state goes. As discussed last time, its energy state and physical configuration can shift suddenly and massively.
This still leaves the nonlocal spooky action at a distance of the wavefunction vanishing everywhere, the shift from dispersed wave to point-like particle, but entanglement experiments seem to make quantum nonlocality something we may have to just accept.
In the last post, I didn’t talk about spin experiments. Many modern ones involve single particles, so the noise issues I discussed last time apply. A common example is detecting photons that pass — or don’t pass — through a polarizing filter. There is, firstly, the issue of knowing when a photon is present, and secondly, the problem of false negatives and positives due to noise.
[I believe it’s more accurate to visualize the laser as generating a continuous EM field on the borderline of having one quantum (photon) of light energy per a certain beam length. Variation and uncertainty randomize exactly when there is enough energy to extract one photon. The mystery, as always, is why a particular electron is “chosen” to extract the photon.]
Some experiments use large numbers of photons (or other “particles”) to build statistics that rise above the noise. Geiger counters, for instance, depend on lots of events to signal the presence of significant levels of radiation.
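The reason large numbers help is that counting noise grows like the square root of N while the signal grows like N, so relative noise shrinks as 1/√N. A sketch using a made-up single-photon polarizer run (the 30° angle, the Malus-law pass probability, and the 1% noise rate are all illustrative assumptions, not any real apparatus):

```python
import math
import random

random.seed(1)

# Toy polarizer experiment: each photon passes with probability cos^2(theta)
# (Malus's law applied to single photons). Detector noise flips 1% of outcomes.
THETA = math.radians(30)
P_PASS = math.cos(THETA) ** 2   # true value: 0.75
NOISE = 0.01

def detect_one() -> int:
    """One photon: 1 if detected as passing the filter, else 0."""
    passed = random.random() < P_PASS
    if random.random() < NOISE:  # occasional false positive / false negative
        passed = not passed
    return int(passed)

def estimate(n: int) -> float:
    """Estimate the pass probability from n detection events."""
    return sum(detect_one() for _ in range(n)) / n

print(estimate(10))       # scatters widely around 0.75
print(estimate(100_000))  # close to 0.75 (slightly biased by the 1% noise)
```

One photon tells you almost nothing; a hundred thousand beat the noise down to the third decimal place.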
Large numbers may also bring out patterns single particles don’t demonstrate. While individual particles do interfere with themselves in the two-slit experiment, it’s only the pattern that builds up over time that demonstrates the interference behavior. And many entanglement experiments need lots of data points to demonstrate the quantum correlations of entangled particles.
The original Stern-Gerlach spin experiment is worth mentioning. It used silver atoms, which have spin because of the single unpaired electron in the outer shell. Results depended on lots of silver atoms accumulating on a glass screen — enough atoms to observably block transmission of light (and silver is nicely opaque).
It’s a striking experiment because it creates a very concrete classical record of a quantum behavior. A big part of the amplification here lies in using lots of silver atoms to build up the record over time. Another aspect lies in the energy of the magnetic field; that’s where the actual measurement is made.
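The classical-versus-quantum predictions for the screen pattern are easy to contrast in a toy Monte Carlo. Classically, the atoms’ magnetic moments point in random directions, so the deflections should smear continuously; quantum mechanically, measuring spin-1/2 along the field axis yields only two values, hence two bands. (A purely illustrative simulation, deflections in arbitrary units:)

```python
import random

random.seed(7)

def classical_deflection() -> float:
    # Classical magnetic moment in a random (isotropic) direction: its
    # component along the field axis, and hence the deflection, is
    # uniformly distributed between -1 and +1 -- a continuous smear.
    return random.uniform(-1.0, 1.0)

def quantum_deflection() -> float:
    # Quantum spin-1/2: the measured component is only ever +1 or -1
    # (in units of hbar/2), so atoms land in exactly two bands.
    return random.choice([-1.0, 1.0])

classical = [classical_deflection() for _ in range(10_000)]
quantum = [quantum_deflection() for _ in range(10_000)]

print(min(classical), max(classical))   # values spread across the whole range
print(sorted(set(quantum)))             # exactly two bands: [-1.0, 1.0]
```

The famous result, of course, was the two discrete spots, not the smear.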
There is something of a question of exactly when the silver atoms are localized. Is it upon passing through the magnetic field, or not until they splat against the glass screen? I have seen accounts that view the two possible flight paths as superpositions of up and down measurements until the atoms strike the screen. This is analogous to beam-splitter experiments where the photon goes one of two ways. (Two-slit experiments also put the two paths in superposition.)
I think the atoms get localized at least in the magnetic field. They’re already bathed in the environment and may have localized even before they interact with the magnet. The interaction with the magnetic field aligns their spins and deflects their path, which certainly changes their wavefunction. There is a question of how rapidly this occurs — is there a quantum jump (a wavefunction collapse) or some sort of progressive interaction? Does the path kink or bend?
The snap of a kink wouldn’t surprise me, but regardless, I think the magnetic field “measures” the silver atoms, which localizes them. The interaction depends on the location of the atom as it passes through the field, which collapses its position. From then on, it’s a “classical” silver atom flying through the air and colliding with the glass screen.
To wrap things up:
There is a wave/particle duality to matter. It only emerges at small (quantum) scales. It’s not apparent at large (classical) scales. Quantum systems do have wave behavior, but it’s not like classical wave behavior. In particular, interference between waves works differently — quantum systems combine probabilities, classical systems combine energies. In fact, a key difference with quantum systems is that they are probabilistic at their core (via the Born rule). Classical systems are fundamentally deterministic.
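That difference shows up in a two-path toy calculation. Classically, the two contributions add as intensities (energies); quantum mechanically, the complex amplitudes add first and only the sum is squared (the Born rule), producing an interference term that can cancel completely. (Illustrative amplitudes, not any particular experiment:)

```python
import cmath
import math

# Two paths to the same detector, each with amplitude 1/sqrt(2),
# differing by a phase (e.g. a path-length difference).
a1 = 1 / math.sqrt(2)
phase = math.pi                  # half-wavelength path difference
a2 = cmath.exp(1j * phase) / math.sqrt(2)

# Classical picture: add the two intensities (energies).
classical_intensity = abs(a1) ** 2 + abs(a2) ** 2   # always 1.0

# Quantum picture (Born rule): add the amplitudes, THEN square.
quantum_intensity = abs(a1 + a2) ** 2               # 0.0 here: a dark fringe

print(classical_intensity, quantum_intensity)
```

Energies can only pile up; amplitudes can erase each other, which is why the quantum pattern has dark fringes where classical reasoning predicts light.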
The measurement problem is the tension between the linear deterministic evolution of the quantum state and the nonlinear probabilistic jump of “collapse” (or “reduction”) to a measurement state. The former is a strong article of faith while the latter is so far without a solid explanation. (Or even a nearly universally accepted one.)
There are three distinct problems: Firstly, the nonlinear jump from evolving state to measured state. Secondly, the “spooky” nonlocal vanishing of the wavefunction everywhere instantly. Thirdly, the apparent randomness in determining which “particles” interact. The urgency of the first two depends on your ontological vs epistemic view of the wavefunction. (And interference effects lean hard in the direction of ontological.) The third one applies in all cases. It might be axiomatic that reality is somewhat random.
I don’t worry about wavefunction collapse. I think about it, but I don’t fret the disconnect between QM linearity and classical measurement. I think it’s very possible the wavefunction is just our Ptolemaic view of reality — almost right, enough to work very well, but missing a key insight. I don’t think the Schrödinger equation is the last word, just a great approximation, so the difficulties don’t bother me. I think solving the deeper quantum mysteries will allow us to understand measurement.
I think those deeper mysteries involve quantum superposition, quantum interference, and quantum entanglement. Because of its connection with the wave-like side of things, interference strikes me as especially interesting. But understanding the limits of superposition would be very helpful, too. As just mentioned, there are also the mysteries of apparent randomness (why that electron?) and nonlocality (which is an aspect of entanglement). And, of course, what exactly is the wavefunction?
Ultimately, I do think there is a Heisenberg Cut where classical behavior emerges and quantum behavior is swamped or averaged out. I think large objects decohere in the noise of 10²³ singers. And perhaps objectively due to mass and the gravity gradient it creates.
The big conflict between QM and GR suggests at least one, if not both, are significantly missing the mark. It’s quantum vs smooth; linear vs non-linear; background-dependent vs is the background. (That GR is a nonlinear physical theory seems a point in its favor.)
So, bottom line, what do I think “measurement” and “collapse” are?
I can only guess: “Particles” in flight are single-quantum vibrations in the relevant particle field. Since the single quantum of energy is spread across a wave, the field vibrations in any one area are very much sub-quantum. I assume the wave moves at lightspeed (or whatever speed the particle does). At some spacetime point along the way, nature decrees an interaction. All the energy related to the “particle” instantly “drains” into a point-like interaction with another field, either starting a new wave or affecting an existing one.
However, this just equates “drains” with “collapses”, so it still invokes nonlocal magic. And in “nature decrees” it invokes the random selection magic. We do seem stuck with a (quantumly) nonlocal and possibly genuinely random universe. I’ve always been fine with that — prefer it, even — so I’d be willing to take them as axiomatic. (Doesn’t mean we stop investigating. They might not be.)
But bottom line, I think measurement is a mousetrap. The detector is a coiled spring with a hair trigger. Along comes a quantum mouse, and… snap!
And on that note, it’s time to move on to other things! (Not that there isn’t a great deal more to say, details to fill in, topics to revisit, elaborations to unpack. Like all good rabbit holes, Alice can explore forever. But I’m ready for a change of pace; I’m sure you are, too!)
Stay objective, my friends! Go forth and spread beauty and light.