Physicists disagree wildly on what quantum mechanics says about reality, Nature survey shows
First major attempt to chart researchers’ views finds interpretations in conflict.
I also think there’s a loophole to #4. When there is no firm scientific consensus to overturn, you can introduce a new explanation and have it gain traction without new experimental tests. I have in mind here some future reformulation of quantum mechanics. www.nature.com/articles/d41...
10.03.2026 17:19
👍 0
🔁 0
💬 0
📌 0
“The standard explanation for the rejection of continental drift is the lack of a causal mechanism, but this explanation is false. There was a spirited and rigorous international debate over the possible mechanisms… which ultimately settled on the same explanation generally accepted today.”
10.03.2026 17:17
👍 0
🔁 0
💬 1
📌 0
Nice piece!
One nitpick: I’ve recently learned that the conventional-wisdom story you’re using about the rejection of continental drift isn’t correct. Here’s a quote from Naomi Oreskes (“Plate Tectonics”, 2001):
10.03.2026 17:16
👍 0
🔁 0
💬 1
📌 0
I just don’t see the “formal problem”, if we’re using subjective probability distributions. There’s no formal problem with having one rule when you move around probability bins, and another rule when you open a bin to see what’s inside. (*Not* having different rules would be a 'formal problem'! :-)
06.03.2026 17:38
👍 1
🔁 0
💬 1
📌 0
But consider: why focus on “states”, vs. “histories”? We know (instantaneous) states are frame-dependent. Histories don’t require a foliation or chosen hypersurfaces. If you’re only thinking about states, parameterized by time but not physical space, that’s essentially a pre-relativistic framing.
06.03.2026 17:25
👍 1
🔁 0
💬 1
📌 0
I think we’re all taking it for granted that we’re going to map our ontology onto some mathematics; if it’s a good map then distinguishing between the ‘physical stuff’ and the ‘mathematical states of that stuff’ is pretty much beside the point. One can (and should!) still ask what the stuff is.
06.03.2026 17:24
👍 1
🔁 0
💬 1
📌 0
Well, if it’s epistemic, having two rules is no longer contradictory. Using the Liouville equation to evolve a probability distribution is very different from collapsing down that distribution upon learning more information, and yet both are absolutely correct things to do in those circumstances.
06.03.2026 05:51
👍 2
🔁 0
💬 1
📌 0
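(A toy classical sketch of the two-rules point above, not from the thread: a discrete distribution over "bins", with all details here illustrative. Dynamics shuffle probability around; opening a bin triggers Bayesian conditioning. Both rules apply, in different circumstances, with no contradiction.)

```python
import numpy as np

# A subjective probability distribution over 4 "bins" (illustrative).
p = np.array([0.25, 0.25, 0.25, 0.25])

# Rule 1 (Liouville-like): deterministic dynamics just move probability
# between bins -- here a cyclic permutation, which conserves the total.
T = np.roll(np.eye(4), 1, axis=0)
p_evolved = T @ p

# Rule 2 (conditioning): we "open a bin" and learn the system is NOT in
# bin 0. Bayesian updating zeroes that bin and renormalizes the rest.
likelihood = np.array([0.0, 1.0, 1.0, 1.0])
p_updated = p_evolved * likelihood
p_updated /= p_updated.sum()

print(p_evolved)   # dynamics: probability merely relabeled across bins
print(p_updated)   # update: collapsed onto the three remaining bins
```

Two different rules, but no "formal problem": one governs dynamics, the other governs learning.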
Well, the *interesting* parts of the discussion usually come down to the question of what might be physically happening between measurements. Granted, this does tend to get obscured by the anti-realists (unmeasured events don’t exist) and the futilists (it doesn’t matter/we’ll never know).
06.03.2026 04:49
👍 0
🔁 0
💬 0
📌 0
Yes, good key question! But if the answer were “no” -- say, if the wavefunction only represented epistemic knowledge of some very different underlying state -- what would then be the right way to frame the “measurement problem”? For that, I think you’d really need to pick an ontology.
06.03.2026 04:35
👍 1
🔁 0
💬 2
📌 0
I think one can’t even frame the problem before taking a stand on the ontology. For instance, I think we completely agree that the quantum system and the measurement device need to be essentially the same “stuff” -- but we likely have a vast disagreement about what that “stuff” is likely to be.
06.03.2026 00:51
👍 4
🔁 0
💬 1
📌 0
I need to get myself a new copy of Travis Norsen’s textbook (lost in a fire, sadly). I liked the way he laid out the “Measurement Problems”, and even more importantly, the way he identifies the closely-related “Ontology Problem”.
scholar.google.com/citations?vi...
06.03.2026 00:50
👍 4
🔁 1
💬 1
📌 0
Sure, you could set this aside as a “different problem”, but MWI has trouble with it, while other approaches don’t (Bohmian mechanics, for instance). Carving problems into categories (X,Y,Z), and claiming that one approach “solves problem X, so it’s better”, obscures more than it illuminates.
05.03.2026 23:00
👍 5
🔁 0
💬 1
📌 0
That’s not really fair -- usually people use the term “measurement problem” to refer to a set of interrelated problems concerning measurement. One of those problems is how to connect the empirical results of laboratory measurements with the mathematical objects of the theory.
05.03.2026 22:59
👍 4
🔁 0
💬 2
📌 0
The craft of writing, or the craft of romance? 😉
05.03.2026 04:20
👍 1
🔁 0
💬 1
📌 0
You mean, interventionist causation in general? That's a big blind spot for many physicists, sadly. We all "know" about it at a gut level, like when we tell students that external forces *cause* accelerations, rather than the other way around. But many physicists probably couldn't articulate why.
27.02.2026 00:22
👍 1
🔁 0
💬 0
📌 0
If there is a law-like restriction instead of an uncertainty principle, that’s like taking the retrocausal model and reinterpreting the hidden variables as a gauge, where the gauge itself seems to prevent signaling faster than light. But that would defeat the purpose of the paper, wouldn’t it?
26.02.2026 18:42
👍 0
🔁 0
💬 1
📌 0
The first model (2 posts back) is forward-causal, but with a strange restriction on what I’m allowed to do in the future. The last model is retrocausal, but with an epistemic restriction on the initial state, tuned just right to prevent me from being able to send a known signal faster than light.
26.02.2026 18:39
👍 0
🔁 0
💬 1
📌 0
If a different model allows those future experiments, it must in turn forbid my complete causal control of the initial state. Maybe, as you say, with some law-like restriction on the initial state itself, or maybe with some uncertainty principle, forbidding the sort of access sufficient for a paradox.
26.02.2026 18:37
👍 0
🔁 0
💬 1
📌 0
If I had control of a system that could signal faster than light, and Lorentz transformations are correct when boosting frames, then it’s pretty clear what sort of experiment I could set up to make a paradox. But if a model forbids my causal control of those future experiments, no more paradoxes.
26.02.2026 18:36
👍 0
🔁 0
💬 1
📌 0
Yes, there are lots of different causal models which correspond to the very same equations. (Most obvious example here is the Ideal Gas Law; depending on which variables you’re allowed to control, you get very different causal pathways.) Different intervention freedoms = different models.
26.02.2026 18:35
👍 0
🔁 0
💬 1
📌 0
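(A sketch of the Ideal Gas Law point above, not from the thread: one equation, PV = nRT, read as two different causal models depending on which variables you're free to set from outside. Function names and numbers are illustrative.)

```python
# Ideal gas constant, J/(mol K)
R = 8.314

def pressure_from(V, n, T):
    # Intervention freedom 1: we control V, n, T; P is the effect.
    return n * R * T / V

def volume_from(P, n, T):
    # Intervention freedom 2: we control P, n, T; V is the effect.
    return n * R * T / P

# Same bare equation, different input-output structure:
P = pressure_from(V=0.024, n=1.0, T=300.0)
V = volume_from(P=P, n=1.0, T=300.0)
print(round(V, 3))  # recovers the volume we started with
```

The bare equation can't tell you which reading is right; that's fixed by the intervention freedoms, outside the equation itself.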
You can't see this problem if all you look at is 'existence and uniqueness'. Instead, you have to ask questions about the input-output structure of the model -- and that often requires more analysis than just writing down the bare equations.
24.02.2026 03:54
👍 2
🔁 0
💬 0
📌 0
In this case, with a restriction on (S) due to later events, the issue isn’t *forward* causation; it’s *backward*! If I can set something in the future that would constrain allowed past values of (S), that’s retrocausal. And without restricting access to S, such a model could signal back in time.
24.02.2026 03:52
👍 1
🔁 0
💬 2
📌 0
In many models (1) and (2) are the same thing, because the inputs are assumed to be the initial state. But in some models, like the one in this paper, this isn’t the case. If there are rules telling you that the initial state (S) can’t be freely set, from outside the model, then (S) isn’t an input.
24.02.2026 03:51
👍 2
🔁 0
💬 2
📌 0
This brings us to question (1), which is where the true causal analysis lies. But you can’t answer this from the bare equations; the model needs to specify the causal structure. What are the model’s “inputs”? What events are we allowed to “set”, from outside the model, independently?
24.02.2026 03:50
👍 1
🔁 0
💬 1
📌 0
After all, correlation is not causation. Our causal instincts are “interventionist”. We ask ourselves, if I set event A to this value, instead of that (counterfactual) value, is there an effect at B? We’re not just asking about correlations, we’re asking about input-output relationships.
24.02.2026 03:50
👍 1
🔁 0
💬 1
📌 0
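(A minimal interventionist sketch of the counterfactual question above, not from the thread: a toy structural model with hypothetical variable names, where "setting" A from outside overwrites A's usual mechanism, and we compare the effect at B across two counterfactual settings.)

```python
import random

def run_model(intervene_A=None, seed=0):
    # Toy structural model: noise -> A -> B, with small noise on B.
    rng = random.Random(seed)
    noise = rng.gauss(0, 1)
    # do(A = value): overwrite A's mechanism, rather than just observing A.
    A = noise if intervene_A is None else intervene_A
    B = 2 * A + rng.gauss(0, 0.1)
    return A, B

# Counterfactual contrast: identical noise (same seed), two settings of A.
_, B_low = run_model(intervene_A=0.0)
_, B_high = run_model(intervene_A=1.0)
print(B_high - B_low)  # exactly 2.0: the noise cancels, the intervention doesn't
```

The difference isolates the input-output relationship from A to B, which is exactly what mere correlations can't do.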
The only nice thing about (2) is that you can analyze it in terms of the “bare equations”, without making any causal assumptions. Given the equations, the answer to (2) is easy to assess -- it’s just a question of which events are correlated. But does it have anything to do with “causation”?
24.02.2026 03:49
👍 1
🔁 0
💬 1
📌 0
Fun stuff! Still, it’s important not to conflate these two questions, for any given model: (1) Are there inputs to the model at event A which have an effect at event B? vs. (2) Is there an initial event A that is correlated with some other event B? These questions come apart in cases like this.
24.02.2026 03:49
👍 2
🔁 0
💬 1
📌 0
So my conclusion is almost the opposite of theirs. Instead of banning signaling-in-principle to rule out "all-at-once" models, it just means that any good all-at-once model must be able to generate its own “uncertainty principle” limit, to forbid any signaling-in-practice. A challenge, not a no-go theorem.
15.02.2026 20:51
👍 2
🔁 0
💬 0
📌 0
And if “signaling-in-principle” was only possible beyond the model’s own restriction, then there’s nothing amiss. In fact, if one thinks there is a realistic way to explain Bell inequality violations, then of course there *would* be signaling if you could see all the hidden variables. Yet you can't.
15.02.2026 20:50
👍 0
🔁 0
💬 1
📌 0