Posted on by Chris Warburton

The many-worlds interpretation takes the quantum nature of observers seriously; it claims to add nothing else to the underlying theory (such as “wavefunction collapse”).

So-called “superdeterminism” takes the determinism of observers seriously; I claim it adds nothing else to the underlying theory of determinism (such as “randomness” or “choice”).

I don’t know if either is right, but I have never understood objections to determinism based on Bell-like setups, e.g. “it can’t be deterministic, since we hadn’t yet chosen which axis to measure!” The whole point of determinism is that the choice of axis is pre-determined (just like everything else).
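For reference, the quantity those Bell-like setups measure is the CHSH correlator. Here is a minimal sketch (mine, not from the post) that plugs the standard singlet-state correlation E(a, b) = −cos(a − b) into the CHSH expression at the textbook-optimal angles, recovering the quantum value 2√2, beyond the classical (local hidden-variable) bound of 2:

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for a singlet pair measured
    # along axes at angles a and b (standard result)
    return -math.cos(a - b)

# Textbook-optimal CHSH measurement angles (radians)
a, a2 = 0.0, math.pi / 2          # Alice's two axis choices
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two axis choices

# CHSH combination: any local hidden-variable model keeps |S| <= 2
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~= 2.828, exceeding the classical bound of 2
```

The superdeterministic loophole is precisely that this bound of 2 is derived assuming the axis choices (a vs a2, b vs b2) are statistically independent of the hidden variables.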

Experiments (leading to Nobel prizes!) have investigated the use of far-away influences, like choosing axes based on distant photons. If we ignore effects of the intervening space (e.g. a screen which blocks the far-away light and emits a fake image in its place) then any ‘conspiracy’ between the detector’s choice of axes and the particle’s choice of hidden variables would have to originate within the intersection of their past light-cones.

On the one hand, so what? Determinism doesn’t have a time limit; although, sure, standard statistical approaches would lose any such correlation in the noise over those scales.

On the other hand, it seems plausible that spacetime itself may emerge from correlations, entanglement and causality. In which case, the use of “far away” information becomes suspect, as it’s putting the cart before the horse. If proximity is determined by causal influence, is it meaningful to claim that inputs to an experiment (like photons determining choice of axes) are somehow “distant” from that experiment’s outcome?

(Of course, photons from far-off stars are not assumed to be entangled with the experimental setup; at least initially…)