In the interpretation of quantum mechanics, a local hidden-variable theory is a hidden-variable theory that satisfies the condition of being consistent with local realism. The term refers to theories that attempt to account for the probabilistic features of quantum mechanics via underlying, inaccessible variables, with the additional requirement that distant events be independent, ruling out instantaneous (that is, faster-than-light) influences between separated events.
The mathematical implications of a local hidden-variable theory in regard to the phenomenon of quantum entanglement were explored by physicist John Stewart Bell, who in 1964 proved that broad classes of local hidden-variable theories cannot reproduce the correlations between measurement outcomes that quantum mechanics predicts. The most notable exception is superdeterminism. Superdeterministic hidden-variable theories can be local and yet be compatible with observations.
Local hidden variables and the Bell tests
Bell's theorem starts with the implication of the principle of local realism, that separated measurement processes are independent. Based on this premise, the probability of a coincidence between separated measurements of particles with correlated (e.g. identical or opposite) orientation properties can be written as

$p(a,b)=\int d\lambda \,\rho (\lambda )p_{A}(a,\lambda )p_{B}(b,\lambda ),\qquad (1)$
where $p_{A}(a,\lambda )$ is the probability of detection of particle $A$ with hidden variable $\lambda$ by detector $A$, set in direction $a$, and similarly $p_{B}(b,\lambda )$ is the probability at detector $B$, set in direction $b$, for particle $B$, sharing the same value of $\lambda$. The source is assumed to produce particles in the state $\lambda$ with probability $\rho (\lambda )$.
Using (1), various Bell inequalities can be derived, which provide limits on the possible behaviour of local hidden-variable models.
When John Stewart Bell originally derived his inequality, it was in relation to pairs of entangled spin-1/2 particles, with every particle emitted being detected. Bell showed that when detectors are rotated with respect to each other, local realist models must yield a correlation curve that is bounded by a straight line between maxima (detectors aligned), whereas the quantum correlation curve is a cosine relationship. The first Bell tests were performed not with spin-1/2 particles, but with photons, which have spin 1. A classical local hidden-variable prediction for photons, based on Maxwell's equations, yields a cosine curve, but of reduced amplitude, such that the curve still lies within the straight-line limits specified in the original Bell inequality.
Bell's theorem assumes that measurement settings are completely independent, and not in principle determined by the universe at large. If this assumption were incorrect, as proposed in superdeterminism, the conclusions drawn from Bell's theorem could be invalidated. The theorem also relies on highly efficient and space-like separated measurements. Such flaws are generally called loopholes. A loophole-free experimental verification of a Bell inequality violation was performed in 2015.^{[1]}
Bell tests with no "non-detections"
Consider, for example, David Bohm's thought experiment, in which a molecule breaks into two atoms with opposite spins.^{[2]} Assume that this spin can be represented by a real vector, pointing in any direction. It will be the "hidden variable" in our model. Taking it to be a unit vector, all possible values of the hidden variable are represented by all points on the surface of a unit sphere.
Suppose that the spin is to be measured in the direction a. Then the natural assumption, given that all atoms are detected, is that all atoms the projection of whose spin in the direction a is positive will be detected as spin-up (coded as +1), while all whose projection is negative will be detected as spin-down (coded as −1). The surface of the sphere will be divided into two regions, one for +1, one for −1, separated by a great circle in the plane perpendicular to a. Assuming for convenience that a is horizontal, corresponding to the angle a with respect to some suitable reference direction, the dividing circle will be in a vertical plane. So far we have modelled side A of our experiment.
Now to model side B. Assume that b too is horizontal, corresponding to the angle b. There will be a second great circle drawn on the same sphere, to one side of which we have +1, on the other −1 for particle B. The circle will again be in a vertical plane.
The two circles divide the surface of the sphere into four regions. The type of "coincidence" (++, −−, +− or −+) observed for any given pair of particles is determined by the region within which their hidden variable falls. Assuming the source to be "rotationally invariant" (to produce all possible states λ with equal probability), the probability of a given type of coincidence will clearly be proportional to the corresponding area, and these areas will vary linearly with the angle between a and b. (To see this, think of an orange and its segments. The area of peel corresponding to a number n of segments is roughly proportional to n. More accurately, it is proportional to the angle subtended at the centre.)
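The linear dependence of the coincidence probabilities, and hence of the correlation, on the angle between the detectors can be checked with a short Monte Carlo simulation of this deterministic model (a sketch; the detector directions and sample size are arbitrary choices made here for illustration):

```python
import math
import random

def simulate_correlation(theta, n=200_000, seed=1):
    """Monte Carlo estimate of E = <A*B> for the deterministic sphere
    model: the hidden variable lambda is a uniformly random unit vector,
    and each side outputs the sign of lambda projected onto its detector
    direction."""
    rng = random.Random(seed)
    total = 0  # running sum of A*B
    ax, ay = 1.0, 0.0                          # detector a along x
    bx, by = math.cos(theta), math.sin(theta)  # detector b at angle theta
    for _ in range(n):
        # Uniform direction on the sphere via three Gaussians; only the
        # signs of the projections matter, so normalisation is unneeded,
        # and z does not affect in-plane detectors.
        x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        A = 1 if x * ax + y * ay > 0 else -1
        B = 1 if x * bx + y * by > 0 else -1
        total += A * B
    return total / n

# Local-realist prediction is linear in theta: E(theta) = 1 - 2*theta/pi
for theta in (0.0, math.pi / 4, math.pi / 2):
    est = simulate_correlation(theta)
    linear = 1 - 2 * theta / math.pi
    print(f"theta={theta:.3f}  simulated E={est:+.3f}  linear prediction={linear:+.3f}")
```

The simulated correlation traces the straight "sawtooth" lines of the realist prediction rather than the cosine of quantum mechanics.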
The formula (1) above has not been used explicitly – it is hardly relevant when, as here, the situation is fully deterministic. The problem could be reformulated in terms of the functions in the formula, with ρ constant and the probability functions step functions. The principle behind (1) has in fact been used, but purely intuitively.
The realist prediction (solid lines) for quantum correlation when there are no non-detections. The quantum-mechanical prediction is the dotted curve.
Thus the local hidden-variable prediction for the probability of coincidence is proportional to the angle (b − a) between the detector settings. The quantum correlation is defined to be the expectation value of the sum of the individual outcomes, and this is
$E=P_{++}+P_{--}-P_{+-}-P_{-+},\qquad (2)$
where P_{++} is the probability of a "+" outcome on both sides, P_{+−} that of a "+" on side A, a "−" on side B, etc.
Since each individual term varies linearly with the difference (b − a), so does their sum.
The result is shown in the figure.
Optical Bell tests
In almost all real applications of Bell's inequalities, the particles used have been photons. It is not necessarily assumed that the photons are particle-like. They may be just short pulses of classical light.^{[3]} It is not assumed that every single one is detected. Instead the hidden variable set at the source is taken to determine only the probability of a given outcome, the actual individual outcomes being partly determined by other hidden variables local to the analyser and detector. It is assumed that these other hidden variables are independent on the two sides of the experiment.^{[4]}^{[5]}
In this stochastic model, in contrast to the above deterministic case, we do need equation (1) to find the local-realist prediction for coincidences. It is necessary first to make some assumption regarding the functions $p_{A}(a,\lambda )$ and $p_{B}(b,\lambda )$, the usual one being that these are both cosine squares, in line with Malus' law. Assuming the hidden variable to be polarisation direction (parallel on the two sides in real applications, not orthogonal), equation (1) becomes

$p=\tfrac {1}{4}+\tfrac {1}{8}\cos 2(b-a).$
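Equation (1) with Malus-law probabilities can be evaluated numerically; a minimal sketch (the step count is an arbitrary choice, and averaging the product of cosine squares over a uniform polarisation angle gives the closed form $1/4+(1/8)\cos 2(b-a)$):

```python
import math

def coincidence_rate(a, b, steps=20_000):
    """Evaluate equation (1) for the stochastic photon model: the
    hidden variable lambda is a polarisation angle, uniform on
    [0, pi), and each detection probability follows Malus' law."""
    total = 0.0
    for k in range(steps):
        lam = math.pi * k / steps
        total += math.cos(a - lam) ** 2 * math.cos(b - lam) ** 2
    # rho(lambda) = 1/pi cancels the pi/steps interval width
    return total / steps

# Compare the numerical integral with the closed form
for phi in (0.0, math.pi / 8, math.pi / 4):
    numeric = coincidence_rate(0.0, phi)
    closed = 0.25 + 0.125 * math.cos(2 * phi)
    print(f"phi={phi:.3f}  integral={numeric:.4f}  closed form={closed:.4f}")
```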
The predicted quantum correlation can be derived from this and is shown in the figure.
The realist prediction (solid curve) for quantum correlation in an optical Bell test. The quantum-mechanical prediction is the dotted curve.
In optical tests, incidentally, it is not certain that the quantum correlation is well-defined. Under a classical model of light, a single photon can go partly into the "+" channel, partly into the "−" one, resulting in the possibility of simultaneous detections in both. Though experiments such as by Grangier et al. have shown that this probability is very low,^{[6]} it is not logical to assume that it is actually zero. The definition of quantum correlation is adapted to the idea that outcomes will always be +1, −1 or 0. There is no obvious way of including any other possibility, which is one of the reasons why Clauser and Horne's 1974 Bell test, using single-channel polarisers, should be used instead of the CHSH Bell test. The CH74 inequality concerns just probabilities of detection, not quantum correlations.
Quantum states with a local hidden-variable model
For separable states of two particles, there is a simple hidden-variable model for any measurements on the two parties. Surprisingly, there are also entangled states for which all von Neumann measurements can be described by a hidden-variable model.^{[7]} Such states are entangled, but do not violate any Bell inequality. The so-called Werner states are a single-parameter family of states that are invariant under any transformation of the type $U\otimes U,$ where $U$ is a unitary matrix. For two qubits, they are noisy singlets given as

$\varrho =p\vert \psi ^{-}\rangle \langle \psi ^{-}\vert +(1-p){\frac {I}{4}},$
where the singlet is defined as $\vert \psi ^{-}\rangle ={\tfrac {1}{\sqrt {2}}}\left(\vert 01\rangle -\vert 10\rangle \right)$.
R. F. Werner showed that such states allow for a hidden-variable model for $p\leq 1/2$, while they are entangled if $p>1/3$. The bound for hidden-variable models has since been improved to $p=2/3$.^{[8]} Hidden-variable models have been constructed for Werner states even if POVM measurements are allowed, not only von Neumann measurements.^{[9]} Hidden-variable models have also been constructed for noisy maximally entangled states, and even extended to arbitrary pure states mixed with white noise.^{[10]} Besides bipartite systems, there are also results for the multipartite case. A hidden-variable model for any von Neumann measurements at the parties has been presented for a three-qubit quantum state.^{[11]}
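The entanglement threshold $p>1/3$ for two-qubit Werner states can be checked with the Peres-Horodecki (positive partial transpose) criterion, which is necessary and sufficient for two qubits; a sketch using NumPy:

```python
import numpy as np

def werner_state(p):
    """Two-qubit Werner state: p * singlet projector + (1-p) * I/4."""
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # |psi-> in basis 00,01,10,11
    return p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

def partial_transpose(rho):
    """Transpose the second qubit of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)                 # indices (i, j; k, l)
    return r.transpose(0, 3, 2, 1).reshape(4, 4)  # swap the two B indices

def is_entangled(p):
    """PPT test: entangled iff the partial transpose has a negative eigenvalue."""
    eigs = np.linalg.eigvalsh(partial_transpose(werner_state(p)))
    return eigs.min() < -1e-12

print(is_entangled(0.3))  # p <= 1/3: separable
print(is_entangled(0.5))  # p > 1/3: entangled
```

For $1/3 < p \leq 1/2$ the test reports entanglement even though, by Werner's construction, the state still admits a local hidden-variable model for von Neumann measurements.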
Generalizations of the models
By varying the assumed probability and density functions in equation (1), we can arrive at a considerable variety of local-realist predictions.
Time effects
Several hypotheses have been proposed concerning the role of time in constructing hidden-variable theories. One approach, suggested by K. Hess and W. Philipp, relies upon possible consequences of time dependencies of hidden variables; this hypothesis has been criticized by R. D. Gill, G. Weihs, A. Zeilinger and M. Żukowski, as well as by D. M. Appleby.^{[12]}^{[13]}^{[14]}
If we make realistic (wave-based) assumptions regarding the behaviour of light on encountering polarisers and photodetectors, we find that we are not compelled to accept that the probability of detection will reflect Malus' law exactly.
We might perhaps suppose the polarisers to be perfect, with output intensity of polariser A proportional to cos^{2}(a − λ), but reject the quantum-mechanical assumption that the function relating this intensity to the probability of detection is a straight line through the origin. Real detectors, after all, have "dark counts" that are there even when the input intensity is zero, and become saturated when the intensity is very high. It is not possible for them to produce outputs in exact proportion to input intensity for all intensities.
By varying our assumptions, it seems possible that the realist prediction could approach the quantum-mechanical one within the limits of experimental error,^{[15]} though clearly a compromise must be reached. We have to match both the behaviour of the individual light beam on passage through a polariser and the observed coincidence curves. The former would be expected to follow Malus' law fairly closely, though experimental evidence here is not so easy to obtain. We are interested in the behaviour of very weak light and the law may be slightly different from that of stronger light.