
**Quantum indeterminacy** is the apparent *necessary* incompleteness in the description of a physical system, which has become one of the characteristics of the standard description of quantum physics. Prior to quantum physics, it was thought that

- a physical system had a determinate state which uniquely determined all the values of its measurable properties, and
- conversely, the values of its measurable properties uniquely determined the state.

Quantum indeterminacy can be quantitatively characterized by a probability distribution on the set of outcomes of measurements of an observable. The distribution is uniquely determined by the system state, and moreover quantum mechanics provides a recipe for calculating this probability distribution.

Indeterminacy in measurement was not an innovation of quantum mechanics, since it had been established early on by experimentalists that errors in measurement may lead to indeterminate outcomes. By the latter half of the 18th century, measurement errors were well understood, and it was known that they could either be reduced by better equipment or accounted for by statistical error models. In quantum mechanics, however, indeterminacy is of a much more fundamental nature, having nothing to do with errors or disturbance.

An adequate account of quantum indeterminacy requires a theory of measurement. Many theories have been proposed since the beginning of quantum mechanics and quantum measurement continues to be an active research area in both theoretical and experimental physics.^{[1]} Possibly the first systematic attempt at a mathematical theory was developed by John von Neumann. The kinds of measurements he investigated are now called projective measurements. That theory was based in turn on the theory of projection-valued measures for self-adjoint operators which had been recently developed (by von Neumann and independently by Marshall Stone) and the Hilbert space formulation of quantum mechanics (attributed by von Neumann to Paul Dirac).

In this formulation, the state of a physical system corresponds to a vector of length 1 in a Hilbert space *H* over the complex numbers. An observable is represented by a self-adjoint (i.e. Hermitian) operator *A* on *H*. If *H* is finite dimensional, by the spectral theorem, *A* has an orthonormal basis of eigenvectors. If the system is in state ψ, then immediately after measurement the system will occupy a state which is an eigenvector *e* of *A* and the observed value λ will be the corresponding eigenvalue of the equation *A* *e* = *λ* *e*. It is immediate from this that measurement in general will be non-deterministic. Quantum mechanics, moreover, gives a recipe for computing a probability distribution Pr on the possible outcomes given the initial system state is *ψ*. The probability is

Pr(λ) = ⟨E(λ)ψ, ψ⟩

where E(λ) is the projection onto the space of eigenvectors of *A* associated with the eigenvalue λ.

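In finite dimensions this recipe can be sketched numerically. The snippet below (an illustrative sketch, not from the article; the helper name and the use of NumPy are our own choices) diagonalizes a Hermitian observable and applies the rule above:

```python
import numpy as np

def measurement_distribution(A, psi):
    """Born-rule probabilities {eigenvalue: probability} for observable A in state psi."""
    eigvals, eigvecs = np.linalg.eigh(A)          # orthonormal eigenbasis of a Hermitian matrix
    probs = {}
    for lam, e in zip(eigvals, eigvecs.T):
        lam = round(float(lam), 10)               # merge degenerate eigenvalues
        probs[lam] = probs.get(lam, 0.0) + abs(np.vdot(e, psi)) ** 2
    return probs

# A 2x2 observable with eigenvalues +1 and -1, measured in an equal superposition:
A = np.array([[1.0, 0.0], [0.0, -1.0]])
psi = np.array([1.0, 1.0]) / np.sqrt(2)
print(measurement_distribution(A, psi))  # each outcome has probability 1/2
```

The probabilities always sum to 1 because the eigenvectors form an orthonormal basis and ψ is a unit vector.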
In this example, we consider a single spin 1/2 particle (such as an electron) in which we only consider the spin degree of freedom. The corresponding Hilbert space is the two-dimensional complex Hilbert space **C**^{2}, with each quantum state corresponding to a unit vector in **C**^{2} (unique up to phase). In this case, the state space can be geometrically represented as the surface of a sphere (the Bloch sphere), as shown in the figure on the right.

The Pauli spin matrices

σ_{1} = [[0, 1], [1, 0]], σ_{2} = [[0, −i], [i, 0]], σ_{3} = [[1, 0], [0, −1]] (written row by row)

are self-adjoint and correspond to spin-measurements along the 3 coordinate axes.

The Pauli matrices all have the eigenvalues +1, −1.

- For σ_{1}, these eigenvalues correspond to the eigenvectors (1/√2)(1, 1) and (1/√2)(1, −1).
- For σ_{3}, they correspond to the eigenvectors (1, 0) and (0, 1).

Thus in the state

ψ = (1/√2)(1, 1),

σ_{1} has the determinate value +1, while measurement of σ_{3} can yield either +1 or −1, each with probability 1/2. In fact, there is no state in which measurement of both σ_{1} and σ_{3} have determinate values.
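These numbers can be checked directly. A short NumPy sketch (the helper name is ours) reproduces the determinate σ_{1} outcome and the 50/50 split for σ_{3}:

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # the +1 eigenvector of sigma1

def born_probs(obs, state):
    """Probability of each eigenvalue of obs when measuring state."""
    vals, vecs = np.linalg.eigh(obs)
    return {int(round(float(v))): abs(np.vdot(e, state)) ** 2
            for v, e in zip(vals, vecs.T)}

print(born_probs(sigma1, psi))  # +1 with probability 1, -1 with probability 0
print(born_probs(sigma3, psi))  # +1 and -1 each with probability 1/2
```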
There are various questions that can be asked about the above indeterminacy assertion.

- Can the apparent indeterminacy be construed as in fact deterministic, but dependent upon quantities not modeled in the current theory, which would therefore be incomplete? More precisely, are there *hidden variables* that could account for the statistical indeterminacy in a completely classical way?
- Can the indeterminacy be understood as a disturbance of the system being measured?

Von Neumann formulated question 1) and provided an argument for why the answer had to be no, *if* one accepted the formalism he was proposing. However, according to Bell, von Neumann's formal proof did not justify his informal conclusion.^{[2]} A definitive but partial negative answer to 1) has been established by experiment: because Bell's inequalities are violated, any such hidden variable(s) cannot be *local* (see Bell test experiments).

The answer to 2) depends on how disturbance is understood, particularly since measurement entails disturbance (however note that this is the observer effect, which is distinct from the uncertainty principle). Still, in the most natural interpretation the answer is also no. To see this, consider two sequences of measurements: (A) which measures exclusively σ_{1} and (B) which measures only σ_{3} of a spin system in the state *ψ*. The measurement outcomes of (A) are all +1, while the statistical distribution of the measurements in (B) is still divided between +1 and −1 with equal probability.
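The contrast between the two sequences can be simulated. Below is a sketch (assuming NumPy; random sampling stands in for an actual experiment on freshly prepared copies of the state):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Sequence (A): probability of +1 for sigma_1 is |<(1,1)/sqrt(2), psi>|^2 = 1
p_a = abs(np.vdot(np.array([1, 1]) / np.sqrt(2), psi)) ** 2
outcomes_a = rng.choice([+1, -1], size=n, p=[p_a, 1 - p_a])

# Sequence (B): probability of +1 for sigma_3 is |<(1,0), psi>|^2 = 1/2
p_b = abs(psi[0]) ** 2
outcomes_b = rng.choice([+1, -1], size=n, p=[p_b, 1 - p_b])

print(outcomes_a.mean())  # ~1.0: every sigma_1 outcome is +1
print(outcomes_b.mean())  # ~0.0: sigma_3 outcomes split evenly between +1 and -1
```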

Quantum indeterminacy can also be illustrated in terms of a particle with a definitely measured momentum for which there must be a fundamental limit to how precisely its location can be specified. This quantum uncertainty principle can be expressed in terms of other variables, for example, a particle with a definitely measured energy has a fundamental limit to how precisely one can specify how long it will have that energy.
The units involved in quantum uncertainty are on the order of Planck's constant (defined to be 6.62607015×10^{−34} J⋅Hz^{−1}^{[3]}).
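The position–momentum uncertainty product can be checked numerically for a Gaussian wave packet, which saturates the bound Δx·Δp = ħ/2. Below is a sketch in units where ħ = 1 (assuming NumPy; grid size and width are arbitrary choices):

```python
import numpy as np

hbar = 1.0          # natural units; the physical value is ~1.05e-34 J*s
sigma = 1.0
x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]

# Normalized Gaussian wave packet centred at 0 with position spread sigma
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))
prob = psi**2

mean_x = np.sum(x * prob) * dx
delta_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob) * dx)

# For a real wave function <p> = 0 and <p^2> = hbar^2 * integral of |psi'|^2 dx
dpsi = np.gradient(psi, dx)
delta_p = hbar * np.sqrt(np.sum(dpsi**2) * dx)

print(delta_x * delta_p)  # approximately hbar/2 = 0.5
```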

Quantum indeterminacy is the assertion that the state of a system does not determine a unique collection of values for all its measurable properties. Indeed, according to the Kochen–Specker theorem, in the quantum mechanical formalism it is impossible that, for a given quantum state, each one of these measurable properties (observables) has a determinate (sharp) value. The values of an observable will be obtained non-deterministically in accordance with a probability distribution which is uniquely determined by the system state. Note that the state is destroyed by measurement, so when we refer to a collection of values, each measured value in this collection must be obtained using a freshly prepared state.

This indeterminacy might be regarded as a kind of essential incompleteness in our description of a physical system. Notice however, that the indeterminacy as stated above applies only to values of measurements, not to the quantum state. For example, in the spin 1/2 example discussed above, the system can be prepared in the state ψ by using measurement of σ_{1} as a *filter* which retains only those particles such that σ_{1} yields +1. By the so-called von Neumann postulates, immediately after the measurement the system is assuredly in the state ψ.

However, Albert Einstein believed that the quantum state cannot be a complete description of a physical system and, it is commonly thought, never came to terms with quantum mechanics. In fact, Einstein, Boris Podolsky and Nathan Rosen showed that if quantum mechanics is correct, then the classical view of how the real world works (at least after special relativity) is no longer tenable. This view included the following two ideas:

- A measurable property of a physical system whose value can be predicted with certainty is actually an element of (local) reality (this was the terminology used by EPR).
- Effects of local actions have a finite propagation speed.

This failure of the classical view was one of the conclusions of the EPR thought experiment in which two remotely located observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a spin singlet state. It was a conclusion of EPR, using the formal apparatus of quantum theory, that once Alice measured spin in the *x* direction, Bob's measurement in the *x* direction was determined with certainty, whereas immediately before Alice's measurement Bob's outcome was only statistically determined. From this it follows that either the value of spin in the *x* direction is not an element of reality, or that the effect of Alice's measurement has an infinite speed of propagation.
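The perfect correlation that EPR relied on can be computed directly from the singlet state. Below is a sketch (assuming NumPy; the basis order |00⟩, |01⟩, |10⟩, |11⟩ and the helper name are our own conventions):

```python
import numpy as np

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# +1 / -1 eigenvectors of sigma_x (spin along the x direction)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

def joint_prob(a_vec, b_vec, state):
    """P(Alice's outcome -> a_vec, Bob's outcome -> b_vec) on a two-spin state."""
    proj = np.kron(np.outer(a_vec, a_vec.conj()), np.outer(b_vec, b_vec.conj()))
    return float((state.conj() @ proj @ state).real)

for a, b, label in [(plus, plus, "++"), (plus, minus, "+-"),
                    (minus, plus, "-+"), (minus, minus, "--")]:
    print(label, joint_prob(a, b, singlet))
# The outcomes are perfectly anticorrelated: "++" and "--" never occur,
# so Alice's x-result fixes Bob's x-result with certainty.
```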

We have described indeterminacy for a quantum system which is in a pure state. Mixed states are a more general kind of state obtained by a statistical mixture of pure states. For mixed states the "quantum recipe" for the probability distribution of a measurement is as follows:

Let *A* be an observable of a quantum mechanical system. *A* is given by a densely defined self-adjoint operator on *H*. The spectral measure of *A* is a projection-valued measure defined by the condition

E_{A}(U) = **1**_{U}(A)

for every Borel subset *U* of **R**. Given a mixed state *S*, we introduce the *distribution* of *A* under *S* as follows:

D_{A}(U) = Tr(E_{A}(U) S).

This is a probability measure defined on the Borel subsets of **R** which is the probability distribution obtained by measuring *A* in *S*.
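In finite dimensions this recipe reduces to a trace against a sum of spectral projections. Below is a sketch (illustrative names, assuming NumPy) using the maximally mixed qubit state:

```python
import numpy as np

def distribution(A, S, U):
    """D_A(U) = Tr(E_A(U) S): probability that measuring A on density matrix S
    yields a value in the (finite) set U of eigenvalues."""
    vals, vecs = np.linalg.eigh(A)
    # E_A(U): sum of projectors onto eigenspaces whose eigenvalue lies in U
    E = sum((np.outer(e, e.conj()) for v, e in zip(vals, vecs.T)
             if round(float(v), 9) in U),
            np.zeros_like(S))
    return float(np.trace(E @ S).real)

sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
S = np.eye(2, dtype=complex) / 2   # maximally mixed state: equal mixture of both eigenstates
print(distribution(sigma3, S, {1.0}))   # 0.5
print(distribution(sigma3, S, {-1.0}))  # 0.5
```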

Quantum indeterminacy is often understood as information (or lack of it) whose existence we infer, occurring in individual quantum systems, prior to measurement. *Quantum randomness* is the statistical manifestation of that indeterminacy, observable in the results of experiments repeated many times. However, the relationship between quantum indeterminacy and randomness is subtle and can be considered differently.^{[4]}

In *classical physics,* experiments of chance, such as coin-tossing and dice-throwing, are deterministic in the sense that perfect knowledge of the initial conditions would render outcomes perfectly predictable. The 'randomness' stems from ignorance of physical information in the initial toss or throw. In diametrical contrast, in the case of *quantum physics*, the theorems of Kochen and Specker,^{[5]} the inequalities of John Bell,^{[6]} and experimental evidence of Alain Aspect,^{[7]}^{[8]} all indicate that quantum randomness does not stem from any such *physical information*.

In 2008, Tomasz Paterek et al. provided an explanation in terms of *mathematical information*. They proved that quantum randomness is, exclusively, the output of measurement experiments whose input settings introduce *logical independence* into quantum systems.^{[9]}^{[10]}

Logical independence is a well-known phenomenon in mathematical logic. It refers to the null logical connectivity between mathematical propositions (in the same language) that neither prove nor disprove one another.^{[11]}

In the work of Paterek et al., the researchers demonstrate a link connecting quantum randomness and *logical independence* in a formal system of Boolean propositions. In experiments measuring photon polarisation, Paterek et al. demonstrate statistics correlating predictable outcomes with logically dependent mathematical propositions, and random outcomes with propositions that are logically independent.^{[12]}^{[13]}

In 2020, Steve Faulkner reported on work following up the findings of Tomasz Paterek et al., showing what logical independence in the Paterek Boolean propositions means in the domain of matrix mechanics proper. He showed how indeterminacy's *indefiniteness* arises in evolved density operators representing mixed states, where measurement processes encounter irreversible 'lost history' and an ingression of ambiguity.^{[14]}