
In probability theory, the optional stopping theorem (or sometimes Doob's optional sampling theorem, for American probabilist Joseph Doob) says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial expected value. Since martingales can be used to model the wealth of a gambler participating in a fair game, the optional stopping theorem says that, on average, nothing can be gained by stopping play based on the information obtainable so far (i.e., without looking into the future). Certain conditions are necessary for this result to hold true. In particular, the theorem applies to doubling strategies.
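
The point about doubling strategies can be made concrete: if play is forced to stop after a bounded number of rounds, the stopping time is bounded, so the theorem applies and the expected profit is exactly the initial value, zero. Below is a minimal simulation sketch; the function name, round cap, and trial count are our own illustrative choices, not part of any standard treatment.

```python
import random

def doubling_strategy(max_rounds, rng):
    """Play a fair coin-flip game, doubling the stake after each loss.

    The gambler stops at the first win, but is forced to stop after
    `max_rounds` rounds.  This stopping time is bounded, so by the
    optional stopping theorem the expected profit is 0.
    """
    profit = 0
    stake = 1
    for _ in range(max_rounds):
        if rng.random() < 0.5:   # win: recover all losses plus 1 unit
            profit += stake
            return profit        # stop at the first win
        profit -= stake          # loss: double the stake and continue
        stake *= 2
    return profit                # forced stop after a maximal losing streak

rng = random.Random(0)           # fixed seed for reproducibility
trials = 200_000
avg = sum(doubling_strategy(10, rng) for _ in range(trials)) / trials
print(avg)                       # close to 0
```

Each play ends either with a profit of +1 (probability 1 − 2⁻¹⁰) or, after ten straight losses, a loss of 2¹⁰ − 1 = 1023 (probability 2⁻¹⁰); these cancel exactly in expectation, so the simulated average hovers near zero.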

The optional stopping theorem is an important tool of mathematical finance in the context of the fundamental theorem of asset pricing.

Statement

A discrete-time version of the theorem is given below, with ℕ₀ denoting the set of natural numbers, including zero.

Let X = (Xt)t∈ℕ₀ be a discrete-time martingale and τ a stopping time with values in ℕ₀ ∪ {∞}, both with respect to a filtration (Ft)t∈ℕ₀. Assume that one of the following three conditions holds:

(a) The stopping time τ is almost surely bounded, i.e., there exists a constant c such that τ ≤ c a.s.
(b) The stopping time τ has finite expectation and the conditional expectations of the absolute values of the martingale increments are almost surely bounded; more precisely, E[τ] < ∞ and there exists a constant c such that E[|Xt+1 − Xt| | Ft] ≤ c almost surely on the event {τ > t} for all t ∈ ℕ₀.
(c) There exists a constant c such that |Xt∧τ| ≤ c a.s. for all t ∈ ℕ₀, where ∧ denotes the minimum operator.

Then Xτ is an almost surely well-defined random variable and

    E[Xτ] = E[X0].

Similarly, if the stochastic process X = (Xt)t∈ℕ₀ is a submartingale or a supermartingale and one of the above conditions holds, then

    E[Xτ] ≥ E[X0]

for a submartingale, and

    E[Xτ] ≤ E[X0]

for a supermartingale.

Remark

Under condition (c) it is possible that τ = ∞ happens with positive probability. On this event Xτ is defined as the almost surely existing pointwise limit of (Xt)t∈ℕ₀; see the proof below for details.

Applications
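
A standard illustration is the gambler's-ruin setting: a simple symmetric random walk started at 0 and stopped on first leaving a finite interval, with play also capped at a fixed number of steps so that condition (a) holds. The sketch below (function names and the particular parameters are our own choices) checks the conclusion E[Xτ] = E[X0] = 0 numerically.

```python
import random

def stopped_walk(bound, cap, rng):
    """Simple symmetric random walk X started at 0, stopped at the first
    time |X| reaches `bound`, but no later than time `cap`.

    tau = min(hitting time, cap) is bounded by cap, so condition (a) of
    the optional stopping theorem applies and E[X_tau] = E[X_0] = 0.
    """
    x = 0
    for _ in range(cap):
        if abs(x) >= bound:
            break                     # the walk has left (-bound, bound)
        x += rng.choice((-1, 1))      # fair +-1 step
    return x

rng = random.Random(1)                # fixed seed for reproducibility
trials = 100_000
avg = sum(stopped_walk(5, 100, rng) for _ in range(trials)) / trials
print(avg)                            # near 0, as the theorem predicts
```

By contrast, for the unbounded stopping time τ′ = first time the walk hits +1, one has Xτ′ = 1 almost surely, so E[Xτ′] = 1 ≠ 0; this does not contradict the theorem, because none of the conditions (a)–(c) holds for τ′ (in particular, E[τ′] = ∞).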

Proof

Let X^τ = (Xt∧τ)t∈ℕ₀ denote the stopped process; it is also a martingale (or a submartingale or supermartingale, respectively). Under condition (a) or (b), the random variable Xτ is well defined. Under condition (c), the stopped process X^τ is bounded, hence by Doob's martingale convergence theorem it converges a.s. pointwise to a random variable which we call Xτ.

If condition (c) holds, then the stopped process X^τ is bounded by the constant random variable M := c. Otherwise, writing the stopped process as

    Xt∧τ = X0 + ∑s<t∧τ (Xs+1 − Xs),   t ∈ ℕ₀,

gives |Xt∧τ| ≤ M for all t ∈ ℕ₀, where

    M := |X0| + ∑s<τ |Xs+1 − Xs| = |X0| + ∑s≥0 |Xs+1 − Xs| · 1{τ > s}.

By the monotone convergence theorem,

    E[M] = E[|X0|] + ∑s≥0 E[|Xs+1 − Xs| · 1{τ > s}].

If condition (a) holds, then this series has only finitely many non-zero terms, hence M is integrable.

If condition (b) holds, then we continue by inserting a conditional expectation and using that the event {τ > s} is known at time s (τ is a stopping time with respect to the filtration, so {τ > s} ∈ Fs), hence

    E[M] = E[|X0|] + ∑s≥0 E[E[|Xs+1 − Xs| | Fs] · 1{τ > s}]
         ≤ E[|X0|] + c ∑s≥0 P(τ > s)
         = E[|X0|] + c E[τ] < ∞,

where the representation E[τ] = ∑s≥0 P(τ > s) of the expected value of a non-negative integer-valued random variable is used for the last equality.
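
The tail-sum identity E[τ] = ∑s≥0 P(τ > s), used in the last step, can be checked exactly on a small example. The particular distribution below (a geometric number of fair-coin tosses until the first head, capped at 4) is our own illustrative choice.

```python
# Distribution of tau: number of fair-coin tosses until the first head,
# capped at 4 tosses, so P(tau = k) = 2**(-k) for k = 1, 2, 3 and the
# remaining mass 1/8 sits on tau = 4.
probs = {1: 0.5, 2: 0.25, 3: 0.125, 4: 0.125}

# Direct expectation: E[tau] = sum_k k * P(tau = k)
expectation = sum(k * p for k, p in probs.items())

# Tail-sum form: E[tau] = sum_{s >= 0} P(tau > s); tau <= 4, so only
# s = 0, 1, 2, 3 contribute.
tail_sum = sum(sum(p for k, p in probs.items() if k > s) for s in range(4))

print(expectation, tail_sum)  # both equal 1.875
```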

Therefore, under any one of the three conditions in the theorem, the stopped process is dominated by an integrable random variable M. Since the stopped process Xt∧τ converges almost surely to Xτ, the dominated convergence theorem implies

    E[Xτ] = lim t→∞ E[Xt∧τ].

By the martingale property of the stopped process,

    E[Xt∧τ] = E[X0]   for all t ∈ ℕ₀,

hence

    E[Xτ] = E[X0].

Similarly, if X is a submartingale or a supermartingale, the equalities in the last two formulas are replaced by the appropriate inequalities.

References

  1. Grimmett, Geoffrey R.; Stirzaker, David R. (2001). Probability and Random Processes (3rd ed.). Oxford University Press. pp. 491–495. ISBN 9780198572220.
  2. Bhattacharya, Rabi; Waymire, Edward C. (2007). A Basic Course in Probability Theory. Springer. pp. 43–45. ISBN 978-0-387-71939-9.