A Dynkin system,[1] named after Eugene Dynkin, is a collection of subsets of a universal set ${\displaystyle \Omega }$ satisfying a set of axioms weaker than those of a 𝜎-algebra. Dynkin systems are sometimes referred to as 𝜆-systems (Dynkin himself used this term) or d-systems.[2] These set families have applications in measure theory and probability.

A major application of 𝜆-systems is the π-𝜆 theorem (see below).

## Definition

Let ${\displaystyle \Omega }$ be a nonempty set, and let ${\displaystyle D}$ be a collection of subsets of ${\displaystyle \Omega }$ (that is, ${\displaystyle D}$ is a subset of the power set of ${\displaystyle \Omega }$). Then ${\displaystyle D}$ is a Dynkin system if

1. ${\displaystyle \Omega \in D;}$
2. ${\displaystyle D}$ is closed under complements of subsets in supersets: if ${\displaystyle A,B\in D}$ and ${\displaystyle A\subseteq B,}$ then ${\displaystyle B\setminus A\in D;}$
3. ${\displaystyle D}$ is closed under countable increasing unions: if ${\displaystyle A_{1}\subseteq A_{2}\subseteq A_{3}\subseteq \cdots }$ is an increasing sequence[note 1] of sets in ${\displaystyle D}$ then ${\displaystyle \bigcup _{n=1}^{\infty }A_{n}\in D.}$
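On a finite ground set these axioms can be checked by brute force; axiom 3 is automatic there, since an increasing sequence drawn from a finite family is eventually constant. A minimal Python sketch (the helper name `is_dynkin` is ours, not standard) illustrating a classic Dynkin system that is not closed under intersection:

```python
def is_dynkin(omega, family):
    """Check the Dynkin-system axioms for a family of subsets of a
    finite ground set omega. Axiom 3 (countable increasing unions)
    holds automatically here: an increasing sequence drawn from a
    finite family stabilizes, so its union is already a member."""
    omega = frozenset(omega)
    D = {frozenset(A) for A in family}
    if omega not in D:                       # axiom 1
        return False
    return all(B - A in D                    # axiom 2: relative complements
               for A in D for B in D if A <= B)

omega = {1, 2, 3, 4}
# Closed under relative complements, but NOT under intersection:
# {1,2} & {2,3} = {2} is missing, so this is not a sigma-algebra.
D = [set(), {1, 2}, {3, 4}, {2, 3}, {1, 4}, omega]
print(is_dynkin(omega, D))  # True
```

The same family witnesses that a Dynkin system need not be a π-system.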

It is easy to check[proof 1] that any Dynkin system ${\displaystyle D}$ satisfies:

4. ${\displaystyle \varnothing \in D;}$
5. ${\displaystyle D}$ is closed under complements in ${\displaystyle \Omega }$: if ${\textstyle A\in D,}$ then ${\displaystyle \Omega \setminus A\in D;}$
• Taking ${\displaystyle A:=\Omega }$ shows that ${\displaystyle \varnothing \in D.}$
6. ${\displaystyle D}$ is closed under countable unions of pairwise disjoint sets: if ${\displaystyle A_{1},A_{2},A_{3},\ldots }$ is a sequence of pairwise disjoint sets in ${\displaystyle D}$ (meaning that ${\displaystyle A_{i}\cap A_{j}=\varnothing }$ for all ${\displaystyle i\neq j}$) then ${\displaystyle \bigcup _{n=1}^{\infty }A_{n}\in D.}$
• To be clear, this property also holds for finite sequences ${\displaystyle A_{1},\ldots ,A_{n}}$ of pairwise disjoint sets (by letting ${\displaystyle A_{i}:=\varnothing }$ for all ${\displaystyle i>n}$).

Conversely, it is easy to check[proof 2] that a family of sets satisfying conditions 4–6 is a Dynkin system. For this reason, some authors adopt conditions 4–6 as the definition of a Dynkin system.
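On a small finite ground set the equivalence of the two axiom sets can be confirmed exhaustively. A sketch (assuming, as noted above, that on a finite set the countable conditions 3 and 6 reduce to their two-set versions; all helper names are ours):

```python
from itertools import combinations

omega = frozenset({1, 2, 3})
# All 8 subsets of omega:
P = [frozenset(c) for r in range(4) for c in combinations(omega, r)]

def conditions_123(D):
    # (1) omega in D; (2) relative complements;
    # (3) increasing unions -- automatic for a finite family.
    return omega in D and all(B - A in D
                              for A in D for B in D if A <= B)

def conditions_456(D):
    # (4) empty set in D; (5) complements in omega;
    # (6) disjoint unions -- pairwise unions suffice on a finite set,
    #     by induction on the number of (distinct, nonempty) terms.
    return (frozenset() in D
            and all(omega - A in D for A in D)
            and all(A | B in D for A in D for B in D if not A & B))

# Each of the 2**8 families of subsets satisfies both definitions
# or neither:
families = [set(f) for r in range(len(P) + 1)
            for f in combinations(P, r)]
print(all(conditions_123(D) == conditions_456(D) for D in families))  # True
```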

An important fact is that any Dynkin system that is also a π-system (that is, closed under finite intersections) is a 𝜎-algebra. This can be verified by noting that closure under complements and finite intersections implies closure under finite unions, since ${\displaystyle A\cup B=\Omega \setminus ((\Omega \setminus A)\cap (\Omega \setminus B)),}$ and closure under finite unions together with condition 3 implies closure under countable unions, since any countable union is the increasing union of its finite partial unions.
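This fact can be illustrated on a four-point set (a sketch; `is_sigma_algebra` is a hypothetical helper, and on a finite set closure under pairwise unions already gives closure under countable unions):

```python
def is_sigma_algebra(omega, family):
    """Finite-case sigma-algebra check: contains omega and is closed
    under complements and pairwise (hence, here, countable) unions."""
    omega = frozenset(omega)
    D = {frozenset(A) for A in family}
    return (omega in D
            and all(omega - A in D for A in D)
            and all(A | B in D for A in D for B in D))

omega = {1, 2, 3, 4}

# A Dynkin system that is also a pi-system (closed under intersection):
print(is_sigma_algebra(omega, [set(), {1}, {2, 3, 4}, omega]))  # True

# A Dynkin system that is not a pi-system; closure under union fails
# ({1,2} | {2,3} = {1,2,3} is missing), so it is not a sigma-algebra:
print(is_sigma_algebra(omega,
                       [set(), {1, 2}, {3, 4}, {2, 3}, {1, 4}, omega]))  # False
```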

Given any collection ${\displaystyle {\mathcal {J}}}$ of subsets of ${\displaystyle \Omega ,}$ there exists a unique Dynkin system, denoted ${\displaystyle D\{{\mathcal {J}}\},}$ which is minimal with respect to containing ${\displaystyle {\mathcal {J}}.}$ That is, if ${\displaystyle {\tilde {D}}}$ is any Dynkin system containing ${\displaystyle {\mathcal {J}},}$ then ${\displaystyle D\{{\mathcal {J}}\}\subseteq {\tilde {D}}.}$ ${\displaystyle D\{{\mathcal {J}}\}}$ is called the Dynkin system generated by ${\displaystyle {\mathcal {J}}.}$ For instance, ${\displaystyle D\{\varnothing \}=\{\varnothing ,\Omega \}.}$ For another example, let ${\displaystyle \Omega =\{1,2,3,4\}}$ and ${\displaystyle {\mathcal {J}}=\{\{1\}\}}$; then ${\displaystyle D\{{\mathcal {J}}\}=\{\varnothing ,\{1\},\{2,3,4\},\Omega \}.}$

## Sierpiński–Dynkin's π-λ theorem

Sierpiński–Dynkin's π-𝜆 theorem:[3] If ${\displaystyle P}$ is a π-system and ${\displaystyle D}$ is a Dynkin system with ${\displaystyle P\subseteq D,}$ then ${\displaystyle \sigma \{P\}\subseteq D.}$ In other words, the 𝜎-algebra generated by ${\displaystyle P}$ is contained in ${\displaystyle D.}$ Thus a Dynkin system contains a π-system if and only if it contains the 𝜎-algebra generated by that π-system.

One application of Sierpiński–Dynkin's π-𝜆 theorem is the uniqueness of a measure that evaluates the length of an interval (known as the Lebesgue measure):

Let ${\displaystyle (\Omega ,{\mathcal {B}},\ell )}$ be the unit interval [0,1] with the Lebesgue measure on Borel sets.
Let ${\displaystyle m}$ be another measure on ${\displaystyle \Omega }$ satisfying ${\displaystyle m[(a,b)]=b-a,}$ and let ${\displaystyle D}$ be the family of sets ${\displaystyle S}$ such that ${\displaystyle m[S]=\ell [S].}$ Let ${\displaystyle I:=\{(a,b),[a,b),(a,b],[a,b]:0<a\leq b<1\},}$ and observe that ${\displaystyle I}$ is closed under finite intersections, that ${\displaystyle I\subseteq D,}$ and that ${\displaystyle {\mathcal {B}}}$ is the 𝜎-algebra generated by ${\displaystyle I.}$ It may be shown that ${\displaystyle D}$ satisfies the above conditions for a Dynkin system. From Sierpiński–Dynkin's π-𝜆 theorem it follows that ${\displaystyle D}$ in fact includes all of ${\displaystyle {\mathcal {B}},}$ which is equivalent to showing that the Lebesgue measure is unique on ${\displaystyle {\mathcal {B}}.}$

### Application to probability distributions

The π-𝜆 theorem motivates the common definition of the probability distribution of a random variable ${\displaystyle X:(\Omega ,{\mathcal {F}},\operatorname {P} )\to \mathbb {R} }$ in terms of its cumulative distribution function. Recall that the cumulative distribution function of a random variable is defined as
${\displaystyle F_{X}(a)=\operatorname {P} [X\leq a],\qquad a\in \mathbb {R} ,}$
whereas the seemingly more general law of the variable is the probability measure
${\displaystyle {\mathcal {L}}_{X}(B)=\operatorname {P} \left[X^{-1}(B)\right]\quad {\text{ for all }}B\in {\mathcal {B}}(\mathbb {R} ),}$
where ${\displaystyle {\mathcal {B}}(\mathbb {R} )}$ is the Borel 𝜎-algebra.
The random variables ${\displaystyle X:(\Omega ,{\mathcal {F}},\operatorname {P} )\to \mathbb {R} }$ and ${\displaystyle Y:({\tilde {\Omega }},{\tilde {\mathcal {F}}},{\tilde {\operatorname {P} }})\to \mathbb {R} }$ (on two possibly different probability spaces) are equal in distribution (or law), denoted by ${\displaystyle X\,{\stackrel {\mathcal {D}}{=}}\,Y,}$ if they have the same cumulative distribution functions; that is, if ${\displaystyle F_{X}=F_{Y}.}$ The motivation for the definition stems from the observation that if ${\displaystyle F_{X}=F_{Y},}$ then ${\displaystyle {\mathcal {L}}_{X}}$ and ${\displaystyle {\mathcal {L}}_{Y}}$ agree on the π-system ${\displaystyle \{(-\infty ,a]:a\in \mathbb {R} \},}$ which generates ${\displaystyle {\mathcal {B}}(\mathbb {R} ),}$ and so by the example above, ${\displaystyle {\mathcal {L}}_{X}={\mathcal {L}}_{Y}.}$
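In the discrete case this can be made concrete: agreement on the events ${\displaystyle (-\infty ,a]}$ pins down the whole law, because the probability mass function is recovered from the CDF by differencing. A small sketch (integer weights stand in for probabilities to keep the arithmetic exact):

```python
# Two integer-weighted laws on the points 1, 2, 3 (weights out of 10):
pmf_x = {1: 2, 2: 5, 3: 3}
pmf_y = {1: 2, 2: 5, 3: 3}

def cdf(pmf, a):
    """Weight of the pi-system event (-inf, a]."""
    return sum(w for k, w in pmf.items() if k <= a)

# Agreement on the generating pi-system {(-inf, a] : a}:
assert all(cdf(pmf_x, a) == cdf(pmf_y, a) for a in [1, 2, 3])

# Differencing the CDF recovers the pmf, hence the whole law:
recovered = {k: cdf(pmf_x, k) - cdf(pmf_x, k - 1) for k in sorted(pmf_x)}
print(recovered)  # {1: 2, 2: 5, 3: 3}
```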

A similar result holds for the joint distribution of a random vector. For example, suppose ${\displaystyle X}$ and ${\displaystyle Y}$ are two random variables defined on the same probability space ${\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} ),}$ with respectively generated π-systems ${\displaystyle {\mathcal {I}}_{X}}$ and ${\displaystyle {\mathcal {I}}_{Y}.}$ The joint cumulative distribution function of ${\displaystyle (X,Y)}$ is

${\displaystyle F_{X,Y}(a,b)=\operatorname {P} [X\leq a,Y\leq b]=\operatorname {P} \left[X^{-1}((-\infty ,a])\cap Y^{-1}((-\infty ,b])\right],\quad {\text{ for all }}a,b\in \mathbb {R} .}$

However, ${\displaystyle A=X^{-1}((-\infty ,a])\in {\mathcal {I}}_{X}}$ and ${\displaystyle B=Y^{-1}((-\infty ,b])\in {\mathcal {I}}_{Y}.}$ Because

${\displaystyle {\mathcal {I}}_{X,Y}=\left\{A\cap B:A\in {\mathcal {I}}_{X},{\text{ and }}B\in {\mathcal {I}}_{Y}\right\}}$
is a π-system generated by the random pair ${\displaystyle (X,Y),}$ the π-𝜆 theorem is used to show that the joint cumulative distribution function suffices to determine the joint law of ${\displaystyle (X,Y).}$ In other words, ${\displaystyle (X,Y)}$ and ${\displaystyle (W,Z)}$ have the same distribution if and only if they have the same joint cumulative distribution function.

In the theory of stochastic processes, two processes ${\displaystyle (X_{t})_{t\in T},(Y_{t})_{t\in T}}$ are known to be equal in distribution if and only if they agree on all finite-dimensional distributions; that is, for all ${\displaystyle t_{1},\ldots ,t_{n}\in T,\,n\in \mathbb {N} ,}$

${\displaystyle \left(X_{t_{1}},\ldots ,X_{t_{n}}\right)\,{\stackrel {\mathcal {D}}{=}}\,\left(Y_{t_{1}},\ldots ,Y_{t_{n}}\right).}$

The proof of this is another application of the π-𝜆 theorem.[4]

## See also

• Algebra of sets – Identities and relationships involving sets
• δ-ring – Ring closed under countable intersections
• Field of sets – Algebraic concept in measure theory, also referred to as an algebra of sets
• Monotone class theorem – Result in measure theory closely related to the π-𝜆 theorem
• π-system – Family of sets closed under intersection
• Ring of sets – Family closed under unions and relative complements
• σ-algebra – Algebraic structure of set algebra
• 𝜎-ideal – Family closed under subsets and countable unions
• 𝜎-ring – Ring closed under countable unions

## Notes

1. ^ A sequence of sets ${\displaystyle A_{1},A_{2},A_{3},\ldots }$ is called increasing if ${\displaystyle A_{n}\subseteq A_{n+1))$ for all ${\displaystyle n\geq 1.}$

Proofs

1. ^ Assume ${\displaystyle {\mathcal {D}}}$ satisfies (1), (2), and (3). Proof of (5): Property (5) follows from (1) and (2) by using ${\displaystyle B:=\Omega .}$ Property (4) then follows by taking ${\displaystyle A:=\Omega }$ in (5). The following lemma is used to prove (6). Lemma: If ${\displaystyle A,B\in {\mathcal {D}}}$ are disjoint then ${\displaystyle A\cup B\in {\mathcal {D}}.}$ Proof of Lemma: ${\displaystyle A\cap B=\varnothing }$ implies ${\displaystyle B\subseteq \Omega \setminus A,}$ where ${\displaystyle \Omega \setminus A\in {\mathcal {D}}}$ by (5). Now (2) implies that ${\displaystyle {\mathcal {D}}}$ contains ${\displaystyle (\Omega \setminus A)\setminus B=\Omega \setminus (A\cup B),}$ so that (5) guarantees that ${\displaystyle A\cup B\in {\mathcal {D}},}$ which proves the lemma. Proof of (6): Assume that ${\displaystyle A_{1},A_{2},A_{3},\ldots }$ are pairwise disjoint sets in ${\displaystyle {\mathcal {D}}.}$ For every integer ${\displaystyle n>0,}$ the lemma implies that ${\displaystyle D_{n}:=A_{1}\cup \cdots \cup A_{n}\in {\mathcal {D}}.}$ Because ${\displaystyle D_{1}\subseteq D_{2}\subseteq D_{3}\subseteq \cdots }$ is increasing, (3) guarantees that ${\displaystyle {\mathcal {D}}}$ contains the union ${\displaystyle D_{1}\cup D_{2}\cup \cdots =A_{1}\cup A_{2}\cup \cdots ,}$ as desired. ${\displaystyle \blacksquare }$
2. ^ Assume ${\displaystyle {\mathcal {D}}}$ satisfies (4), (5), and (6). Proof of (1): Property (1) holds since ${\displaystyle \Omega =\Omega \setminus \varnothing }$ belongs to ${\displaystyle {\mathcal {D}}}$ by (4) and (5). Proof of (2): If ${\displaystyle A,B\in {\mathcal {D}}}$ satisfy ${\displaystyle A\subseteq B}$ then (5) implies ${\displaystyle \Omega \setminus B\in {\mathcal {D}},}$ and since ${\displaystyle (\Omega \setminus B)\cap A=\varnothing ,}$ (6) implies that ${\displaystyle {\mathcal {D}}}$ contains ${\displaystyle (\Omega \setminus B)\cup A=\Omega \setminus (B\setminus A),}$ so that finally (5) guarantees that its complement ${\displaystyle B\setminus A}$ is in ${\displaystyle {\mathcal {D}}.}$ Proof of (3): Assume ${\displaystyle A_{1}\subseteq A_{2}\subseteq \cdots }$ is an increasing sequence of subsets in ${\displaystyle {\mathcal {D}},}$ let ${\displaystyle D_{1}=A_{1},}$ and let ${\displaystyle D_{i}=A_{i}\setminus A_{i-1}}$ for every ${\displaystyle i>1,}$ where (2) guarantees that ${\displaystyle D_{2},D_{3},\ldots }$ all belong to ${\displaystyle {\mathcal {D}}.}$ Since ${\displaystyle D_{1},D_{2},D_{3},\ldots }$ are pairwise disjoint, (6) guarantees that their union ${\displaystyle D_{1}\cup D_{2}\cup D_{3}\cup \cdots =A_{1}\cup A_{2}\cup A_{3}\cup \cdots }$ belongs to ${\displaystyle {\mathcal {D}},}$ which proves (3). ${\displaystyle \blacksquare }$
## References

1. ^ Dynkin, E., Foundations of the Theory of Markov Processes, Moscow, 1959.
2. ^ Aliprantis, Charalambos; Border, Kim C. (2006). Infinite Dimensional Analysis: a Hitchhiker's Guide (Third ed.). Springer. Retrieved August 23, 2010.
3. ^ Sengupta. "Lectures on measure theory lecture 6: The Dynkin π − λ Theorem" (PDF). Math.lsu. Retrieved 3 January 2023.
4. ^ Kallenberg, Foundations of Modern Probability, p. 48.