The cumulative distribution function of a real-valued random variable $X$ is the function given by
$$F_X(x) = \operatorname{P}(X \le x),$$
where the right-hand side represents the probability that the random variable $X$ takes on a value less than or equal to $x$.
The probability that $X$ lies in the semi-closed interval $(a, b]$, where $a < b$, is therefore
$$\operatorname{P}(a < X \le b) = F_X(b) - F_X(a).$$
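As a quick numerical illustration (a sketch, not from the original text, using SciPy's standard normal; any distribution with a known CDF works the same way):

```python
from scipy.stats import norm

# P(X <= x) for a standard normal random variable X
x = 1.0
print(norm.cdf(x))                 # ~0.8413

# P(a < X <= b) = F(b) - F(a)
a, b = -1.0, 1.0
print(norm.cdf(b) - norm.cdf(a))   # ~0.6827, the familiar one-sigma mass
```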
In the definition above, the "less than or equal to" sign, "≤", is a convention, not a universally used one (e.g. Hungarian literature uses "<"), but the distinction is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation.
If treating several random variables $X, Y, \ldots$, the corresponding letters are used as subscripts while, if treating only one, the subscript is usually omitted. It is conventional to use a capital $F$ for a cumulative distribution function, in contrast to the lower-case $f$ used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution uses $\Phi$ and $\varphi$ instead of $F$ and $f$, respectively.
The probability density function of a continuous random variable can be determined from the cumulative distribution function by differentiating using the Fundamental Theorem of Calculus; i.e. given $F(x)$,
$$f(x) = \frac{dF(x)}{dx},$$
as long as the derivative exists.
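For instance, for an exponentially distributed random variable with rate $\lambda > 0$ (a standard textbook case, offered here as an illustration):
$$F_X(x) = 1 - e^{-\lambda x} \text{ for } x \ge 0, \qquad f_X(x) = \frac{d}{dx}\left(1 - e^{-\lambda x}\right) = \lambda e^{-\lambda x}.$$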
Every cumulative distribution function $F_X$ is non-decreasing and right-continuous, and satisfies $\lim_{x \to -\infty} F_X(x) = 0$ and $\lim_{x \to \infty} F_X(x) = 1$. Every function with these four properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable.
As an example, suppose $X$ is binomially distributed. Then the CDF of $X$ is given by
$$F(k; n, p) = \operatorname{P}(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} \binom{n}{i} p^i (1-p)^{n-i}.$$
Here $p$ is the probability of success, the function denotes the discrete probability distribution of the number of successes in a sequence of $n$ independent experiments, and $\lfloor k \rfloor$ is the "floor" under $k$, i.e. the greatest integer less than or equal to $k$.
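The sum over the pmf can be checked directly against a library implementation; a minimal sketch (the helper name binom_cdf is ours, and SciPy likewise floors a non-integer argument):

```python
from math import comb, floor
from scipy.stats import binom

def binom_cdf(k, n, p):
    """CDF of Binomial(n, p): sum the pmf up to floor(k)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(floor(k) + 1))

n, p = 10, 0.3
print(binom_cdf(4.7, n, p))   # direct sum up to floor(4.7) = 4
print(binom.cdf(4.7, n, p))   # SciPy agrees
```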
Complementary cumulative distribution function (tail distribution)
Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf) or simply the tail distribution or exceedance, and is defined as
$$\bar{F}_X(x) = \operatorname{P}(X > x) = 1 - F_X(x).$$
This has applications in statistical hypothesis testing, for example, because the one-sided p-value is the probability of observing a test statistic at least as extreme as the one observed. Thus, provided that the test statistic, $T$, has a continuous distribution, the one-sided p-value is simply given by the ccdf: for an observed value $t$ of the test statistic,
$$p = \operatorname{P}(T \ge t) = \operatorname{P}(T > t) = 1 - F_T(t).$$
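As a sketch, assuming the test statistic is standard normal under the null hypothesis (e.g. a z-test; the observed value below is made up), SciPy exposes the ccdf as the survival function sf:

```python
from scipy.stats import norm

t = 2.1                # hypothetical observed z-statistic
p_value = norm.sf(t)   # ccdf: P(T >= t) = 1 - F(t)
print(p_value)         # ~0.0179
```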
As $x \to \infty$, $\bar{F}_X(x) \to 0$, and in fact $\bar{F}_X(x) = o(1/x)$ provided that $\operatorname{E}[X]$ is finite. Proof: Assuming $X$ has a density function $f_X$, for any $c > 0$,
$$\operatorname{E}[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx \ge \int_{-\infty}^{c} x f_X(x)\,dx + c \int_{c}^{\infty} f_X(x)\,dx.$$
Then, on recognizing $\bar{F}_X(c) = \int_c^{\infty} f_X(x)\,dx$ and rearranging terms,
$$0 \le c \bar{F}_X(c) \le \operatorname{E}[X] - \int_{-\infty}^{c} x f_X(x)\,dx \to 0 \text{ as } c \to \infty,$$
as claimed.
For a random variable having an expectation,
$$\operatorname{E}[X] = \int_0^{\infty} \bar{F}_X(x)\,dx - \int_{-\infty}^0 F_X(x)\,dx,$$
and for a non-negative random variable the second term is 0. If the random variable can only take non-negative integer values, this is equivalent to
$$\operatorname{E}[X] = \sum_{n=0}^{\infty} \bar{F}_X(n).$$
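A quick numerical check of the integer-valued identity, using a Poisson distribution as an assumed example (poisson.sf(n, mu) is $\operatorname{P}(X > n)$, and the infinite sum is truncated where the tail is negligible):

```python
from scipy.stats import poisson

mu = 3.5
tail_sum = sum(poisson.sf(n, mu) for n in range(200))
print(tail_sum)   # ~3.5, matching E[X] = mu
```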
Folded cumulative distribution
While the plot of a cumulative distribution often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over, that is
$$F_{\text{fold}}(x) = F(x)\,\mathbf{1}_{\{F(x) \le 0.5\}} + (1 - F(x))\,\mathbf{1}_{\{F(x) > 0.5\}},$$
where $\mathbf{1}_{\{A\}}$ denotes the indicator function.
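A minimal sketch of the fold, computed pointwise from the CDF (standard normal assumed):

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-3, 3, 61)
F = norm.cdf(x)
# fold the top half of the curve down: keep F where F <= 0.5, else 1 - F
F_fold = np.where(F <= 0.5, F, 1 - F)
print(F_fold.max())   # ~0.5, attained at the median
```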
Inverse distribution function (quantile function)

If the CDF $F$ is strictly increasing and continuous, its inverse $F^{-1}(p)$, $p \in [0, 1]$, is the unique real number $x$ such that $F(x) = p$; in general, one uses the generalized inverse $F^{-1}(p) = \inf\{x \in \mathbb{R} : F(x) \ge p\}$, also called the quantile function.

If $\{X_\alpha\}$ is a collection of independent $F$-distributed random variables defined on the same sample space, then there exist random variables $Y_\alpha$ such that $Y_\alpha$ is distributed as $U(0, 1)$ and $F^{-1}(Y_\alpha) = X_\alpha$ with probability 1 for all $\alpha$.
The inverse of the cdf can be used to translate results obtained for the uniform distribution to other distributions.
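This is the idea behind inverse transform sampling: applying $F^{-1}$ to $U(0,1)$ draws yields draws with CDF $F$. A sketch for the exponential distribution, whose inverse CDF has the closed form $F^{-1}(u) = -\ln(1-u)/\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
u = rng.uniform(size=100_000)    # U(0, 1) draws
x = -np.log(1 - u) / lam         # inverse CDF of the Exp(lam) distribution
print(x.mean())                  # ~0.5 = 1/lam, as expected
```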
Empirical distribution function
The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.
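A direct implementation is one line: the empirical distribution function at $x$ is the fraction of sample points less than or equal to $x$ (a minimal sketch; statsmodels, among others, provides a ready-made version):

```python
import numpy as np

def ecdf(sample, x):
    """Fraction of sample points less than or equal to x."""
    return np.mean(np.asarray(sample) <= x)

rng = np.random.default_rng(1)
data = rng.normal(size=1000)
print(ecdf(data, 0.0))   # ~0.5 for a standard normal sample
```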
Definition for two random variables
When dealing simultaneously with more than one random variable the joint cumulative distribution function can also be defined. For example, for a pair of random variables $X, Y$, the joint CDF is given by
$$F_{X,Y}(x, y) = \operatorname{P}(X \le x, Y \le y),$$
where the right-hand side represents the probability that the random variable $X$ takes on a value less than or equal to $x$ and that $Y$ takes on a value less than or equal to $y$.
Example of joint cumulative distribution function:

For two continuous variables $X$ and $Y$:
$$\operatorname{P}(a < X < b \text{ and } c < Y < d) = \int_a^b \int_c^d f(x, y)\,dy\,dx.$$
For two discrete random variables, it is helpful to build a table of probabilities and read off the cumulative probability for each potential range of $X$ and $Y$. As an example: given the joint probability mass function in tabular form, determine the joint cumulative distribution function.
(table of joint probabilities $\operatorname{P}(X = x, Y = y)$ for $x \in \{1, 3, 5, 7\}$ and $y \in \{2, 4, 6, 8\}$)
Solution: using the given table of probabilities for each potential range of X and Y, the joint cumulative distribution function may be constructed in tabular form:
(table of cumulative probabilities $F_{X,Y}(x, y)$ over the row ranges $X < 1$, $1 \le X < 3$, $3 \le X < 5$, $5 \le X < 7$, $X \ge 7$ and the column ranges $Y < 2$, $2 \le Y < 4$, $4 \le Y < 6$, $6 \le Y < 8$, $Y \ge 8$)
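Whatever the actual entries, the tabular construction amounts to a two-dimensional cumulative sum of the pmf table; a sketch with made-up probabilities (not the exercise's values):

```python
import numpy as np

# hypothetical joint pmf: rows are X = 1, 3, 5, 7; columns are Y = 2, 4, 6, 8
pmf = np.array([
    [0.05, 0.05, 0.10, 0.05],
    [0.05, 0.10, 0.05, 0.05],
    [0.10, 0.05, 0.05, 0.10],
    [0.05, 0.05, 0.05, 0.05],
])
assert np.isclose(pmf.sum(), 1.0)

# F(x, y) = sum of pmf over all cells with X <= x and Y <= y
cdf = pmf.cumsum(axis=0).cumsum(axis=1)
print(cdf)   # the bottom-right entry is 1.0
```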
Definition for more than two random variables
For random variables $X_1, \ldots, X_N$, the joint CDF is given by
$$F_{X_1,\ldots,X_N}(x_1, \ldots, x_N) = \operatorname{P}(X_1 \le x_1, \ldots, X_N \le x_N).$$
Interpreting the $N$ random variables as a random vector $\mathbf{X} = (X_1, \ldots, X_N)^T$ yields a shorter notation:
$$F_{\mathbf{X}}(\mathbf{x}) = \operatorname{P}(X_1 \le x_1, \ldots, X_N \le x_N).$$
Every multivariate CDF is:

Monotonically non-decreasing for each of its variables,
Right-continuous in each of its variables,
$0 \le F_{X_1,\ldots,X_N}(x_1, \ldots, x_N) \le 1$,
$\lim_{x_1,\ldots,x_N \to +\infty} F_{X_1,\ldots,X_N}(x_1, \ldots, x_N) = 1$ and $\lim_{x_i \to -\infty} F_{X_1,\ldots,X_N}(x_1, \ldots, x_N) = 0$ for all $i$.
Not every function satisfying the above four properties is a multivariate CDF, unlike in the single-dimension case. For example, let $F(x, y) = 0$ for $x < 0$ or $x + y < 1$ or $y < 0$, and let $F(x, y) = 1$ otherwise. It is easy to see that the above conditions are met, and yet $F$ is not a CDF since if it was, then $\operatorname{P}\left(\tfrac{1}{3} < X \le 1, \tfrac{1}{3} < Y \le 1\right) = -1$, as explained below.
The probability that a point belongs to a hyperrectangle is analogous to the 1-dimensional case. For example, for two random variables:
$$F_{X_1,X_2}(b, d) - F_{X_1,X_2}(a, d) - F_{X_1,X_2}(b, c) + F_{X_1,X_2}(a, c) = \operatorname{P}(a < X_1 \le b, c < X_2 \le d) \ge 0.$$
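The corner expression is easy to evaluate for any joint CDF; a sketch using two independent standard normals, so that the joint CDF factors (an assumption made for the example):

```python
from scipy.stats import norm

def joint_cdf(x, y):
    # independent standard normals: F(x, y) = F(x) * F(y)
    return norm.cdf(x) * norm.cdf(y)

a, b, c, d = -1.0, 1.0, -1.0, 1.0
# P(a < X1 <= b, c < X2 <= d) by inclusion-exclusion on the four corners
prob = joint_cdf(b, d) - joint_cdf(a, d) - joint_cdf(b, c) + joint_cdf(a, c)
print(prob)   # ~0.466 = 0.6827**2
```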
Complex random variable
The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form $\operatorname{P}(Z \le 1 + 2i)$ make no sense. However, expressions of the form $\operatorname{P}(\Re{(Z)} \le 1, \Im{(Z)} \le 3)$ make sense. Therefore, we define the cumulative distribution of a complex random variable via the joint distribution of its real and imaginary parts:
$$F_Z(z) = F_{\Re{(Z)},\Im{(Z)}}(\Re{(z)}, \Im{(z)}) = \operatorname{P}(\Re{(Z)} \le \Re{(z)}, \Im{(Z)} \le \Im{(z)}).$$
Applying this componentwise yields
$$F_{\mathbf{Z}}(\mathbf{z}) = F_{\Re{(Z_1)},\Im{(Z_1)},\ldots,\Re{(Z_N)},\Im{(Z_N)}}(\Re{(z_1)}, \Im{(z_1)}, \ldots, \Re{(z_N)}, \Im{(z_N)})$$
as definition for the CDF of a complex random vector $\mathbf{Z} = (Z_1, \ldots, Z_N)^T$.
Use in statistical analysis
The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests. Such tests can assess whether there is evidence against a sample of data having arisen from a given distribution, or evidence against two samples of data having arisen from the same (unknown) population distribution.
Kolmogorov–Smirnov and Kuiper's tests
The Kolmogorov–Smirnov test is based on cumulative distribution functions and can be used to test whether two empirical distributions are different or whether an empirical distribution differs from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic, as in day of the week. For instance, Kuiper's test might be used to see whether the number of tornadoes varies during the year or whether sales of a product vary by day of the week or day of the month.
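Both one-sample and two-sample Kolmogorov–Smirnov tests are available in SciPy; a minimal sketch with synthetic data (the samples below are made up for illustration):

```python
import numpy as np
from scipy.stats import kstest, ks_2samp, norm

rng = np.random.default_rng(2)
sample1 = rng.normal(size=500)
sample2 = rng.uniform(-2, 2, size=500)

# one-sample test: is sample1 consistent with a standard normal CDF?
print(kstest(sample1, norm.cdf))

# two-sample test: could sample1 and sample2 share a distribution?
print(ks_2samp(sample1, sample2))
```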