*Figures: probability mass function; cumulative distribution function*

| Notation | $B(n, p)$ |
|---|---|
| Parameters | $n \in \{0, 1, 2, \ldots\}$ – number of trials; $p \in [0, 1]$ – success probability for each trial; $q = 1 - p$ |
| Support | $k \in \{0, 1, \ldots, n\}$ – number of successes |
| PMF | $\binom{n}{k} p^k q^{n-k}$ |
| CDF | $I_{q}(n - k, 1 + k)$ (the regularized incomplete beta function) |
| Mean | $np$ |
| Median | $\lfloor np \rfloor$ or $\lceil np \rceil$ |
| Mode | $\lfloor (n + 1)p \rfloor$ or $\lceil (n + 1)p \rceil - 1$ |
| Variance | $npq = np(1 - p)$ |
| Skewness | $\dfrac{q - p}{\sqrt{npq}}$ |
| Ex. kurtosis | $\dfrac{1 - 6pq}{npq}$ |
| Entropy | $\tfrac{1}{2} \log_2(2\pi e\, npq) + O\!\left(\tfrac{1}{n}\right)$ in shannons. For nats, use the natural log in the log. |
| MGF | $(q + p e^{t})^{n}$ |
| CF | $(q + p e^{it})^{n}$ |
| PGF | $(q + p z)^{n}$ |
| Fisher information | $g_n(p) = \dfrac{n}{pq}$ (for fixed $n$) |


In probability theory and statistics, the **binomial distribution** with parameters *n* and *p* is the discrete probability distribution of the number of successes in a sequence of *n* independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: *success* (with probability *p*) or *failure* (with probability *q* = 1 − *p*). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., *n* = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.^{[1]}

The binomial distribution is frequently used to model the number of successes in a sample of size *n* drawn with replacement from a population of size *N*. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for *N* much larger than *n*, the binomial distribution remains a good approximation, and is widely used.

In general, if the random variable *X* follows the binomial distribution with parameters *n* ∈ ℕ and *p* ∈ [0,1], we write *X* ~ B(*n*, *p*). The probability of getting exactly *k* successes in *n* independent Bernoulli trials is given by the probability mass function:

$$f(k, n, p) = \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$

for *k* = 0, 1, 2, ..., *n*, where

$$\binom{n}{k} = \frac{n!}{k!(n-k)!}$$

is the binomial coefficient, hence the name of the distribution. The formula can be understood as follows: *k* successes occur with probability *p*^{*k*} and *n* − *k* failures occur with probability (1 − *p*)^{*n* − *k*}. However, the *k* successes can occur anywhere among the *n* trials, and there are $\binom{n}{k}$ different ways of distributing *k* successes in a sequence of *n* trials.

In creating reference tables for binomial distribution probability, usually the table is filled in up to *n*/2 values. This is because for *k* > *n*/2, the probability can be calculated by its complement as

$$f(k, n, p) = f(n - k, n, 1 - p).$$

Looking at the expression *f*(*k*, *n*, *p*) as a function of *k*, there is a *k* value that maximizes it. This *k* value can be found by calculating

$$\frac{f(k+1, n, p)}{f(k, n, p)} = \frac{(n-k)\,p}{(k+1)(1-p)}$$

and comparing it to 1. There is always an integer *M* that satisfies^{[2]}

$$(n+1)p - 1 \leq M < (n+1)p.$$

*f*(*k*, *n*, *p*) is monotone increasing for *k* < *M* and monotone decreasing for *k* > *M*, with the exception of the case where (*n* + 1)*p* is an integer. In this case, there are two values for which *f* is maximal: (*n* + 1)*p* and (*n* + 1)*p* − 1. *M* is the *most probable* outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode.

Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is

$$f(4, 6, 0.3) = \binom{6}{4} 0.3^4 (1 - 0.3)^{2} = 0.059535.$$
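
This calculation is easy to reproduce in a few lines of Python; the values *n* = 6, *p* = 0.3, *k* = 4 come from the example above, and SciPy is assumed to be available:

```python
# A minimal check of the worked example above.
from math import comb
from scipy.stats import binom

n, p, k = 6, 0.3, 4

# Direct evaluation of the PMF: C(n, k) * p^k * (1-p)^(n-k)
manual = comb(n, k) * p**k * (1 - p)**(n - k)

# Same value via SciPy's binomial distribution
library = binom.pmf(k, n, p)

print(manual, library)  # both print approximately 0.059535
```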

The cumulative distribution function can be expressed as:

$$F(k; n, p) = \Pr(X \leq k) = \sum_{i=0}^{\lfloor k \rfloor} \binom{n}{i} p^i (1-p)^{n-i},$$

where $\lfloor k \rfloor$ is the "floor" under *k*, i.e. the greatest integer less than or equal to *k*.

It can also be represented in terms of the regularized incomplete beta function, as follows:^{[3]}

$$F(k; n, p) = \Pr(X \leq k) = I_{1-p}(n - k, k + 1) = (n - k) \binom{n}{k} \int_0^{1-p} t^{n-k-1} (1 - t)^{k} \, dt,$$

which is equivalent to the cumulative distribution function of the F-distribution:^{[4]}

$$F(k; n, p) = F_{F\text{-distribution}}\!\left(x = \frac{1-p}{p} \cdot \frac{k+1}{n-k};\ d_1 = 2(n - k),\ d_2 = 2(k + 1)\right).$$
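
A quick numerical sketch of these identities, assuming SciPy; the parameter values are arbitrary and chosen only for illustration:

```python
# Numerical check of the CDF identities above.
from scipy.stats import binom, f
from scipy.special import betainc  # regularized incomplete beta I_x(a, b)

n, p, k = 10, 0.35, 4

cdf_direct = binom.cdf(k, n, p)                 # sum of the PMF for 0..k
cdf_beta = betainc(n - k, k + 1, 1 - p)         # I_{1-p}(n-k, k+1)
cdf_f = f.cdf((1 - p) / p * (k + 1) / (n - k),  # F-distribution form
              2 * (n - k), 2 * (k + 1))

print(cdf_direct, cdf_beta, cdf_f)  # all three agree to floating-point precision
```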

Some closed-form bounds for the cumulative distribution function are given below.

If *X* ~ *B*(*n*, *p*), that is, *X* is a binomially distributed random variable, *n* being the total number of experiments and *p* the probability of each experiment yielding a successful result, then the expected value of *X* is:^{[5]}

$$\operatorname{E}[X] = np.$$

This follows from the linearity of the expected value along with the fact that *X* is the sum of *n* identical Bernoulli random variables, each with expected value *p*. In other words, if $X_1, \ldots, X_n$ are identical (and independent) Bernoulli random variables with parameter *p*, then $X = X_1 + \cdots + X_n$ and

$$\operatorname{E}[X] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] = np.$$

The variance is:

$$\operatorname{Var}(X) = np(1 - p) = npq.$$

This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances.

The first 6 central moments, defined as $\mu_c = \operatorname{E}\left[(X - \operatorname{E}[X])^c\right]$, are given by

$$\begin{aligned}
\mu_1 &= 0, \\
\mu_2 &= np(1-p), \\
\mu_3 &= np(1-p)(1-2p), \\
\mu_4 &= np(1-p)\bigl(1 + (3n - 6)\,p(1-p)\bigr), \\
\mu_5 &= np(1-p)(1-2p)\bigl(1 + (10n - 12)\,p(1-p)\bigr), \\
\mu_6 &= np(1-p)\bigl(1 - 30p(1-p)(1 - 4p(1-p)) + 5np(1-p)(5 - 26p(1-p)) + 15n^2 p^2 (1-p)^2\bigr).
\end{aligned}$$

The non-central moments satisfy

$$\operatorname{E}[X] = np, \qquad \operatorname{E}[X^2] = np(1-p) + n^2 p^2,$$

and in general^{[6]}^{[7]}

$$\operatorname{E}[X^c] = \sum_{k=0}^{c} \left\{ {c \atop k} \right\} n^{\underline{k}} p^k,$$

where $\left\{ {c \atop k} \right\}$ are the Stirling numbers of the second kind, and $n^{\underline{k}} = n(n-1)\cdots(n-k+1)$ is the $k$th falling power of $n$.
A simple bound^{[8]} follows by bounding the binomial moments via the higher Poisson moments:

$$\operatorname{E}[X^c] \leq \left(\frac{c}{\ln(c/(np) + 1)}\right)^{c} \leq (np)^c \exp\!\left(\frac{c^2}{2np}\right).$$

This shows that if $c = O(\sqrt{np})$, then $\operatorname{E}[X^c]$ is at most a constant factor away from $(np)^c$.

Usually the mode of a binomial *B*(*n*, *p*) distribution is equal to $\lfloor (n+1)p \rfloor$, where $\lfloor \cdot \rfloor$ is the floor function. However, when (*n* + 1)*p* is an integer and *p* is neither 0 nor 1, then the distribution has two modes: (*n* + 1)*p* and (*n* + 1)*p* − 1. When *p* is equal to 0 or 1, the mode will be 0 and *n* correspondingly. These cases can be summarized as follows:

$$\text{mode} = \begin{cases} \lfloor (n+1)\,p \rfloor & \text{if } (n+1)p \text{ is 0 or a noninteger}, \\ (n+1)\,p \ \text{ and } \ (n+1)\,p - 1 & \text{if } (n+1)p \in \{1, \ldots, n\}, \\ n & \text{if } (n+1)p = n + 1. \end{cases}$$

**Proof:** Let

$$f(k) = \binom{n}{k} p^k q^{n-k}.$$

For *p* = 0, only *f*(0) has a nonzero value, with *f*(0) = 1. For *p* = 1 we find *f*(*n*) = 1 and *f*(*k*) = 0 for *k* ≠ *n*. This proves that the mode is 0 for *p* = 0 and *n* for *p* = 1.

Let 0 < *p* < 1. We find

$$\frac{f(k+1)}{f(k)} = \frac{(n-k)\,p}{(k+1)(1-p)}.$$

From this follows

$$\begin{aligned}
k > (n+1)p - 1 &\Rightarrow f(k+1) < f(k), \\
k = (n+1)p - 1 &\Rightarrow f(k+1) = f(k), \\
k < (n+1)p - 1 &\Rightarrow f(k+1) > f(k).
\end{aligned}$$

So when (*n* + 1)*p* − 1 is an integer, then both (*n* + 1)*p* − 1 and (*n* + 1)*p* are modes. In the case that (*n* + 1)*p* − 1 is not an integer, then only $\lfloor (n+1)p - 1 \rfloor + 1 = \lfloor (n+1)p \rfloor$ is a mode.^{[9]}
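
A brute-force check of the mode formula, assuming NumPy and SciPy are available; the values of *n* and *p* below are illustrative only:

```python
# The argmax of the PMF should coincide with floor((n+1)p)
# whenever (n+1)p is not an integer.
import math
import numpy as np
from scipy.stats import binom

n, p = 11, 0.37
pmf = binom.pmf(np.arange(n + 1), n, p)
print(int(np.argmax(pmf)), math.floor((n + 1) * p))  # both print 4
```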

In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However, several special results have been established:

- If *np* is an integer, then the mean, median, and mode coincide and equal *np*.^{[10]}^{[11]}
- Any median *m* must lie within the interval ⌊*np*⌋ ≤ *m* ≤ ⌈*np*⌉.^{[12]}
- A median *m* cannot lie too far away from the mean: |*m* − *np*| ≤ min{ln 2, max{*p*, 1 − *p*}}.^{[13]}
- The median is unique and equal to *m* = round(*np*) when |*m* − *np*| ≤ min{*p*, 1 − *p*} (except for the case when *p* = 1/2 and *n* is odd).^{[12]}
- When *p* is a rational number (with the exception of *p* = 1/2 and *n* odd) the median is unique.^{[14]}
- When *p* = 1/2 and *n* is odd, any number *m* in the interval 1/2(*n* − 1) ≤ *m* ≤ 1/2(*n* + 1) is a median of the binomial distribution. If *p* = 1/2 and *n* is even, then *m* = *n*/2 is the unique median.

For *k* ≤ *np*, upper bounds can be derived for the lower tail of the cumulative distribution function $F(k; n, p) = \Pr(X \leq k)$, the probability that there are at most *k* successes. Since $\Pr(X \geq k) = F(n - k; n, 1 - p)$, these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for *k* ≥ *np*.

Hoeffding's inequality yields the simple bound

$$F(k; n, p) \leq \exp\!\left(-2n \left(p - \frac{k}{n}\right)^{2}\right),$$

which is however not very tight. In particular, for *p* = 1, we have that *F*(*k*;*n*,*p*) = 0 (for fixed *k*, *n* with *k* < *n*), but Hoeffding's bound evaluates to a positive constant.

A sharper bound can be obtained from the Chernoff bound:^{[15]}

$$F(k; n, p) \leq \exp\!\left(-n\, D\!\left(\frac{k}{n} \parallel p\right)\right),$$

where *D*(*a* ∥ *p*) is the relative entropy (or Kullback–Leibler divergence) between an *a*-coin and a *p*-coin (i.e. between the Bernoulli(*a*) and Bernoulli(*p*) distributions):

$$D(a \parallel p) = a \ln\frac{a}{p} + (1 - a) \ln\frac{1 - a}{1 - p}.$$

Asymptotically, this bound is reasonably tight; see ^{[15]} for details.
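
A minimal sketch comparing the exact lower tail with the Chernoff bound above, assuming SciPy; the values of *n*, *p*, and *k* are arbitrary illustration choices with *k* ≤ *np*:

```python
import math
from scipy.stats import binom

def kl_bernoulli(a, p):
    """Relative entropy D(a || p) between Bernoulli(a) and Bernoulli(p)."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

n, p, k = 100, 0.5, 30
exact = binom.cdf(k, n, p)                        # F(k; n, p)
chernoff = math.exp(-n * kl_bernoulli(k / n, p))  # upper bound on F(k; n, p)

print(exact, chernoff)  # the bound exceeds the exact tail probability
```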

One can also obtain *lower* bounds on the tail $F(k; n, p)$, known as anti-concentration bounds. By approximating the binomial coefficient with Stirling's formula it can be shown that^{[16]}

$$F(k; n, p) \geq \frac{1}{\sqrt{8n \frac{k}{n}\left(1 - \frac{k}{n}\right)}} \exp\!\left(-n\, D\!\left(\frac{k}{n} \parallel p\right)\right),$$

which implies the simpler but looser bound

$$F(k; n, p) \geq \frac{1}{\sqrt{2n}} \exp\!\left(-n\, D\!\left(\frac{k}{n} \parallel p\right)\right).$$

For *p* = 1/2 and *k* ≥ 3*n*/8 for even *n*, it is possible to make the denominator constant:^{[17]}

$$F(k; n, \tfrac{1}{2}) \geq \frac{1}{15} \exp\!\left(-16 n \left(\frac{1}{2} - \frac{k}{n}\right)^{2}\right).$$

See also: Beta distribution § Bayesian inference

When *n* is known, the parameter *p* can be estimated using the proportion of successes:

$$\widehat{p} = \frac{x}{n}.$$

This estimator is found using maximum likelihood estimation and also the method of moments. It is unbiased and uniformly minimum-variance, as proven using the Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e.: *x*, the number of successes). It is also consistent both in probability and in MSE.

A closed form Bayes estimator for *p* also exists when using the Beta distribution as a conjugate prior distribution. When using a general Beta(*α*, *β*) prior, the posterior mean estimator is:

$$\widehat{p}_b = \frac{x + \alpha}{n + \alpha + \beta}.$$

The Bayes estimator is asymptotically efficient and as the sample size approaches infinity (*n* → ∞), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the priors), admissible and consistent in probability.

For the special case of using the standard uniform distribution as a non-informative prior, Beta(*α* = 1, *β* = 1) = U(0, 1), the posterior mean estimator becomes:

$$\widehat{p}_b = \frac{x + 1}{n + 2}.$$

(A posterior mode should just lead back to the standard estimator $\widehat{p} = x/n$.) This method is called the rule of succession, which was introduced in the 18th century by Pierre-Simon Laplace.

When estimating *p* with very rare events and a small *n* (e.g.: if *x* = 0), then using the standard estimator leads to $\widehat{p} = 0$, which sometimes is unrealistic and undesirable. In such cases there are various alternative estimators.^{[18]} One way is to use the Bayes estimator, leading to:

$$\widehat{p}_b = \frac{1}{n + 2}.$$

Another method is to use the upper bound of the confidence interval obtained using the rule of three:

$$\widehat{p}_{\text{rule of 3}} = \frac{3}{n}.$$
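
The three point estimates discussed above can be compared directly in plain Python; the sample values below (*n* = 50 trials with *x* = 0 successes) are hypothetical:

```python
# Illustrative point estimates for p in a rare-event setting.
n, x = 50, 0

p_mle = x / n                  # standard (maximum-likelihood) estimator: 0.0
p_laplace = (x + 1) / (n + 2)  # Bayes posterior mean under a uniform prior
p_rule_of_three = 3 / n        # approximate 95% upper bound when x = 0

print(p_mle, p_laplace, p_rule_of_three)
```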

Main article: Binomial proportion confidence interval

Even for quite large values of *n*, the actual distribution of the mean is significantly nonnormal.^{[19]} Because of this problem several methods to estimate confidence intervals have been proposed.

In the equations for confidence intervals below, the variables have the following meaning:

- *n*_{1} is the number of successes out of *n*, the total number of trials
- $\widehat{p} = \frac{n_1}{n}$ is the proportion of successes
- $z$ is the $1 - \tfrac{1}{2}\alpha$ quantile of a standard normal distribution (i.e., probit) corresponding to the target error rate $\alpha$. For example, for a 95% confidence level the error $\alpha$ = 0.05, so $1 - \tfrac{1}{2}\alpha$ = 0.975 and $z$ = 1.96.

Main article: Binomial proportion confidence interval § Wald interval

$$\widehat{p} \pm z \sqrt{\frac{\widehat{p}\,(1 - \widehat{p})}{n}}$$

A continuity correction of 0.5/*n* may be added.

Main article: Binomial proportion confidence interval § Agresti–Coull interval

$$\tilde{p} \pm z \sqrt{\frac{\tilde{p}\,(1 - \tilde{p})}{\tilde{n}}}$$^{[20]}

Here the estimate of *p* is modified to

$$\tilde{p} = \frac{n_1 + \frac{1}{2} z^2}{n + z^2}, \qquad \tilde{n} = n + z^2.$$

This method works well for and .^{[21]} See here for .^{[22]} For use the Wilson (score) method below.

Main article: Binomial proportion confidence interval § Arcsine transformation

$$\sin^2\!\left(\arcsin\!\left(\sqrt{\widehat{p}\,}\right) \pm \frac{z}{2\sqrt{n}}\right)$$^{[23]}

Main article: Binomial proportion confidence interval § Wilson score interval

The notation in the formula below differs from the previous formulas in two respects:^{[24]}

- Firstly, *z*_{x} has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the *x*th quantile of the standard normal distribution', rather than being a shorthand for 'the (1 − *x*)th quantile'.
- Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use $z = z_{\alpha/2}$ to get the lower bound, or use $z = z_{1 - \alpha/2}$ to get the upper bound. For example: for a 95% confidence level the error *α* = 0.05, so one gets the lower bound by using $z = z_{\alpha/2} = z_{0.025} = -1.96$, and one gets the upper bound by using $z = z_{1-\alpha/2} = z_{0.975} = 1.96$.

$$\frac{\widehat{p} + \dfrac{z^2}{2n} + z \sqrt{\dfrac{\widehat{p}\,(1 - \widehat{p})}{n} + \dfrac{z^2}{4n^2}}}{1 + \dfrac{z^2}{n}}$$^{[25]}
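
A sketch of the Wilson score interval in Python, written in the more common plus-minus form (equivalent to evaluating the formula above at $z_{\alpha/2}$ and $z_{1-\alpha/2}$); SciPy is assumed for the normal quantile, and a vetted statistics library should be preferred in practice:

```python
from scipy.stats import norm

def wilson_interval(successes: int, n: int, confidence: float = 0.95):
    z = norm.ppf(1 - (1 - confidence) / 2)  # e.g. 1.96 for 95% confidence
    phat = successes / n
    center = (phat + z**2 / (2 * n)) / (1 + z**2 / n)
    halfwidth = (z / (1 + z**2 / n)) * (phat * (1 - phat) / n + z**2 / (4 * n**2)) ** 0.5
    return center - halfwidth, center + halfwidth

print(wilson_interval(8, 10))  # roughly (0.49, 0.94)
```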

The so-called "exact" (Clopper–Pearson) method is the most conservative.^{[19]} (*Exact* does not mean perfectly accurate; rather, it indicates that the estimates will not be less conservative than the true value.)

The Wald method, although commonly recommended in textbooks, is the most biased.

If *X* ~ B(*n*, *p*) and *Y* ~ B(*m*, *p*) are independent binomial variables with the same probability *p*, then *X* + *Y* is again a binomial variable; its distribution is *Z* = *X* + *Y* ~ B(*n* + *m*, *p*):^{[26]}

$$\operatorname{P}(Z = k) = \sum_{i=0}^{k} \left[\binom{n}{i} p^i q^{n-i}\right]\left[\binom{m}{k-i} p^{k-i} q^{m-k+i}\right] = \binom{n+m}{k} p^k q^{n+m-k},$$

where *q* = 1 − *p* and the last equality follows from Vandermonde's identity.

A binomially distributed random variable *X* ~ B(*n*, *p*) can be considered as the sum of *n* Bernoulli distributed random variables. So the sum of two binomially distributed random variables *X* ~ B(*n*, *p*) and *Y* ~ B(*m*, *p*) is equivalent to the sum of *n* + *m* Bernoulli distributed random variables, which means *Z* = *X* + *Y* ~ B(*n* + *m*, *p*). This can also be proven directly using the addition rule.

However, if *X* and *Y* do not have the same probability *p*, then the variance of the sum will be smaller than the variance of a binomial variable distributed as $B(n + m, \bar{p})$, where $\bar{p} = \frac{n p_1 + m p_2}{n + m}$.

The binomial distribution is a special case of the Poisson binomial distribution, which is the distribution of a sum of *n* independent non-identical Bernoulli trials B(*p _{i}*).

This result was first derived by Katz and coauthors in 1978.^{[28]}

Let *X* ~ B(*n*, *p*_{1}) and *Y* ~ B(*m*, *p*_{2}) be independent. Let *T* = (*X*/*n*) / (*Y*/*m*).

Then log(*T*) is approximately normally distributed with mean log(*p*_{1}/*p*_{2}) and variance ((1/*p*_{1}) − 1)/*n* + ((1/*p*_{2}) − 1)/*m*.

If *X* ~ B(*n*, *p*) and *Y* | *X* ~ B(*X*, *q*) (the conditional distribution of *Y*, given *X*), then *Y* is a simple binomial random variable with distribution *Y* ~ B(*n*, *pq*).

For example, imagine throwing *n* balls to a basket *U*_{X} and taking the balls that hit and throwing them to another basket *U*_{Y}. If *p* is the probability that a ball hits *U*_{X}, then *X* ~ B(*n*, *p*) is the number of balls that hit *U*_{X}. If *q* is the probability that a ball taken from *U*_{X} hits *U*_{Y}, then the number of balls that hit *U*_{Y} is *Y* ~ B(*X*, *q*), and therefore *Y* ~ B(*n*, *pq*).

[Proof]

Since $X \sim B(n, p)$ and $Y \mid X \sim B(X, q)$, by the law of total probability,

$$\Pr[Y = m] = \sum_{k=m}^{n} \Pr[Y = m \mid X = k]\, \Pr[X = k] = \sum_{k=m}^{n} \binom{n}{k} \binom{k}{m} p^k q^m (1-p)^{n-k} (1-q)^{k-m}.$$

Since $\binom{n}{k} \binom{k}{m} = \binom{n}{m} \binom{n-m}{k-m}$, the equation above can be expressed as

$$\Pr[Y = m] = \sum_{k=m}^{n} \binom{n}{m} \binom{n-m}{k-m} p^k q^m (1-p)^{n-k} (1-q)^{k-m}.$$

Factoring $p^k = p^m p^{k-m}$ and pulling all the terms that don't depend on $k$ out of the sum now yields

$$\Pr[Y = m] = \binom{n}{m} (pq)^m \left( \sum_{k=m}^{n} \binom{n-m}{k-m} \bigl(p(1-q)\bigr)^{k-m} (1-p)^{n-k} \right).$$

After substituting $i = k - m$ in the expression above, we get

$$\Pr[Y = m] = \binom{n}{m} (pq)^m \left( \sum_{i=0}^{n-m} \binom{n-m}{i} (p - pq)^i (1-p)^{n-m-i} \right).$$

Notice that the sum (in the parentheses) above equals $(p - pq + 1 - p)^{n-m} = (1 - pq)^{n-m}$ by the binomial theorem. Substituting this in finally yields

$$\Pr[Y = m] = \binom{n}{m} (pq)^m (1 - pq)^{n-m},$$

and thus $Y \sim B(n, pq)$ as desired.
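
The result can also be checked by simulation; the sketch below, assuming NumPy, draws from B(*n*, *p*) and then thins each draw with probability *q*, so the empirical mean of *Y* should be close to *npq*:

```python
# Monte Carlo check of Y ~ B(n, pq) for the two-basket example above.
import numpy as np

rng = np.random.default_rng(0)
n, p, q, trials = 20, 0.6, 0.5, 100_000

x = rng.binomial(n, p, size=trials)  # balls that land in the first basket
y = rng.binomial(x, q)               # of those, balls that land in the second

print(y.mean(), n * p * q)  # both are close to 6.0
```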

The Bernoulli distribution is a special case of the binomial distribution, where *n* = 1. Symbolically, *X* ~ B(1, *p*) has the same meaning as *X* ~ Bernoulli(*p*). Conversely, any binomial distribution, B(*n*, *p*), is the distribution of the sum of *n* independent Bernoulli trials, Bernoulli(*p*), each with the same probability *p*.^{[29]}

See also: Binomial proportion confidence interval § Normal approximation interval

If *n* is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to B(*n*, *p*) is given by the normal distribution

$$\mathcal{N}\bigl(np,\; np(1-p)\bigr),$$

and this basic approximation can be improved in a simple way by using a suitable continuity correction.
The basic approximation generally improves as *n* increases (at least 20) and is better when *p* is not near to 0 or 1.^{[30]} Various rules of thumb may be used to decide whether *n* is large enough, and *p* is far enough from the extremes of zero or one:

- One rule^{[30]} is that for *n* > 5 the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if

$$\frac{|1 - 2p|}{\sqrt{np(1-p)}} = \frac{1}{\sqrt{n}} \left| \sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}} \right| < 0.3.$$

This can be made precise using the Berry–Esseen theorem.

- A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if

$$\mu \pm 3\sigma = np \pm 3\sqrt{np(1-p)} \in (0, n).$$

- This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above:

$$n > 9 \left(\frac{1-p}{p}\right) \quad\text{and}\quad n > 9 \left(\frac{p}{1-p}\right).$$

[Proof]

The rule $np \pm 3\sqrt{np(1-p)} \in (0, n)$ is totally equivalent to requesting that

$$np - 3\sqrt{np(1-p)} > 0 \quad\text{and}\quad np + 3\sqrt{np(1-p)} < n.$$

Moving terms around yields:

$$np > 3\sqrt{np(1-p)} \quad\text{and}\quad n(1-p) > 3\sqrt{np(1-p)}.$$

Since $0 < p < 1$, we can apply the square power and divide by the respective factors $np^2$ and $n(1-p)^2$, to obtain the desired conditions:

$$n > 9 \left(\frac{1-p}{p}\right) \quad\text{and}\quad n > 9 \left(\frac{p}{1-p}\right).$$

Notice that these conditions automatically imply that $n > 9$. On the other hand, apply again the square root and divide by 3,

$$\frac{\sqrt{n}}{3} > \sqrt{\frac{1-p}{p}} > 0 \quad\text{and}\quad \frac{\sqrt{n}}{3} > \sqrt{\frac{p}{1-p}} > 0.$$

Subtracting the second set of inequalities from the first one yields:

$$\frac{\sqrt{n}}{3} > \sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}} > -\frac{\sqrt{n}}{3}$$

and so, the desired first rule is satisfied,

$$\left| \sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}} \right| < \frac{\sqrt{n}}{3}.$$

- Another commonly used rule is that both values *np* and *n*(1 − *p*) must be greater than or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs.

[Proof]

Assume that both values *np* and *n*(1 − *p*) are greater than 9. Since $0 < p < 1$, we easily have that

$$np \geq 9 > 9(1-p) \quad\text{and}\quad n(1-p) \geq 9 > 9p.$$

We only have to divide now by the respective factors $p$ and $1-p$, to deduce the alternative form of the 3-standard-deviation rule:

$$n > 9 \left(\frac{1-p}{p}\right) \quad\text{and}\quad n > 9 \left(\frac{p}{1-p}\right).$$

The following is an example of applying a continuity correction. Suppose one wishes to calculate Pr(*X* ≤ 8) for a binomial random variable *X*. If *Y* has a distribution given by the normal approximation, then Pr(*X* ≤ 8) is approximated by Pr(*Y* ≤ 8.5). The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results.
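
The effect of the continuity correction can be seen numerically; the sketch below, assuming SciPy, uses the illustrative values *n* = 20 and *p* = 0.5 (the example above does not specify *n* and *p*):

```python
# Normal approximation with continuity correction for Pr(X <= 8).
from math import sqrt
from scipy.stats import binom, norm

n, p = 20, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))

exact = binom.cdf(8, n, p)
uncorrected = norm.cdf(8, mu, sigma)
corrected = norm.cdf(8.5, mu, sigma)  # continuity correction: add 0.5

print(exact, corrected, uncorrected)  # the corrected value is much closer
```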

This approximation, known as de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large *n* are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book *The Doctrine of Chances* in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since B(*n*, *p*) is a sum of *n* independent, identically distributed Bernoulli variables with parameter *p*. This fact is the basis of a hypothesis test, a "proportion z-test", for the value of *p* using *x/n*, the sample proportion and estimator of *p*, in a common test statistic.^{[31]}

For example, suppose one randomly samples *n* people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of *n* people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion *p* of agreement in the population and with standard deviation

$$\sqrt{\frac{p(1-p)}{n}}.$$

The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product *np* converges to a finite limit. Therefore, the Poisson distribution with parameter *λ* = *np* can be used as an approximation to the binomial distribution B(*n*, *p*) if *n* is sufficiently large and *p* is sufficiently small. According to two rules of thumb, this approximation is good if *n* ≥ 20 and *p* ≤ 0.05, or if *n* ≥ 100 and *np* ≤ 10.^{[32]}

Concerning the accuracy of Poisson approximation, see Novak,^{[33]} ch. 4, and references therein.
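
A short comparison of binomial and Poisson probabilities under one of these rules of thumb, assuming SciPy; *n* = 100 and *p* = 0.02 are illustrative values:

```python
# Poisson approximation to the binomial when n is large and p is small.
from scipy.stats import binom, poisson

n, p = 100, 0.02
lam = n * p  # Poisson parameter lambda = np

for k in range(6):
    print(k, binom.pmf(k, n, p), poisson.pmf(k, lam))  # the columns are close
```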

- *Poisson limit theorem*: As *n* approaches ∞ and *p* approaches 0 with the product *np* held fixed, the Binomial(*n*, *p*) distribution approaches the Poisson distribution with expected value *λ* = *np*.^{[32]}
- *de Moivre–Laplace theorem*: As *n* approaches ∞ while *p* remains fixed, the distribution of

$$\frac{X - np}{\sqrt{np(1-p)}}$$

  approaches the normal distribution with expected value 0 and variance 1. This result is sometimes loosely stated by saying that the distribution of *X* is asymptotically normal with expected value *np* and variance *np*(1 − *p*). This result is a specific case of the central limit theorem.

The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events each with a probability p of success.
Mathematically, when *α* = *k* + 1 and *β* = *n* − *k* + 1, the beta distribution and the binomial distribution are related by a factor of *n* + 1:

$$\operatorname{Beta}(p; \alpha; \beta) = (n + 1)\, B(k; n; p).$$

Beta distributions also provide a family of prior probability distributions for binomial distributions in Bayesian inference:^{[34]}

$$P(p; \alpha, \beta) = \frac{p^{\alpha-1} (1-p)^{\beta-1}}{\mathrm{B}(\alpha, \beta)}.$$

Given a uniform prior, the posterior distribution for the probability of success p given n independent events with k observed successes is a beta distribution.^{[35]}
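
Concretely, under a uniform Beta(1, 1) prior, observing *k* successes in *n* trials gives a Beta(*k* + 1, *n* − *k* + 1) posterior; a minimal sketch, assuming SciPy, with illustrative values *n* = 10 and *k* = 7:

```python
# Posterior for p under a uniform prior after k successes in n trials.
from scipy.stats import beta

n, k = 10, 7
posterior = beta(k + 1, n - k + 1)  # Beta(k+1, n-k+1)

print(posterior.mean())             # (k+1)/(n+2) = 8/12 ≈ 0.667
print(posterior.interval(0.95))     # central 95% credible interval
```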

Further information: Pseudo-random number sampling

Methods for random number generation where the marginal distribution is a binomial distribution are well-established.^{[36]}^{[37]}
One way to generate random variate samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability Pr(*X* = *k*) for all values *k* from 0 through *n*. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform the calculated samples into discrete numbers by using the probabilities calculated in the first step.
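
A sketch of this inversion algorithm, assuming NumPy; in practice one would normally rely on a library generator such as NumPy's own binomial sampler:

```python
import math
import numpy as np

def binomial_inversion(n: int, p: float, size: int, rng=None) -> np.ndarray:
    """Draw `size` samples from B(n, p) by inverting the tabulated CDF."""
    rng = rng if rng is not None else np.random.default_rng()
    # Step 1: tabulate Pr(X = k) for k = 0..n; these sum to (essentially) one.
    k = np.arange(n + 1)
    pmf = np.array([math.comb(n, i) for i in range(n + 1)]) * p**k * (1.0 - p)**(n - k)
    cdf = np.cumsum(pmf)
    # Step 2: draw uniform variates on (0, 1) and map each to the first k
    # whose cumulative probability reaches it.
    u = rng.random(size)
    return np.searchsorted(cdf, u).clip(0, n)  # clip guards against round-off

samples = binomial_inversion(10, 0.3, 100_000)
print(samples.mean())  # close to n*p = 3.0
```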

This distribution was derived by Jacob Bernoulli. He considered the case where *p* = *r*/(*r* + *s*) where *p* is the probability of success and *r* and *s* are positive integers. Blaise Pascal had earlier considered the case where *p* = 1/2.