| | Number of trials ${\displaystyle X}$ | Number of failures ${\displaystyle Y}$ |
|---|---|---|
| Parameters | ${\displaystyle 0<p\leq 1}$ success probability (real) | ${\displaystyle 0<p\leq 1}$ success probability (real) |
| Support | ${\displaystyle k}$ trials where ${\displaystyle k\in \mathbb {N} =\{1,2,3,\dotsc \}}$ | ${\displaystyle k}$ failures where ${\displaystyle k\in \mathbb {N} _{0}=\{0,1,2,\dotsc \}}$ |
| Probability mass function | ${\displaystyle (1-p)^{k-1}p}$ | ${\displaystyle (1-p)^{k}p}$ |
| Cumulative distribution function | ${\displaystyle 1-(1-p)^{\lfloor x\rfloor }}$ for ${\displaystyle x\geq 1}$, ${\displaystyle 0}$ for ${\displaystyle x<1}$ | ${\displaystyle 1-(1-p)^{\lfloor x\rfloor +1}}$ for ${\displaystyle x\geq 0}$, ${\displaystyle 0}$ for ${\displaystyle x<0}$ |
| Mean | ${\displaystyle {\frac {1}{p}}}$ | ${\displaystyle {\frac {1-p}{p}}}$ |
| Median | ${\displaystyle \left\lceil {\frac {-1}{\log _{2}(1-p)}}\right\rceil }$ (not unique if ${\displaystyle -1/\log _{2}(1-p)}$ is an integer) | ${\displaystyle \left\lceil {\frac {-1}{\log _{2}(1-p)}}\right\rceil -1}$ (not unique if ${\displaystyle -1/\log _{2}(1-p)}$ is an integer) |
| Mode | ${\displaystyle 1}$ | ${\displaystyle 0}$ |
| Variance | ${\displaystyle {\frac {1-p}{p^{2}}}}$ | ${\displaystyle {\frac {1-p}{p^{2}}}}$ |
| Skewness | ${\displaystyle {\frac {2-p}{\sqrt {1-p}}}}$ | ${\displaystyle {\frac {2-p}{\sqrt {1-p}}}}$ |
| Excess kurtosis | ${\displaystyle 6+{\frac {p^{2}}{1-p}}}$ | ${\displaystyle 6+{\frac {p^{2}}{1-p}}}$ |
| Entropy | ${\displaystyle {\tfrac {-(1-p)\log(1-p)-p\log p}{p}}}$ | ${\displaystyle {\tfrac {-(1-p)\log(1-p)-p\log p}{p}}}$ |
| Moment generating function | ${\displaystyle {\frac {pe^{t}}{1-(1-p)e^{t}}}}$ for ${\displaystyle t<-\ln(1-p)}$ | ${\displaystyle {\frac {p}{1-(1-p)e^{t}}}}$ for ${\displaystyle t<-\ln(1-p)}$ |
| Characteristic function | ${\displaystyle {\frac {pe^{it}}{1-(1-p)e^{it}}}}$ | ${\displaystyle {\frac {p}{1-(1-p)e^{it}}}}$ |
| Probability generating function | ${\displaystyle {\frac {pz}{1-(1-p)z}}}$ | ${\displaystyle {\frac {p}{1-(1-p)z}}}$ |

In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:

• The probability distribution of the number ${\displaystyle X}$ of Bernoulli trials needed to get one success, supported on ${\displaystyle \mathbb {N} =\{1,2,3,\ldots \}}$;
• The probability distribution of the number ${\displaystyle Y=X-1}$ of failures before the first success, supported on ${\displaystyle \mathbb {N} _{0}=\{0,1,2,\ldots \}}$.

These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former one (distribution of ${\displaystyle X}$); however, to avoid ambiguity, it is considered wise to indicate which is intended, by mentioning the support explicitly.

The geometric distribution gives the probability that the first occurrence of success requires ${\displaystyle k}$ independent trials, each with success probability ${\displaystyle p}$. If the probability of success on each trial is ${\displaystyle p}$, then the probability that the ${\displaystyle k}$-th trial is the first success is

${\displaystyle \Pr(X=k)=(1-p)^{k-1}p}$

for ${\displaystyle k=1,2,3,4,\dots }$

The above form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form of the geometric distribution is used for modeling the number of failures until the first success:

${\displaystyle \Pr(Y=k)=\Pr(X=k+1)=(1-p)^{k}p}$

for ${\displaystyle k=0,1,2,3,\dots }$

The geometric distribution gets its name because its probabilities follow a geometric sequence. It is sometimes called the Furry distribution after Wendell H. Furry.[1]: 210

## Definition

The geometric distribution is the discrete probability distribution that describes when the first success in an infinite sequence of independent and identically distributed Bernoulli trials occurs. Its probability mass function is ${\displaystyle P(X=k)=(1-p)^{k-1}p}$, where ${\displaystyle k=1,2,3,\dotsc }$ is the number of trials and ${\displaystyle p}$ is the probability of success in each trial.[2]: 260–261

Alternatively, some texts define the distribution where ${\displaystyle k=0,1,2,\dotsc }$ and call the former the zero-truncated geometric distribution. This alters the probability mass function into:[3]: 66  ${\displaystyle P(Y=k)=(1-p)^{k}p}$

An example of a geometric distribution arises from rolling a six-sided die until a "1" appears. Each roll is independent with a ${\displaystyle 1/6}$ chance of success. The number of rolls needed follows a geometric distribution with ${\displaystyle p=1/6}$.
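As an illustration of this definition, the following minimal Python sketch (not part of the cited sources; the sample size and seed are arbitrary) simulates rolling a die until a "1" appears and compares the empirical distribution of the number of rolls with the probability mass function ${\displaystyle (1-p)^{k-1}p}$ for ${\displaystyle p=1/6}$.

```python
import random

def rolls_until_one(rng: random.Random) -> int:
    """Number of die rolls up to and including the first '1'."""
    k = 1
    while rng.randint(1, 6) != 1:
        k += 1
    return k

rng = random.Random(0)
n = 100_000
samples = [rolls_until_one(rng) for _ in range(n)]

p = 1 / 6
for k in range(1, 6):
    empirical = sum(1 for s in samples if s == k) / n
    exact = (1 - p) ** (k - 1) * p
    print(f"k={k}: empirical {empirical:.4f}, exact {exact:.4f}")
```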

## Properties

### Memorylessness

 Main article: Memorylessness

The geometric distribution is the only memoryless discrete probability distribution.[4] It is the discrete version of the same property found in the exponential distribution.[1]: 228  The property asserts that the number of previously failed trials does not affect the number of future trials needed for a success. Expressed in terms of conditional probability, ${\displaystyle \Pr(X>m+n\mid X>n)=\Pr(X>m),}$ where ${\displaystyle m}$ and ${\displaystyle n}$ are natural numbers. The equality remains true when ${\displaystyle >}$ is replaced by ${\displaystyle \geq }$.[3]: 71
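The memoryless property can be checked numerically from the tail formula ${\displaystyle \Pr(X>k)=(1-p)^{k}}$ for the distribution on ${\displaystyle \mathbb {N} }$. The sketch below is illustrative only; the values of ${\displaystyle p}$, ${\displaystyle m}$ and ${\displaystyle n}$ are arbitrary.

```python
p = 0.3

def tail(k: int) -> float:
    """Pr(X > k) for the geometric distribution on {1, 2, 3, ...}."""
    return (1 - p) ** k

m, n = 4, 7
print(tail(m + n) / tail(n))  # Pr(X > m + n | X > n)
print(tail(m))                # equal to the line above: (1 - p)**m = 0.2401
```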

### Moments and cumulants

The expected value and variance of a geometrically distributed random variable ${\displaystyle X}$ defined over ${\displaystyle \mathbb {N} }$ are[2]: 261  ${\displaystyle \operatorname {E} (X)={\frac {1}{p}},\qquad \operatorname {var} (X)={\frac {1-p}{p^{2}}}.}$ When a geometrically distributed random variable ${\displaystyle Y}$ is defined over ${\displaystyle \mathbb {N} _{0}}$, the expected value changes to ${\displaystyle \operatorname {E} (Y)={\frac {1-p}{p}},}$ while the variance stays the same.[5]: 114–115

For example, when rolling a six-sided die until landing on a "1", the average number of rolls needed is ${\displaystyle {\frac {1}{1/6}}=6}$ and the average number of failures is ${\displaystyle {\frac {1-1/6}{1/6}}=5}$.
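A quick Monte Carlo check of the die example (illustrative only; NumPy's `geometric` sampler counts trials, i.e. uses the support ${\displaystyle \mathbb {N} }$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 1 / 6
trials = rng.geometric(p, size=1_000_000)  # number of rolls up to the first "1"
failures = trials - 1                      # number of failed rolls before it

print(trials.mean(), failures.mean())      # approximately 6 and 5
print(trials.var(), (1 - p) / p**2)        # both approximately 30
```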

The moment generating function of the geometric distribution when defined over ${\displaystyle \mathbb {N} }$ and ${\displaystyle \mathbb {N} _{0}}$ respectively is[6][5]: 114

{\displaystyle {\begin{aligned}M_{X}(t)&={\frac {pe^{t}}{1-(1-p)e^{t}}}\\M_{Y}(t)&={\frac {p}{1-(1-p)e^{t}}},\quad t<-\ln(1-p)\end{aligned}}}

The moments for the number of failures before the first success are given by

{\displaystyle {\begin{aligned}\mathrm {E} (Y^{n})&{}=\sum _{k=0}^{\infty }(1-p)^{k}p\cdot k^{n}\\&{}=p\operatorname {Li} _{-n}(1-p)&({\text{for }}n\neq 0)\end{aligned}}}

where ${\displaystyle \operatorname {Li} _{-n}(1-p)}$ is the polylogarithm function.
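The polylogarithm formula can be verified numerically, for instance with mpmath's `polylog`; the following sketch (arbitrary parameter values) compares it with a truncated version of the defining series.

```python
from mpmath import mpf, polylog

p = mpf("0.3")
n = 3
series = sum((1 - p) ** k * p * k ** n for k in range(2000))  # truncated E(Y^n)
closed_form = p * polylog(-n, 1 - p)                          # p * Li_{-n}(1 - p)
print(series, closed_form)  # the two values agree to high precision
```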

The cumulant generating function of the geometric distribution defined over ${\displaystyle \mathbb {N} _{0}}$ is[1]: 216  ${\displaystyle K(t)=\ln p-\ln(1-(1-p)e^{t})}$ The cumulants ${\displaystyle \kappa _{r}}$ satisfy the recursion ${\displaystyle \kappa _{r+1}=q{\frac {\delta \kappa _{r}}{\delta q}},\quad r=1,2,\dotsc }$ where ${\displaystyle q=1-p}$, when defined over ${\displaystyle \mathbb {N} _{0}}$.[1]: 216

#### Proof of expected value

Consider the expected value ${\displaystyle \mathrm {E} (X)}$ of ${\displaystyle X}$ as above, i.e. the average number of trials until a success. On the first trial, we either succeed with probability ${\displaystyle p}$, or we fail with probability ${\displaystyle 1-p}$. If we fail, the remaining mean number of trials until a success is identical to the original mean; this follows from the fact that all trials are independent. From this we get the formula:

${\displaystyle \operatorname {E} (X)=p\cdot 1+(1-p)\cdot (1+\mathrm {E} (X)),}$

which, if solved for ${\displaystyle \mathrm {E} (X)}$, gives:

${\displaystyle \operatorname {E} (X)={\frac {1}{p}}.}$

The expected value of ${\displaystyle Y}$ can be found from the linearity of expectation, ${\displaystyle \mathrm {E} (Y)=\mathrm {E} (X-1)=\mathrm {E} (X)-1={\frac {1}{p}}-1={\frac {1-p}{p}}}$. It can also be shown in the following way:

{\displaystyle {\begin{aligned}\operatorname {E} (Y)&{}=\sum _{k=0}^{\infty }(1-p)^{k}p\cdot k\\&{}=p\sum _{k=0}^{\infty }(1-p)^{k}k\\&{}=p(1-p)\sum _{k=0}^{\infty }(1-p)^{k-1}\cdot k\\&{}=p(1-p)\left[{\frac {d}{dp}}\left(-\sum _{k=0}^{\infty }(1-p)^{k}\right)\right]\\&{}=p(1-p){\frac {d}{dp}}\left(-{\frac {1}{p}}\right)={\frac {1-p}{p}}.\end{aligned}}}

The interchange of summation and differentiation is justified by the fact that convergent power series converge uniformly on compact subsets of the set of points where they converge.

### Summary statistics

The mean of the geometric distribution is its expected value which is, as previously discussed in § Moments and cumulants, ${\displaystyle {\frac {1}{p}}}$ or ${\displaystyle {\frac {1-p}{p}}}$ when defined over ${\displaystyle \mathbb {N} }$ or ${\displaystyle \mathbb {N} _{0}}$ respectively.

The median of the geometric distribution is ${\displaystyle \left\lfloor -{\frac {\log 2}{\log(1-p)}}\right\rfloor }$ when defined over ${\displaystyle \mathbb {N} _{0}}$.[3]: 69

The mode of the geometric distribution is the first value in the support set. This is 1 when defined over ${\displaystyle \mathbb {N} }$ and 0 when defined over ${\displaystyle \mathbb {N} _{0}}$.[3]: 69

The skewness of the geometric distribution is ${\displaystyle {\frac {2-p}{\sqrt {1-p}}}}$.[5]: 115

The kurtosis of the geometric distribution is ${\displaystyle 9+{\frac {p^{2}}{1-p}}}$.[5]: 115  The excess kurtosis of a distribution is the difference between its kurtosis and the kurtosis of a normal distribution, ${\displaystyle 3}$.[7]: 217  Therefore, the excess kurtosis of the geometric distribution is ${\displaystyle 6+{\frac {p^{2}}{1-p}}}$. Since ${\displaystyle {\frac {p^{2}}{1-p}}\geq 0}$, the excess kurtosis is always positive, so the distribution is leptokurtic.[3]: 69  In other words, the tail of a geometric distribution decays more slowly than that of a Gaussian.[7]: 217
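These summary statistics can be compared against a reference implementation; the sketch below (parameter value chosen arbitrarily) uses `scipy.stats.geom`, which takes the support ${\displaystyle \mathbb {N} }$ and reports excess (Fisher) kurtosis.

```python
import numpy as np
from scipy.stats import geom

p = 0.2
mean, var, skew, ex_kurt = geom(p).stats(moments="mvsk")

print(mean, 1 / p)                      # 5.0
print(var, (1 - p) / p**2)              # 20.0
print(skew, (2 - p) / np.sqrt(1 - p))   # about 2.0125
print(ex_kurt, 6 + p**2 / (1 - p))      # 6.05
```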

### General properties

• The probability generating functions of geometric random variables ${\displaystyle X}$ and ${\displaystyle Y}$ defined over ${\displaystyle \mathbb {N} }$ and ${\displaystyle \mathbb {N} _{0}}$ are, respectively,[5]: 114–115
{\displaystyle {\begin{aligned}G_{X}(s)&={\frac {s\,p}{1-s\,(1-p)}},\\[10pt]G_{Y}(s)&={\frac {p}{1-s\,(1-p)}},\quad |s|<(1-p)^{-1}.\end{aligned}}}
• The characteristic function ${\displaystyle \varphi (t)}$ is equal to ${\displaystyle G(e^{it})}$ so the geometric distribution's characteristic function, when defined over ${\displaystyle \mathbb {N} }$ and ${\displaystyle \mathbb {N} _{0}}$ respectively, is[8]: 1630
{\displaystyle {\begin{aligned}\varphi _{X}(t)&={\frac {pe^{it}}{1-(1-p)e^{it}}},\\[10pt]\varphi _{Y}(t)&={\frac {p}{1-(1-p)e^{it}}}.\end{aligned}}}
• The entropy of a geometric distribution with parameter ${\displaystyle p}$ is[9] ${\displaystyle -{\frac {p\log _{2}p+(1-p)\log _{2}(1-p)}{p}}}$
• Given a mean, the geometric distribution is the maximum entropy probability distribution among all discrete probability distributions supported on ${\displaystyle \mathbb {N} }$ (or ${\displaystyle \mathbb {N} _{0}}$) with that mean. The corresponding continuous distribution is the exponential distribution.[10]
• The geometric distribution defined on ${\displaystyle \mathbb {N} _{0}}$ is infinitely divisible, that is, for any positive integer ${\displaystyle n}$, there exist ${\displaystyle n}$ independent identically distributed random variables whose sum has the same geometric distribution. This is because the negative binomial distribution can be derived from a Poisson-stopped sum of logarithmic random variables.[8]: 606–607
• The decimal digits of the geometrically distributed random variable Y are a sequence of independent (and not identically distributed) random variables.[citation needed] For example, the hundreds digit D has this probability distribution:
${\displaystyle \Pr(D=d)={q^{100d} \over 1+q^{100}+q^{200}+\cdots +q^{900}},}$
where q = 1 − p, and similarly for the other digits, and, more generally, similarly for numeral systems with other bases than 10. When the base is 2, this shows that a geometrically distributed random variable can be written as a sum of independent random variables whose probability distributions are indecomposable.
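The digit formula above can be checked numerically; the following sketch (arbitrary parameter value, truncated series) sums ${\displaystyle \Pr(Y=k)}$ over all ${\displaystyle k}$ whose hundreds digit equals ${\displaystyle d}$ and compares the result with the closed form.

```python
p = 0.01
q = 1 - p

def hundreds_digit_prob(d: int, terms: int = 100_000) -> float:
    """Sum Pr(Y = k) over all k < terms whose hundreds digit equals d."""
    return sum(q**k * p for k in range(terms) if (k // 100) % 10 == d)

for d in range(3):
    closed_form = q ** (100 * d) / sum(q ** (100 * j) for j in range(10))
    print(d, hundreds_digit_prob(d), closed_form)
```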

## Related distributions

• The sum of ${\displaystyle r}$ independent geometric random variables with parameter ${\displaystyle p}$ is a negative binomial random variable with parameters ${\displaystyle r}$ and ${\displaystyle p}$.[11] The geometric distribution is a special case of the negative binomial distribution, with ${\displaystyle r=1}$.
• The geometric distribution is a special case of discrete compound Poisson distribution.
• The minimum of ${\displaystyle n}$ independent geometric random variables with parameters ${\displaystyle p_{1},\dotsc ,p_{n}}$ is also geometrically distributed with parameter ${\displaystyle 1-\prod _{i=1}^{n}(1-p_{i})}$.[12]
• Suppose 0 < r < 1, and for k = 1, 2, 3, ... the random variable Xk has a Poisson distribution with expected value rk/k. Then
${\displaystyle \sum _{k=1}^{\infty }k\,X_{k))$
has a geometric distribution taking values in ${\displaystyle \mathbb {N} _{0}}$, with expected value r/(1 − r).[citation needed]
• The exponential distribution is the continuous analogue of the geometric distribution. Applying the floor function to the exponential distribution with parameter ${\displaystyle \lambda }$ creates a geometric distribution with parameter ${\displaystyle p=1-e^{-\lambda }}$ defined over ${\displaystyle \mathbb {N} _{0}}$.[3]: 74  This can be used to generate geometrically distributed random numbers as detailed in § Random variate generation.
• If p = 1/n and X is geometrically distributed with parameter p, then the distribution of X/n approaches an exponential distribution with expected value 1 as n → ∞, since
{\displaystyle {\begin{aligned}\Pr(X/n>a)=\Pr(X>na)&=(1-p)^{na}=\left(1-{\frac {1}{n}}\right)^{na}=\left[\left(1-{\frac {1}{n}}\right)^{n}\right]^{a}\\&\to [e^{-1}]^{a}=e^{-a}{\text{ as }}n\to \infty .\end{aligned}}}

More generally, if p = λ/n, where λ is a parameter, then as n→ ∞ the distribution of X/n approaches an exponential distribution with rate λ:

${\displaystyle \lim _{n\to \infty }\Pr(X>nx)=\lim _{n\to \infty }(1-\lambda /n)^{nx}=e^{-\lambda x}}$

therefore the distribution function of X/n converges to ${\displaystyle 1-e^{-\lambda x}}$, which is that of an exponential random variable.
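The limit can be illustrated numerically; the sketch below (arbitrary choices of ${\displaystyle \lambda }$ and ${\displaystyle a}$) evaluates ${\displaystyle (1-\lambda /n)^{na}}$ for increasing ${\displaystyle n}$ and compares it with ${\displaystyle e^{-\lambda a}}$.

```python
import numpy as np

lam, a = 1.5, 0.8
for n in (10, 100, 10_000):
    p = lam / n
    print(n, (1 - p) ** (n * a))  # Pr(X/n > a) with p = lam/n
print(np.exp(-lam * a))           # limiting value, about 0.3012
```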

## Statistical inference

The true parameter ${\displaystyle p}$ of an unknown geometric distribution can be inferred through estimators and conjugate distributions.

### Method of moments

Provided they exist, the first ${\displaystyle l}$ moments of a probability distribution can be estimated from a sample ${\displaystyle x_{1},\dotsc ,x_{n}}$ using the formula ${\displaystyle m_{i}={\frac {1}{n}}\sum _{j=1}^{n}x_{j}^{i}}$ where ${\displaystyle m_{i}}$ is the ${\displaystyle i}$th sample moment and ${\displaystyle 1\leq i\leq l}$.[13]: 349–350  Estimating ${\displaystyle \mathrm {E} (X)}$ with ${\displaystyle m_{1}}$ gives the sample mean, denoted ${\displaystyle {\bar {x}}}$. Substituting this estimate in the formula for the expected value of a geometric distribution and solving for ${\displaystyle p}$ gives the estimators ${\displaystyle {\hat {p}}={\frac {1}{\bar {x}}}}$ and ${\displaystyle {\hat {p}}={\frac {1}{{\bar {x}}+1}}}$ when supported on ${\displaystyle \mathbb {N} }$ and ${\displaystyle \mathbb {N} _{0}}$ respectively. These estimators are biased since ${\displaystyle \mathrm {E} \left({\frac {1}{\bar {x}}}\right)>{\frac {1}{\mathrm {E} ({\bar {x}})}}=p}$ as a result of Jensen's inequality.[14]: 53–54
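A minimal sketch of the method-of-moments estimator (illustrative only; the seed, sample size, and true parameter are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
true_p = 0.25
x = rng.geometric(true_p, size=10_000)     # samples supported on {1, 2, ...}

p_hat = 1 / x.mean()                       # estimator for support N
p_hat_failures = 1 / ((x - 1).mean() + 1)  # same estimate, written for support N0
print(p_hat, p_hat_failures)               # both close to 0.25
```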

### Maximum likelihood estimation

The maximum likelihood estimator of ${\displaystyle p}$ is the value that maximizes the likelihood function given a sample.[13]: 308  By finding the zero of the derivative of the log-likelihood function when the distribution is defined over ${\displaystyle \mathbb {N} }$, the maximum likelihood estimator can be found to be ${\displaystyle {\hat {p}}={\frac {1}{\bar {x}}}}$, where ${\displaystyle {\bar {x}}}$ is the sample mean.[15] If the domain is ${\displaystyle \mathbb {N} _{0}}$, then the estimator shifts to ${\displaystyle {\hat {p}}={\frac {1}{{\bar {x}}+1}}}$. As previously discussed in § Method of moments, these estimators are biased.

Regardless of the domain, the bias is equal to

${\displaystyle b\equiv \operatorname {E} {\bigg [}\;({\hat {p}}_{\mathrm {mle} }-p)\;{\bigg ]}={\frac {p\,(1-p)}{n}}}$

which yields the bias-corrected maximum likelihood estimator,

${\displaystyle {\hat {p\,}}_{\text{mle}}^{*}={\hat {p\,}}_{\text{mle}}-{\hat {b\,}}}$
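The following sketch (arbitrary sample size and true parameter) computes the maximum likelihood estimate and applies the plug-in bias correction described above.

```python
import numpy as np

rng = np.random.default_rng(2)
true_p, n = 0.3, 50
x = rng.geometric(true_p, size=n)    # samples supported on {1, 2, ...}

p_mle = 1 / x.mean()
bias_hat = p_mle * (1 - p_mle) / n   # plug-in estimate of the bias p(1-p)/n
p_corrected = p_mle - bias_hat
print(p_mle, p_corrected)
```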

### Bayesian inference

In Bayesian inference, the parameter ${\displaystyle p}$ is a random variable from a prior distribution with a posterior distribution calculated using Bayes' theorem after observing samples.[14]: 167  If a beta distribution is chosen as the prior distribution, then the posterior will also be a beta distribution; the beta distribution is therefore the conjugate prior. In particular, if a ${\displaystyle \mathrm {Beta} (\alpha ,\beta )}$ prior is selected, then the posterior, after observing samples ${\displaystyle k_{1},\dotsc ,k_{n}\in \mathbb {N} }$, is[16] ${\displaystyle p\sim \mathrm {Beta} \left(\alpha +n,\ \beta +\sum _{i=1}^{n}(k_{i}-1)\right).\!}$ Alternatively, if the samples are in ${\displaystyle \mathbb {N} _{0}}$, the posterior distribution is[17] ${\displaystyle p\sim \mathrm {Beta} \left(\alpha +n,\beta +\sum _{i=1}^{n}k_{i}\right).}$ The posterior mean approaches its maximum likelihood estimate ${\displaystyle {\widehat {p}}}$ as ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ approach zero, regardless of the support.
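A small worked sketch of the conjugate update for samples in ${\displaystyle \mathbb {N} }$ (the prior hyperparameters and observations below are hypothetical):

```python
from scipy.stats import beta as beta_dist

alpha0, beta0 = 1.0, 1.0   # assumed flat Beta(1, 1) prior
samples = [3, 1, 7, 2, 4]  # hypothetical observations in N (trials until success)

n = len(samples)
alpha_post = alpha0 + n
beta_post = beta0 + sum(k - 1 for k in samples)

posterior = beta_dist(alpha_post, beta_post)
print(alpha_post, beta_post)  # Beta(6, 13)
print(posterior.mean())       # posterior mean of p, about 0.316
```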

### Unbiased estimator

An unbiased estimator of ${\displaystyle p}$ will, on average, equal the true parameter. Expressed in terms of expectation, the unbiased estimator ${\displaystyle T(x)}$ satisfies ${\displaystyle \mathrm {E} (T(x))=p}$. For the geometric distribution, the unbiased estimator would need to satisfy ${\displaystyle \sum _{x=1}^{\infty }T(x)p(1-p)^{x-1}=p.}$ The only solution to this equation is ${\displaystyle T(x)={\begin{cases}1&x=1\\0&x\geq 2\end{cases}}}$ This estimator is not useful when ${\displaystyle p}$ is not close to 0 or 1.[14]: 53–54
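A brief simulation check (illustrative; values arbitrary) that the indicator estimator above is unbiased, since ${\displaystyle \mathrm {E} (T(X))=\Pr(X=1)=p}$:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.35
x = rng.geometric(p, size=1_000_000)  # samples supported on {1, 2, ...}
print((x == 1).mean())                # empirical E[T(X)], close to p = 0.35
```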

## Random variate generation

 Further information: Non-uniform random variate generation

The geometric distribution can be generated experimentally from i.i.d. standard uniform random variables by finding the first such random variable to be less than or equal to ${\displaystyle p}$. However, the number of random variables needed is also geometrically distributed and the algorithm slows as ${\displaystyle p}$ decreases.[18]: 498

Random generation can be done in constant time by truncating exponential random numbers. An exponential random variable ${\displaystyle E}$ can become geometrically distributed with parameter ${\displaystyle p}$ through ${\displaystyle \lceil -E/\log(1-p)\rceil }$. In turn, ${\displaystyle E}$ can be generated from a standard uniform random variable ${\displaystyle U}$ altering the formula into ${\displaystyle \lceil \log(U)/\log(1-p)\rceil }$.[18]: 499–500 [19]
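A sketch of the constant-time generator just described, written directly in terms of a standard uniform random variable (the function name is ours; it returns samples supported on ${\displaystyle \mathbb {N} }$):

```python
import math
import random

def geometric_variate(p: float, rng: random.Random) -> int:
    """Sample from the geometric distribution on {1, 2, 3, ...} via inversion."""
    u = 1.0 - rng.random()  # uniform on (0, 1]
    return max(1, math.ceil(math.log(u) / math.log(1 - p)))

rng = random.Random(0)
p = 0.2
samples = [geometric_variate(p, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 1/p = 5
```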

## References

1. ^ a b c d Johnson, Norman L.; Kemp, Adrienne W.; Kotz, Samuel (2005-08-19). Univariate Discrete Distributions. Wiley Series in Probability and Statistics (1 ed.). Wiley. doi:10.1002/0471715816. ISBN 978-0-471-27246-5.
2. ^ a b Nagel, Werner; Steyer, Rolf (2017-04-04). Probability and Conditional Expectation: Fundamentals for the Empirical Sciences. Wiley Series in Probability and Statistics (1st ed.). Wiley. doi:10.1002/9781119243496. ISBN 978-1-119-24352-6.
3. Chattamvelli, Rajan; Shanmugam, Ramalingam (2020). Discrete Distributions in Engineering and the Applied Sciences. Synthesis Lectures on Mathematics & Statistics. Cham: Springer International Publishing. doi:10.1007/978-3-031-02425-2. ISBN 978-3-031-01297-6.
4. ^ Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). A Modern Introduction to Probability and Statistics. Springer Texts in Statistics. London: Springer London. p. 50. doi:10.1007/1-84628-168-7. ISBN 978-1-85233-896-1.
5. Forbes, Catherine; Evans, Merran; Hastings, Nicholas; Peacock, Brian (2010-11-29). Statistical Distributions (1st ed.). Wiley. doi:10.1002/9780470627242. ISBN 978-0-470-39063-4.
6. ^ Bertsekas, Dimitri P.; Tsitsiklis, John N. (2008). Introduction to probability. Optimization and computation series (2nd ed.). Belmont: Athena Scientific. p. 235. ISBN 978-1-886529-23-6.
7. ^ a b Chan, Stanley (2021). Introduction to Probability for Data Science (1st ed.). Michigan Publishing. ISBN 978-1-60785-747-1.
8. ^ a b Lovric, Miodrag, ed. (2011). International Encyclopedia of Statistical Science (1st ed.). Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-04898-2. ISBN 978-3-642-04897-5.
9. ^ a b Gallager, R.; van Voorhis, D. (March 1975). "Optimal source codes for geometrically distributed integer alphabets (Corresp.)". IEEE Transactions on Information Theory. 21 (2): 228–230. doi:10.1109/TIT.1975.1055357. ISSN 0018-9448.
10. ^ Lisman, J. H. C.; Zuylen, M. C. A. van (March 1972). "Note on the generation of most probable frequency distributions". Statistica Neerlandica. 26 (1): 19–23. doi:10.1111/j.1467-9574.1972.tb00152.x. ISSN 0039-0402.
11. ^ Pitman, Jim (1993). Probability. New York, NY: Springer New York. p. 372. doi:10.1007/978-1-4612-4374-8. ISBN 978-0-387-94594-1.
12. ^ Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David (1 June 1995). "On the minimum of independent geometrically distributed random variables". Statistics & Probability Letters. 23 (4): 313–326. doi:10.1016/0167-7152(94)00130-Z. hdl:2060/19940028569. S2CID 1505801.
13. ^ a b Evans, Michael; Rosenthal, Jeffrey (2023). Probability and Statistics: The Science of Uncertainty (2nd ed.). Macmillan Learning. ISBN 978-1429224628.
14. ^ a b c Held, Leonhard; Sabanés Bové, Daniel (2020). Likelihood and Bayesian Inference: With Applications in Biology and Medicine. Statistics for Biology and Health. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-60792-3. ISBN 978-3-662-60791-6.
15. ^ Siegrist, Kyle (2020-05-05). "7.3: Maximum Likelihood". Statistics LibreTexts. Retrieved 2024-06-20.
16. ^ Fink, Daniel. "A Compendium of Conjugate Priors". CiteSeerX 10.1.1.157.5540.
17. ^ "3. Conjugate families of distributions" (PDF). Archived (PDF) from the original on 2010-04-08.
18. ^ a b Devroye, Luc (1986). Non-Uniform Random Variate Generation. New York, NY: Springer New York. doi:10.1007/978-1-4613-8643-8. ISBN 978-1-4613-8645-2.
19. ^ Knuth, Donald Ervin (1997). The Art of Computer Programming. Vol. 2 (3rd ed.). Reading, Mass: Addison-Wesley. p. 136. ISBN 978-0-201-89683-1.