| | |
|---|---|
| Notation | ${\displaystyle \operatorname {Beta} (\alpha ,\beta )}$ |
| Parameters | α > 0 shape (real); β > 0 shape (real) |
| Support | ${\displaystyle x\in [0,1]}$ or ${\displaystyle x\in (0,1)}$ |
| PDF | ${\displaystyle {\frac {x^{\alpha -1}(1-x)^{\beta -1}}{\mathrm {B} (\alpha ,\beta )}}}$, where ${\displaystyle \mathrm {B} (\alpha ,\beta )={\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}}$ and ${\displaystyle \Gamma }$ is the gamma function |
| CDF | ${\displaystyle I_{x}(\alpha ,\beta )}$ (the regularized incomplete beta function) |
| Mean | ${\displaystyle \operatorname {E} [X]={\frac {\alpha }{\alpha +\beta }}}$; ${\displaystyle \operatorname {E} [\ln X]=\psi (\alpha )-\psi (\alpha +\beta )}$; ${\displaystyle \operatorname {E} [X\,\ln X]={\frac {\alpha }{\alpha +\beta }}\,\left[\psi (\alpha +1)-\psi (\alpha +\beta +1)\right]}$ (see section: Geometric mean), where ${\displaystyle \psi }$ is the digamma function |
| Median | ${\displaystyle I_{\frac {1}{2}}^{[-1]}(\alpha ,\beta )}$ (in general); ${\displaystyle \approx {\frac {\alpha -{\tfrac {1}{3}}}{\alpha +\beta -{\tfrac {2}{3}}}}}$ for α, β > 1 |
| Mode | ${\displaystyle {\frac {\alpha -1}{\alpha +\beta -2}}}$ for α, β > 1; any value in ${\displaystyle (0,1)}$ for α, β = 1; {0, 1} (bimodal) for α, β < 1; 0 for α ≤ 1, β > 1; 1 for α > 1, β ≤ 1 |
| Variance | ${\displaystyle \operatorname {var} [X]={\frac {\alpha \beta }{(\alpha +\beta )^{2}(\alpha +\beta +1)}}}$; ${\displaystyle \operatorname {var} [\ln X]=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )}$ (see trigamma function and section: Geometric variance) |
| Skewness | ${\displaystyle {\frac {2\,(\beta -\alpha ){\sqrt {\alpha +\beta +1}}}{(\alpha +\beta +2){\sqrt {\alpha \beta }}}}}$ |
| Excess kurtosis | ${\displaystyle {\frac {6[(\alpha -\beta )^{2}(\alpha +\beta +1)-\alpha \beta (\alpha +\beta +2)]}{\alpha \beta (\alpha +\beta +2)(\alpha +\beta +3)}}}$ |
| Entropy | ${\displaystyle \ln \mathrm {B} (\alpha ,\beta )-(\alpha -1)\psi (\alpha )-(\beta -1)\psi (\beta )+(\alpha +\beta -2)\psi (\alpha +\beta )}$ |
| MGF | ${\displaystyle 1+\sum _{k=1}^{\infty }\left(\prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}\right){\frac {t^{k}}{k!}}}$ |
| CF | ${\displaystyle {}_{1}F_{1}(\alpha ;\alpha +\beta ;i\,t)}$ (see confluent hypergeometric function) |
| Fisher information | ${\displaystyle {\begin{bmatrix}\operatorname {var} [\ln X]&\operatorname {cov} [\ln X,\ln(1-X)]\\\operatorname {cov} [\ln X,\ln(1-X)]&\operatorname {var} [\ln(1-X)]\end{bmatrix}}}$ (see section: Fisher information matrix) |
| Method of moments | ${\displaystyle \alpha =\left({\frac {E[X](1-E[X])}{V[X]}}-1\right)E[X]}$; ${\displaystyle \beta =\left({\frac {E[X](1-E[X])}{V[X]}}-1\right)(1-E[X])}$ |

In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] in terms of two positive parameters, denoted by alpha (α) and beta (β), that appear as exponents of the variable and its complement to 1, respectively, and control the shape of the distribution.

The beta distribution has been applied to model the behavior of random variables limited to intervals of finite length in a wide variety of disciplines. The beta distribution is a suitable model for the random behavior of percentages and proportions.

In Bayesian inference, the beta distribution is the conjugate prior probability distribution for the Bernoulli, binomial, negative binomial and geometric distributions.

The formulation of the beta distribution discussed here is also known as the beta distribution of the first kind, whereas beta distribution of the second kind is an alternative name for the beta prime distribution. The generalization to multiple variables is called a Dirichlet distribution.

## Definitions

### Probability density function

An animation of the Beta distribution for different values of its parameters.

The probability density function (PDF) of the beta distribution, for 0 ≤ x ≤ 1, and shape parameters α, β > 0, is a power function of the variable x and of its reflection (1 − x) as follows:

{\displaystyle {\begin{aligned}f(x;\alpha ,\beta )&=\mathrm {constant} \cdot x^{\alpha -1}(1-x)^{\beta -1}\\[3pt]&={\frac {x^{\alpha -1}(1-x)^{\beta -1}}{\int _{0}^{1}u^{\alpha -1}(1-u)^{\beta -1}\,du}}\\[6pt]&={\frac {\Gamma (\alpha +\beta )}{\Gamma (\alpha )\Gamma (\beta )}}\,x^{\alpha -1}(1-x)^{\beta -1}\\[6pt]&={\frac {1}{\mathrm {B} (\alpha ,\beta )}}x^{\alpha -1}(1-x)^{\beta -1}\end{aligned}}}

where Γ(z) is the gamma function. The beta function, ${\displaystyle \mathrm {B} }$, is a normalization constant to ensure that the total probability is 1. In the above equations x is a realization—an observed value that actually occurred—of a random variable X.
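
As a quick numerical sanity check, the gamma-function form of the normalization constant can be compared against SciPy's built-in density (a minimal sketch; the use of NumPy/SciPy and the parameter values 2.5, 1.5 are illustrative assumptions, not part of the definition):

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import beta as beta_dist

def beta_pdf(x, a, b):
    """Beta PDF using the gamma-function form of the normalization constant."""
    const = gamma(a + b) / (gamma(a) * gamma(b))
    return const * x**(a - 1) * (1 - x)**(b - 1)

a, b = 2.5, 1.5                            # arbitrary shape parameters > 0
x = np.linspace(0.01, 0.99, 5)
print(np.allclose(beta_pdf(x, a, b), beta_dist.pdf(x, a, b)))   # True
```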

This definition includes both ends x = 0 and x = 1, which is consistent with definitions for other continuous distributions supported on a bounded interval which are special cases of the beta distribution, for example the arcsine distribution, and consistent with several authors, like N. L. Johnson and S. Kotz.[1][2][3][4] However, the inclusion of x = 0 and x = 1 does not work for α, β < 1; accordingly, several other authors, including W. Feller,[5][6][7] choose to exclude the ends x = 0 and x = 1, (so that the two ends are not actually part of the domain of the density function) and consider instead 0 < x < 1.

Several authors, including N. L. Johnson and S. Kotz,[1] use the symbols p and q (instead of α and β) for the shape parameters of the beta distribution, reminiscent of the symbols traditionally used for the parameters of the Bernoulli distribution, because the beta distribution approaches the Bernoulli distribution in the limit when both shape parameters α and β approach the value of zero.

In the following, a random variable X beta-distributed with parameters α and β will be denoted by:[8][9]

${\displaystyle X\sim \operatorname {Beta} (\alpha ,\beta )}$

Other notations for beta-distributed random variables used in the statistical literature are ${\displaystyle X\sim {\mathcal {B}}e(\alpha ,\beta )}$[10] and ${\displaystyle X\sim \beta _{\alpha ,\beta }}$.[5]

### Cumulative distribution function

CDF for symmetric beta distribution vs. x and α = β
CDF for skewed beta distribution vs. x and β = 5α
${\displaystyle F(x;\alpha ,\beta )={\frac {\mathrm {B} (x;\alpha ,\beta )}{\mathrm {B} (\alpha ,\beta )}}=I_{x}(\alpha ,\beta )}$

where ${\displaystyle \mathrm {B} (x;\alpha ,\beta )}$ is the incomplete beta function and ${\displaystyle I_{x}(\alpha ,\beta )}$ is the regularized incomplete beta function.
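
In code, the CDF is exactly the regularized incomplete beta function; a minimal sketch (the SciPy usage and the example values are assumptions):

```python
from scipy.special import betainc          # regularized incomplete beta I_x(a, b)
from scipy.stats import beta as beta_dist

a, b, x = 2.0, 5.0, 0.3                    # arbitrary example values
print(betainc(a, b, x))                    # I_x(alpha, beta)
print(beta_dist.cdf(x, a, b))              # identical value from the CDF
```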

### Alternative parameterizations

#### Two parameters

##### Mean and sample size

The beta distribution may also be reparameterized in terms of its mean μ (0 < μ < 1) and the sum of the two shape parameters ν = α + β > 0 ([9] p. 83). Denoting by αposterior and βposterior the shape parameters of the posterior beta distribution resulting from applying Bayes' theorem to a binomial likelihood function and a prior probability, the interpretation of the sum of both shape parameters as a sample size, ν = αposterior + βposterior, is correct only for the Haldane prior probability Beta(0,0). Specifically, for the Bayes (uniform) prior Beta(1,1) the correct interpretation would be sample size = αposterior + βposterior − 2, or ν = (sample size) + 2. For sample sizes much larger than 2, the difference between these two priors becomes negligible. (See section Bayesian inference for further details.) ν = α + β is referred to as the "sample size" of a beta distribution, but one should remember that it is, strictly speaking, the "sample size" of a binomial likelihood function only when using a Haldane Beta(0,0) prior in Bayes' theorem.

This parametrization may be useful in Bayesian parameter estimation. For example, one may administer a test to a number of individuals. If it is assumed that each person's score (0 ≤ θ ≤ 1) is drawn from a population-level Beta distribution, then an important statistic is the mean of this population-level distribution. The mean and sample size parameters are related to the shape parameters α and β via[9]

α = μν, β = (1 − μ)ν

Under this parametrization, one may place an uninformative prior probability over the mean, and a vague prior probability (such as an exponential or gamma distribution) over the positive reals for the sample size, if they are independent, and prior data and/or beliefs justify it.
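
A minimal helper for this reparameterization might look as follows (the function name and the example values are illustrative assumptions):

```python
def shape_from_mean_samplesize(mu, nu):
    """Map mean mu and 'sample size' nu = alpha + beta to (alpha, beta)."""
    assert 0 < mu < 1 and nu > 0
    return mu * nu, (1 - mu) * nu

alpha, beta = shape_from_mean_samplesize(mu=0.3, nu=10)   # arbitrary values
print(alpha, beta)                         # 3.0, 7.0
print(alpha / (alpha + beta))              # recovers mu = 0.3
```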

##### Mode and concentration

Concave beta distributions, which have ${\displaystyle \alpha ,\beta >1}$, can be parametrized in terms of mode and "concentration". The mode, ${\displaystyle \omega ={\frac {\alpha -1}{\alpha +\beta -2}}}$, and concentration, ${\displaystyle \kappa =\alpha +\beta }$, can be used to define the usual shape parameters as follows:[11]

{\displaystyle {\begin{aligned}\alpha &=\omega (\kappa -2)+1\\\beta &=(1-\omega )(\kappa -2)+1\end{aligned}}}

For the mode, ${\displaystyle 0<\omega <1}$, to be well-defined, we need ${\displaystyle \alpha ,\beta >1}$, or equivalently ${\displaystyle \kappa >2}$. If instead we define the concentration as ${\displaystyle c=\alpha +\beta -2}$, the condition simplifies to ${\displaystyle c>0}$ and the beta density at ${\displaystyle \alpha =1+c\omega }$ and ${\displaystyle \beta =1+c(1-\omega )}$ can be written as:

${\displaystyle f(x;\omega ,c)={\frac {x^{c\omega }(1-x)^{c(1-\omega )}}{\mathrm {B} {\bigl (}1+c\omega ,1+c(1-\omega ){\bigr )}}}}$

where ${\displaystyle c}$ directly scales the sufficient statistics, ${\displaystyle \log(x)}$ and ${\displaystyle \log(1-x)}$. Note also that in the limit, ${\displaystyle c\to 0}$, the distribution becomes flat.
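
A sketch of the conversion from (ω, κ) back to (α, β), using arbitrary illustrative values (the helper name is an assumption):

```python
def shape_from_mode_concentration(omega, kappa):
    """Map mode omega and concentration kappa = alpha + beta to (alpha, beta)."""
    assert 0 < omega < 1 and kappa > 2     # mode is well-defined only for kappa > 2
    return omega * (kappa - 2) + 1, (1 - omega) * (kappa - 2) + 1

alpha, beta = shape_from_mode_concentration(omega=0.25, kappa=10)  # arbitrary values
print(alpha, beta)                         # 3.0, 7.0
print((alpha - 1) / (alpha + beta - 2))    # recovers the mode 0.25
```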

##### Mean and variance

Solving the coupled equations given in the above sections for the mean and the variance of the beta distribution in terms of the original parameters α and β, one can express the α and β parameters in terms of the mean (μ) and the variance (var):

{\displaystyle {\begin{aligned}\nu &=\alpha +\beta ={\frac {\mu (1-\mu )}{\mathrm {var} }}-1,{\text{ where }}\nu =(\alpha +\beta )>0,{\text{ therefore: }}{\text{var}}<\mu (1-\mu )\\\alpha &=\mu \nu =\mu \left({\frac {\mu (1-\mu )}{\text{var}}}-1\right),{\text{ if }}{\text{var}}<\mu (1-\mu )\\\beta &=(1-\mu )\nu =(1-\mu )\left({\frac {\mu (1-\mu )}{\text{var}}}-1\right),{\text{ if }}{\text{var}}<\mu (1-\mu ).\end{aligned}}}

This parametrization of the beta distribution may lead to a more intuitive understanding than the one based on the original parameters α and β. For example, the mode, skewness, excess kurtosis and differential entropy can all be expressed in terms of the mean and the variance.
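
A minimal method-of-moments sketch of this inversion (names and example values are illustrative; the assertion encodes the var < μ(1 − μ) condition):

```python
def shape_from_mean_variance(mu, var):
    """Invert the mean/variance equations to (alpha, beta); needs var < mu(1 - mu)."""
    assert 0 < mu < 1 and 0 < var < mu * (1 - mu)
    nu = mu * (1 - mu) / var - 1           # nu = alpha + beta
    return mu * nu, (1 - mu) * nu

alpha, beta = shape_from_mean_variance(mu=0.3, var=0.02)   # arbitrary values
print(alpha, beta)
print(alpha / (alpha + beta))                              # round trip: 0.3
print(alpha * beta / ((alpha + beta)**2 * (alpha + beta + 1)))   # round trip: 0.02
```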

#### Four parameters

A beta distribution with the two shape parameters α and β is supported on the range [0,1] or (0,1). It is possible to alter the location and scale of the distribution by introducing two further parameters representing the minimum, a, and maximum c (c > a), values of the distribution,[1] by a linear transformation substituting the non-dimensional variable x in terms of the new variable y (with support [a,c] or (a,c)) and the parameters a and c:

${\displaystyle y=x(c-a)+a,{\text{ therefore }}x={\frac {y-a}{c-a}}.}$

The probability density function of the four parameter beta distribution is equal to the two parameter distribution, scaled by the range (c-a), (so that the total area under the density curve equals a probability of one), and with the "y" variable shifted and scaled as follows:

${\displaystyle f(y;\alpha ,\beta ,a,c)={\frac {f(x;\alpha ,\beta )}{c-a}}={\frac {\left({\frac {y-a}{c-a}}\right)^{\alpha -1}\left({\frac {c-y}{c-a}}\right)^{\beta -1}}{(c-a)B(\alpha ,\beta )}}={\frac {(y-a)^{\alpha -1}(c-y)^{\beta -1}}{(c-a)^{\alpha +\beta -1}B(\alpha ,\beta )}}.}$
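
SciPy's beta implementation exposes the same four-parameter family through its generic loc/scale mechanism, with loc = a and scale = c − a; a minimal sketch with arbitrary example values:

```python
from scipy.stats import beta as beta_dist

alpha, beta, a, c = 2.0, 3.0, -1.0, 4.0          # arbitrary example values
Y = beta_dist(alpha, beta, loc=a, scale=c - a)   # support [a, c] = [-1, 4]
print(Y.pdf(1.0))                          # f(y; alpha, beta, a, c) at y = 1
print(Y.mean())                            # (alpha*c + beta*a)/(alpha + beta) = 1.0
```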

That a random variable Y is Beta-distributed with four parameters α, β, a, and c will be denoted by:

${\displaystyle Y\sim \operatorname {Beta} (\alpha ,\beta ,a,c).}$

Some measures of central location are scaled (by (c-a)) and shifted (by a), as follows:

{\displaystyle {\begin{aligned}\mu _{Y}&=\mu _{X}(c-a)+a=\left({\frac {\alpha }{\alpha +\beta }}\right)(c-a)+a={\frac {\alpha c+\beta a}{\alpha +\beta }}\\{\text{mode}}(Y)&={\text{mode}}(X)(c-a)+a=\left({\frac {\alpha -1}{\alpha +\beta -2}}\right)(c-a)+a={\frac {(\alpha -1)c+(\beta -1)a}{\alpha +\beta -2}}\ ,\qquad {\text{ if }}\alpha ,\beta >1\\{\text{median}}(Y)&={\text{median}}(X)(c-a)+a=\left(I_{\frac {1}{2}}^{[-1]}(\alpha ,\beta )\right)(c-a)+a\end{aligned}}}

Note: the geometric mean and harmonic mean cannot be transformed by a linear transformation in the way that the mean, median and mode can.

The shape parameters of Y can be written in terms of its mean and variance as

{\displaystyle {\begin{aligned}\alpha &={\frac {(a-\mu _{Y})(a\,c-a\,\mu _{Y}-c\,\mu _{Y}+\mu _{Y}^{2}+\sigma _{Y}^{2})}{\sigma _{Y}^{2}(c-a)}}\\\beta &=-{\frac {(c-\mu _{Y})(a\,c-a\,\mu _{Y}-c\,\mu _{Y}+\mu _{Y}^{2}+\sigma _{Y}^{2})}{\sigma _{Y}^{2}(c-a)}}\end{aligned}}}

The statistical dispersion measures are scaled (they do not need to be shifted because they are already centered on the mean) by the range (c-a), linearly for the mean deviation and nonlinearly for the variance:

${\displaystyle {\text{(mean deviation around mean)}}(Y)=({\text{(mean deviation around mean)}}(X))(c-a)={\frac {2\alpha ^{\alpha }\beta ^{\beta }}{\mathrm {B} (\alpha ,\beta )(\alpha +\beta )^{\alpha +\beta +1}}}(c-a)}$
${\displaystyle {\text{var}}(Y)={\text{var}}(X)(c-a)^{2}={\frac {\alpha \beta (c-a)^{2}}{(\alpha +\beta )^{2}(\alpha +\beta +1)}}.}$

Since the skewness and excess kurtosis are non-dimensional quantities (as moments centered on the mean and normalized by the standard deviation), they are independent of the parameters a and c, and therefore equal to the expressions given above in terms of X (with support [0,1] or (0,1)):

${\displaystyle {\text{skewness}}(Y)={\text{skewness}}(X)={\frac {2(\beta -\alpha ){\sqrt {\alpha +\beta +1}}}{(\alpha +\beta +2){\sqrt {\alpha \beta }}}}.}$
${\displaystyle {\text{kurtosis excess}}(Y)={\text{kurtosis excess}}(X)={\frac {6[(\alpha -\beta )^{2}(\alpha +\beta +1)-\alpha \beta (\alpha +\beta +2)]}{\alpha \beta (\alpha +\beta +2)(\alpha +\beta +3)}}}$

## Properties

### Measures of central tendency

#### Mode

The mode of a Beta distributed random variable X with α, β > 1 is the most likely value of the distribution (corresponding to the peak in the PDF), and is given by the following expression:[1]

${\displaystyle {\frac {\alpha -1}{\alpha +\beta -2}}.}$

When both parameters are less than one (α, β < 1), this is the anti-mode: the lowest point of the probability density curve.[3]

Letting α = β, the expression for the mode simplifies to 1/2, showing that for α = β > 1 the mode (resp. anti-mode when α, β < 1) is at the center of the distribution: it is symmetric in those cases. See the Shapes section in this article for a full list of mode cases, for arbitrary values of α and β. For several of these cases, the maximum value of the density function occurs at one or both ends. In some cases the (maximum) value of the density function occurring at the end is finite. For example, in the case of α = 2, β = 1 (or α = 1, β = 2), the density function becomes a right-triangle distribution which is finite at both ends. In several other cases there is a singularity at one end, where the value of the density function approaches infinity. For example, in the case α = β = 1/2, the beta distribution simplifies to become the arcsine distribution. There is debate among mathematicians about some of these cases and whether the ends (x = 0 and x = 1) can be called modes or not:[6][8]

• Whether the ends are part of the domain of the density function
• Whether a singularity can ever be called a mode
• Whether cases with two maxima should be called bimodal

Mode for Beta distribution for 1 ≤ α ≤ 5 and 1 ≤ β ≤ 5

#### Median

Median for Beta distribution for 0 ≤ α ≤ 5 and 0 ≤ β ≤ 5
(Mean–Median) for Beta distribution versus alpha and beta from 0 to 2

The median of the beta distribution is the unique real number ${\displaystyle x=I_{\frac {1}{2}}^{[-1]}(\alpha ,\beta )}$ for which the regularized incomplete beta function ${\displaystyle I_{x}(\alpha ,\beta )={\tfrac {1}{2}}}$. There is no general closed-form expression for the median of the beta distribution for arbitrary values of α and β. Closed-form expressions for particular values of the parameters α and β follow:[citation needed]

• For symmetric cases α = β, median = 1/2.
• For α = 1 and β > 0, median ${\displaystyle =1-2^{-{\frac {1}{\beta }}}}$ (this case is the mirror-image of the power function [0,1] distribution)
• For α > 0 and β = 1, median = ${\displaystyle 2^{-{\frac {1}{\alpha }}}}$ (this case is the power function [0,1] distribution[6])
• For α = 3 and β = 2, median = 0.6142724318676105..., the real solution to the quartic equation ${\displaystyle 1-8x^{3}+6x^{4}=0}$, which lies in [0,1].
• For α = 2 and β = 3, median = 0.38572756813238945... = 1 − median(Beta(3, 2))

The following are the limits with one parameter finite (non-zero) and the other approaching these limits:[citation needed]

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}{\text{median}}=\lim _{\alpha \to \infty }{\text{median}}=1,\\\lim _{\alpha \to 0}{\text{median}}=\lim _{\beta \to \infty }{\text{median}}=0.\end{aligned}}}

A reasonable approximation of the value of the median of the beta distribution, for both α and β greater or equal to one, is given by the formula[12]

${\displaystyle {\text{median}}\approx {\frac {\alpha -{\tfrac {1}{3}}}{\alpha +\beta -{\tfrac {2}{3}}}}{\text{ for }}\alpha ,\beta \geq 1.}$

When α, β ≥ 1, the relative error (the absolute error divided by the median) in this approximation is less than 4%, and for both α ≥ 2 and β ≥ 2 it is less than 1%. The absolute error divided by the difference between the mean and the mode is similarly small.
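
A short check of the approximation against a numerically computed median (using SciPy's quantile function; the test pairs are arbitrary):

```python
from scipy.stats import beta as beta_dist

for a, b in [(1, 1), (2, 5), (3, 2), (10, 30)]:    # arbitrary test pairs
    exact = beta_dist.ppf(0.5, a, b)               # numerically computed median
    approx = (a - 1/3) / (a + b - 2/3)             # closed-form approximation
    print(a, b, exact, approx, abs(approx - exact) / exact)
```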

#### Mean

Mean for Beta distribution for 0 ≤ α ≤ 5 and 0 ≤ β ≤ 5

The expected value (mean) (μ) of a Beta distribution random variable X with two parameters α and β is a function of only the ratio β/α of these parameters:[1]

{\displaystyle {\begin{aligned}\mu =\operatorname {E} [X]&=\int _{0}^{1}xf(x;\alpha ,\beta )\,dx\\&=\int _{0}^{1}x\,{\frac {x^{\alpha -1}(1-x)^{\beta -1}}{\mathrm {B} (\alpha ,\beta )}}\,dx\\&={\frac {\alpha }{\alpha +\beta }}\\&={\frac {1}{1+{\frac {\beta }{\alpha }}}}\end{aligned}}}

Letting α = β in the above expression one obtains μ = 1/2, showing that for α = β the mean is at the center of the distribution: it is symmetric. Also, the following limits can be obtained from the above expression:

{\displaystyle {\begin{aligned}\lim _{{\frac {\beta }{\alpha }}\to 0}\mu =1\\\lim _{{\frac {\beta }{\alpha }}\to \infty }\mu =0\end{aligned}}}

Therefore, for β/α → 0, or for α/β → ∞, the mean is located at the right end, x = 1. For these limit ratios, the beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the right end, x = 1, with probability 1 and zero probability everywhere else: 100% probability (absolute certainty) is concentrated at the right end, x = 1.

Similarly, for β/α → ∞, or for α/β → 0, the mean is located at the left end, x = 0: the beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the left end, x = 0, with probability 1 and zero probability everywhere else. The following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}\mu =\lim _{\alpha \to \infty }\mu =1\\\lim _{\alpha \to 0}\mu =\lim _{\beta \to \infty }\mu =0\end{aligned}}}

While for typical unimodal distributions (with centrally located modes, inflexion points at both sides of the mode, and longer tails) (with Beta(α, β) such that α, β > 2) it is known that the sample mean (as an estimate of location) is not as robust as the sample median, the opposite is the case for uniform or "U-shaped" bimodal distributions (with Beta(α, β) such that α, β ≤ 1), with the modes located at the ends of the distribution. As Mosteller and Tukey remark ([13] p. 207) "the average of the two extreme observations uses all the sample information. This illustrates how, for short-tailed distributions, the extreme observations should get more weight." By contrast, it follows that the median of "U-shaped" bimodal distributions with modes at the edge of the distribution (with Beta(α, β) such that α, β ≤ 1) is not robust, as the sample median drops the extreme sample observations from consideration. A practical application of this occurs for example for random walks, since the probability for the time of the last visit to the origin in a random walk is distributed as the arcsine distribution Beta(1/2, 1/2):[5][14] the mean of a number of realizations of a random walk is a much more robust estimator than the median (which is an inappropriate sample measure estimate in this case).

#### Geometric mean

(Mean − GeometricMean) for Beta distribution versus α and β from 0 to 2, showing the asymmetry between α and β for the geometric mean
Geometric means for Beta distribution Purple = G(x), Yellow = G(1 − x), smaller values α and β in front
Geometric means for Beta distribution. purple = G(x), yellow = G(1 − x), larger values α and β in front

The logarithm of the geometric mean GX of a distribution with random variable X is the arithmetic mean of ln(X), or, equivalently, its expected value:

${\displaystyle \ln G_{X}=\operatorname {E} [\ln X]}$

For a beta distribution, the expected value integral gives:

{\displaystyle {\begin{aligned}\operatorname {E} [\ln X]&=\int _{0}^{1}\ln x\,f(x;\alpha ,\beta )\,dx\\[4pt]&=\int _{0}^{1}\ln x\,{\frac {x^{\alpha -1}(1-x)^{\beta -1}}{\mathrm {B} (\alpha ,\beta )}}\,dx\\[4pt]&={\frac {1}{\mathrm {B} (\alpha ,\beta )}}\,\int _{0}^{1}{\frac {\partial x^{\alpha -1}(1-x)^{\beta -1}}{\partial \alpha }}\,dx\\[4pt]&={\frac {1}{\mathrm {B} (\alpha ,\beta )}}{\frac {\partial }{\partial \alpha }}\int _{0}^{1}x^{\alpha -1}(1-x)^{\beta -1}\,dx\\[4pt]&={\frac {1}{\mathrm {B} (\alpha ,\beta )}}{\frac {\partial \mathrm {B} (\alpha ,\beta )}{\partial \alpha }}\\[4pt]&={\frac {\partial \ln \mathrm {B} (\alpha ,\beta )}{\partial \alpha }}\\[4pt]&={\frac {\partial \ln \Gamma (\alpha )}{\partial \alpha }}-{\frac {\partial \ln \Gamma (\alpha +\beta )}{\partial \alpha }}\\[4pt]&=\psi (\alpha )-\psi (\alpha +\beta )\end{aligned}}}

where ψ is the digamma function.

Therefore, the geometric mean of a beta distribution with shape parameters α and β is the exponential of the digamma functions of α and β as follows:

${\displaystyle G_{X}=e^{\operatorname {E} [\ln X]}=e^{\psi (\alpha )-\psi (\alpha +\beta )}}$
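
The closed form can be verified against a Monte Carlo estimate of exp(E[ln X]) (a sketch; the sample size, seed, and parameter values are arbitrary choices):

```python
import numpy as np
from scipy.special import digamma
from scipy.stats import beta as beta_dist

a, b = 2.0, 3.0                                    # arbitrary shape parameters
gx = np.exp(digamma(a) - digamma(a + b))           # closed-form geometric mean
samples = beta_dist.rvs(a, b, size=1_000_000, random_state=0)
print(gx, np.exp(np.mean(np.log(samples))))        # values agree closely
```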

While for a beta distribution with equal shape parameters α = β, it follows that skewness = 0 and mode = mean = median = 1/2, the geometric mean is less than 1/2: 0 < GX < 1/2. The reason for this is that the logarithmic transformation strongly weights the values of X close to zero, as ln(X) strongly tends towards negative infinity as X approaches zero, while ln(X) flattens towards zero as X → 1.

Along a line α = β, the following limits apply:

{\displaystyle {\begin{aligned}&\lim _{\alpha =\beta \to 0}G_{X}=0\\&\lim _{\alpha =\beta \to \infty }G_{X}={\tfrac {1}{2}}\end{aligned}}}

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}G_{X}=\lim _{\alpha \to \infty }G_{X}=1\\\lim _{\alpha \to 0}G_{X}=\lim _{\beta \to \infty }G_{X}=0\end{aligned}}}

The accompanying plot shows the difference between the mean and the geometric mean for shape parameters α and β from zero to 2. Besides the fact that the difference between them approaches zero as α and β approach infinity, and that the difference becomes large for values of α and β approaching zero, one can observe an evident asymmetry of the geometric mean with respect to the shape parameters α and β. The difference between the geometric mean and the mean is larger when α is small relative to β than in the mirror-image case with the magnitudes of α and β exchanged.

N. L. Johnson and S. Kotz[1] suggest the logarithmic approximation to the digamma function ψ(α) ≈ ln(α − 1/2), which results in the following approximation to the geometric mean:

${\displaystyle G_{X}\approx {\frac {\alpha -{\frac {1}{2}}}{\alpha +\beta -{\frac {1}{2}}}}{\text{ if }}\alpha ,\beta >1.}$

Numerical values for the relative error in this approximation follow: [(α = β = 1): 9.39%]; [(α = β = 2): 1.29%]; [(α = 2, β = 3): 1.51%]; [(α = 3, β = 2): 0.44%]; [(α = β = 3): 0.51%]; [(α = β = 4): 0.26%]; [(α = 3, β = 4): 0.55%]; [(α = 4, β = 3): 0.24%].

Similarly, one can calculate the value of the shape parameters required for the geometric mean to equal 1/2. Given the value of the parameter β, what would be the value of the other parameter, α, required for the geometric mean to equal 1/2? The answer is that (for β > 1) the value of α required tends towards β + 1/2 as β → ∞. For example, all these couples have the same geometric mean of 1/2: [β = 1, α = 1.4427], [β = 2, α = 2.46958], [β = 3, α = 3.47943], [β = 4, α = 4.48449], [β = 5, α = 5.48756], [β = 10, α = 10.4938], [β = 100, α = 100.499].

The fundamental property of the geometric mean, which holds for no other mean, is

${\displaystyle G\left({\frac {X_{i}}{Y_{i}}}\right)={\frac {G(X_{i})}{G(Y_{i})}}}$

This makes the geometric mean the only correct mean when averaging normalized results, that is, results that are presented as ratios to reference values.[15] This is relevant because the beta distribution is a suitable model for the random behavior of percentages, and it is particularly suitable to the statistical modelling of proportions. The geometric mean plays a central role in maximum likelihood estimation; see section "Parameter estimation, maximum likelihood." Actually, when performing maximum likelihood estimation, besides the geometric mean GX based on the random variable X, another geometric mean appears naturally: the geometric mean based on the linear transformation (1 − X), the mirror-image of X, denoted by G(1−X):

${\displaystyle G_{(1-X)}=e^{\operatorname {E} [\ln(1-X)]}=e^{\psi (\beta )-\psi (\alpha +\beta )}}$

Along a line α = β, the following limits apply:

{\displaystyle {\begin{aligned}&\lim _{\alpha =\beta \to 0}G_{(1-X)}=0\\&\lim _{\alpha =\beta \to \infty }G_{(1-X)}={\tfrac {1}{2}}\end{aligned}}}

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}G_{(1-X)}=\lim _{\alpha \to \infty }G_{(1-X)}=0\\\lim _{\alpha \to 0}G_{(1-X)}=\lim _{\beta \to \infty }G_{(1-X)}=1\end{aligned}}}

It has the following approximate value:

${\displaystyle G_{(1-X)}\approx {\frac {\beta -{\frac {1}{2}}}{\alpha +\beta -{\frac {1}{2}}}}{\text{ if }}\alpha ,\beta >1.}$

Although both GX and G(1−X) are asymmetric, in the case that both shape parameters are equal α = β, the geometric means are equal: GX = G(1−X). This equality follows from the following symmetry displayed between both geometric means:

${\displaystyle G_{X}(\mathrm {B} (\alpha ,\beta ))=G_{(1-X)}(\mathrm {B} (\beta ,\alpha )).}$

#### Harmonic mean

Harmonic mean for beta distribution for 0 < α < 5 and 0 < β < 5
Harmonic mean for beta distribution versus α and β from 0 to 2
Harmonic means for beta distribution Purple = H(X), Yellow = H(1 − X), smaller values α and β in front
Harmonic Means for Beta distribution Purple = H(X), Yellow = H(1 − X), larger values α and β in front

The inverse of the harmonic mean (HX) of a distribution with random variable X is the arithmetic mean of 1/X, or, equivalently, its expected value. Therefore, the harmonic mean (HX) of a beta distribution with shape parameters α and β is:

{\displaystyle {\begin{aligned}H_{X}&={\frac {1}{\operatorname {E} \left[{\frac {1}{X}}\right]}}\\&={\frac {1}{\int _{0}^{1}{\frac {f(x;\alpha ,\beta )}{x}}\,dx}}\\&={\frac {1}{\int _{0}^{1}{\frac {x^{\alpha -1}(1-x)^{\beta -1}}{x\,\mathrm {B} (\alpha ,\beta )}}\,dx}}\\&={\frac {\alpha -1}{\alpha +\beta -1}}{\text{ if }}\alpha >1{\text{ and }}\beta >0\end{aligned}}}

The harmonic mean (HX) of a Beta distribution with α < 1 is undefined, because its defining expression is not bounded in [0, 1] for shape parameter α less than unity.

Letting α = β in the above expression one obtains

${\displaystyle H_{X}={\frac {\alpha -1}{2\alpha -1}},}$

showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.
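
A quick Monte Carlo cross-check of the closed form (parameter values, sample size, and seed are arbitrary):

```python
import numpy as np
from scipy.stats import beta as beta_dist

a, b = 3.0, 2.0                       # arbitrary values with alpha > 1
hx = (a - 1) / (a + b - 1)            # closed-form harmonic mean, here 0.5
samples = beta_dist.rvs(a, b, size=1_000_000, random_state=0)
print(hx, 1 / np.mean(1 / samples))   # Monte Carlo estimate agrees
```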

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to 0}H_{X}{\text{ is undefined}}\\&\lim _{\alpha \to 1}H_{X}=\lim _{\beta \to \infty }H_{X}=0\\&\lim _{\beta \to 0}H_{X}=\lim _{\alpha \to \infty }H_{X}=1\end{aligned}}}

The harmonic mean plays a role in maximum likelihood estimation for the four parameter case, in addition to the geometric mean. Actually, when performing maximum likelihood estimation for the four parameter case, besides the harmonic mean HX based on the random variable X, also another harmonic mean appears naturally: the harmonic mean based on the linear transformation (1 − X), the mirror-image of X, denoted by H1 − X:

${\displaystyle H_{1-X}={\frac {1}{\operatorname {E} \left[{\frac {1}{1-X}}\right]}}={\frac {\beta -1}{\alpha +\beta -1}}{\text{ if }}\beta >1,{\text{ and }}\alpha >0.}$

The harmonic mean (H(1 − X)) of a Beta distribution with β < 1 is undefined, because its defining expression is not bounded in [0, 1] for shape parameter β less than unity.

Letting α = β in the above expression one obtains

${\displaystyle H_{(1-X)}={\frac {\beta -1}{2\beta -1}},}$

showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}&\lim _{\beta \to 0}H_{1-X}{\text{ is undefined}}\\&\lim _{\beta \to 1}H_{1-X}=\lim _{\alpha \to \infty }H_{1-X}=0\\&\lim _{\alpha \to 0}H_{1-X}=\lim _{\beta \to \infty }H_{1-X}=1\end{aligned}}}

Although both HX and H1−X are asymmetric, in the case that both shape parameters are equal α = β, the harmonic means are equal: HX = H1−X. This equality follows from the following symmetry displayed between both harmonic means:

${\displaystyle H_{X}(\mathrm {B} (\alpha ,\beta ))=H_{1-X}(\mathrm {B} (\beta ,\alpha )){\text{ if }}\alpha ,\beta >1.}$

### Measures of statistical dispersion

#### Variance

The variance (the second moment centered on the mean) of a Beta distribution random variable X with parameters α and β is:[1][16]

${\displaystyle \operatorname {var} (X)=\operatorname {E} [(X-\mu )^{2}]={\frac {\alpha \beta }{(\alpha +\beta )^{2}(\alpha +\beta +1)}}}$

Letting α = β in the above expression one obtains

${\displaystyle \operatorname {var} (X)={\frac {1}{4(2\beta +1)}},}$

showing that for α = β the variance decreases monotonically as α = β increases. Setting α = β = 0 in this expression, one finds the maximum variance var(X) = 1/4,[1] which is only attained in the limit as α = β → 0.

The beta distribution may also be parametrized in terms of its mean μ (0 < μ < 1) and sample size ν = α + β (ν > 0) (see subsection Mean and sample size):

{\displaystyle {\begin{aligned}\alpha &=\mu \nu ,{\text{ where }}\nu =(\alpha +\beta )>0\\\beta &=(1-\mu )\nu ,{\text{ where }}\nu =(\alpha +\beta )>0.\end{aligned}}}

Using this parametrization, one can express the variance in terms of the mean μ and the sample size ν as follows:

${\displaystyle \operatorname {var} (X)={\frac {\mu (1-\mu )}{1+\nu }}}$

Since ν = α + β > 0, it follows that var(X) < μ(1 − μ).

For a symmetric distribution, the mean is at the middle of the distribution, μ = 1/2, and therefore:

${\displaystyle \operatorname {var} (X)={\frac {1}{4(1+\nu )}}{\text{ if }}\mu ={\tfrac {1}{2}}}$

Also, the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:

{\displaystyle {\begin{aligned}&\lim _{\beta \to 0}\operatorname {var} (X)=\lim _{\alpha \to 0}\operatorname {var} (X)=\lim _{\beta \to \infty }\operatorname {var} (X)=\lim _{\alpha \to \infty }\operatorname {var} (X)=\lim _{\nu \to \infty }\operatorname {var} (X)=\lim _{\mu \to 0}\operatorname {var} (X)=\lim _{\mu \to 1}\operatorname {var} (X)=0\\&\lim _{\nu \to 0}\operatorname {var} (X)=\mu (1-\mu )\end{aligned}}}

#### Geometric variance and covariance

log geometric variances vs. α and β

The logarithm of the geometric variance, ln(varGX), of a distribution with random variable X is the second moment of the logarithm of X centered on the geometric mean of X, ln(GX):

{\displaystyle {\begin{aligned}\ln \operatorname {var} _{GX}&=\operatorname {E} \left[(\ln X-\ln G_{X})^{2}\right]\\&=\operatorname {E} \left[(\ln X-\operatorname {E} [\ln X])^{2}\right]\\&=\operatorname {E} \left[(\ln X)^{2}\right]-(\operatorname {E} [\ln X])^{2}\\&=\operatorname {var} [\ln X]\end{aligned}}}

and therefore, the geometric variance is:

${\displaystyle \operatorname {var} _{GX}=e^{\operatorname {var} [\ln X]}}$

In the Fisher information matrix, and the curvature of the log likelihood function, the logarithm of the geometric variance of the reflected variable 1 − X and the logarithm of the geometric covariance between X and 1 − X appear:

{\displaystyle {\begin{aligned}\ln \operatorname {var} _{G(1-X)}&=\operatorname {E} [(\ln(1-X)-\ln G_{1-X})^{2}]\\&=\operatorname {E} [(\ln(1-X)-\operatorname {E} [\ln(1-X)])^{2}]\\&=\operatorname {E} [(\ln(1-X))^{2}]-(\operatorname {E} [\ln(1-X)])^{2}\\&=\operatorname {var} [\ln(1-X)]\\&\\\operatorname {var} _{G(1-X)}&=e^{\operatorname {var} [\ln(1-X)]}\\&\\\ln \operatorname {cov} _{GX,1-X}&=\operatorname {E} [(\ln X-\ln G_{X})(\ln(1-X)-\ln G_{1-X})]\\&=\operatorname {E} [(\ln X-\operatorname {E} [\ln X])(\ln(1-X)-\operatorname {E} [\ln(1-X)])]\\&=\operatorname {E} \left[\ln X\ln(1-X)\right]-\operatorname {E} [\ln X]\operatorname {E} [\ln(1-X)]\\&=\operatorname {cov} [\ln X,\ln(1-X)]\\&\\\operatorname {cov} _{GX,(1-X)}&=e^{\operatorname {cov} [\ln X,\ln(1-X)]}\end{aligned}}}

For a beta distribution, higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two Gamma distributions and differentiating through the integral. They can be expressed in terms of higher order poly-gamma functions. See the section § Moments of logarithmically transformed random variables. The variance of the logarithmic variables and covariance of ln X and ln(1−X) are:

${\displaystyle \operatorname {var} [\ln X]=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )}$
${\displaystyle \operatorname {var} [\ln(1-X)]=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta )}$
${\displaystyle \operatorname {cov} [\ln X,\ln(1-X)]=-\psi _{1}(\alpha +\beta )}$

where the trigamma function, denoted ψ1(α), is the second of the polygamma functions, and is defined as the derivative of the digamma function:

${\displaystyle \psi _{1}(\alpha )={\frac {d^{2}\ln \Gamma (\alpha )}{d\alpha ^{2}}}={\frac {d\,\psi (\alpha )}{d\alpha }}.}$

Therefore,

${\displaystyle \ln \operatorname {var} _{GX}=\operatorname {var} [\ln X]=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )}$
${\displaystyle \ln \operatorname {var} _{G(1-X)}=\operatorname {var} [\ln(1-X)]=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta )}$
${\displaystyle \ln \operatorname {cov} _{GX,1-X}=\operatorname {cov} [\ln X,\ln(1-X)]=-\psi _{1}(\alpha +\beta )}$
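
These three quantities reduce to trigamma evaluations, e.g. via SciPy's polygamma (a minimal sketch; the helper name and the example values are assumptions):

```python
from scipy.special import polygamma

def log_geometric_moments(a, b):
    """Return var[ln X], var[ln(1-X)], cov[ln X, ln(1-X)] via the trigamma function."""
    trigamma = lambda z: polygamma(1, z)
    return (trigamma(a) - trigamma(a + b),
            trigamma(b) - trigamma(a + b),
            -trigamma(a + b))

print(log_geometric_moments(2.0, 3.0))     # arbitrary example values
```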

The accompanying plots show the log geometric variances and log geometric covariance versus the shape parameters α and β. The plots show that the log geometric variances and log geometric covariance are close to zero for shape parameters α and β greater than 2, and that the log geometric variances rapidly rise in value for shape parameter values α and β less than unity. The log geometric variances are positive for all values of the shape parameters. The log geometric covariance is negative for all values of the shape parameters, and it reaches large negative values for α and β less than unity.

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to 0}\ln \operatorname {var} _{GX}=\lim _{\beta \to 0}\ln \operatorname {var} _{G(1-X)}=\infty \\&\lim _{\beta \to 0}\ln \operatorname {var} _{GX}=\lim _{\alpha \to \infty }\ln \operatorname {var} _{GX}=\lim _{\alpha \to 0}\ln \operatorname {var} _{G(1-X)}=\lim _{\beta \to \infty }\ln \operatorname {var} _{G(1-X)}=\lim _{\alpha \to \infty }\ln \operatorname {cov} _{GX,(1-X)}=\lim _{\beta \to \infty }\ln \operatorname {cov} _{GX,(1-X)}=0\\&\lim _{\beta \to \infty }\ln \operatorname {var} _{GX}=\psi _{1}(\alpha )\\&\lim _{\alpha \to \infty }\ln \operatorname {var} _{G(1-X)}=\psi _{1}(\beta )\\&\lim _{\alpha \to 0}\ln \operatorname {cov} _{GX,(1-X)}=-\psi _{1}(\beta )\\&\lim _{\beta \to 0}\ln \operatorname {cov} _{GX,(1-X)}=-\psi _{1}(\alpha )\end{aligned}}}

Limits with two parameters varying:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to \infty }(\lim _{\beta \to \infty }\ln \operatorname {var} _{GX})=\lim _{\beta \to \infty }(\lim _{\alpha \to \infty }\ln \operatorname {var} _{G(1-X)})=\lim _{\alpha \to \infty }(\lim _{\beta \to 0}\ln \operatorname {cov} _{GX,(1-X)})=\lim _{\beta \to \infty }(\lim _{\alpha \to 0}\ln \operatorname {cov} _{GX,(1-X)})=0\\&\lim _{\alpha \to \infty }(\lim _{\beta \to 0}\ln \operatorname {var} _{GX})=\lim _{\beta \to \infty }(\lim _{\alpha \to 0}\ln \operatorname {var} _{G(1-X)})=\infty \\&\lim _{\alpha \to 0}(\lim _{\beta \to 0}\ln \operatorname {cov} _{GX,(1-X)})=\lim _{\beta \to 0}(\lim _{\alpha \to 0}\ln \operatorname {cov} _{GX,(1-X)})=-\infty \end{aligned}}}

Although both ln(varGX) and ln(varG(1 − X)) are asymmetric, when the shape parameters are equal, α = β, one has: ln(varGX) = ln(varG(1−X)). This equality follows from the following symmetry displayed between both log geometric variances:

${\displaystyle \ln \operatorname {var} _{GX}(\mathrm {B} (\alpha ,\beta ))=\ln \operatorname {var} _{G(1-X)}(\mathrm {B} (\beta ,\alpha )).}$

The log geometric covariance is symmetric:

${\displaystyle \ln \operatorname {cov} _{GX,(1-X)}(\mathrm {B} (\alpha ,\beta ))=\ln \operatorname {cov} _{GX,(1-X)}(\mathrm {B} (\beta ,\alpha ))}$

#### Mean absolute deviation around the mean

Ratio of Mean Abs.Dev. to Std.Dev. for Beta distribution with α and β ranging from 0 to 5
Ratio of Mean Abs.Dev. to Std.Dev. for Beta distribution with mean 0 ≤ μ ≤ 1 and sample size 0 < ν ≤ 10

The mean absolute deviation around the mean for the beta distribution with shape parameters α and β is:[6]

${\displaystyle \operatorname {E} [|X-E[X]|]={\frac {2\alpha ^{\alpha }\beta ^{\beta }}{\mathrm {B} (\alpha ,\beta )(\alpha +\beta )^{\alpha +\beta +1}}}}$

The mean absolute deviation around the mean is a more robust estimator of statistical dispersion than the standard deviation for beta distributions with tails and inflection points at each side of the mode, Beta(α, β) distributions with α, β > 2, as it depends on the linear (absolute) deviations rather than the square deviations from the mean. Therefore, very large deviations from the mean are not as heavily weighted.
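
The closed form above can be verified by direct numerical integration of E|X − μ| (a sketch; computing in log space via betaln avoids overflow for larger shape parameters, and the example values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln
from scipy.stats import beta as beta_dist

a, b = 2.0, 5.0                                       # arbitrary example values
mu = a / (a + b)
# Closed form 2 a^a b^b / (B(a,b) (a+b)^(a+b+1)), evaluated in log space:
closed = 2 * np.exp(a * np.log(a) + b * np.log(b)
                    - betaln(a, b) - (a + b + 1) * np.log(a + b))
numeric, _ = quad(lambda x: abs(x - mu) * beta_dist.pdf(x, a, b), 0, 1, points=[mu])
print(closed, numeric)                                # both approx. 0.1301
```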

Using Stirling's approximation to the Gamma function, N. L. Johnson and S. Kotz[1] derived the following approximation for values of the shape parameters greater than unity (the relative error for this approximation is only −3.5% for α = β = 1, and it decreases to zero as α → ∞, β → ∞):

{\displaystyle {\begin{aligned}{\frac {\text{mean abs. dev. from mean}}{\text{standard deviation}}}&={\frac {\operatorname {E} [|X-E[X]|]}{\sqrt {\operatorname {var} (X)}}}\\&\approx {\sqrt {\frac {2}{\pi }}}\left(1+{\frac {7}{12(\alpha +\beta )}}-{\frac {1}{12\alpha }}-{\frac {1}{12\beta }}\right),{\text{ if }}\alpha ,\beta >1.\end{aligned}}}

At the limit α → ∞, β → ∞, the ratio of the mean absolute deviation to the standard deviation (for the beta distribution) becomes equal to the ratio of the same measures for the normal distribution: ${\displaystyle {\sqrt {\frac {2}{\pi }}}}$. For α = β = 1 this ratio equals ${\displaystyle {\frac {\sqrt {3}}{2}}}$, so that from α = β = 1 to α, β → ∞ the ratio decreases by 8.5%. For α = β = 0 the standard deviation is exactly equal to the mean absolute deviation around the mean. Therefore, this ratio decreases by 15% from α = β = 0 to α = β = 1, and by 25% from α = β = 0 to α, β → ∞. However, for skewed beta distributions such that α → 0 or β → 0, the ratio of the standard deviation to the mean absolute deviation approaches infinity (although each of them, individually, approaches zero) because the mean absolute deviation approaches zero faster than the standard deviation.

Using the parametrization in terms of mean μ and sample size ν = α + β > 0:

α = μν, β = (1−μ)ν

one can express the mean absolute deviation around the mean in terms of the mean μ and the sample size ν as follows:

${\displaystyle \operatorname {E} [|X-E[X]|]={\frac {2\mu ^{\mu \nu }(1-\mu )^{(1-\mu )\nu }}{\nu \mathrm {B} (\mu \nu ,(1-\mu )\nu )}}}$

For a symmetric distribution, the mean is at the middle of the distribution, μ = 1/2, and therefore:

{\displaystyle {\begin{aligned}\operatorname {E} [|X-E[X]|]={\frac {2^{1-\nu }}{\nu \mathrm {B} ({\tfrac {\nu }{2}},{\tfrac {\nu }{2}})}}&={\frac {2^{1-\nu }\Gamma (\nu )}{\nu (\Gamma ({\tfrac {\nu }{2}}))^{2}}}\\\lim _{\nu \to 0}\left(\lim _{\mu \to {\frac {1}{2}}}\operatorname {E} [|X-E[X]|]\right)&={\tfrac {1}{2}}\\\lim _{\nu \to \infty }\left(\lim _{\mu \to {\frac {1}{2}}}\operatorname {E} [|X-E[X]|]\right)&=0\end{aligned}}}

Also, the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:

{\displaystyle {\begin{aligned}\lim _{\beta \to 0}\operatorname {E} [|X-E[X]|]&=\lim _{\alpha \to 0}\operatorname {E} [|X-E[X]|]=0\\\lim _{\beta \to \infty }\operatorname {E} [|X-E[X]|]&=\lim _{\alpha \to \infty }\operatorname {E} [|X-E[X]|]=0\\\lim _{\mu \to 0}\operatorname {E} [|X-E[X]|]&=\lim _{\mu \to 1}\operatorname {E} [|X-E[X]|]=0\\\lim _{\nu \to 0}\operatorname {E} [|X-E[X]|]&=2\mu (1-\mu )\\\lim _{\nu \to \infty }\operatorname {E} [|X-E[X]|]&=0\end{aligned}}}

#### Mean absolute difference

The mean absolute difference for the Beta distribution is:

${\displaystyle \mathrm {MD} =\int _{0}^{1}\int _{0}^{1}f(x;\alpha ,\beta )\,f(y;\alpha ,\beta )\,|x-y|\,dx\,dy=\left({\frac {4}{\alpha +\beta }}\right){\frac {B(\alpha +\beta ,\alpha +\beta )}{B(\alpha ,\alpha )B(\beta ,\beta )}}}$

The Gini coefficient for the Beta distribution is half of the relative mean absolute difference:

${\displaystyle \mathrm {G} =\left({\frac {2}{\alpha }}\right){\frac {B(\alpha +\beta ,\alpha +\beta )}{B(\alpha ,\alpha )B(\beta ,\beta )}}}$
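
Both quantities share the factor B(α+β, α+β)/(B(α,α)B(β,β)), which is conveniently evaluated in log space (a sketch; the helper name and the example values are assumptions):

```python
import numpy as np
from scipy.special import betaln

def beta_mad_gini(a, b):
    """Mean absolute difference and Gini coefficient of Beta(a, b), in log space."""
    log_ratio = betaln(a + b, a + b) - betaln(a, a) - betaln(b, b)
    md = (4 / (a + b)) * np.exp(log_ratio)     # mean absolute difference
    gini = (2 / a) * np.exp(log_ratio)         # half the relative mean abs. difference
    return md, gini

print(beta_mad_gini(2.0, 5.0))                 # arbitrary example values
```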

### Skewness

Skewness for Beta Distribution as a function of variance and mean

The skewness (the third moment centered on the mean, normalized by the 3/2 power of the variance) of the beta distribution is[1]

${\displaystyle \gamma _{1}={\frac {\operatorname {E} [(X-\mu )^{3}]}{(\operatorname {var} (X))^{3/2}}}={\frac {2(\beta -\alpha ){\sqrt {\alpha +\beta +1}}}{(\alpha +\beta +2){\sqrt {\alpha \beta }}}}.}$

Letting α = β in the above expression one obtains γ1 = 0, showing once again that for α = β the distribution is symmetric and hence the skewness is zero. The skewness is positive (right-tailed) for α < β and negative (left-tailed) for α > β.

Using the parametrization in terms of mean μ and sample size ν = α + β:

{\displaystyle {\begin{aligned}\alpha &{}=\mu \nu ,{\text{ where }}\nu =(\alpha +\beta )>0\\\beta &{}=(1-\mu )\nu ,{\text{ where }}\nu =(\alpha +\beta )>0.\end{aligned}}}

one can express the skewness in terms of the mean μ and the sample size ν as follows:

${\displaystyle \gamma _{1}={\frac {\operatorname {E} [(X-\mu )^{3}]}{(\operatorname {var} (X))^{3/2}}}={\frac {2(1-2\mu ){\sqrt {1+\nu }}}{(2+\nu ){\sqrt {\mu (1-\mu )}}}}.}$

The skewness can also be expressed just in terms of the variance var and the mean μ as follows:

${\displaystyle \gamma _{1}={\frac {\operatorname {E} [(X-\mu )^{3}]}{(\operatorname {var} (X))^{3/2}}}={\frac {2(1-2\mu ){\sqrt {\text{var}}}}{\mu (1-\mu )+\operatorname {var} }}{\text{ if }}\operatorname {var} <\mu (1-\mu )}$

The accompanying plot of skewness as a function of variance and mean shows that maximum variance (1/4) is coupled with zero skewness and the symmetry condition (μ = 1/2), and that maximum skewness (positive or negative infinity) occurs when the mean is located at one end or the other, so that the "mass" of the probability distribution is concentrated at the ends (minimum variance).

The following expression for the square of the skewness, in terms of the sample size ν = α + β and the variance var, is useful for the method of moments estimation of four parameters:

${\displaystyle (\gamma _{1})^{2}={\frac {(\operatorname {E} [(X-\mu )^{3}])^{2}}{(\operatorname {var} (X))^{3}}}={\frac {4}{(2+\nu )^{2}}}{\bigg (}{\frac {1}{\text{var}}}-4(1+\nu ){\bigg )}}$

This expression correctly gives a skewness of zero for α = β, since in that case (see § Variance): ${\displaystyle \operatorname {var} ={\frac {1}{4(1+\nu )}}}$.
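
A quick consistency check of the closed-form skewness against SciPy's moment routine (the example values are arbitrary):

```python
import numpy as np
from scipy.stats import beta as beta_dist

a, b = 2.0, 5.0                                 # arbitrary shape parameters
closed = 2 * (b - a) * np.sqrt(a + b + 1) / ((a + b + 2) * np.sqrt(a * b))
mean, var, skew, kurt = beta_dist.stats(a, b, moments='mvsk')
print(closed, float(skew))                      # the two values agree
```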

For the symmetric case (α = β), skewness = 0 over the whole range, and the following limits apply:

${\displaystyle \lim _{\alpha =\beta \to 0}\gamma _{1}=\lim _{\alpha =\beta \to \infty }\gamma _{1}=\lim _{\nu \to 0}\gamma _{1}=\lim _{\nu \to \infty }\gamma _{1}=\lim _{\mu \to {\frac {1}{2}}}\gamma _{1}=0}$

For the asymmetric cases (α ≠ β) the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to 0}\gamma _{1}=\lim _{\mu \to 0}\gamma _{1}=\infty \\&\lim _{\beta \to 0}\gamma _{1}=\lim _{\mu \to 1}\gamma _{1}=-\infty \\&\lim _{\alpha \to \infty }\gamma _{1}=-{\frac {2}{\sqrt {\beta }}},\quad \lim _{\beta \to 0}(\lim _{\alpha \to \infty }\gamma _{1})=-\infty ,\quad \lim _{\beta \to \infty }(\lim _{\alpha \to \infty }\gamma _{1})=0\\&\lim _{\beta \to \infty }\gamma _{1}={\frac {2}{\sqrt {\alpha }}},\quad \lim _{\alpha \to 0}(\lim _{\beta \to \infty }\gamma _{1})=\infty ,\quad \lim _{\alpha \to \infty }(\lim _{\beta \to \infty }\gamma _{1})=0\\&\lim _{\nu \to 0}\gamma _{1}={\frac {1-2\mu }{\sqrt {\mu (1-\mu )}}},\quad \lim _{\mu \to 0}(\lim _{\nu \to 0}\gamma _{1})=\infty ,\quad \lim _{\mu \to 1}(\lim _{\nu \to 0}\gamma _{1})=-\infty \end{aligned}}}

### Kurtosis

Excess Kurtosis for Beta Distribution as a function of variance and mean

The beta distribution has been applied in acoustic analysis to assess damage to gears, as the kurtosis of the beta distribution has been reported to be a good indicator of the condition of a gear.[17] Kurtosis has also been used to distinguish the seismic signal generated by a person's footsteps from other signals. As persons or other targets moving on the ground generate continuous signals in the form of seismic waves, one can separate different targets based on the seismic waves they generate. Kurtosis is sensitive to impulsive signals, so it is much more sensitive to the signal generated by human footsteps than to other signals generated by vehicles, winds, noise, etc.[18] Unfortunately, the notation for kurtosis has not been standardized. Kenney and Keeping[19] use the symbol γ2 for the excess kurtosis, but Abramowitz and Stegun[20] use different terminology. To prevent confusion[21] between kurtosis (the fourth moment centered on the mean, normalized by the square of the variance) and excess kurtosis, when using symbols, they will be spelled out as follows:[6][7]

{\displaystyle {\begin{aligned}{\text{excess kurtosis}}&={\text{kurtosis}}-3\\&={\frac {\operatorname {E} [(X-\mu )^{4}]}{(\operatorname {var} (X))^{2}}}-3\\&={\frac {6[\alpha ^{3}-\alpha ^{2}(2\beta -1)+\beta ^{2}(\beta +1)-2\alpha \beta (\beta +2)]}{\alpha \beta (\alpha +\beta +2)(\alpha +\beta +3)}}\\&={\frac {6[(\alpha -\beta )^{2}(\alpha +\beta +1)-\alpha \beta (\alpha +\beta +2)]}{\alpha \beta (\alpha +\beta +2)(\alpha +\beta +3)}}.\end{aligned}}}

Letting α = β in the above expression one obtains

${\displaystyle {\text{excess kurtosis}}=-{\frac {6}{3+2\alpha }}{\text{ if }}\alpha =\beta }$.

Therefore, for symmetric beta distributions, the excess kurtosis is negative, increasing from a minimum value of −2 at the limit as {α = β} → 0, and approaching a maximum value of zero as {α = β} → ∞. The value of −2 is the minimum value of excess kurtosis that any distribution (not just beta distributions, but any distribution of any possible kind) can ever achieve. This minimum value is reached when all the probability density is entirely concentrated at each end x = 0 and x = 1, with nothing in between: a 2-point Bernoulli distribution with equal probability 1/2 at each end (a coin toss: see section below "Kurtosis bounded by the square of the skewness" for further discussion). The description of kurtosis as a measure of the "potential outliers" (or "potential rare, extreme values") of the probability distribution is correct for all distributions including the beta distribution. The more that rare, extreme values can occur in the beta distribution, the higher its kurtosis; otherwise, the kurtosis is lower. For α ≠ β, skewed beta distributions, the excess kurtosis can reach unlimited positive values (particularly for α → 0 for finite β, or for β → 0 for finite α) because the side away from the mode will produce occasional extreme values. Minimum kurtosis takes place when the mass density is concentrated equally at each end (and therefore the mean is at the center), and there is no probability mass density in between the ends.
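
For instance, the closed form reproduces the excess kurtosis −6/(3 + 2α) = −1.5 of the arcsine case α = β = 1/2 (a minimal sketch; the helper name is an assumption):

```python
from scipy.stats import beta as beta_dist

def beta_excess_kurtosis(a, b):
    """Closed-form excess kurtosis of Beta(a, b)."""
    num = 6 * ((a - b)**2 * (a + b + 1) - a * b * (a + b + 2))
    return num / (a * b * (a + b + 2) * (a + b + 3))

a, b = 0.5, 0.5                                     # arcsine distribution
print(beta_excess_kurtosis(a, b))                   # -1.5, i.e. -6/(3 + 2*0.5)
print(float(beta_dist.stats(a, b, moments='k')))    # same value from SciPy
```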

Using the parametrization in terms of mean μ and sample size ν = α + β:

{\displaystyle {\begin{aligned}\alpha &{}=\mu \nu ,{\text{ where }}\nu =(\alpha +\beta )>0\\\beta &{}=(1-\mu )\nu ,{\text{ where }}\nu =(\alpha +\beta )>0.\end{aligned}}}

one can express the excess kurtosis in terms of the mean μ and the sample size ν as follows:

${\displaystyle {\text{excess kurtosis}}={\frac {6}{3+\nu }}{\bigg (}{\frac {(1-2\mu )^{2}(1+\nu )}{\mu (1-\mu )(2+\nu )}}-1{\bigg )}}$

The excess kurtosis can also be expressed in terms of just the following two parameters: the variance var, and the sample size ν as follows:

${\displaystyle {\text{excess kurtosis}}={\frac {6}{(3+\nu )(2+\nu )}}\left({\frac {1}{\text{var}}}-6-5\nu \right){\text{ if }}{\text{var}}<\mu (1-\mu )}$

and, in terms of the variance var and the mean μ as follows:

${\displaystyle {\text{excess kurtosis}}={\frac {6\,{\text{var}}\,(1-{\text{var}}-5\mu (1-\mu ))}{({\text{var}}+\mu (1-\mu ))(2\,{\text{var}}+\mu (1-\mu ))}}{\text{ if }}{\text{var}}<\mu (1-\mu )}$

The plot of excess kurtosis as a function of the variance and the mean shows that the minimum value of the excess kurtosis (−2, which is the minimum possible value for excess kurtosis for any distribution) is intimately coupled with the maximum value of variance (1/4) and the symmetry condition: the mean occurring at the midpoint (μ = 1/2). This occurs for the symmetric case of α = β = 0, with zero skewness. At the limit, this is the 2 point Bernoulli distribution with equal probability 1/2 at each Dirac delta function end x = 0 and x = 1 and zero probability everywhere else. (A coin toss: one face of the coin being x = 0 and the other face being x = 1.) Variance is maximum because the distribution is bimodal with nothing in between the two modes (spikes) at each end. Excess kurtosis is minimum: the probability density "mass" is zero at the mean and it is concentrated at the two peaks at each end. Excess kurtosis reaches the minimum possible value (for any distribution) when the probability density function has two spikes at each end: it is bi-"peaky" with nothing in between them.

On the other hand, the plot shows that for extreme skewed cases, where the mean is located near one or the other end (μ = 0 or μ = 1), the variance is close to zero, and the excess kurtosis rapidly approaches infinity when the mean of the distribution approaches either end.

Alternatively, the excess kurtosis can also be expressed in terms of just the following two parameters: the square of the skewness, and the sample size ν as follows:

${\displaystyle {\text{excess kurtosis}}={\frac {6}{3+\nu }}{\bigg (}{\frac {(2+\nu )}{4}}({\text{skewness}})^{2}-1{\bigg )}{\text{ if }}({\text{skewness}})^{2}-2<{\text{excess kurtosis}}<{\frac {3}{2}}({\text{skewness}})^{2}}$

From this last expression, one can obtain the same limits published practically a century ago by Karl Pearson in his paper,[22] for the beta distribution (see section below titled "Kurtosis bounded by the square of the skewness"). Setting α + β = ν = 0 in the above expression, one obtains Pearson's lower boundary (values for the skewness and excess kurtosis below the boundary (excess kurtosis + 2 − skewness² = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region"). The limit of α + β = ν → ∞ determines Pearson's upper boundary.

{\displaystyle {\begin{aligned}&\lim _{\nu \to 0}{\text{excess kurtosis}}=({\text{skewness}})^{2}-2\\&\lim _{\nu \to \infty }{\text{excess kurtosis}}={\tfrac {3}{2}}({\text{skewness}})^{2}\end{aligned}}}

therefore:

${\displaystyle ({\text{skewness}})^{2}-2<{\text{excess kurtosis}}<{\tfrac {3}{2}}({\text{skewness}})^{2}}$

Values of ν = α + β such that ν ranges from zero to infinity, 0 < ν < ∞, span the whole region of the beta distribution in the plane of excess kurtosis versus squared skewness.

For the symmetric case (α = β), the following limits apply:

{\displaystyle {\begin{aligned}&\lim _{\alpha =\beta \to 0}{\text{excess kurtosis}}=-2\\&\lim _{\alpha =\beta \to \infty }{\text{excess kurtosis}}=0\\&\lim _{\mu \to {\frac {1}{2}}}{\text{excess kurtosis}}=-{\frac {6}{3+\nu }}\end{aligned}}}

For the unsymmetric cases (α ≠ β) the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:

{\displaystyle {\begin{aligned}&\lim _{\alpha \to 0}{\text{excess kurtosis}}=\lim _{\beta \to 0}{\text{excess kurtosis}}=\lim _{\mu \to 0}{\text{excess kurtosis}}=\lim _{\mu \to 1}{\text{excess kurtosis}}=\infty \\&\lim _{\alpha \to \infty }{\text{excess kurtosis}}={\frac {6}{\beta }},\quad \lim _{\beta \to 0}(\lim _{\alpha \to \infty }{\text{excess kurtosis}})=\infty ,\quad \lim _{\beta \to \infty }(\lim _{\alpha \to \infty }{\text{excess kurtosis}})=0\\&\lim _{\beta \to \infty }{\text{excess kurtosis}}={\frac {6}{\alpha }},\quad \lim _{\alpha \to 0}(\lim _{\beta \to \infty }{\text{excess kurtosis}})=\infty ,\quad \lim _{\alpha \to \infty }(\lim _{\beta \to \infty }{\text{excess kurtosis}})=0\\&\lim _{\nu \to 0}{\text{excess kurtosis}}=-6+{\frac {1}{\mu (1-\mu )}},\quad \lim _{\mu \to 0}(\lim _{\nu \to 0}{\text{excess kurtosis}})=\infty ,\quad \lim _{\mu \to 1}(\lim _{\nu \to 0}{\text{excess kurtosis}})=\infty \end{aligned}}}

### Characteristic function

Re(characteristic function) symmetric case α = β ranging from 25 to 0
Re(characteristic function) symmetric case α = β ranging from 0 to 25
Re(characteristic function) β = α + 1/2; α ranging from 25 to 0
Re(characteristic function) α = β + 1/2; β ranging from 25 to 0
Re(characteristic function) α = β + 1/2; β ranging from 0 to 25

The characteristic function is the Fourier transform of the probability density function. The characteristic function of the beta distribution is Kummer's confluent hypergeometric function (of the first kind):[1][20][23]

{\displaystyle {\begin{aligned}\varphi _{X}(\alpha ;\beta ;t)&=\operatorname {E} \left[e^{itX}\right]\\&=\int _{0}^{1}e^{itx}f(x;\alpha ,\beta )\,dx\\&={}_{1}F_{1}(\alpha ;\alpha +\beta ;it)\\&=\sum _{n=0}^{\infty }{\frac {\alpha ^{(n)}(it)^{n}}{(\alpha +\beta )^{(n)}n!}}\\&=1+\sum _{k=1}^{\infty }\left(\prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}\right){\frac {(it)^{k}}{k!}}\end{aligned}}}

where

${\displaystyle x^{(n)}=x(x+1)(x+2)\cdots (x+n-1)}$

is the rising factorial, also called the "Pochhammer symbol". The value of the characteristic function for t = 0, is one:

${\displaystyle \varphi _{X}(\alpha ;\beta ;0)={}_{1}F_{1}(\alpha ;\alpha +\beta ;0)=1}$.

Also, the real and imaginary parts of the characteristic function enjoy the following symmetries with respect to the origin of variable t:

${\displaystyle {\textrm {Re}}\left[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)\right]={\textrm {Re}}\left[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)\right]}$
${\displaystyle {\textrm {Im}}\left[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)\right]=-{\textrm {Im}}\left[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)\right]}$
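
As an illustrative sketch (the helper names cf_series and cf_quad are ad hoc, not a standard API), the series above can be summed directly and compared against numerical integration of E[e^{itX}]:

```python
# Compare the truncated hypergeometric series for the beta characteristic
# function against direct numerical integration of E[exp(itX)].
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta

def cf_series(a, b, t, terms=200):
    """1 + sum_k [prod_{r<k} (a+r)/(a+b+r)] (it)^k / k!, truncated."""
    total, coeff = 1.0 + 0j, 1.0 + 0j
    for k in range(1, terms):
        coeff *= (a + k - 1) / (a + b + k - 1) * (1j * t) / k
        total += coeff
    return total

def cf_quad(a, b, t):
    re = quad(lambda x: np.cos(t * x) * beta.pdf(x, a, b), 0, 1)[0]
    im = quad(lambda x: np.sin(t * x) * beta.pdf(x, a, b), 0, 1)[0]
    return re + 1j * im

a, b, t = 2.0, 5.0, 3.0
print(cf_series(a, b, t))  # the two results should agree closely
print(cf_quad(a, b, t))
```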

The symmetric case α = β simplifies the characteristic function of the beta distribution to a Bessel function, since in the special case α + β = 2α the confluent hypergeometric function (of the first kind) reduces to the modified Bessel function of the first kind ${\displaystyle I_{\alpha -{\frac {1}{2}}}}$ using Kummer's second transformation as follows:

{\displaystyle {\begin{aligned}{}_{1}F_{1}(\alpha ;2\alpha ;it)&=e^{\frac {it}{2}}{}_{0}F_{1}\left(;\alpha +{\tfrac {1}{2}};{\frac {(it)^{2}}{16}}\right)\\&=e^{\frac {it}{2}}\left({\frac {it}{4}}\right)^{{\frac {1}{2}}-\alpha }\Gamma \left(\alpha +{\tfrac {1}{2}}\right)I_{\alpha -{\frac {1}{2}}}\left({\frac {it}{2}}\right).\end{aligned}}}

Another example of the symmetric case, α = β = n/2 for beamforming applications, can be found in Figure 11 of [24].

In the accompanying plots, the real part (Re) of the characteristic function of the beta distribution is displayed for symmetric (α = β) and skewed (α ≠ β) cases.

### Other moments

#### Moment generating function

It also follows[1][6] that the moment generating function is

{\displaystyle {\begin{aligned}M_{X}(\alpha ;\beta ;t)&=\operatorname {E} \left[e^{tX}\right]\\[4pt]&=\int _{0}^{1}e^{tx}f(x;\alpha ,\beta )\,dx\\[4pt]&={}_{1}F_{1}(\alpha ;\alpha +\beta ;t)\\[4pt]&=\sum _{n=0}^{\infty }{\frac {\alpha ^{(n)}}{(\alpha +\beta )^{(n)}}}{\frac {t^{n}}{n!}}\\[4pt]&=1+\sum _{k=1}^{\infty }\left(\prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}\right){\frac {t^{k}}{k!}}\end{aligned}}}

In particular MX(α; β; 0) = 1.

#### Higher moments

Using the moment generating function, the k-th raw moment is given by[1] the factor

${\displaystyle \prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}}$

multiplying the (exponential series) term ${\displaystyle \left({\frac {t^{k}}{k!}}\right)}$ in the series of the moment generating function:

${\displaystyle \operatorname {E} [X^{k}]={\frac {\alpha ^{(k)}}{(\alpha +\beta )^{(k)}}}=\prod _{r=0}^{k-1}{\frac {\alpha +r}{\alpha +\beta +r}}}$

where x^{(k)} is the Pochhammer symbol representing the rising factorial. It can also be written in a recursive form as

${\displaystyle \operatorname {E} [X^{k}]={\frac {\alpha +k-1}{\alpha +\beta +k-1}}\operatorname {E} [X^{k-1}].}$

Since the moment generating function ${\displaystyle M_{X}(\alpha ;\beta ;\cdot )}$ has a positive radius of convergence, the beta distribution is determined by its moments.[25]
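
A minimal sketch of the rising-factorial product, checked against SciPy's generic moment method (raw_moment is a hypothetical helper name):

```python
# k-th raw moment of Beta(a, b) as prod_{r=0}^{k-1} (a + r)/(a + b + r),
# compared against scipy.stats.beta.moment.
from scipy.stats import beta

def raw_moment(a, b, k):
    m = 1.0
    for r in range(k):
        m *= (a + r) / (a + b + r)
    return m

a, b = 2.5, 3.5
for k in range(1, 6):
    print(k, raw_moment(a, b, k), beta.moment(k, a, b))
```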

#### Moments of transformed random variables

##### Moments of linearly transformed, product and inverted random variables

One can also show the following expectations for a transformed random variable,[1] where the random variable X is beta-distributed with parameters α and β: X ~ Beta(α, β). The expected value of the variable 1 − X is the mirror image of the expected value of X:

{\displaystyle {\begin{aligned}&\operatorname {E} [1-X]={\frac {\beta }{\alpha +\beta }}\\&\operatorname {E} [X(1-X)]=\operatorname {E} [(1-X)X]={\frac {\alpha \beta }{(\alpha +\beta )(\alpha +\beta +1)}}\end{aligned}}}

Due to the mirror symmetry of the probability density function of the beta distribution, the variances of the variables X and 1 − X are identical, and the covariance of X and (1 − X) is the negative of the variance:

${\displaystyle \operatorname {var} [(1-X)]=\operatorname {var} [X]=-\operatorname {cov} [X,(1-X)]={\frac {\alpha \beta }{(\alpha +\beta )^{2}(\alpha +\beta +1)}}}$

The expected values of the inverted variables (these are related to the harmonic means; see § Harmonic mean) are:

{\displaystyle {\begin{aligned}&\operatorname {E} \left[{\frac {1}{X}}\right]={\frac {\alpha +\beta -1}{\alpha -1}}{\text{ if }}\alpha >1\\&\operatorname {E} \left[{\frac {1}{1-X}}\right]={\frac {\alpha +\beta -1}{\beta -1}}{\text{ if }}\beta >1\end{aligned}}}

Dividing the variable X by its mirror image, X/(1 − X), results in the expected value of the "inverted beta distribution" or beta prime distribution (also known as the beta distribution of the second kind or Pearson's Type VI):[1]

{\displaystyle {\begin{aligned}&\operatorname {E} \left[{\frac {X}{1-X}}\right]={\frac {\alpha }{\beta -1}}{\text{ if }}\beta >1\\&\operatorname {E} \left[{\frac {1-X}{X}}\right]={\frac {\beta }{\alpha -1}}{\text{ if }}\alpha >1\end{aligned}}}

Variances of these transformed variables can be obtained by integration, as the expected values of the second moments centered on the corresponding variables:

${\displaystyle \operatorname {var} \left[{\frac {1}{X}}\right]=\operatorname {E} \left[\left({\frac {1}{X}}-\operatorname {E} \left[{\frac {1}{X}}\right]\right)^{2}\right]=\operatorname {var} \left[{\frac {1-X}{X}}\right]=\operatorname {E} \left[\left({\frac {1-X}{X}}-\operatorname {E} \left[{\frac {1-X}{X}}\right]\right)^{2}\right]={\frac {\beta (\alpha +\beta -1)}{(\alpha -2)(\alpha -1)^{2}}}{\text{ if }}\alpha >2}$

The variance of the variable X divided by its mirror image, X/(1 − X), is the variance of the "inverted beta distribution" or beta prime distribution (also known as the beta distribution of the second kind or Pearson's Type VI):[1]

${\displaystyle \operatorname {var} \left[{\frac {1}{1-X}}\right]=\operatorname {E} \left[\left({\frac {1}{1-X}}-\operatorname {E} \left[{\frac {1}{1-X}}\right]\right)^{2}\right]=\operatorname {var} \left[{\frac {X}{1-X}}\right]=\operatorname {E} \left[\left({\frac {X}{1-X}}-\operatorname {E} \left[{\frac {X}{1-X}}\right]\right)^{2}\right]={\frac {\alpha (\alpha +\beta -1)}{(\beta -2)(\beta -1)^{2}}}{\text{ if }}\beta >2}$

The covariances are:

${\displaystyle \operatorname {cov} \left[{\frac {1}{X}},{\frac {1}{1-X}}\right]=\operatorname {cov} \left[{\frac {1-X}{X}},{\frac {X}{1-X}}\right]=\operatorname {cov} \left[{\frac {1}{X}},{\frac {X}{1-X}}\right]=\operatorname {cov} \left[{\frac {1-X}{X}},{\frac {1}{1-X}}\right]={\frac {\alpha +\beta -1}{(\alpha -1)(\beta -1)}}{\text{ if }}\alpha ,\beta >1}$

These expectations and variances appear in the four-parameter Fisher information matrix (see § Fisher information).
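
These closed forms lend themselves to a quick Monte Carlo sanity check (the sample size, seed, and parameter values below are arbitrary choices for the sketch):

```python
# Monte Carlo check of the inverted-variable moments, e.g.
# E[1/X] = (a+b-1)/(a-1) for a > 1 and
# cov[1/X, 1/(1-X)] = (a+b-1)/((a-1)(b-1)) for a, b > 1.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)
a, b = 4.0, 5.0
x = beta.rvs(a, b, size=2_000_000, random_state=rng)

print(np.mean(1 / x),       (a + b - 1) / (a - 1))
print(np.mean(x / (1 - x)), a / (b - 1))
print(np.var(x / (1 - x)),  a * (a + b - 1) / ((b - 2) * (b - 1) ** 2))
print(np.cov(1 / x, 1 / (1 - x))[0, 1],
      (a + b - 1) / ((a - 1) * (b - 1)))
```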

##### Moments of logarithmically transformed random variables
Plot of logit(X) = ln(X/(1−X)) (vertical axis) vs. X in the domain of 0 to 1 (horizontal axis). Logit transformations are interesting, as they usually transform various shapes (including J-shapes) into (usually skewed) bell-shaped densities over the logit variable, and they may remove the end singularities over the original variable

Expected values for logarithmic transformations (useful for maximum likelihood estimates, see § Parameter estimation, Maximum likelihood) are discussed in this section. The following logarithmic linear transformations are related to the geometric means GX and G(1−X) (see § Geometric Mean):

{\displaystyle {\begin{aligned}\operatorname {E} [\ln(X)]&=\psi (\alpha )-\psi (\alpha +\beta )=-\operatorname {E} \left[\ln \left({\frac {1}{X}}\right)\right],\\\operatorname {E} [\ln(1-X)]&=\psi (\beta )-\psi (\alpha +\beta )=-\operatorname {E} \left[\ln \left({\frac {1}{1-X}}\right)\right].\end{aligned}}}

where the digamma function ψ(α) is defined as the logarithmic derivative of the gamma function:[20]

${\displaystyle \psi (\alpha )={\frac {d\ln \Gamma (\alpha )}{d\alpha }}}$

Logit transformations are interesting,[26] as they usually transform various shapes (including J-shapes) into (usually skewed) bell-shaped densities over the logit variable, and they may remove the end singularities over the original variable:

{\displaystyle {\begin{aligned}\operatorname {E} \left[\ln \left({\frac {X}{1-X}}\right)\right]&=\psi (\alpha )-\psi (\beta )=\operatorname {E} [\ln(X)]+\operatorname {E} \left[\ln \left({\frac {1}{1-X}}\right)\right],\\\operatorname {E} \left[\ln \left({\frac {1-X}{X}}\right)\right]&=\psi (\beta )-\psi (\alpha )=-\operatorname {E} \left[\ln \left({\frac {X}{1-X}}\right)\right].\end{aligned}}}

Johnson[27] considered the distribution of the logit-transformed variable ln(X/(1 − X)), including its moment generating function and approximations for large values of the shape parameters. This transformation extends the finite support [0, 1] of the original variable X to infinite support in both directions of the real line (−∞, +∞).

Higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two gamma variates and differentiating through the integral. They can be expressed in terms of higher order polygamma functions as follows:

{\displaystyle {\begin{aligned}\operatorname {E} \left[\ln ^{2}(X)\right]&=(\psi (\alpha )-\psi (\alpha +\beta ))^{2}+\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta ),\\\operatorname {E} \left[\ln ^{2}(1-X)\right]&=(\psi (\beta )-\psi (\alpha +\beta ))^{2}+\psi _{1}(\beta )-\psi _{1}(\alpha +\beta ),\\\operatorname {E} \left[\ln(X)\ln(1-X)\right]&=(\psi (\alpha )-\psi (\alpha +\beta ))(\psi (\beta )-\psi (\alpha +\beta ))-\psi _{1}(\alpha +\beta ).\end{aligned}}}

Therefore, the variance of the logarithmic variables and the covariance of ln(X) and ln(1−X) are:

{\displaystyle {\begin{aligned}\operatorname {cov} [\ln(X),\ln(1-X)]&=\operatorname {E} \left[\ln(X)\ln(1-X)\right]-\operatorname {E} [\ln(X)]\operatorname {E} [\ln(1-X)]=-\psi _{1}(\alpha +\beta )\\&\\\operatorname {var} [\ln X]&=\operatorname {E} [\ln ^{2}(X)]-(\operatorname {E} [\ln(X)])^{2}\\&=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta )\\&=\psi _{1}(\alpha )+\operatorname {cov} [\ln(X),\ln(1-X)]\\&\\\operatorname {var} [\ln(1-X)]&=\operatorname {E} [\ln ^{2}(1-X)]-(\operatorname {E} [\ln(1-X)])^{2}\\&=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta )\\&=\psi _{1}(\beta )+\operatorname {cov} [\ln(X),\ln(1-X)]\end{aligned}}}

where the trigamma function, denoted ψ1(α), is the second of the polygamma functions, and is defined as the derivative of the digamma function:

${\displaystyle \psi _{1}(\alpha )={\frac {d^{2}\ln \Gamma (\alpha )}{d\alpha ^{2}}}={\frac {d\psi (\alpha )}{d\alpha }}}$.

The variances and covariance of the logarithmically transformed variables X and (1−X) are different, in general, because the logarithmic transformation destroys the mirror-symmetry of the original variables X and (1−X), as the logarithm approaches negative infinity for the variable approaching zero.

These logarithmic variances and covariance are the elements of the Fisher information matrix for the beta distribution. They are also a measure of the curvature of the log likelihood function (see section on Maximum likelihood estimation).

The variances of the log inverse variables are identical to the variances of the log variables:

{\displaystyle {\begin{aligned}\operatorname {var} \left[\ln \left({\frac {1}{X}}\right)\right]&=\operatorname {var} [\ln(X)]=\psi _{1}(\alpha )-\psi _{1}(\alpha +\beta ),\\\operatorname {var} \left[\ln \left({\frac {1}{1-X}}\right)\right]&=\operatorname {var} [\ln(1-X)]=\psi _{1}(\beta )-\psi _{1}(\alpha +\beta ),\\\operatorname {cov} \left[\ln \left({\frac {1}{X}}\right),\ln \left({\frac {1}{1-X}}\right)\right]&=\operatorname {cov} [\ln(X),\ln(1-X)]=-\psi _{1}(\alpha +\beta ).\end{aligned}}}

It also follows that the variances of the logit transformed variables are:

${\displaystyle \operatorname {var} \left[\ln \left({\frac {X}{1-X}}\right)\right]=\operatorname {var} \left[\ln \left({\frac {1-X}{X}}\right)\right]=-\operatorname {cov} \left[\ln \left({\frac {X}{1-X}}\right),\ln \left({\frac {1-X}{X}}\right)\right]=\psi _{1}(\alpha )+\psi _{1}(\beta )}$
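
The digamma and trigamma identities above can likewise be checked by simulation with scipy.special (again, parameters, seed, and sample size are arbitrary):

```python
# Monte Carlo check of the log-moment identities:
# E[ln X] = psi(a) - psi(a+b), var[ln X] = psi1(a) - psi1(a+b),
# var[ln(X/(1-X))] = psi1(a) + psi1(b).
import numpy as np
from scipy.special import digamma, polygamma
from scipy.stats import beta

rng = np.random.default_rng(2)
a, b = 2.0, 3.0
x = beta.rvs(a, b, size=2_000_000, random_state=rng)
logit = np.log(x / (1 - x))

print(np.mean(np.log(x)), digamma(a) - digamma(a + b))
print(np.var(np.log(x)),  polygamma(1, a) - polygamma(1, a + b))
print(np.var(logit),      polygamma(1, a) + polygamma(1, b))
```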

### Quantities of information (entropy)

Given a beta-distributed random variable, X ~ Beta(α, β), the differential entropy of X (measured in nats)[28] is the expected value of the negative of the logarithm of the probability density function:

{\displaystyle {\begin{aligned}h(X)&=\operatorname {E} [-\ln(f(x;\alpha ,\beta ))]\\[4pt]&=\int _{0}^{1}-f(x;\alpha ,\beta )\ln(f(x;\alpha ,\beta ))\,dx\\[4pt]&=\ln(\mathrm {B} (\alpha ,\beta ))-(\alpha -1)\psi (\alpha )-(\beta -1)\psi (\beta )+(\alpha +\beta -2)\psi (\alpha +\beta )\end{aligned}}}

where f(x; α, β) is the probability density function of the beta distribution:

${\displaystyle f(x;\alpha ,\beta )={\frac {1}{\mathrm {B} (\alpha ,\beta )}}x^{\alpha -1}(1-x)^{\beta -1}}$
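
The closed-form entropy can be compared against SciPy's built-in differential entropy for the beta distribution; beta_entropy below is a hypothetical helper implementing the formula above:

```python
# Closed-form differential entropy of Beta(a, b), in nats, versus
# SciPy's built-in beta.entropy.
from scipy.special import betaln, digamma
from scipy.stats import beta

def beta_entropy(a, b):
    return (betaln(a, b) - (a - 1) * digamma(a) - (b - 1) * digamma(b)
            + (a + b - 2) * digamma(a + b))

for a, b in [(1.0, 1.0), (3.0, 3.0), (3.0, 0.5)]:
    print((a, b), beta_entropy(a, b), float(beta.entropy(a, b)))
```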

The digamma function ψ appears in the formula for the differential entropy as a consequence of Euler's integral formula for the harmonic numbers which follows from the integral:

${\displaystyle \int _{0}^{1}{\frac {1-x^{\alpha -1}}{1-x}}\,dx=\psi (\alpha )-\psi (1)}$

The differential entropy of the beta distribution is negative for all values of α and β greater than zero, except at α = β = 1 (for which values the beta distribution is the same as the uniform distribution), where the differential entropy reaches its maximum value of zero. It is to be expected that the maximum entropy should take place when the beta distribution becomes equal to the uniform distribution, since uncertainty is maximal when all possible events are equiprobable.

For α or β approaching zero, the differential entropy approaches its minimum value of negative infinity. For (either or both) α or β approaching zero, there is a maximum amount of order: all the probability density is concentrated at the ends, and there is zero probability density at points located between the ends. Similarly for (either or both) α or β approaching infinity, the differential entropy approaches its minimum value of negative infinity, and a maximum amount of order. If either α or β approaches infinity (and the other is finite) all the probability density is concentrated at an end, and the probability density is zero everywhere else. If both shape parameters are equal (the symmetric case), α = β, and they approach infinity simultaneously, the probability density becomes a spike (Dirac delta function) concentrated at the middle x = 1/2, and hence there is 100% probability at the middle x = 1/2 and zero probability everywhere else.

The (continuous case) differential entropy was introduced by Shannon in his original paper (where he named it the "entropy of a continuous distribution"), as the concluding part of the same paper where he defined the discrete entropy.[29] It is known since then that the differential entropy may differ from the infinitesimal limit of the discrete entropy by an infinite offset, therefore the differential entropy can be negative (as it is for the beta distribution). What really matters is the relative value of entropy.

Given two beta distributed random variables, X1 ~ Beta(α, β) and X2 ~ Beta(α′, β′), the cross entropy is (measured in nats)[30]

{\displaystyle {\begin{aligned}H(X_{1},X_{2})&=\int _{0}^{1}-f(x;\alpha ,\beta )\ln(f(x;\alpha ',\beta '))\,dx\\[4pt]&=\ln \left(\mathrm {B} (\alpha ',\beta ')\right)-(\alpha '-1)\psi (\alpha )-(\beta '-1)\psi (\beta )+(\alpha '+\beta '-2)\psi (\alpha +\beta ).\end{aligned}}}

The cross entropy has been used as an error metric to measure the distance between two hypotheses.[31][32] It attains its minimum value when the two distributions are identical. It is the information measure most closely related to the log maximum likelihood[30] (see § Parameter estimation, Maximum likelihood estimation).

The relative entropy, or Kullback–Leibler divergence DKL(X1 || X2), is a measure of the inefficiency of assuming that the distribution is X2 ~ Beta(α′, β′) when the distribution is really X1 ~ Beta(α, β). It is defined as follows (measured in nats).

{\displaystyle {\begin{aligned}D_{\mathrm {KL} }(X_{1}||X_{2})&=\int _{0}^{1}f(x;\alpha ,\beta )\ln \left({\frac {f(x;\alpha ,\beta )}{f(x;\alpha ',\beta ')}}\right)\,dx\\[4pt]&=\left(\int _{0}^{1}f(x;\alpha ,\beta )\ln(f(x;\alpha ,\beta ))\,dx\right)-\left(\int _{0}^{1}f(x;\alpha ,\beta )\ln(f(x;\alpha ',\beta '))\,dx\right)\\[4pt]&=-h(X_{1})+H(X_{1},X_{2})\\[4pt]&=\ln \left({\frac {\mathrm {B} (\alpha ',\beta ')}{\mathrm {B} (\alpha ,\beta )}}\right)+(\alpha -\alpha ')\psi (\alpha )+(\beta -\beta ')\psi (\beta )+(\alpha '-\alpha +\beta '-\beta )\psi (\alpha +\beta ).\end{aligned}}}

The relative entropy, or Kullback–Leibler divergence, is always non-negative. A few numerical examples follow:

• X1 ~ Beta(1, 1) and X2 ~ Beta(3, 3); DKL(X1 || X2) = 0.598803; DKL(X2 || X1) = 0.267864; h(X1) = 0; h(X2) = −0.267864
• X1 ~ Beta(3, 0.5) and X2 ~ Beta(0.5, 3); DKL(X1 || X2) = 7.21574; DKL(X2 || X1) = 7.21574; h(X1) = −1.10805; h(X2) = −1.10805.
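
The closed-form expression above reproduces these numerical examples; kl_beta below is a hypothetical helper implementing the last line of the derivation:

```python
# Closed-form Kullback-Leibler divergence D_KL(Beta(a,b) || Beta(a2,b2)),
# reproducing the numerical examples listed above.
from scipy.special import betaln, digamma

def kl_beta(a, b, a2, b2):
    return (betaln(a2, b2) - betaln(a, b)
            + (a - a2) * digamma(a)
            + (b - b2) * digamma(b)
            + (a2 - a + b2 - b) * digamma(a + b))

print(kl_beta(1, 1, 3, 3))      # 0.598803...
print(kl_beta(3, 3, 1, 1))      # 0.267864...
print(kl_beta(3, 0.5, 0.5, 3))  # 7.21574...
```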

The Kullback–Leibler divergence is not symmetric DKL(X1 || X2) ≠ DKL(X2 || X1) for the case in which the individual beta distributions Beta(1, 1) and Beta(3, 3) are symmetric, but have different entropies h(X1) ≠ h(X2). The value of the Kullback divergence depends on the direction traveled: whether going from a higher (differential) entropy to a lower (differential) entropy or the other way around. In the numerical example above, the Kullback divergence measures the inefficiency of assuming that the distribution is (bell-shaped) Beta(3, 3), rather than (uniform) Beta(1, 1). The "h" entropy of Beta(1, 1) is higher than the "h" entropy of Beta(3, 3) because the uniform distribution Beta(1, 1) has a maximum amount of disorder. The Kullback divergence is more than two times higher (0.598803 instead of 0.267864) when measured in the direction of decreasing entropy: the direction that assumes that the (uniform) Beta(1, 1) distribution is (bell-shaped) Beta(3, 3) rather than the other way around. In this restricted sense, the Kullback divergence is consistent with the second law of thermodynamics.

The Kullback–Leibler divergence is symmetric DKL(X1 || X2) = DKL(X2 || X1) for the skewed cases Beta(3, 0.5) and Beta(0.5, 3) that have equal differential entropy h(X1) = h(X2).

The symmetry condition:

${\displaystyle D_{\mathrm {KL} }(X_{1}||X_{2})=D_{\mathrm {KL} }(X_{2}||X_{1}),{\text{ if }}h(X_{1})=h(X_{2}),{\text{ for (skewed) }}\alpha \neq \beta }$

follows from the above definitions and the mirror symmetry f(x; α, β) = f(1−x; β, α) enjoyed by the beta distribution.

### Relationships between statistical measures

#### Mean, mode and median relationship

If 1 < α < β then mode ≤ median ≤ mean.[12] Expressing the mode (valid only for α, β > 1) and the mean in terms of α and β:

${\displaystyle {\frac {\alpha -1}{\alpha +\beta -2}}\leq {\text{median}}\leq {\frac {\alpha }{\alpha +\beta }}.}$

If 1 < β < α then the order of the inequalities is reversed. For α, β > 1 the absolute distance between the mean and the median is less than 5% of the distance between the maximum and minimum values of x. On the other hand, the absolute distance between the mean and the mode can reach 50% of the distance between the maximum and minimum values of x, in the (pathological) limit α → 1, β → 1, for which the beta distribution approaches the uniform distribution and the differential entropy approaches its maximum value, and hence maximum "disorder".

For example, for α = 1.0001 and β = 1.00000001:

• mode = 0.9999; PDF(mode) = 1.00010
• mean = 0.500025; PDF(mean) = 1.00003
• median = 0.500035; PDF(median) = 1.00003
• mean − mode = −0.499875
• mean − median = −9.65538 × 10−6

where PDF stands for the value of the probability density function.
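
The ordering mode ≤ median ≤ mean for 1 < α < β is straightforward to check numerically; in the sketch below the parameter choice is arbitrary, and the mode is computed from its closed form since SciPy does not expose it directly:

```python
# Check mode <= median <= mean for a case with 1 < alpha < beta.
from scipy.stats import beta

a, b = 2.0, 5.0
mode = (a - 1) / (a + b - 2)
print(mode, beta.median(a, b), beta.mean(a, b))
# 0.2 <= ~0.2644 <= 0.2857...
```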

#### Mean, geometric mean and harmonic mean relationship

Mean, Median, Geometric Mean and Harmonic Mean for Beta distribution with 0 < α = β < 5

It is known from the inequality of arithmetic and geometric means that the geometric mean is lower than the mean. Similarly, the harmonic mean is lower than the geometric mean. The accompanying plot shows that for α = β, both the mean and the median are exactly equal to 1/2, regardless of the value of α = β, and the mode is also equal to 1/2 for α = β > 1, however the geometric and harmonic means are lower than 1/2 and they only approach this value asymptotically as α = β → ∞.

#### Kurtosis bounded by the square of the skewness

Beta distribution α and β parameters vs. excess Kurtosis and squared Skewness

As remarked by Feller,[5] in the Pearson system the beta probability density appears as type I (any difference between the beta distribution and Pearson's type I distribution is only superficial and it makes no difference for the following discussion regarding the relationship between kurtosis and skewness). Karl Pearson showed, in Plate 1 of his paper[22] published in 1916, a graph with the kurtosis as the vertical axis (ordinate) and the square of the skewness as the horizontal axis (abscissa), in which a number of distributions were displayed.[33] The region occupied by the beta distribution is bounded by the following two lines in the (skewness², kurtosis) plane, or the (skewness², excess kurtosis) plane:

${\displaystyle ({\text{skewness}})^{2}+1<{\text{kurtosis}}<{\frac {3}{2}}({\text{skewness}})^{2}+3}$

or, equivalently,

${\displaystyle ({\text{skewness}})^{2}-2<{\text{excess kurtosis}}<{\frac {3}{2}}({\text{skewness}})^{2}}$

At a time when there were no powerful digital computers, Karl Pearson accurately computed further boundaries,[4][22] for example, separating the "U-shaped" from the "J-shaped" distributions. The lower boundary line (excess kurtosis + 2 − skewness² = 0) is produced by skewed "U-shaped" beta distributions with both values of the shape parameters α and β close to zero. The upper boundary line (excess kurtosis − (3/2) skewness² = 0) is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. Karl Pearson showed[22] that this upper boundary line (excess kurtosis − (3/2) skewness² = 0) is also the intersection with Pearson's type III distribution, which has unlimited support in one direction (towards positive infinity), and can be bell-shaped or J-shaped. His son, Egon Pearson, showed[33] that the region (in the kurtosis/squared-skewness plane) occupied by the beta distribution (equivalently, Pearson's distribution I) as it approaches this boundary (excess kurtosis − (3/2) skewness² = 0) is shared with the noncentral chi-squared distribution. Karl Pearson[34] (Pearson 1895, pp. 357, 360, 373–376) also showed that the gamma distribution is a Pearson type III distribution. Hence this boundary line for Pearson's type III distribution is known as the gamma line. (This can be shown from the fact that the excess kurtosis of the gamma distribution is 6/k and the square of the skewness is 4/k, hence (excess kurtosis − (3/2) skewness² = 0) is identically satisfied by the gamma distribution regardless of the value of the parameter "k".) Pearson later noted that the chi-squared distribution is a special case of Pearson's type III and also shares this boundary line (as is apparent from the fact that for the chi-squared distribution the excess kurtosis is 12/k and the square of the skewness is 8/k, hence (excess kurtosis − (3/2) skewness² = 0) is identically satisfied regardless of the value of the parameter "k"). This is to be expected, since the chi-squared distribution X ~ χ²(k) is a special case of the gamma distribution, with parametrization X ~ Γ(k/2, 1/2) where k is a positive integer that specifies the "number of degrees of freedom" of the chi-squared distribution.

An example of a beta distribution near the upper boundary (excess kurtosis − (3/2) skewness² = 0) is given by α = 0.1, β = 1000, for which the ratio (excess kurtosis)/(skewness²) = 1.49835 approaches the upper limit of 1.5 from below. An example of a beta distribution near the lower boundary (excess kurtosis + 2 − skewness² = 0) is given by α = 0.0001, β = 0.1, for which values the expression (excess kurtosis + 2)/(skewness²) = 1.01621 approaches the lower limit of 1 from above. In the infinitesimal limit for both α and β approaching zero symmetrically, the excess kurtosis reaches its minimum value of −2. This minimum value occurs at the point at which the lower boundary line intersects the vertical axis (ordinate). (However, in Pearson's original chart, the ordinate is kurtosis, instead of excess kurtosis, and it increases downwards rather than upwards.)
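
Both boundary examples can be reproduced from the closed-form skewness and excess kurtosis (here via SciPy's beta.stats):

```python
# Reproduce the boundary-approach examples above.
from scipy.stats import beta

for a, b in [(0.1, 1000.0), (0.0001, 0.1)]:
    _, _, skew, exkurt = beta.stats(a, b, moments='mvsk')
    skew, exkurt = float(skew), float(exkurt)
    print((a, b),
          exkurt / skew**2,        # -> 1.49835 for (0.1, 1000)
          (exkurt + 2) / skew**2)  # -> 1.01621 for (0.0001, 0.1)
```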

Values for the skewness and excess kurtosis below the lower boundary (excess kurtosis + 2 − skewness² = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region". The boundary for this "impossible region" is determined by (symmetric or skewed) bimodal "U"-shaped distributions for which the parameters α and β approach zero and hence all the probability density is concentrated at the ends: x = 0, 1 with practically nothing in between them. Since for α ≈ β ≈ 0 the probability density is concentrated at the two ends x = 0 and x = 1, this "impossible boundary" is determined by a Bernoulli distribution, where the only two possible outcomes occur with respective probabilities p and q = 1 − p. For cases approaching this limit boundary with symmetry α = β, skewness ≈ 0, excess kurtosis ≈ −2 (this is the lowest excess kurtosis possible for any distribution), and the probabilities are p ≈ q ≈ 1/2. For cases approaching this limit boundary with skewness, excess kurtosis ≈ −2 + skewness², and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities ${\displaystyle p={\tfrac {\beta }{\alpha +\beta }}}$ at the left end x = 0 and ${\displaystyle q=1-p={\tfrac {\alpha }{\alpha +\beta }}}$ at the right end x = 1.

### Symmetry

All statements are conditional on α, β > 0:

• Probability density function reflection symmetry
${\displaystyle f(x;\alpha ,\beta )=f(1-x;\beta ,\alpha )}$
• Cumulative distribution function reflection symmetry
${\displaystyle F(x;\alpha ,\beta )=I_{x}(\alpha ,\beta )=1-F(1-x;\beta ,\alpha )=1-I_{1-x}(\beta ,\alpha )}$
• Mode reflection symmetry
${\displaystyle \operatorname {mode} (\mathrm {B} (\alpha ,\beta ))=1-\operatorname {mode} (\mathrm {B} (\beta ,\alpha )),{\text{ if }}\mathrm {B} (\beta ,\alpha )\neq \mathrm {B} (1,1)}$
• Median reflection symmetry
${\displaystyle \operatorname {median} (\mathrm {B} (\alpha ,\beta ))=1-\operatorname {median} (\mathrm {B} (\beta ,\alpha ))}$
• Mean reflection symmetry
${\displaystyle \mu (\mathrm {B} (\alpha ,\beta ))=1-\mu (\mathrm {B} (\beta ,\alpha ))}$
• Geometric means: each is individually asymmetric; the following symmetry applies between the geometric mean based on X and the geometric mean based on its reflection (1 − X)
${\displaystyle G_{X}(\mathrm {B} (\alpha ,\beta ))=G_{(1-X)}(\mathrm {B} (\beta ,\alpha ))}$
• Harmonic means: each is individually asymmetric; the following symmetry applies between the harmonic mean based on X and the harmonic mean based on its reflection (1 − X)
${\displaystyle H_{X}(\mathrm {B} (\alpha ,\beta ))=H_{(1-X)}(\mathrm {B} (\beta ,\alpha )){\text{ if }}\alpha ,\beta >1}$.
• Variance symmetry
${\displaystyle \operatorname {var} (\mathrm {B} (\alpha ,\beta ))=\operatorname {var} (\mathrm {B} (\beta ,\alpha ))}$
• Geometric variances: each is individually asymmetric; the following symmetry applies between the log geometric variance based on X and the log geometric variance based on its reflection (1 − X)
${\displaystyle \ln(\operatorname {var_{GX}} (\mathrm {B} (\alpha ,\beta )))=\ln(\operatorname {var_{G(1-X)}} (\mathrm {B} (\beta ,\alpha )))}$
• Geometric covariance symmetry
${\displaystyle \ln \operatorname {cov_{GX,(1-X)}} (\mathrm {B} (\alpha ,\beta ))=\ln \operatorname {cov_{GX,(1-X)}} (\mathrm {B} (\beta ,\alpha ))}$
• Mean absolute deviation around the mean symmetry
${\displaystyle \operatorname {E} [|X-E[X]|](\mathrm {B} (\alpha ,\beta ))=\operatorname {E} [|X-E[X]|](\mathrm {B} (\beta ,\alpha ))}$
• Skewness skew-symmetry
${\displaystyle \operatorname {skewness} (\mathrm {B} (\alpha ,\beta ))=-\operatorname {skewness} (\mathrm {B} (\beta ,\alpha ))}$
• Excess kurtosis symmetry
${\displaystyle {\text{excess kurtosis}}(\mathrm {B} (\alpha ,\beta ))={\text{excess kurtosis}}(\mathrm {B} (\beta ,\alpha ))}$
• Characteristic function symmetry of Real part (with respect to the origin of variable "t")
${\displaystyle {\text{Re}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)]={\text{Re}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)]}$
• Characteristic function skew-symmetry of Imaginary part (with respect to the origin of variable "t")
${\displaystyle {\text{Im}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)]=-{\text{Im}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)]}$
• Characteristic function symmetry of Absolute value (with respect to the origin of variable "t")
${\displaystyle {\text{Abs}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;it)]={\text{Abs}}[{}_{1}F_{1}(\alpha ;\alpha +\beta ;-it)]}$
• Differential entropy symmetry
${\displaystyle h(\mathrm {B} (\alpha ,\beta ))=h(\mathrm {B} (\beta ,\alpha ))}$
• Relative entropy (Kullback–Leibler divergence) symmetry
${\displaystyle D_{\mathrm {KL} }(X_{1}||X_{2})=D_{\mathrm {KL} }(X_{2}||X_{1}),{\text{ if }}h(X_{1})=h(X_{2}){\text{, for (skewed) }}\alpha \neq \beta }$
• Fisher information matrix symmetry
${\displaystyle {\mathcal {I}}_{i,j}={\mathcal {I}}_{j,i}}$

### Geometry of the probability density function

#### Inflection points

Inflection point location versus α and β showing regions with one inflection point
Inflection point location versus α and β showing region with two inflection points

For certain values of the shape parameters α and β, the probability density function has inflection points, at which the curvature changes sign. The position of these inflection points can be useful as a measure of the dispersion or spread of the distribution.

Defining the following quantity:

${\displaystyle \kappa ={\frac {\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}{\alpha +\beta -2}}}$

Points of inflection occur,[1][3][6][7] depending on the value of the shape parameters α and β, as follows:

• (α > 2, β > 2) The distribution is bell-shaped (symmetric for α = β and skewed otherwise), with two inflection points, equidistant from the mode:
${\displaystyle x={\text{mode}}\pm \kappa ={\frac {\alpha -1\pm {\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$
• (α = 2, β > 2) The distribution is unimodal, positively skewed, right-tailed, with one inflection point, located to the right of the mode:
${\displaystyle x={\text{mode}}+\kappa ={\frac {2}{\beta }}}$
• (α > 2, β = 2) The distribution is unimodal, negatively skewed, left-tailed, with one inflection point, located to the left of the mode:
${\displaystyle x={\text{mode}}-\kappa =1-{\frac {2}{\alpha }}}$
• (1 < α < 2, β > 2, α + β > 2) The distribution is unimodal, positively skewed, right-tailed, with one inflection point, located to the right of the mode:
${\displaystyle x={\text{mode}}+\kappa ={\frac {\alpha -1+{\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$
• (0 < α < 1, 1 < β < 2) The distribution has a mode at the left end x = 0 and it is positively skewed, right-tailed. There is one inflection point, located to the right of the mode:
${\displaystyle x={\frac {\alpha -1+{\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$
• (α > 2, 1 < β < 2) The distribution is unimodal, negatively skewed, left-tailed, with one inflection point, located to the left of the mode:
${\displaystyle x={\text{mode}}-\kappa ={\frac {\alpha -1-{\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$
• (1 < α < 2, 0 < β < 1) The distribution has a mode at the right end x = 1 and it is negatively skewed, left-tailed. There is one inflection point, located to the left of the mode:
${\displaystyle x={\frac {\alpha -1-{\sqrt {\frac {(\alpha -1)(\beta -1)}{\alpha +\beta -3}}}}{\alpha +\beta -2}}}$

There are no inflection points in the remaining (symmetric and skewed) regions: U-shaped (α, β < 1), upside-down-U-shaped (1 < α < 2, 1 < β < 2), reverse-J-shaped (α < 1, β > 2), or J-shaped (α > 2, β < 1).
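
As a numerical sketch (the finite-difference step sizes are arbitrary), one can verify that the second derivative of the density changes sign at mode ± κ in the bell-shaped case α, β > 2:

```python
# Verify numerically that the pdf of Beta(a, b), a, b > 2, changes
# curvature at x = mode +/- kappa.
import numpy as np
from scipy.stats import beta

a, b = 3.0, 4.0
mode = (a - 1) / (a + b - 2)
kappa = np.sqrt((a - 1) * (b - 1) / (a + b - 3)) / (a + b - 2)

def f2(x, h=1e-5):  # central-difference second derivative of the pdf
    return (beta.pdf(x + h, a, b) - 2 * beta.pdf(x, a, b)
            + beta.pdf(x - h, a, b)) / h**2

for x0 in (mode - kappa, mode + kappa):
    print(x0, f2(x0 - 1e-3) * f2(x0 + 1e-3) < 0)  # sign change -> True
```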

The accompanying plots show the inflection point locations (shown vertically, ranging from 0 to 1) versus α and β (the horizontal axes ranging from 0 to 5). There are large cuts at the surfaces intersecting the lines α = 1, β = 1, α = 2, and β = 2 because at these values the beta distribution changes from two modes, to one mode, to no mode.

#### Shapes

PDF for symmetric beta distribution vs. x and α = β from 0 to 30
PDF for symmetric beta distribution vs. x and α = β from 0 to 2
PDF for skewed beta distribution vs. x and β = 2.5α from 0 to 9
PDF for skewed beta distribution vs. x and β = 5.5α from 0 to 9
PDF for skewed beta distribution vs. x and β = 8α from 0 to 10

The beta density function can take a wide variety of different shapes depending on the values of the two parameters α and β. The ability of the beta distribution to take this great diversity of shapes (using only two parameters) is partly responsible for its wide application in modeling actual measurements:

##### Symmetric (α = β)
• the density function is symmetric about 1/2 (blue & teal plots).
• median = mean = 1/2.
• skewness = 0.
• variance = 1/(4(2α + 1))
• α = β < 1
• U-shaped (blue plot).
• bimodal: left mode = 0, right mode =1, anti-mode = 1/2
• 1/12 < var(X) < 1/4[1]
• −2 < excess kurtosis(X) < −6/5
• α = β = 1/2 is the arcsine distribution
• var(X) = 1/8
• excess kurtosis(X) = −3/2
• CF = Rinc (t) [35]
• α = β → 0 is a 2-point Bernoulli distribution with equal probability 1/2 at each Dirac delta function end x = 0 and x = 1 and zero probability everywhere else. A coin toss: one face of the coin being x = 0 and the other face being x = 1.
• ${\displaystyle \lim _{\alpha =\beta \to 0}\operatorname {var} (X)={\tfrac {1}{4}}}$
• ${\displaystyle \lim _{\alpha =\beta \to 0}\operatorname {excess\ kurtosis} (X)=-2}$ (the lowest value any distribution can reach)
• The differential entropy approaches a minimum value of −∞
• α = β = 1 is the uniform [0, 1] distribution
• α = β > 1
• symmetric unimodal
• mode = 1/2.
• 0 < var(X) < 1/12[1]
• −6/5 < excess kurtosis(X) < 0
• α = β = 3/2 is a semi-elliptic [0, 1] distribution, see: Wigner semicircle distribution[36]
• var(X) = 1/16.
• excess kurtosis(X) = −1
• CF = 2 Jinc (t)
• α = β = 2 is the parabolic [0, 1] distribution
• var(X) = 1/20
• excess kurtosis(X) = −6/7
• CF = 3 Tinc (t) [37]
• α = β > 2 is bell-shaped, with inflection points located to either side of the mode
• 0 < var(X) < 1/20
• −6/7 < excess kurtosis(X) < 0
• α = β → ∞ is a 1-point Degenerate distribution with a Dirac delta function spike at the midpoint x = 1/2 with probability 1, and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the single point x = 1/2.
• ${\displaystyle \lim _{\alpha =\beta \to \infty }\operatorname {var} (X)=0}$
• ${\displaystyle \lim _{\alpha =\beta \to \infty }\operatorname {excess\ kurtosis} (X)=0}$
• The differential entropy approaches a minimum value of −∞
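
The variances and excess kurtoses catalogued above for the arcsine (α = β = 1/2), semi-elliptic (α = β = 3/2), and parabolic (α = β = 2) special cases can be confirmed with SciPy:

```python
# Check var(X) and excess kurtosis for the symmetric special cases.
from scipy.stats import beta

for a, expected in [(0.5, (1/8, -3/2)),   # arcsine
                    (1.5, (1/16, -1)),    # semi-elliptic
                    (2.0, (1/20, -6/7))]: # parabolic
    _, var, _, exkurt = beta.stats(a, a, moments='mvsk')
    print(a, float(var), float(exkurt), expected)
```
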
##### Skewed (α ≠ β)

The density function is skewed. An interchange of parameter values yields the mirror image (the reverse) of the initial curve. Some more specific cases:

• α < 1, β < 1
• U-shaped
• Positive skew for α < β, negative skew for α > β.
• bimodal: left mode = 0, right mode = 1, anti-mode = ${\displaystyle {\tfrac {\alpha -1}{\alpha +\beta -2}}}$
• 0 < median < 1.
• 0 < var(X) < 1/4
• α > 1, β > 1
• unimodal (magenta & cyan plots),
• Positive skew for α < β, negative skew for α > β.
• ${\displaystyle {\text{mode}}={\tfrac {\alpha -1}{\alpha +\beta -2}}}$
• 0 < median < 1
• 0 < var(X) < 1/12
• α < 1, β ≥ 1
• reverse J-shaped with a right tail,
• positively skewed,
• strictly decreasing, convex
• mode = 0
• 0 < median < 1/2.
• ${\displaystyle 0<\operatorname {var} (X)<{\tfrac {-11+5{\sqrt {5}}}{2}},}$ (maximum variance occurs for ${\displaystyle \alpha ={\tfrac {-1+{\sqrt {5}}}{2}},\beta =1}$, or α = Φ the golden ratio conjugate)
• α ≥ 1, β < 1
• J-shaped with a left tail,
• negatively skewed,
• strictly increasing, convex
• mode = 1
• 1/2 < median < 1
• ${\displaystyle 0<\operatorname {var} (X)<{\tfrac {-11+5{\sqrt {5}}}{2}},}$ (maximum variance occurs for ${\displaystyle \alpha =1,\beta ={\tfrac {-1+{\sqrt {5}}}{2}}}$, or β = Φ the golden ratio conjugate)
• α = 1, β > 1
• positively skewed,
• strictly decreasing (red plot),
• a reversed (mirror-image) power function [0,1] distribution
• mean = 1 / (β + 1)
• median = 1 − (1/2)^(1/β)
• mode = 0
• α = 1, 1 < β < 2
• concave
• ${\displaystyle 1-{\tfrac {1}{\sqrt {2}}}<{\text{median}}<{\tfrac {1}{2}}}$
• 1/18 < var(X) < 1/12.
• α = 1, β = 2
• a straight line with slope −2, the right-triangular distribution with right angle at the left end, at x = 0
• ${\displaystyle {\text{median}}=1-{\tfrac {1}{\sqrt {2}}}}$
• var(X) = 1/18
• α = 1, β > 2
• reverse J-shaped with a right tail,
• convex
• ${\displaystyle 0<{\text{median}}<1-{\tfrac {1}{\sqrt {2}}}}$
• 0 < var(X) < 1/18
• α > 1, β = 1
• negatively skewed,
• strictly increasing (green plot),
• the power function [0, 1] distribution[6]
• mean = α / (α + 1)
• median = (1/2)^(1/α)
• mode = 1
• 2 > α > 1, β = 1
• concave
• ${\displaystyle {\tfrac {1}{2}}<{\text{median}}<{\tfrac {1}{\sqrt {2}}}}$
• 1/18 < var(X) < 1/12
• α = 2, β = 1
• a straight line with slope +2, the right-triangular distribution with right angle at the right end, at x = 1
• ${\displaystyle {\text{median}}={\tfrac {1}{\sqrt {2}}}}$
• var(X) = 1/18
• α > 2, β = 1
• J-shaped with a left tail, convex
• ${\displaystyle {\tfrac {1}{\sqrt {2}}}<{\text{median}}<1}$
• 0 < var(X) < 1/18

## Related distributions

### Transformations

• If X ~ Beta(α, β) then 1 − X ~ Beta(β, α) mirror-image symmetry
• If X ~ Beta(α, β) then ${\displaystyle {\tfrac {X}{1-X}}\sim {\beta '}(\alpha ,\beta )}$. The beta prime distribution, also called "beta distribution of the second kind".
• If ${\displaystyle X\sim {\text{Beta}}(\alpha ,\beta )}$, then ${\displaystyle Y=\log {\frac {X}{1-X}}}$ has a generalized logistic distribution, with density ${\displaystyle {\frac {\sigma (y)^{\alpha }\sigma (-y)^{\beta }}{B(\alpha ,\beta )}}}$, where ${\displaystyle \sigma }$ is the logistic sigmoid.
• If X ~ Beta(α, β) then ${\displaystyle {\tfrac {1}{X}}-1\sim {\beta '}(\beta ,\alpha )}$.
• If X ~ Beta(n/2, m/2) then ${\displaystyle {\tfrac {mX}{n(1-X)}}\sim F(n,m)}$ (assuming n > 0 and m > 0), the Fisher–Snedecor F distribution.
• If ${\displaystyle X\sim \operatorname {Beta} \left(1+\lambda {\tfrac {m-\min }{\max -\min }},1+\lambda {\tfrac {\max -m}{\max -\min }}\right)}$ then min + X(max − min) ~ PERT(min, max, m, λ) where PERT denotes a PERT distribution used in PERT analysis, and m = most likely value.[38] Traditionally[39] λ = 4 in PERT analysis.
• If X ~ Beta(1, β) then X ~ Kumaraswamy distribution with parameters (1, β)
• If X ~ Beta(α, 1) then X ~ Kumaraswamy distribution with parameters (α, 1)
• If X ~ Beta(α, 1) then −ln(X) ~ Exponential(α)
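
The last transformation above, −ln(X) ~ Exponential(α) for X ~ Beta(α, 1), can be illustrated by simulation with a Kolmogorov–Smirnov test (sample size and seed are arbitrary):

```python
# KS test of -ln(X) against Exponential(rate=a), i.e. scale 1/a.
import numpy as np
from scipy.stats import beta, kstest

rng = np.random.default_rng(3)
a = 2.5
x = beta.rvs(a, 1.0, size=100_000, random_state=rng)
print(kstest(-np.log(x), 'expon', args=(0, 1 / a)))  # large p-value expected
```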

### Special and limiting cases

Example of eight realizations of a random walk in one dimension starting at 0: the probability for the time of the last visit to the origin is distributed as Beta(1/2, 1/2)
Beta(1/2, 1/2): The arcsine distribution probability density was proposed by Harold Jeffreys to represent uncertainty for a Bernoulli or a binomial distribution in Bayesian inference, and is now commonly referred to as Jeffreys prior: p−1/2(1 − p)−1/2. This distribution also appears in several random walk fundamental theorems
• Beta(1, 1) ~ U(0, 1).
• Beta(n, 1) ~ Maximum of n independent rvs. with U(0, 1), sometimes called a standard power function distribution with density n x^(n−1) on that interval.
• Beta(1, n) ~ Minimum of n independent rvs. with U(0, 1)
• If X ~ Beta(3/2, 3/2) and r > 0 then 2rX − r ~ Wigner semicircle distribution.
• Beta(1/2, 1/2) is equivalent to the arcsine distribution. This distribution is also Jeffreys prior probability for the Bernoulli and binomial distributions. The arcsine probability density is a distribution that appears in several random-walk fundamental theorems. In a fair coin toss random walk, the probability for the time of the last visit to the origin is distributed as an (U-shaped) arcsine distribution.[5][14] In a two-player fair-coin-toss game, a player is said to be in the lead if the random walk (that started at the origin) is above the origin. The most probable number of times that a given player will be in the lead, in a game of length 2N, is not N. On the contrary, N is the least likely number of times that the player will be in the lead. The most likely number of times in the lead is 0 or 2N (following the arcsine distribution).
• ${\displaystyle \lim _{n\to \infty }n\operatorname {Beta} (1,n)=\operatorname {Exponential} (1)}$ the exponential distribution.
• ${\displaystyle \lim _{n\to \infty }n\operatorname {Beta} (k,n)=\operatorname {Gamma} (k,1)}$ the gamma distribution.
• For large ${\displaystyle n}$, ${\displaystyle \operatorname {Beta} (\alpha n,\beta n)\to {\mathcal {N}}\left({\frac {\alpha }{\alpha +\beta }},{\frac {\alpha \beta }{(\alpha +\beta )^{3}}}{\frac {1}{n}}\right)}$ the normal distribution. More precisely, if ${\displaystyle X_{n}\sim \operatorname {Beta} (\alpha n,\beta n)}$ then ${\displaystyle {\sqrt {n}}\left(X_{n}-{\tfrac {\alpha }{\alpha +\beta }}\right)}$ converges in distribution to a normal distribution with mean 0 and variance ${\displaystyle {\tfrac {\alpha \beta }{(\alpha +\beta )^{3}}}}$ as n increases.

### Derived from other distributions

• The kth order statistic of a sample of size n from the uniform distribution is a beta random variable, U(k) ~ Beta(k, n+1−k).[40]
• If X ~ Gamma(α, θ) and Y ~ Gamma(β, θ) are independent, then ${\displaystyle {\tfrac {X}{X+Y}}\sim \operatorname {Beta} (\alpha ,\beta )\,}$.
• If ${\displaystyle X\sim \chi ^{2}(\alpha )\,}$ and ${\displaystyle Y\sim \chi ^{2}(\beta )\,}$ are independent, then ${\displaystyle {\tfrac {X}{X+Y}}\sim \operatorname {Beta} ({\tfrac {\alpha }{2}},{\tfrac {\beta }{2}})}$.
• If X ~ U(0, 1) and α > 0 then X1/α ~ Beta(α, 1). The power function distribution.
• If ${\displaystyle X\sim \operatorname {Binom} (k;n;p)}$, then ${\displaystyle {X/(n+1)}\sim \operatorname {Beta} (\alpha ,\beta )}$ for discrete values of n and k where ${\displaystyle \alpha =k+1}$ and ${\displaystyle \beta =n-k+1}$.[41]
• If X ~ Cauchy(0, 1) then ${\displaystyle {\tfrac {1}{1+X^{2}}}\sim \operatorname {Beta} \left({\tfrac {1}{2}},{\tfrac {1}{2}}\right)\,}$
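
The gamma-ratio construction in the first bullet can likewise be illustrated by simulation (arbitrary parameters; a large p-value indicates consistency with Beta(α, β)):

```python
# KS test of X/(X+Y) against Beta(a, b), with X ~ Gamma(a, theta)
# and Y ~ Gamma(b, theta) independent.
import numpy as np
from scipy.stats import gamma, kstest

rng = np.random.default_rng(4)
a, b, theta = 2.0, 3.0, 1.7
x = gamma.rvs(a, scale=theta, size=100_000, random_state=rng)
y = gamma.rvs(b, scale=theta, size=100_000, random_state=rng)
print(kstest(x / (x + y), 'beta', args=(a, b)))  # large p-value expected
```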

### Combination with other distributions

• If X ~ Beta(α, β) and Y ~ F(2β, 2α), then ${\displaystyle \Pr(X\leq {\tfrac {\alpha }{\alpha +\beta x}})=\Pr(Y\geq x)\,}$ for all x > 0.

## Statistical inference

### Parameter estimation

#### Method of moments

##### Two unknown parameters

Two unknown parameters (${\displaystyle ({\hat {\alpha }},{\hat {\beta }})}$ of a beta distribution supported on the [0, 1] interval) can be estimated, using the method of moments, with the first two moments (sample mean and sample variance) as follows. Let:

${\displaystyle {\text{sample mean(X)}}={\bar {x}}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}}$

be the sample mean estimate and

${\displaystyle {\text{sample variance(X)}}={\bar {v}}={\frac {1}{N-1}}\sum _{i=1}^{N}(X_{i}-{\bar {x}})^{2}}$

be the sample variance estimate. The method-of-moments estimates of the parameters are

${\displaystyle {\hat {\alpha }}={\bar {x}}\left({\frac {{\bar {x}}(1-{\bar {x}})}{\bar {v}}}-1\right),}$ if ${\displaystyle {\bar {v}}<{\bar {x}}(1-{\bar {x}}),}$
${\displaystyle {\hat {\beta }}=(1-{\bar {x}})\left({\frac {{\bar {x}}(1-{\bar {x}})}{\bar {v}}}-1\right),}$ if ${\displaystyle {\bar {v}}<{\bar {x}}(1-{\bar {x}}).}$
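
A minimal sketch of these estimators follows (beta_mom is a hypothetical helper; it checks the applicability condition above):

```python
# Method-of-moments estimation of (alpha, beta) for data on [0, 1].
import numpy as np
from scipy.stats import beta

def beta_mom(samples):
    xbar = np.mean(samples)
    v = np.var(samples, ddof=1)
    if not v < xbar * (1 - xbar):  # condition required by the estimators
        raise ValueError("method of moments not applicable")
    common = xbar * (1 - xbar) / v - 1
    return xbar * common, (1 - xbar) * common

rng = np.random.default_rng(5)
print(beta_mom(beta.rvs(2.0, 5.0, size=100_000, random_state=rng)))
# -> approximately (2.0, 5.0)
```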

When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace ${\displaystyle {\bar {x}}}$ with ${\displaystyle {\frac {{\bar {y}}-a}{c-a}},}$ and ${\displaystyle {\bar {v}}}$ with ${\displaystyle {\frac {\bar {v_{Y}}}{(c-a)^{2}}}}$ in the above pair of equations for the shape parameters (see the "Alternative parametrizations, four parameters" section below),[42] where:

${\displaystyle {\text{sample mean(Y)}}={\bar {y}}={\frac {1}{N}}\sum _{i=1}^{N}Y_{i}}$