In statistics, a likelihood function (often simply a likelihood) is a particular function of the parameter of a statistical model given data. Likelihood functions play a key role in statistical inference.

In informal contexts, "likelihood" is often used as a synonym for "probability". In statistics, the two terms have different meanings. Probability is used to describe the plausibility of some data, given a value for the parameter. Likelihood is used to describe the plausibility of a value for the parameter, given some data.

Likelihood is used with each of the main proposed foundations of statistics: frequentism, Bayesianism, likelihoodism, and the AIC-based paradigm.[1] The case for using likelihood in the foundation of statistics was first made by the founder of modern statistics, R. A. Fisher; a relevant quotation is below.

What has now appeared is that the mathematical concept of probability is ... inadequate to express our mental confidence or [lack of confidence] in making ... inferences, and that the mathematical quantity which usually appears to be appropriate for measuring our order of preference among different possible populations does not in fact obey the laws of probability. To distinguish it from probability, I have used the term "likelihood" to designate this quantity....

## Definition

The likelihood function is usually defined differently for discrete and continuous probability distributions. A general definition is also possible, as discussed below.

### Discrete probability distribution

Let ${\displaystyle X}$ be a discrete random variable with probability mass function ${\displaystyle p}$ depending on a parameter ${\displaystyle \theta }$. Then the function

${\displaystyle {\mathcal {L}}(\theta \mid x)=p_{\theta }(x)=P_{\theta }(X=x),}$

considered as a function of ${\displaystyle \theta }$, is the likelihood function (of ${\displaystyle \theta }$), given the outcome ${\displaystyle x}$ of the random variable ${\displaystyle X}$. Sometimes the probability of "the value ${\displaystyle x}$ of ${\displaystyle X}$ for the parameter value ${\displaystyle \theta }$ " is written as P(X = x | θ) or P(X = x; θ).

### Continuous probability distribution

Let ${\displaystyle X}$ be a random variable following an absolutely continuous probability distribution with density function ${\displaystyle f}$ depending on a parameter ${\displaystyle \theta }$. Then the function

${\displaystyle {\mathcal {L}}(\theta \mid x)=f_{\theta }(x),\,}$

considered as a function of ${\displaystyle \theta }$, is the likelihood function (of ${\displaystyle \theta }$, given the outcome ${\displaystyle x}$ of ${\displaystyle X}$). Sometimes the density function for "the value ${\displaystyle x}$ of ${\displaystyle X}$ for the parameter value ${\displaystyle \theta }$ " is written as ${\displaystyle f(x\mid \theta )}$; this should not be confused with ${\displaystyle {\mathcal {L}}(\theta \mid x)}$, which should not be considered a conditional probability density.

### In general

In measure-theoretic probability theory, the density function is defined as the Radon–Nikodym derivative of the probability distribution relative to a common dominating measure.[3] The likelihood function is that density interpreted as a function of the parameter (possibly a vector), rather than the possible outcomes.[4] This provides a likelihood function for any probability model with all distributions, whether discrete, absolutely continuous, a mixture or something else. (Likelihoods will be comparable, e.g. for parameter estimation, only if they are Radon–Nikodym derivatives with respect to the same dominating measure.)

The discussion above of likelihood with discrete probabilities is a special case of this using the counting measure, which makes the probability of any single outcome equal to the probability density for that outcome.

Note that given no event (no data), the probability and thus likelihood is 1;[citation needed] any non-trivial event will have lower likelihood.

## Example 1

Consider a simple statistical model of a coin flip: a single parameter ${\displaystyle p_{\text{H}}}$ that expresses the "fairness" of the coin. The parameter is the probability that a coin lands heads up ("H") when tossed. ${\displaystyle p_{\text{H}}}$ can take on any value within the range 0.0 to 1.0. For a perfectly fair coin, ${\displaystyle p_{\text{H}}}$ = 0.5.

Imagine flipping a fair coin twice, and observing the following data: two heads in two tosses ("HH"). Assuming that each successive coin flip is i.i.d., then the probability of observing HH is

${\displaystyle P({\text{HH}}\mid p_{\text{H}}=0.5)=0.5^{2}=0.25.}$

Hence, given the observed data HH, the likelihood that the model parameter ${\displaystyle p_{\text{H}}}$ equals 0.5 is 0.25. Mathematically, this is written as

${\displaystyle {\mathcal {L}}(p_{\text{H}}=0.5\mid {\text{HH}})=0.25.}$

This is not the same as saying that the probability that ${\displaystyle p_{\text{H}}=0.5}$, given the observation HH, is 0.25. (For that, we could apply Bayes' theorem, which implies that the posterior probability is proportional to the likelihood times the prior probability.)

Suppose that the coin is not fair, but instead has ${\displaystyle p_{\text{H}}=0.3}$. Then the probability of getting two heads is

${\displaystyle P({\text{HH}}\mid p_{\text{H}}=0.3)=0.3^{2}=0.09.}$

Hence

${\displaystyle {\mathcal {L}}(p_{\text{H}}=0.3\mid {\text{HH}})=0.09.}$

More generally, for each value of ${\displaystyle p_{\text{H}}}$, we can calculate the corresponding likelihood. The result of such calculations is displayed in Figure 1.

In Figure 1, the integral of the likelihood over the interval [0, 1] is 1/3. That illustrates an important aspect of likelihoods: likelihoods do not have to integrate (or sum) to 1, unlike probabilities.
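The coin-flip calculation above is small enough to check directly. The following sketch (in Python; the function name and grid are illustrative, not from the source) evaluates ${\displaystyle {\mathcal {L}}(p_{\text{H}}\mid {\text{HH}})=p_{\text{H}}^{2}}$ on a grid and confirms that its integral over [0, 1] is about 1/3:

```python
import numpy as np

# Likelihood of p_H given the data "HH": L(p_H | HH) = p_H ** 2.
def likelihood_hh(p_h):
    return p_h ** 2

p_grid = np.linspace(0.0, 1.0, 1001)   # candidate values of p_H
lik = likelihood_hh(p_grid)

print(likelihood_hh(0.5))   # 0.25
print(likelihood_hh(0.3))   # 0.09
# Since the grid spans an interval of length 1, the grid average
# approximates the integral of the likelihood over [0, 1].
print(lik.mean())           # ~1/3: the likelihood does not integrate to 1
```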

## Interpretations under different foundations

Among statisticians, there is no consensus about what the foundation of statistics should be. There are four main paradigms that have been proposed for the foundation: frequentism, Bayesianism, likelihoodism, and the AIC-based paradigm.[1] For each of the proposed foundations, the interpretation of likelihood is different. The four interpretations are described in the subsections below.

### Frequentist interpretation


### Bayesian interpretation

In Bayesian inference, one can speak about the likelihood of any proposition or random variable given another random variable: for example, the likelihood of a parameter value or of a statistical model (see marginal likelihood), given specified data or other evidence.[5][6][7][8] The likelihood function itself remains the same entity, but it gains two additional interpretations: (i) as a conditional density of the data given the parameter (since the parameter is then a random variable) and (ii) as a measure or amount of information brought by the data about the parameter value or even the model.[5][6][7][8][9] Due to the introduction of a probability structure on the parameter space or on the collection of models, it is possible that a parameter value or a statistical model has a large likelihood value for given data and yet has a low probability, or vice versa.[7][9] This is often the case in medical contexts.[10] Following Bayes' rule, the likelihood, when seen as a conditional density, can be multiplied by the prior probability density of the parameter and then normalized to give a posterior probability density.[5][6][7][8][9] More generally, the likelihood of an unknown quantity ${\displaystyle X}$ given another unknown quantity ${\displaystyle Y}$ is the probability of ${\displaystyle Y}$ given ${\displaystyle X}$.[5][6][7][8][9]

### Likelihoodist interpretation


### AIC-based interpretation


Under the AIC paradigm, likelihood is interpreted within the context of information theory.[11][12][13]

## Likelihood ratio

A likelihood ratio is the ratio of any two specified likelihoods: ${\displaystyle {\mathcal {L}}(\theta _{1}\mid x)/{\mathcal {L}}(\theta _{2}\mid x)}$. Likelihood ratios are frequently written as ${\displaystyle \Lambda }$, as follows.

${\displaystyle \Lambda (\theta _{1}:\theta _{2}\mid x)={\frac {{\mathcal {L}}(\theta _{1}\mid x)}{{\mathcal {L}}(\theta _{2}\mid x)}}}$

The likelihood ratio of two models, given the same event, may be contrasted with the odds of two events, given the same model. In terms of a parametrized probability mass function ${\displaystyle p_{\theta }(x)}$, the likelihood ratio of two values of the parameter ${\displaystyle \theta _{1}}$ and ${\displaystyle \theta _{2}}$, given an outcome ${\displaystyle x}$, is:

${\displaystyle \Lambda (\theta _{1}:\theta _{2}\mid x)=p_{\theta _{1}}(x):p_{\theta _{2}}(x),}$

while the odds of two outcomes, ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$, given a value of the parameter ${\displaystyle \theta }$, is:

${\displaystyle O(x_{1}:x_{2}\mid \theta )=p_{\theta }(x_{1}):p_{\theta }(x_{2}).}$

This highlights the difference between likelihood and odds: in likelihood, one compares models (parameters), holding data fixed; while in odds, one compares events (outcomes, data), holding the model fixed.
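As an illustrative sketch (not from the source; it assumes a binomial model with hypothetical numbers), the following code contrasts the two comparisons: the likelihood ratio holds the data fixed and varies the parameter, while the odds hold the parameter fixed and vary the outcome:

```python
from scipy.stats import binom

x, n = 7, 10                  # observed data: 7 successes in 10 trials
theta1, theta2 = 0.7, 0.5     # two candidate parameter values

# Likelihood ratio: same outcome x, two different parameter values.
likelihood_ratio = binom.pmf(x, n, theta1) / binom.pmf(x, n, theta2)

# Odds: same parameter value theta1, two different outcomes (7 vs 3 successes).
odds = binom.pmf(7, n, theta1) / binom.pmf(3, n, theta1)

print(likelihood_ratio, odds)
```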

The odds ratio is a ratio of two conditional odds (of an event, given another event being present or absent). However, the odds ratio can also be interpreted as a ratio of two likelihood ratios, if one considers one of the events to be more easily observable than the other. See diagnostic odds ratio, where the result of a diagnostic test is more easily observable than the presence or absence of an underlying medical condition.

Given no event (no data), the likelihoods are both 1, and thus the likelihood ratio is also 1: in the absence of data, there is no evidence to distinguish two models.

### Purposes

The likelihood ratio is central to likelihoodist statistics: the law of likelihood states that the degree to which data (considered as evidence) supports one parameter value versus another is measured by the likelihood ratio.

The likelihood ratio is also of central importance in Bayesian inference, where it is known as the Bayes factor, and is used in Bayes' rule. Stated in terms of odds, Bayes' rule is that the posterior odds of two alternatives, ${\displaystyle A_{1}}$ and ${\displaystyle A_{2}}$, given an event ${\displaystyle B}$, is the prior odds times the likelihood ratio. As an equation:

${\displaystyle O(A_{1}:A_{2}\mid B)=O(A_{1}:A_{2})\cdot \Lambda (A_{1}:A_{2}\mid B).}$

The likelihood ratio is also used in frequentist inference as a test statistic in the likelihood-ratio test. By the Neyman–Pearson lemma, this is the most powerful test for comparing two simple hypotheses at a given significance level. The likelihood ratio is thus of great interest in frequentist inference, but is not as central as in Bayesian statistics. Numerous other tests can be viewed as likelihood-ratio tests or approximations thereof. The asymptotic distribution of the log-likelihood ratio, considered as a test statistic, is given by Wilks' theorem.

The likelihood ratio is not directly used in AIC-based statistics. Instead, what is used is the relative likelihood of models (see below).

## Products of likelihoods

The likelihood, given two or more independent events, is the product of the likelihoods of each of the individual events:

${\displaystyle \Lambda (A\mid X_{1}\land X_{2})=\Lambda (A\mid X_{1})\cdot \Lambda (A\mid X_{2})}$

This follows from the definition of independence in probability: the probability of two independent events both happening, given a model, is the product of their individual probabilities.

This is particularly important when the events are from independent and identically distributed random variables, such as independent observations or sampling with replacement. In such a situation, the likelihood function factors into a product of individual likelihood functions.

The empty product has value 1, which corresponds to the likelihood, given no event, being 1: before any data, the likelihood is always 1. This is similar to a uniform prior in Bayesian statistics, but in likelihoodist statistics this is not an improper prior because likelihoods are not integrated.

## Log-likelihood

Because one is primarily interested in ratios and products of likelihoods, the logarithm of the likelihood function is often easier to work with, since logarithms convert multiplication to addition: ratios become differences, and products become sums. This is called the log-likelihood, the loglihood[14] or the support.[15] Often the log-likelihood is denoted by a lowercase l or ${\displaystyle \ell }$, to contrast with the uppercase L or ${\displaystyle {\mathcal {L}}}$ for the likelihood.

In addition to the mathematical convenience, the log-likelihood has an intuitive interpretation, as suggested by the term "support". Given independent events, the overall log-likelihood is the sum of the log-likelihoods of the individual events, just as the overall log-probability is the sum of the log-probability of the individual events. Viewing data as evidence, this is interpreted as "support from independent evidence adds", and the log-likelihood is the "weight of evidence". Interpreting negative log-probability as information content or surprisal, the support (log-likelihood) of a model, given an event, is the negative of the surprisal of the event, given the model: a model is supported by an event to the extent that the event is unsurprising, given the model.

The choice of base b for the logarithm corresponds to a choice of scale;[a] generally the natural logarithm is used and the base is fixed, but sometimes the base is varied, in which case, writing the base as ${\displaystyle b=e^{\beta }}$, the factor β can be interpreted as the coldness.[b]

A logarithm of a likelihood ratio is equal to the difference of the log-likelihoods:

${\displaystyle \log {\bigl (}L(A)/L(B){\bigr )}=\log L(A)-\log L(B)=l(A)-l(B).}$

The log-likelihood is particularly convenient for maximum likelihood estimation. Because logarithms are strictly increasing functions, maximizing the likelihood is equivalent to maximizing the log-likelihood. The basic way to maximize a differentiable function is to find the stationary points (the points where the derivative is zero); since the derivative of a sum is just the sum of the derivatives, but the derivative of a product requires the product rule, it is easier to compute the stationary points of the log-likelihood of independent events than for the likelihood of independent events.
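As a minimal sketch of this point (assuming i.i.d. Bernoulli observations; the data and the grid-search approach are illustrative, not from the source), maximizing the log-likelihood, which is just a sum over observations, recovers the familiar estimate equal to the sample mean:

```python
import numpy as np

data = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # hypothetical Bernoulli observations

def log_likelihood(p, x):
    # Independence turns the product of per-observation likelihoods
    # into a sum of per-observation log-likelihoods.
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

p_grid = np.linspace(0.001, 0.999, 999)      # avoid log(0) at the endpoints
ll = np.array([log_likelihood(p, data) for p in p_grid])

p_hat = p_grid[np.argmax(ll)]
print(p_hat, data.mean())                    # both close to 0.75
```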

Just as the likelihood, given no event, is 1, the log-likelihood, given no event, is 0, which corresponds to the value of the empty sum: without any data, there is no support for any model.

### Exponential families

The log-likelihood is also particularly useful for exponential families of distributions, which include many of the common parametric probability distributions. The probability distribution function (and thus likelihood function) for exponential families contains products of factors involving exponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function.

An exponential family is one whose probability density function is of the form (for some functions, writing ${\displaystyle \langle -,-\rangle }$ for the inner product):

${\displaystyle p(x\mid {\boldsymbol {\theta }})=h(x)\exp {\Big (}\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }}){\Big )}.}$

Each of these terms has an interpretation,[c] but simply switching from probability to likelihood and taking logarithms yields the sum:

${\displaystyle l({\boldsymbol {\theta }}\mid x)=\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }})+\log h(x).}$

The ${\displaystyle {\boldsymbol {\eta }}({\boldsymbol {\theta }})}$ and ${\displaystyle h(x)}$ each correspond to a change of coordinates, so in these coordinates, the log-likelihood of an exponential family is given by the simple formula:

${\displaystyle l({\boldsymbol {\eta }}\mid x)=\langle {\boldsymbol {\eta }},\mathbf {T} (x)\rangle -A({\boldsymbol {\eta }}).}$

In words, the log-likelihood of an exponential family is the inner product of the natural parameter ${\displaystyle {\boldsymbol {\eta }}}$ and the sufficient statistic ${\displaystyle \mathbf {T} (x)}$, minus the normalization factor (log-partition function) ${\displaystyle A({\boldsymbol {\eta }})}$. Thus, for example, the maximum likelihood estimate can be computed by taking derivatives of the sufficient statistic T and the log-partition function A.

#### Example: the gamma distribution

The gamma distribution is an exponential family with two parameters, ${\displaystyle \alpha }$ and ${\displaystyle \beta }$. The likelihood function is

${\displaystyle {\mathcal {L}}(\alpha ,\beta \mid x)={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}x^{\alpha -1}e^{-\beta x}.}$

Finding the maximum likelihood estimate of ${\displaystyle \beta }$ for a single observed value ${\displaystyle x}$ looks rather daunting. Its logarithm is much simpler to work with:

${\displaystyle \log {\mathcal {L}}(\alpha ,\beta \mid x)=\alpha \log \beta -\log \Gamma (\alpha )+(\alpha -1)\log x-\beta x.\,}$

To maximize the log-likelihood, we first take the partial derivative with respect to ${\displaystyle \beta }$:

${\displaystyle {\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x)}{\partial \beta }}={\frac {\alpha }{\beta }}-x.}$

If there are a number of independent observations ${\displaystyle x_{1},\ldots ,x_{n))$, then the joint log-likelihood will be the sum of individual log-likelihoods, and the derivative of this sum will be a sum of derivatives of each individual log-likelihood:

{\displaystyle {\begin{aligned}&{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1},\ldots ,x_{n})}{\partial \beta }}\\={}&{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1})}{\partial \beta }}+\cdots +{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{n})}{\partial \beta }}={\frac {n\alpha }{\beta }}-\sum _{i=1}^{n}x_{i}.\end{aligned}}}

To complete the maximization procedure for the joint log-likelihood, the equation is set to zero and solved for ${\displaystyle \beta }$:

${\displaystyle {\widehat {\beta }}={\frac {\alpha }{\bar {x}}}.}$

Here ${\displaystyle {\widehat {\beta }}}$ denotes the maximum-likelihood estimate, and ${\displaystyle \textstyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}}$ is the sample mean of the observations.
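A quick numerical check of this result (a sketch with simulated data and α treated as known; all names and values are illustrative) maximizes the joint log-likelihood over β on a grid and compares the maximizer with α/x̄:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)
alpha = 2.5                                   # treated as known here
x = rng.gamma(alpha, 1 / 1.7, size=200)       # simulated data, true beta = 1.7

def log_lik_beta(beta):
    # Joint gamma log-likelihood of x_1, ..., x_n as a function of beta.
    n = len(x)
    return (n * alpha * np.log(beta) - n * lgamma(alpha)
            + (alpha - 1) * np.sum(np.log(x)) - beta * np.sum(x))

betas = np.linspace(0.1, 5.0, 4901)
beta_hat_numeric = betas[np.argmax([log_lik_beta(b) for b in betas])]
beta_hat_formula = alpha / x.mean()

print(beta_hat_numeric, beta_hat_formula)     # the two estimates agree closely
```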

## Likelihood function of a parameterized model

Among many applications, we consider here one of broad theoretical and practical importance. Given a parameterized family of probability density functions (or probability mass functions in the case of discrete distributions)

${\displaystyle x\mapsto f(x\mid \theta ),\!}$

where ${\displaystyle \theta }$ is the parameter, the likelihood function is

${\displaystyle \theta \mapsto f(x\mid \theta ),\!}$

written

${\displaystyle {\mathcal {L}}(\theta \mid x)=f(x\mid \theta ),\!}$

where ${\displaystyle x}$ is the observed outcome of an experiment. In other words, when ${\displaystyle f(x|\theta )}$ is viewed as a function of ${\displaystyle x}$ with ${\displaystyle \theta }$ fixed, it is a probability density function, and when viewed as a function of ${\displaystyle \theta }$ with ${\displaystyle x}$ fixed, it is a likelihood function.

This is not the same as the probability that those parameters are the right ones, given the observed sample. Attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error, with potentially disastrous consequences in medicine, engineering or jurisprudence. See prosecutor's fallacy for an example of this.

From a geometric standpoint, if we consider ${\displaystyle f(x|\theta )}$ as a function of two variables then the family of probability distributions can be viewed as a family of curves parallel to the ${\displaystyle x}$-axis, while the family of likelihood functions is the orthogonal curves parallel to the ${\displaystyle \theta }$-axis.

### Likelihoods for continuous distributions

The use of the probability density in specifying the likelihood function above is justified as follows. Given an observation ${\displaystyle x_{j}}$, the likelihood for the interval ${\displaystyle [x_{j},x_{j}+h]}$, where ${\displaystyle h>0}$ is a constant, is given by ${\displaystyle {\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])}$. Observe that ${\displaystyle \operatorname {argmax} _{\theta }{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\operatorname {argmax} _{\theta }{\frac {1}{h}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])}$,

since ${\displaystyle h}$ is positive and constant. Because

${\displaystyle \operatorname {argmax} _{\theta }{\frac {1}{h}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\operatorname {argmax} _{\theta }{\frac {1}{h}}\Pr(x_{j}\leq x\leq x_{j}+h\mid \theta )=\operatorname {argmax} _{\theta }{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx,}$

where ${\displaystyle f(x\mid \theta )}$ is the probability density function, it follows that

${\displaystyle \operatorname {argmax} _{\theta }{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\operatorname {argmax} _{\theta }{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx}$.

The first fundamental theorem of calculus and l'Hôpital's rule together provide that

{\displaystyle {\begin{aligned}&\lim _{h\to 0^{+}}{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx=\lim _{h\to 0^{+}}{\frac {{\frac {d}{dh}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx}{\frac {dh}{dh}}}\\[4pt]={}&\lim _{h\to 0^{+}}{\frac {f(x_{j}+h\mid \theta )}{1}}=f(x_{j}\mid \theta ).\end{aligned}}}

Then

{\displaystyle {\begin{aligned}&\operatorname {argmax} _{\theta }{\mathcal {L}}(\theta \mid x_{j})=\operatorname {argmax} _{\theta }\left[\lim _{h\to 0^{+}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])\right]\\[4pt]={}&\operatorname {argmax} _{\theta }\left[\lim _{h\to 0^{+}}{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx\right]=\operatorname {argmax} _{\theta }f(x_{j}\mid \theta ).\end{aligned}}}

Therefore,

${\displaystyle \operatorname {argmax} _{\theta }{\mathcal {L}}(\theta \mid x_{j})=\operatorname {argmax} _{\theta }f(x_{j}\mid \theta ),\!}$

and so maximizing the probability density at ${\displaystyle x_{j}}$ amounts to maximizing the likelihood of the specific observation ${\displaystyle x_{j}}$.

### Likelihoods for mixed continuous–discrete distributions

The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses ${\displaystyle p_{k}(\theta )}$ and a density ${\displaystyle f(x\mid \theta )}$, where the sum of all the ${\displaystyle p}$'s added to the integral of ${\displaystyle f}$ is always one. Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with in the manner shown above. For an observation from the discrete component, the likelihood function is simply

${\displaystyle {\mathcal {L}}(\theta \mid x)=p_{k}(\theta ),\!}$

where ${\displaystyle k}$ is the index of the discrete probability mass corresponding to observation ${\displaystyle x}$, because maximizing the probability mass (or probability) at ${\displaystyle x}$ amounts to maximizing the likelihood of the specific observation.

The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation ${\displaystyle x}$, but not with the parameter ${\displaystyle \theta }$.

## Example 2

Consider a jar containing N lottery tickets numbered from 1 through N. If you pick a ticket randomly, then you get a positive integer n, with probability 1/N if n ≤ N and with probability 0 if n > N. This can be written

${\displaystyle P(n\mid N)={\frac {[n\leq N]}{N}}}$

where the Iverson bracket [n ≤ N] is 1 when n ≤ N and 0 otherwise. When considered as a function of n for fixed N, this is the probability distribution. When considered as a function of N for fixed n, this is a likelihood function. The maximum likelihood estimate for N is n (by contrast, the unbiased estimate is 2n − 1).

This likelihood function is not a probability distribution for ${\displaystyle N}$. To see this, note that the total

${\displaystyle \sum _{N=1}^{\infty }P(n\mid N)=\sum _{N}{\frac {[n\leq N]}{N}}=\sum _{N=n}^{\infty }{\frac {1}{N}}}$

is a divergent series, equal to ${\displaystyle \infty }$ rather than 1, as it would have to be if these values were probabilities.

Suppose, however, that you pick two tickets (without replacement), rather than one. Then the probability of the outcome {n1, n2}, where n1 < n2, is

${\displaystyle P(\{n_{1},n_{2}\}\mid N)={\frac {[n_{2}\leq N]}{\binom {N}{2}}}.}$

When considered as a function of N for fixed n2, this is a likelihood function. The maximum likelihood estimate for N is n2. The total

${\displaystyle \sum _{N=1}^{\infty }P(\{n_{1},n_{2}\}\mid N)=\sum _{N}{\frac {[N\geq n_{2}]}{\binom {N}{2}}}={\frac {2}{n_{2}-1}}}$

is a convergent series, and so this likelihood function can be normalized into a probability distribution.

If you pick 3 or more tickets, the likelihood function has a well defined mean value, which is larger than the maximum likelihood estimate. If you pick 4 or more tickets, the likelihood function has a well defined standard deviation too.

With 2 or more tickets, the probability distributions just derived match the results from a Bayesian analysis assuming an improper, uniform prior for N over all positive integers. The use of improper priors is often justified by saying that the information from the data dominates the information from the prior. If only a very few tickets are available, and a precise answer is important, this can justify the work of collecting relevant information from other sources to use as an informative prior.
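The two-ticket case is easy to sketch numerically (hypothetical ticket numbers; not from the source). The code below tabulates the likelihood over N, confirms that the maximum likelihood estimate is n2, and checks that the total approaches 2/(n2 − 1), so the likelihood can be normalized:

```python
from math import comb

n1, n2 = 4, 9                  # hypothetical observed ticket numbers, n1 < n2

def likelihood(N):
    # P({n1, n2} | N) = 1 / C(N, 2) when N >= n2, and 0 otherwise.
    return 1 / comb(N, 2) if N >= n2 else 0.0

N_values = range(1, 10_000)    # truncate the infinite sum for the check
lik = {N: likelihood(N) for N in N_values}

N_mle = max(lik, key=lik.get)
total = sum(lik.values())

print(N_mle)                   # 9, i.e. n2
print(total, 2 / (n2 - 1))     # partial sum approaches 2 / (n2 - 1) = 0.25
```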

## Relative likelihood

### Relative likelihood function

Suppose that the maximum likelihood estimate for the parameter θ is ${\displaystyle {\hat {\theta }}}$. Relative plausibilities of other θ values may be found by comparing the likelihoods of those other values with the likelihood of ${\displaystyle {\hat {\theta }}}$. The relative likelihood of θ is defined to be[16][17][18][19][20]

${\displaystyle {\mathcal {L}}(\theta \mid x)/{\mathcal {L}}({\hat {\theta }}\mid x).}$

Thus, the relative likelihood is the likelihood ratio (discussed above) with the fixed denominator ${\displaystyle {\mathcal {L}}({\hat {\theta }})}$. This corresponds to normalizing the likelihood to have a maximum of 1.

### Likelihood region

A likelihood region is the set of all values of θ whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a p% likelihood region for θ is defined to be[16][18]

${\displaystyle \left\{\theta :{\frac {{\mathcal {L}}(\theta \mid x)}{{\mathcal {L}}({\hat {\theta \,}}\mid x)}}\geq {\frac {p}{100}}\right\}.}$

If θ is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a likelihood interval.[16][18][21]

Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihoodist statistics: they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism).

Given a model, likelihood intervals can be compared to confidence intervals. If θ is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for θ will be the same as a 95% confidence interval (19/20 coverage probability).[16] In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the ${\displaystyle e^{-2}}$ likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1).[21]
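As a sketch (not from the source; it assumes a binomial model with hypothetical data), a likelihood interval can be read off the relative likelihood curve by collecting the parameter values whose relative likelihood is at least the chosen cutoff, here ${\displaystyle e^{-2}}$:

```python
import numpy as np
from scipy.stats import binom

x, n = 7, 10                                    # hypothetical observed data
theta_grid = np.linspace(0.001, 0.999, 999)

lik = binom.pmf(x, n, theta_grid)
rel_lik = lik / lik.max()                       # relative likelihood, maximum 1

cutoff = np.exp(-2)                             # ~0.135, see the text above
inside = theta_grid[rel_lik >= cutoff]
print(inside.min(), inside.max())               # approximate likelihood-interval endpoints
```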

### Relative likelihood of models

The definition of relative likelihood can be generalized to compare different statistical models. This generalization is based on AIC (Akaike information criterion), or sometimes AICc (Akaike Information Criterion with correction).

Suppose that, for some dataset, we have two statistical models, M1 and M2. Also suppose that AIC(M1) ≤ AIC(M2). Then the relative likelihood of M2 with respect to M1 is defined as follows.[22]

${\displaystyle \exp \left({\frac {\operatorname {AIC} (M_{1})-\operatorname {AIC} (M_{2})}{2}}\right)}$

To see that this is a generalization of the earlier definition, suppose that we have some model M with a (possibly multivariate) parameter θ. Then for any θ, set M2 = M(θ), and also set M1 = M(${\displaystyle {\hat {\theta }}}$). The general definition now gives the same result as the earlier definition.
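In code this is a one-line calculation; the AIC values below are hypothetical:

```python
import math

aic_m1 = 100.0   # hypothetical AIC of the lower-AIC model M1
aic_m2 = 103.2   # hypothetical AIC of the competing model M2

relative_likelihood = math.exp((aic_m1 - aic_m2) / 2)
print(relative_likelihood)   # ~0.20: the relative likelihood of M2 with respect to M1
```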

## Likelihoods that eliminate nuisance parameters

In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered as nuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters, so that a likelihood can be written as a function of only the parameter (or parameters) of interest: the main approaches are marginal, conditional, and profile likelihoods.[23][24]

These approaches are useful because standard likelihood methods can become unreliable or fail entirely when there are many nuisance parameters or when the nuisance parameters are high-dimensional. This is particularly true when the nuisance parameters can be considered to be "missing data"; they represent a non-negligible fraction of the number of observations and this fraction does not decrease when the sample size increases. Often these approaches can be used to derive closed-form formulae for statistical tests when direct use of maximum likelihood requires iterative numerical methods. These approaches find application in some specialized topics such as sequential analysis.

### Conditional likelihood

Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters.

One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central hypergeometric distribution. This form of conditioning is also the basis for Fisher's exact test.
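As an illustration (a sketch with hypothetical counts, using SciPy's implementation of Fisher's exact test, which is based on this conditional distribution):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = treatment vs control, columns = success vs failure.
table = [[8, 2],
         [3, 7]]

result = fisher_exact(table, alternative="two-sided")
print(result)   # sample odds ratio and exact p-value from the conditional test
```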

### Marginal likelihood

Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components.

### Profile likelihood

When the likelihood function depends on many parameters, depending on the application, we might be interested in only a subset of these parameters. It is often possible to reduce the number of the uninteresting (nuisance) parameters by writing them as functions of the parameters of interest.[25][26][27] For example, the functions might be the value of the nuisance parameter which maximizes the likelihood given the value of the other (interesting) parameters.

This procedure is called concentration of the parameters and results in the concentrated likelihood function,[28] also occasionally known as the maximized likelihood function, but most often called the profile likelihood function. It is then possible (and simpler) to find the values of the parameters which maximize the profile likelihood function (similar to the maximum likelihood).

For example, consider a regression analysis model with normally distributed errors. The most likely value of the error variance is the variance of the residuals. The residuals depend on all other parameters. Hence the variance parameter can be written as a function of the other parameters.
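A minimal sketch of this concentration step (hypothetical data, a no-intercept regression, and normal errors; all names are illustrative): for each candidate slope, the error variance is replaced by the residual variance that maximizes the likelihood, leaving a profile log-likelihood in the slope alone.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + rng.normal(scale=1.5, size=x.size)   # true slope 2, error sd 1.5

def profile_loglik(slope):
    resid = y - slope * x
    sigma2_hat = np.mean(resid ** 2)          # variance concentrated out (its MLE given the slope)
    n = len(y)
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1)

slopes = np.linspace(1.0, 3.0, 2001)
pl = np.array([profile_loglik(s) for s in slopes])
print(slopes[np.argmax(pl)])                  # close to the least-squares slope
```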

Unlike conditional and marginal likelihoods, profile likelihood methods can always be used, even when the profile likelihood cannot be written down explicitly. However, the profile likelihood is not a true likelihood, as it is not based directly on a probability distribution, and this leads to some less satisfactory properties. Attempts have been made to improve this, resulting in modified profile likelihood.[citation needed]

The idea of profile likelihood can also be used to compute confidence intervals that often have better small-sample properties than those based on asymptotic standard errors calculated from the full likelihood. In the case of parameter estimation in partially observed systems, the profile likelihood can also be used for identifiability analysis.[29] Results from profile likelihood analysis can be incorporated in uncertainty analysis of model predictions.[30]

### Partial likelihood

A partial likelihood is an adaptation of the full likelihood such that only a part of the parameters (the parameters of interest) occur in it.[31] It is a key component of the proportional hazards model: using a restriction on the hazard function, the likelihood does not contain the shape of the hazard over time.

## Historical remarks

The term "likelihood" has been in use in English since at least late Middle English.[32] Its formal use to refer to a specific function in mathematical statistics was proposed by Ronald Fisher,[33] in two research papers published in 1921[34] and 1922.[35] The 1921 paper introduced what is today called a "likelihood interval"; the 1922 paper introduced the term "method of maximum likelihood". Quoting Fisher:

[I]n 1922, I proposed the term ‘likelihood,’ in view of the fact that, with respect to [the parameter], it is not a probability, and does not obey the laws of probability, while at the same time it bears to the problem of rational choice among the possible values of [the parameter] a relation similar to that which probability bears to the problem of predicting events in games of chance. ... Whereas, however, in relation to psychological judgment, likelihood has some resemblance to probability, the two concepts are wholly distinct. ...[36]

The concept of likelihood should not be confused with probability, as Sir Ronald Fisher stressed: "I stress this because in spite of the emphasis that I have always laid upon the difference between probability and likelihood there is still a tendency to treat likelihood as though it were a sort of probability. The first result is thus that there are two different measures of rational belief appropriate to different cases. Knowing the population we can express our incomplete knowledge of, or expectation of, the sample in terms of probability; knowing the sample we can express our incomplete knowledge of the population in terms of likelihood."[37] Fisher's invention of statistical likelihood was in reaction against an earlier form of reasoning called inverse probability.[38] His use of the term "likelihood" fixed the meaning of the term within mathematical statistics.

A. W. F. Edwards established the axiomatic basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another.[39] The support function is then the natural logarithm of the likelihood function. Both terms are used in phylogenetics, but were not adopted in a general treatment of the topic of statistical evidence.[40]

## Notes

1. ^ The scale factor is ${\displaystyle \log _{a}b}$; see Logarithm § Change of base
2. ^ "Coldness" is also known as thermodynamic beta or inverse temperature; See Watanabe–Akaike information criterion and Softmax function § Statistical mechanics for examples of varying the coldness.
3. ^

## References

1. ^ a b Bandyopadhyay, P. S.; Forster, M. R., eds. (2011), Philosophy of Statistics, North-Holland Publishing.
2. ^ The quotation is from §1.2 of the book. The wording of the quotation varies slightly among editions of the book; the wording presented here is from the last edition. The phrase "[lack of confidence]" is, in all editions of the book, "diffidence", the usual definition of which makes little sense in the context to modern readers. A rare/obsolete definition of "diffidence", though, is "lack of confidence" (see e.g. SOED), which makes excellent sense in the context. Ergo, the quotation is presented as here.
3. ^ Billingsley, Patrick (1995). Probability and Measure (Third ed.). John Wiley & Sons. pp. 422–423.
4. ^ Shao, Jun (2003). Mathematical Statistics (2nd ed.). Springer. §4.4.1.
5. ^ a b c d I. J. Good: Probability and the Weighing of Evidence (Griffin 1950), §6.1
6. ^ a b c d H. Jeffreys: Theory of Probability (3rd ed., Oxford University Press 1983), §1.22
7. ^ E. T. Jaynes: Probability Theory: The Logic of Science (Cambridge University Press 2003), §4.1
8. ^ a b c d D. V. Lindley: Introduction to Probability and Statistics from a Bayesian Viewpoint. Part 1: Probability (Cambridge University Press 1980), §1.6
9. ^ a b c d A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, D. B. Rubin: Bayesian Data Analysis (3rd ed., Chapman & Hall/CRC 2014), §1.3
10. ^ H. C. Sox, M. C. Higgins, D. K. Owens: Medical Decision Making (2nd ed., Wiley, 2013), http://doi.org/10.1002/9781118341544, chapters 3–4
11. ^ Akaike, H. (1985), "Prediction and entropy", in Atkinson, A. C.; Fienberg, S. E. (eds.), A Celebration of Statistics, Springer, pp. 1–24.
12. ^ Sakamoto, Y.; Ishiguro, M.; Kitagawa, G. (1986), Akaike Information Criterion Statistics, D. Reidel, Part I.
13. ^ Burnham, K. P.; Anderson, D. R. (2002), Model Selection and Multimodel Inference: A practical information-theoretic approach (2nd ed.), Springer-Verlag, chap. 7.
14. ^ Lee, Youngjo; Nelder, John A. (2005). "Likelihood for random-effect models". Statistics & Operations Research Transactions. 29 (2): 143. ... log likelihood which we shall abbreviate to loglihood (a useful contraction which we owe to Michael Healy).
15. ^ Edwards 1972, p. 12.
16. ^ a b c d Kalbfleisch, J. G. (1985), Probability and Statistical Inference, Springer (§9.3).
17. ^ Azzalini, A. (1996), Statistical Inference—Based on the likelihood, Chapman & Hall, ISBN 9780412606502 (§1.4.2).
18. ^ a b c Sprott, D. A. (2000), Statistical Inference in Science, Springer (chap. 2).
19. ^ Davison, A. C. (2008), Statistical Models, Cambridge University Press (§4.1.2).
20. ^ Held, L.; Sabanés Bové, D. S. (2014), Applied Statistical Inference—Likelihood and Bayes, Springer (§2.1).
21. ^ a b Hudson, D. J. (1971), "Interval estimation from the likelihood function", Journal of the Royal Statistical Society, Series B, 33 (2): 256–262.
22. ^ Burnham K. P. & Anderson D.R. (2002), Model Selection and Multimodel Inference: A practical information-theoretic approach, Springer (§2.8).
23. ^ Pawitan, Yudi (2001). In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford University Press. ISBN 978-0-19-850765-9.
24. ^ Wen Hsiang Wei. "Generalized Linear Model - course notes". Tunghai University, Taichung, Taiwan. pp. Chapter 5. Retrieved 2017-10-01.
25. ^ Amemiya, Takeshi (1985). "Concentrated Likelihood Function". Advanced Econometrics. Cambridge: Harvard University Press. pp. 125–127. ISBN 978-0-674-00560-0.
26. ^ Davidson, Russell; MacKinnon, James G. (1993). "Concentrating the Loglikelihood Function". Estimation and Inference in Econometrics. New York: Oxford University Press. pp. 267–269. ISBN 978-0-19-506011-9.
27. ^ Gourieroux, Christian; Monfort, Alain (1995). "Concentrated Likelihood Function". Statistics and Econometric Models. New York: Cambridge University Press. pp. 170–175. ISBN 978-0-521-40551-5.
28. ^ Montoya, Jose A.; Díaz-Francés, Eloísa; Sprott, David A. (2009). "On a criticism of the profile likelihood function". Statistical Papers. 50 (1): 195–202. doi:10.1007/s00362-007-0056-5.
29. ^ Raue, A; Kreutz, C; Maiwald, T; Bachmann, J; Schilling, M; Klingmüller, U; Timmer, J (2009). "Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood". Bioinformatics. 25 (15): 1923–29. doi:10.1093/bioinformatics/btp358. PMID 19505944.
30. ^ Vanlier, J; Tiemann, C; Hilbers, P; van Riel, N (2012). "An integrated strategy for prediction uncertainty analysis". Bioinformatics. 28 (8): 1130–35. doi:10.1093/bioinformatics/bts088. PMC 3324512. PMID 22355081.
31. ^ Cox, D. R. (1975). "Partial likelihood". Biometrika. 62 (2): 269–276. doi:10.1093/biomet/62.2.269. MR 0400509.
32. ^ "likelihood", Shorter Oxford English Dictionary (2007).
33. ^
34. ^ Fisher, R.A. (1921), "On the "probable error" of a coefficient of correlation deduced from a small sample", Metron, 1: 3–32.
35. ^ Fisher, R.A. (1922), "On the mathematical foundations of theoretical statistics", Philosophical Transactions of the Royal Society A, 222 (594–604): 309–368, doi:10.1098/rsta.1922.0009, JFM 48.1280.02, JSTOR 91208.
36. ^ Klemens, Ben (2008). Modeling with Data: Tools and Techniques for Scientific Computing. Princeton University Press. p. 329.
37. ^ Fisher, Ronald (1930). "Inverse Probability". Mathematical Proceedings of the Cambridge Philosophical Society. 26 (4): 528–535. doi:10.1017/S0305004100016297.
38. ^ Fienberg, Stephen E (1997). "Introduction to R.A. Fisher on inverse probability and likelihood". Statistical Science. 12 (3): 161. doi:10.1214/ss/1030037905.
39. ^ Edwards, A. W. F. (1972). Likelihood. Cambridge University Press. ISBN 978-0-8018-4443-0 (expanded edition, 1992, Johns Hopkins University Press).
40. ^ Royall, R. (1997). Statistical Evidence. Chapman & Hall.