
In statistics, **binomial regression** is a regression analysis technique in which the response (often referred to as *Y*) has a binomial distribution: it is the number of successes in a series of *n* independent Bernoulli trials, where each trial has probability of success *p*.^{[1]} In binomial regression, the probability of a success is related to explanatory variables: the corresponding concept in ordinary regression is to relate the mean value of the unobserved response to explanatory variables.

Binomial regression is closely related to binary regression: if the response is a binary variable (two possible outcomes), then it can be considered as a binomial distribution with *n* = 1 trial by considering one of the outcomes as "success" and the other as "failure", counting the outcomes as either 1 or 0: counting a success as 1 success out of 1 trial, and counting a failure as 0 successes out of 1 trial. Binomial regression models are essentially the same as binary choice models, one type of discrete choice model. The primary difference is in the theoretical motivation.

In machine learning, binomial regression is considered a special case of probabilistic classification, and thus a generalization of binary classification.

In one published example of an application of binomial regression,^{[2]} the details were as follows. The observed outcome variable was whether or not a fault occurred in an industrial process. There were two explanatory variables: the first was a simple two-case factor representing whether or not a modified version of the process was used and the second was an ordinary quantitative variable measuring the purity of the material being supplied for the process.

The response variable *Y* is assumed to be binomially distributed conditional on the explanatory variables *X*. The number of trials *n* is known, and the probability of success for each trial *p* is specified as a function *θ*(*X*). This implies that the conditional expectation and conditional variance of the observed fraction of successes, *Y*/*n*, are

E[*Y*/*n* | *X*] = *θ*(*X*),  Var(*Y*/*n* | *X*) = *θ*(*X*)(1 − *θ*(*X*))/*n*.

The goal of binomial regression is to estimate the function *θ*(*X*). Typically the statistician assumes *θ*(*X*) = *m*(*β*′*X*), for a known function *m*, and estimates *β*. Common choices for *m* include the logistic function.^{[1]}
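As a concrete numerical sketch (not from the source: the data, the "true" coefficients, and the plain gradient-ascent fitting routine are all illustrative assumptions), the following simulates Bernoulli responses with *m* taken to be the logistic function and recovers *β* by maximizing the log-likelihood:

```python
import numpy as np

# Minimal sketch: simulate y_i ~ Bernoulli(theta(x_i)) with
# theta(x) = m(beta0 + beta1 * x), m = logistic, then estimate beta
# by gradient ascent on the Bernoulli log-likelihood.
rng = np.random.default_rng(0)

def logistic(eta):
    return 1.0 / (1.0 + np.exp(-eta))

true_beta = np.array([-0.5, 1.2])          # illustrative "true" coefficients
x = rng.normal(size=5000)
X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept
y = rng.binomial(1, logistic(X @ true_beta))

beta = np.zeros(2)
for _ in range(2000):
    mu = logistic(X @ beta)                # current fitted probabilities
    grad = X.T @ (y - mu) / len(y)         # score of the log-likelihood
    beta += 0.5 * grad                     # fixed step size

print(beta)  # should be close to true_beta
```

In practice one would use an off-the-shelf fitter (e.g. iteratively reweighted least squares in a GLM package) rather than hand-rolled gradient ascent; the sketch only illustrates that *β* is recoverable from the likelihood.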

The data are often fitted as a generalised linear model where the predicted values *μ* are the probabilities that any individual event will result in a success. The likelihood of the predictions is then given by

*L*(*μ* | *Y*) = ∏ _{*i* = 1}^{*n*} ( 1 _{*y _{i}* = 1}(*μ _{i}*) + 1 _{*y _{i}* = 0}(1 − *μ _{i}*) ),

where *1 _{A}* is the indicator function which takes on the value one when the event *A* occurs, and zero otherwise. In this formulation, exactly one of the two terms contributes for each observation: *μ _{i}* when *y _{i}* = 1, and 1 − *μ _{i}* when *y _{i}* = 0.
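The likelihood can be evaluated directly. A small numeric sketch (the probabilities and outcomes below are illustrative values, not from the source):

```python
# Bernoulli likelihood: for each observation, exactly one of the two
# indicator terms contributes, giving mu_i or (1 - mu_i).
def likelihood(mu, y):
    total = 1.0
    for mu_i, y_i in zip(mu, y):
        total *= mu_i if y_i == 1 else (1.0 - mu_i)
    return total

# Illustrative values: 0.8 * (1 - 0.3) * 0.5, approximately 0.28
print(likelihood([0.8, 0.3, 0.5], [1, 0, 1]))
```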

Models used in binomial regression can often be extended to multinomial data.

There are many methods of generating the values of *μ* in systematic ways that allow for interpretation of the model; they are discussed below.

There is a requirement that the model linking the probabilities *μ* to the explanatory variables should be of a form which only produces values in the range 0 to 1. Many models can be fitted into the form

*μ* = *g*(*η*).

Here *η* is an intermediate variable representing a linear combination, containing the regression parameters, of the explanatory variables. The function *g* is the cumulative distribution function (cdf) of some probability distribution. Usually this probability distribution has support from minus infinity to plus infinity, so that any finite value of *η* is transformed by *g* to a value inside the range 0 to 1.
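The squashing behaviour of such a *g* can be checked with two common choices, written from the standard closed forms (a minimal sketch):

```python
import math

# Two common choices of g: the logistic CDF (inverse of the logit link)
# and the standard normal CDF (inverse of the probit link). Both map
# any finite eta into the open interval (0, 1).
def logistic_cdf(eta):
    return 1.0 / (1.0 + math.exp(-eta))

def normal_cdf(eta):
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

for eta in (-5.0, 0.0, 5.0):
    p_logit, p_probit = logistic_cdf(eta), normal_cdf(eta)
    assert 0.0 < p_logit < 1.0 and 0.0 < p_probit < 1.0

print(logistic_cdf(0.0), normal_cdf(0.0))  # both give 0.5 at eta = 0
```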

In the case of logistic regression, the link function is the logit (the log of the odds), whose inverse is the logistic function. In the case of probit, the link is the cdf of the normal distribution. The linear probability model is not a proper binomial regression specification because predictions need not be in the range of zero to one; it is nonetheless sometimes used for this type of data when interpretation is to occur on the probability scale, since its coefficients are directly interpretable as changes in probability rather than in a transformed quantity.

A binary choice model assumes a latent variable *U _{n}*, the utility (or net benefit) that person *n* obtains from taking an action (as opposed to not taking the action). The utility the person obtains from taking the action depends on the characteristics of the person, some of which are observed by the researcher and some are not:

*U _{n}* = *β* ⋅ *s _{n}* + *ε _{n}*

where *β* is a set of regression coefficients and *s _{n}* is a set of independent variables (also known as "features") describing person *n*, which may be either discrete "dummy variables" or regular continuous variables. *ε _{n}* is a random variable specifying "noise" or "error" in the prediction, assumed to be distributed according to some distribution. Normally, if there is a mean or variance parameter in the distribution, it cannot be identified, so the parameters are set to convenient values (by convention usually mean 0, variance 1).

The person takes the action, *y _{n}* = 1, if *U _{n}* > 0.

The specification is written succinctly as:

*U _{n}* = *β* ⋅ *s _{n}* + *ε _{n}*
*Y _{n}* = 1 if *U _{n}* > 0, and 0 otherwise.

Let us write it slightly differently:

*U _{n}* = *β* ⋅ *s _{n}* − *e _{n}*
*Y _{n}* = 1 if *U _{n}* > 0, and 0 otherwise.

Here we have made the substitution *e _{n}* = −*ε _{n}*. This changes the random variable into a slightly different one, defined over a negated domain. As it happens, the error distributions usually considered (e.g. the logistic distribution and the standard normal distribution) are symmetric about 0, and hence the distribution of *e _{n}* is identical to the distribution of *ε _{n}*.

Denote the cumulative distribution function (CDF) of *e* as *F _{e}* and the quantile function (inverse CDF) of *e* as *F _{e}*^{−1}.

Note that

Pr(*Y _{n}* = 1) = Pr(*U _{n}* > 0) = Pr(*β* ⋅ *s _{n}* − *e _{n}* > 0) = Pr(*e _{n}* < *β* ⋅ *s _{n}*) = *F _{e}*(*β* ⋅ *s _{n}*).

Since *Y _{n}* is a Bernoulli trial, where E[*Y _{n}*] = Pr(*Y _{n}* = 1), we have

E[*Y _{n}*] = *F _{e}*(*β* ⋅ *s _{n}*)

or equivalently

*F _{e}*^{−1}(E[*Y _{n}*]) = *β* ⋅ *s _{n}*.

Note that this is exactly equivalent to the binomial regression model expressed in the formalism of the generalized linear model.
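The identity E[*Y _{n}*] = *F _{e}*(*β* ⋅ *s _{n}*) can be checked by simulation. The following sketch uses illustrative numbers (the value of the linear predictor and the choice of logistic errors are assumptions made for the example):

```python
import numpy as np

# Simulate the latent-utility model U_n = beta.s_n - e_n with logistic
# errors and check that the fraction of y_n = 1 matches F_e(beta.s_n).
rng = np.random.default_rng(1)

beta_s = 0.7                                # illustrative linear predictor
e = rng.logistic(loc=0.0, scale=1.0, size=200_000)
y = (beta_s - e > 0).astype(float)          # y_n = 1 iff U_n > 0

predicted = 1.0 / (1.0 + np.exp(-beta_s))   # F_e(beta.s_n), logistic CDF
print(y.mean(), predicted)                  # empirical vs. theoretical
```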

If *e _{n}* ∼ *N*(0, 1), i.e. distributed as a standard normal distribution, then

Φ^{−1}(E[*Y _{n}*]) = *β* ⋅ *s _{n}*,

which is exactly a probit model.

If *e _{n}* is distributed as a standard logistic distribution with mean 0 and scale parameter 1, then the corresponding quantile function is the logit function, and

logit(E[*Y _{n}*]) = *β* ⋅ *s _{n}*,

which is exactly a logit model.
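The quantile-function view can be verified numerically (a minimal sketch): the logit is the inverse of the logistic CDF, so applying it to a logistic-transformed predictor recovers the predictor exactly.

```python
import math

# logit is the quantile function (inverse CDF) of the standard
# logistic distribution, so logit(logistic(x)) = x for any x.
def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

for x in (-2.0, 0.3, 1.7):
    assert abs(logit(logistic(x)) - x) < 1e-9

print("logit inverts the logistic CDF")
```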

Note that the two different formalisms (generalized linear models and discrete choice models) are equivalent in the case of simple binary choice models, but can be extended in differing ways:

- GLMs can easily handle response variables (dependent variables) with arbitrary distributions, not just the categorical or ordinal variables to which discrete choice models are limited by their nature. GLMs are also not limited to link functions that are quantile functions of some distribution, unlike the error-variable formulation, in which the error must by assumption have a probability distribution.
- On the other hand, because discrete choice models are described as types of generative models, it is conceptually easier to extend them to complicated situations with multiple, possibly correlated, choices for each person, or other variations.

A latent variable model involving a binomial observed variable *Y* can be constructed such that *Y* is related to the latent variable *Y** via

*Y* = 1 if *Y** > 0, and *Y* = 0 otherwise.

The latent variable *Y** is then related to a set of regression variables *X* by the model

*Y** = *Xβ* + *ϵ*.

This results in a binomial regression model.

The variance of *ϵ* cannot be identified, and when it is not of interest it is often assumed to be equal to one. If *ϵ* is normally distributed, then a probit is the appropriate model; if *ϵ* is log-Weibull distributed, then a logit is appropriate (the difference of two independent log-Weibull, i.e. Gumbel, variables follows a logistic distribution). If *ϵ* is uniformly distributed, then a linear probability model is appropriate.
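The normal-error case can be illustrated by simulating the latent-variable construction directly (a sketch with an illustrative value of *Xβ*; the sample size and seed are arbitrary):

```python
import math
import random

# Latent-variable sketch: Y* = x_beta + eps with standard normal eps,
# and Y = 1 if Y* > 0. Then P(Y = 1 | x) = Phi(x_beta), a probit model.
random.seed(2)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

x_beta = 0.4                         # illustrative value of X*beta
n = 200_000
hits = sum(1 for _ in range(n) if x_beta + random.gauss(0.0, 1.0) > 0)

print(hits / n, normal_cdf(x_beta))  # empirical vs. Phi(X*beta)
```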