- Notation: ${\displaystyle {\tilde {\chi }}^{2}({\boldsymbol {w}},{\boldsymbol {k}},{\boldsymbol {\lambda }},m,s)}$
- Parameters: ${\displaystyle {\boldsymbol {w}}}$, vector of weights of noncentral chi-square components; ${\displaystyle {\boldsymbol {k}}}$, vector of degrees of freedom of noncentral chi-square components; ${\displaystyle {\boldsymbol {\lambda }}}$, vector of non-centrality parameters of chi-square components; ${\displaystyle m}$, mean of normal term; ${\displaystyle s}$, sd of normal term
- Support: ${\displaystyle x\in \mathbb {R} }$
- Mean: ${\displaystyle \sum _{j}w_{j}(k_{j}+\lambda _{j})+m}$
- Variance: ${\displaystyle 2\sum _{j}w_{j}^{2}(k_{j}+2\lambda _{j})+s^{2}}$
- Characteristic function: ${\displaystyle {\frac {\exp \left(it\sum _{j}{\frac {w_{j}\lambda _{j}}{1-2iw_{j}t}}-{\frac {s^{2}t^{2}}{2}}\right)}{\prod _{j}\left(1-2iw_{j}t\right)^{k_{j}/2}}}}$

In probability theory and statistics, the generalized chi-squared distribution (or generalized chi-square distribution) is the distribution of a quadratic form of a multinormal variable (normal vector), or a linear combination of different normal variables and squares of normal variables. Equivalently, it is also a linear sum of independent noncentral chi-square variables and a normal variable. There are several other such generalizations for which the same term is sometimes used; some of them are special cases of the family discussed here, for example the gamma distribution.

## Definition

The generalized chi-squared variable may be described in multiple ways. One is to write it as a linear sum of independent noncentral chi-square variables and a normal variable:[1][2]

${\displaystyle \xi =\sum _{i}w_{i}y_{i}+x,\quad y_{i}\sim \chi '^{2}(k_{i},\lambda _{i}),\quad x\sim N(m,s^{2}).}$

Here the parameters are the weights ${\displaystyle w_{i}}$, the degrees of freedom ${\displaystyle k_{i}}$ and non-centralities ${\displaystyle \lambda _{i}}$ of the constituent chi-squares, and the normal parameters ${\displaystyle m}$ and ${\displaystyle s}$. Some important special cases of this have all weights ${\displaystyle w_{i}}$ of the same sign, or have central chi-squared components, or omit the normal term.
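This first formulation can be sampled directly, which also gives a quick check of the mean and variance formulas. The sketch below assumes NumPy; the function name and the example parameter values are this illustration's own, not from the literature.

```python
import numpy as np

def sample_gx2(w, k, lam, m, s, size, seed=None):
    """Sample xi = sum_i w_i y_i + x with y_i ~ chi'^2(k_i, lam_i)
    and x ~ N(m, s^2). Name and signature are this sketch's own."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w, float)
    # one noncentral chi-square column per component, broadcasting df/nonc
    y = rng.noncentral_chisquare(k, lam, size=(size, len(w)))
    return y @ w + rng.normal(m, s, size)

# Hypothetical parameters; compare sample moments with the closed forms.
w, k, lam = np.array([1.0, -0.5]), np.array([2.0, 3.0]), np.array([1.0, 0.5])
m, s = 0.5, 2.0
xi = sample_gx2(w, k, lam, m, s, size=200_000, seed=0)
mean_theory = w @ (k + lam) + m                # sum_j w_j (k_j + lam_j) + m
var_theory = 2 * (w**2) @ (k + 2*lam) + s**2   # 2 sum_j w_j^2 (k_j + 2 lam_j) + s^2
```

With these values the sample mean and variance should match the theoretical 1.75 and 14 up to Monte Carlo error.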

Since a non-central chi-squared variable is a sum of squares of normal variables with different means, the generalized chi-square variable can also be written as a sum of squares of independent normal variables plus an independent normal variable: that is, a quadratic function of normal variables.

Another equivalent way is to formulate it as a quadratic form of a normal vector ${\displaystyle {\boldsymbol {x}}}$:[3][4]

${\displaystyle \xi =q({\boldsymbol {x}})={\boldsymbol {x}}'\mathbf {Q_{2}} {\boldsymbol {x}}+{\boldsymbol {q_{1}}}'{\boldsymbol {x}}+q_{0}}$.

Here ${\displaystyle \mathbf {Q_{2}} }$ is a matrix, ${\displaystyle {\boldsymbol {q_{1}}}}$ is a vector, and ${\displaystyle q_{0}}$ is a scalar. These, together with the mean ${\displaystyle {\boldsymbol {\mu }}}$ and covariance matrix ${\displaystyle \mathbf {\Sigma } }$ of the normal vector ${\displaystyle {\boldsymbol {x}}}$, parameterize the distribution. The parameters of the former expression (in terms of non-central chi-squares, a normal and a constant) can be calculated in terms of the parameters of the latter expression (quadratic form of a normal vector).[4] If (and only if) ${\displaystyle \mathbf {Q_{2}} }$ in this formulation is positive-definite, then all the ${\displaystyle w_{i}}$ in the first formulation will have the same sign.
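The conversion from the quadratic-form parameters to the component parameters follows the standard derivation: standardize the normal vector (${\displaystyle {\boldsymbol {x}}={\boldsymbol {\mu }}+L{\boldsymbol {z}}}$ with ${\displaystyle \Sigma =LL'}$), diagonalize the resulting quadratic coefficient, and complete the square along each eigendirection. Below is a minimal NumPy sketch of that derivation; the function name and the example numbers are hypothetical. It is checked against the mean and variance, on which both formulations must agree.

```python
import numpy as np

def gx2_params(mu, Sigma, Q2, q1, q0, tol=1e-9):
    """Convert q(x) = x'Q2 x + q1'x + q0, x ~ N(mu, Sigma), into the
    (w, k, lam, m, s) parameters of the first formulation (a sketch)."""
    L = np.linalg.cholesky(Sigma)           # x = mu + L z, z standard normal
    S = L.T @ Q2 @ L                        # quadratic coefficient in z
    d, P = np.linalg.eigh((S + S.T) / 2)    # rotate z to diagonalize the form
    b = P.T @ (L.T @ (2 * Q2 @ mu + q1))    # linear coefficient, rotated frame
    c = mu @ Q2 @ mu + q1 @ mu + q0
    nz = np.abs(d) > tol                    # nonzero eigenvalues -> chi-squares
    shift = b[nz] / (2 * d[nz])             # complete the square per direction
    m = c - np.sum(d[nz] * shift**2)        # leftover constant -> normal mean
    s = np.linalg.norm(b[~nz])              # zero-eigenvalue part -> normal sd
    w, k, lam = [], [], []
    for dv in sorted(set(np.round(d[nz], 9))):   # group (nearly) equal weights
        grp = np.abs(d[nz] - dv) < 1e-8
        w.append(dv); k.append(grp.sum()); lam.append(np.sum(shift[grp]**2))
    return np.array(w), np.array(k, float), np.array(lam), m, s

# Hypothetical example; both formulations must give the same mean and variance.
mu = np.array([1.0, 0.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
Q2 = np.array([[1.0, 0.2], [0.2, -1.0]])    # indefinite: weights of both signs
q1 = np.array([1.0, -1.0])
q0 = 0.5
w, k, lam, m, s = gx2_params(mu, Sigma, Q2, q1, q0)
mean_parts = w @ (k + lam) + m
mean_direct = np.trace(Q2 @ Sigma) + mu @ Q2 @ mu + q1 @ mu + q0
var_parts = 2 * (w**2) @ (k + 2 * lam) + s**2
v = 2 * Q2 @ mu + q1
var_direct = 2 * np.trace(Q2 @ Sigma @ Q2 @ Sigma) + v @ Sigma @ v
```

Note the indefinite ${\displaystyle \mathbf {Q_{2}} }$ here yields weights of both signs, consistent with the positive-definiteness remark above.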

For the most general case, a reduction towards a common standard form can be made by using a representation of the following form:[5]

${\displaystyle X=(z+a)^{\mathrm {T} }A(z+a)+c^{\mathrm {T} }z=(x+b)^{\mathrm {T} }D(x+b)+d^{\mathrm {T} }x+e,}$

where D is a diagonal matrix and x is a vector of uncorrelated standard normal random variables.

## Computing the pdf/cdf/inverse cdf/random numbers

The probability density, cumulative distribution, and inverse cumulative distribution functions of a generalized chi-squared variable do not have simple closed-form expressions. However, numerical algorithms[5][2][6][4] and computer code (Fortran and C, Matlab, R, Python) have been published to evaluate some of these, and to generate random samples.
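To illustrate the kind of computation involved (this is a plain numerical sketch, not a reproduction of the published algorithms), the CDF can be approximated by Gil-Pelaez inversion of the characteristic function given above, with an extra ${\displaystyle e^{itm}}$ factor accounting for the normal mean. The function name and grid settings are this example's own; for large weights or degrees of freedom a continuous branch of the complex power must be tracked, which the published methods handle more carefully.

```python
import numpy as np

def gx2_cdf(x, w, k, lam, m, s, t_max=200.0, n=200_000):
    """Approximate CDF by Gil-Pelaez inversion:
    F(x) = 1/2 - (1/pi) * int_0^inf Im[phi(t) e^{-itx}]/t dt,
    using plain trapezoidal integration (a sketch)."""
    w, k, lam = (np.asarray(a, float) for a in (w, k, lam))
    t = np.linspace(t_max / n, t_max, n)          # grid avoiding t = 0
    r = 1 - 2j * np.outer(t, w)                   # shape (n, J): 1 - 2 i w_j t
    phi = (np.exp(1j * t * m - (s * t) ** 2 / 2
                  + 1j * t * np.sum(w * lam / r, axis=1))
           / np.prod(r ** (k / 2), axis=1))       # characteristic function
    g = (phi * np.exp(-1j * t * x)).imag / t      # Gil-Pelaez integrand
    integral = np.sum((g[1:] + g[:-1]) / 2) * (t[1] - t[0])
    return 0.5 - integral / np.pi

# chi-squared(2) is the special case w=[1], k=[2], lam=[0], m=s=0,
# whose CDF is 1 - exp(-x/2), so a direct check is possible:
p = gx2_cdf(2.0, [1.0], [2.0], [0.0], 0.0, 0.0)
```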

## Applications

The generalized chi-squared is the distribution of statistical estimates in cases where the usual statistical theory does not hold, as in the examples below.

### In model fitting and selection

If a predictive model is fitted by least squares, but the residuals have either autocorrelation or heteroscedasticity, then alternative models can be compared (in model selection) by relating changes in the sum of squares to an asymptotically valid generalized chi-squared distribution.[3]

### Classifying normal vectors using Gaussian discriminant analysis

If ${\displaystyle {\boldsymbol {x}}}$ is a normal vector, its log likelihood is a quadratic form of ${\displaystyle {\boldsymbol {x}}}$, and is hence distributed as a generalized chi-squared. The log likelihood ratio that ${\displaystyle {\boldsymbol {x}}}$ arises from one normal distribution versus another is also a quadratic form, so distributed as a generalized chi-squared.[4]

In Gaussian discriminant analysis, samples from multinormal distributions are optimally separated by using a quadratic classifier, a boundary that is a quadratic function (e.g. the curve defined by setting the likelihood ratio between two Gaussians to 1). The classification error rates of different types (false positives and false negatives) are integrals of the normal distributions within the quadratic regions defined by this classifier. Since this is mathematically equivalent to integrating a quadratic form of a normal vector, the result is an integral of a generalized chi-squared variable.[4]
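A minimal Monte Carlo sketch of this setup follows; the two Gaussians and the sample sizes are hypothetical. The error rates estimated here are exactly the integrals described above: the mass of each class on the wrong side of the quadratic boundary, i.e. tail probabilities of a generalized chi-squared variable.

```python
import numpy as np

def llr(x, mu0, S0, mu1, S1):
    """Log likelihood ratio log N(x; mu1, S1) - log N(x; mu0, S0): a
    quadratic form of x, hence generalized chi-squared under either class."""
    def logq(x, mu, S):
        d = x - mu
        return (-0.5 * np.einsum('...i,ij,...j->...', d, np.linalg.inv(S), d)
                - 0.5 * np.log(np.linalg.det(S)))
    return logq(x, mu1, S1) - logq(x, mu0, S0)

# Hypothetical pair of classes separated by a quadratic boundary {llr = 0}.
rng = np.random.default_rng(0)
mu0, S0 = np.zeros(2), np.eye(2)
mu1, S1 = np.array([2.0, 0.0]), np.diag([1.0, 2.0])
# false positives: class-0 mass inside {llr > 0}; false negatives: vice versa
fpr = np.mean(llr(rng.multivariate_normal(mu0, S0, 100_000), mu0, S0, mu1, S1) > 0)
fnr = np.mean(llr(rng.multivariate_normal(mu1, S1, 100_000), mu0, S0, mu1, S1) < 0)
```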

### In signal processing

The following application arises in the context of Fourier analysis in signal processing, renewal theory in probability theory, and multi-antenna systems in wireless communication. The common feature of these areas is that the sum of exponentially distributed variables is of importance (or equivalently, the sum of squared magnitudes of circularly-symmetric centered complex Gaussian variables).

If ${\displaystyle Z_{i}}$ are k independent, circularly-symmetric centered complex Gaussian random variables with variances ${\displaystyle \sigma _{i}^{2}}$, then the random variable

${\displaystyle {\tilde {Q}}=\sum _{i=1}^{k}|Z_{i}|^{2}}$

has a generalized chi-squared distribution of a particular form. The difference from the standard chi-squared distribution is that ${\displaystyle Z_{i}}$ are complex and can have different variances, and the difference from the more general generalized chi-squared distribution is that the relevant scaling matrix A is diagonal. If ${\displaystyle \mu =\sigma _{i}^{2}}$ for all i, then ${\displaystyle {\tilde {Q}}}$, scaled down by ${\displaystyle \mu /2}$ (i.e. multiplied by ${\displaystyle 2/\mu }$), has a chi-squared distribution, ${\displaystyle \chi ^{2}(2k)}$, also known as an Erlang distribution. If ${\displaystyle \sigma _{i}^{2}}$ have distinct values for all i, then ${\displaystyle {\tilde {Q}}}$ has the pdf[7]

${\displaystyle f(x;k,\sigma _{1}^{2},\ldots ,\sigma _{k}^{2})=\sum _{i=1}^{k}{\frac {e^{-{\frac {x}{\sigma _{i}^{2}}}}}{\sigma _{i}^{2}\prod _{j=1,j\neq i}^{k}\left(1-{\frac {\sigma _{j}^{2}}{\sigma _{i}^{2}}}\right)}}\quad {\text{for }}x\geq 0.}$
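This density can be checked numerically. The sketch below (function name and variance values are illustrative) implements the formula term by term and verifies that it integrates to 1 over the positive half-line.

```python
import numpy as np

def pdf_distinct(x, sig2):
    """pdf of sum_i |Z_i|^2 for distinct variances sig2, per the formula:
    sum_i exp(-x/sig_i^2) / (sig_i^2 * prod_{j != i} (1 - sig_j^2/sig_i^2))."""
    x = np.asarray(x, float)
    sig2 = np.asarray(sig2, float)
    out = np.zeros_like(x)
    for i, s2 in enumerate(sig2):
        others = np.delete(sig2, i)
        out += np.exp(-x / s2) / (s2 * np.prod(1.0 - others / s2))
    return out

# Hypothetical distinct variances; trapezoidal check that the density
# integrates to 1 (the tail beyond x = 200 is negligible here).
sig2 = [1.0, 2.0, 3.5]
xs = np.linspace(0.0, 200.0, 400_001)
y = pdf_distinct(xs, sig2)
area = np.sum((y[1:] + y[:-1]) / 2) * (xs[1] - xs[0])
```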

If some of the ${\displaystyle \sigma _{i}^{2}}$ are repeated, suppose the variables are divided into M groups, each sharing a common variance value. Denote by ${\displaystyle \mathbf {r} =(r_{1},r_{2},\dots ,r_{M})}$ the number of repetitions in each group, so that the mth group contains ${\displaystyle r_{m}}$ variables with variance ${\displaystyle \sigma _{m}^{2}.}$ Then ${\displaystyle {\tilde {Q}}}$ is a linear combination of independent ${\displaystyle \chi ^{2}}$-distributed random variables with different degrees of freedom:

${\displaystyle {\tilde {Q}}=\sum _{m=1}^{M}{\frac {\sigma _{m}^{2}}{2}}Q_{m},\quad Q_{m}\sim \chi ^{2}(2r_{m})\,.}$

The pdf of ${\displaystyle {\tilde {Q}}}$ is[8]

${\displaystyle f(x;\mathbf {r} ,\sigma _{1}^{2},\dots ,\sigma _{M}^{2})=\prod _{m=1}^{M}{\frac {1}{\sigma _{m}^{2r_{m}}}}\sum _{k=1}^{M}\sum _{l=1}^{r_{k}}{\frac {\Psi _{k,l,\mathbf {r} }}{(r_{k}-l)!}}(-x)^{r_{k}-l}e^{-{\frac {x}{\sigma _{k}^{2}}}},\quad {\text{ for }}x\geq 0,}$

where

${\displaystyle \Psi _{k,l,\mathbf {r} }=(-1)^{r_{k}-1}\sum _{\mathbf {i} \in \Omega _{k,l}}\prod _{j\neq k}{\binom {i_{j}+r_{j}-1}{i_{j}}}\left({\frac {1}{\sigma _{j}^{2}}}\!-\!{\frac {1}{\sigma _{k}^{2}}}\right)^{-(r_{j}+i_{j})},}$

with ${\displaystyle \mathbf {i} =[i_{1},\ldots ,i_{M}]^{T}}$ from the set ${\displaystyle \Omega _{k,l}}$ of all partitions of ${\displaystyle l-1}$ (with ${\displaystyle i_{k}=0}$) defined as

${\displaystyle \Omega _{k,l}=\left\{[i_{1},\ldots ,i_{M}]\in \mathbb {Z} ^{M};\sum _{j=1}^{M}i_{j}=l-1,\ i_{k}=0,\ i_{j}\geq 0{\text{ for all }}j\right\}.}$
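The index set ${\displaystyle \Omega _{k,l}}$ is finite and can be enumerated by brute force, which may help when implementing the pdf above. The function name below is this sketch's own.

```python
from itertools import product

def omega(M, k, l):
    """Enumerate Omega_{k,l}: integer vectors i of length M with nonnegative
    entries summing to l - 1 and i_k = 0 (k is 1-indexed as in the text)."""
    n = l - 1
    return [v for v in product(range(n + 1), repeat=M)
            if sum(v) == n and v[k - 1] == 0]

# e.g. M = 3 groups, k = 1, l = 3: all ways to place l - 1 = 2 units
# on the indices other than the first
parts = omega(3, 1, 3)
```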