
In statistics, generalized least squares (GLS) is a method used to estimate the unknown parameters in a linear regression model when there is a certain degree of correlation between the residuals in the regression model. In such cases, ordinary least squares and weighted least squares can be statistically inefficient or give misleading inferences. GLS was first described by Alexander Aitken in 1935.

## Method outline

In standard linear regression models one observes data $\{y_{i},x_{ij}\}_{i=1,\dots ,n,\ j=2,\dots ,k}$ on n statistical units. The response values are placed in a vector $\mathbf {y} =\left(y_{1},\dots ,y_{n}\right)^{\mathsf {T}}$, and the predictor values are placed in the design matrix $\mathbf {X} =\left(\mathbf {x} _{1}^{\mathsf {T}},\dots ,\mathbf {x} _{n}^{\mathsf {T}}\right)^{\mathsf {T}}$, where $\mathbf {x} _{i}=\left(1,x_{i2},\dots ,x_{ik}\right)$ is a vector of the k predictor variables (including a constant) for the ith unit. The model forces the conditional mean of $\mathbf {y}$ given $\mathbf {X}$ to be a linear function of $\mathbf {X}$ and assumes the conditional variance of the error term given $\mathbf {X}$ is a known nonsingular covariance matrix $\mathbf {\Omega }$. This is usually written as

$\mathbf {y} =\mathbf {X} \mathbf {\beta } +\mathbf {\varepsilon } ,\qquad \operatorname {E} [\varepsilon \mid \mathbf {X} ]=0,\ \operatorname {Cov} [\varepsilon \mid \mathbf {X} ]=\mathbf {\Omega } .$ Here $\beta \in \mathbb {R} ^{k}$ is a vector of unknown constants (known as “regression coefficients”) that must be estimated from the data.

Suppose $\mathbf {b}$ is a candidate estimate for $\mathbf {\beta }$ . Then the residual vector for $\mathbf {b}$ will be $\mathbf {y} -\mathbf {X} \mathbf {b}$ . The generalized least squares method estimates $\mathbf {\beta }$ by minimizing the squared Mahalanobis length of this residual vector:

${\begin{aligned}\mathbf {\hat {\beta }} &={\underset {b}{\operatorname {argmin} }}\,(\mathbf {y} -\mathbf {X} \mathbf {b} )^{\mathsf {T}}\mathbf {\Omega } ^{-1}(\mathbf {y} -\mathbf {X} \mathbf {b} )\\&={\underset {b}{\operatorname {argmin} }}\,\mathbf {y} ^{\mathsf {T}}\,\mathbf {\Omega } ^{-1}\mathbf {y} +(\mathbf {X} \mathbf {b} )^{\mathsf {T}}\mathbf {\Omega } ^{-1}\mathbf {X} \mathbf {b} -\mathbf {y} ^{\mathsf {T}}\mathbf {\Omega } ^{-1}\mathbf {X} \mathbf {b} -(\mathbf {X} \mathbf {b} )^{\mathsf {T}}\mathbf {\Omega } ^{-1}\mathbf {y} \,,\end{aligned}}$ where the last two terms are scalars and transposes of one another, hence equal, resulting in

$\mathbf {\hat {\beta }} ={\underset {b}{\operatorname {argmin} }}\,\mathbf {y} ^{\mathsf {T}}\,\mathbf {\Omega } ^{-1}\mathbf {y} +\mathbf {b} ^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {\Omega } ^{-1}\mathbf {X} \mathbf {b} -2\mathbf {b} ^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {\Omega } ^{-1}\mathbf {y} \,.$ This objective is a quadratic form in $\mathbf {b}$.

Taking the gradient of this quadratic form with respect to $\mathbf {b}$ and equating it to zero (when $\mathbf {b} ={\hat {\beta }}$) gives

$2\mathbf {X} ^{\mathsf {T}}\mathbf {\Omega } ^{-1}\mathbf {X} {\hat {\beta }}-2\mathbf {X} ^{\mathsf {T}}\mathbf {\Omega } ^{-1}\mathbf {y} =0.$ Therefore, the minimum of the objective function can be computed, yielding the explicit formula:

$\mathbf {\hat {\beta }} =\left(\mathbf {X} ^{\mathsf {T}}\mathbf {\Omega } ^{-1}\mathbf {X} \right)^{-1}\mathbf {X} ^{\mathsf {T}}\mathbf {\Omega } ^{-1}\mathbf {y} .$ The quantity $\mathbf {\Omega } ^{-1}$ is known as the precision matrix, a generalization of the diagonal weight matrix used in weighted least squares.
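To make the formula concrete, the following sketch computes the estimator numerically with NumPy. The function name, the toy data, and the assumed covariance matrix are illustrative and not part of the article.

```python
import numpy as np

def gls_estimate(X, y, Omega):
    """Closed-form GLS: beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} y."""
    Omega_inv = np.linalg.inv(Omega)                  # precision matrix
    XtOi = X.T @ Omega_inv
    return np.linalg.solve(XtOi @ X, XtOi @ y)

# Illustrative data: n = 5 units, k = 2 regressors (intercept + one predictor)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(5), rng.normal(size=5)])
Omega = np.diag([1.0, 2.0, 0.5, 1.5, 1.0])            # assumed known, nonsingular error covariance
y = X @ np.array([1.0, 2.0]) + rng.multivariate_normal(np.zeros(5), Omega)
print(gls_estimate(X, y, Omega))                      # estimate of the regression coefficients
```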

### Properties

The GLS estimator is unbiased, consistent, efficient, and asymptotically normal with $\operatorname {E} [{\hat {\beta }}\mid \mathbf {X} ]=\beta$ and $\operatorname {Cov} [{\hat {\beta }}\mid \mathbf {X} ]=(\mathbf {X} ^{\mathsf {T}}\Omega ^{-1}\mathbf {X} )^{-1}$. GLS is equivalent to applying ordinary least squares to a linearly transformed version of the data. To see this, factor $\mathbf {\Omega } =\mathbf {C} \mathbf {C} ^{\mathsf {T}}$, for instance using the Cholesky decomposition. Pre-multiplying both sides of the equation $\mathbf {y} =\mathbf {X} \mathbf {\beta } +\mathbf {\varepsilon }$ by $\mathbf {C} ^{-1}$ gives an equivalent linear model $\mathbf {y} ^{*}=\mathbf {X} ^{*}\mathbf {\beta } +\mathbf {\varepsilon } ^{*}$, where $\mathbf {y} ^{*}=\mathbf {C} ^{-1}\mathbf {y}$, $\mathbf {X} ^{*}=\mathbf {C} ^{-1}\mathbf {X}$, and $\mathbf {\varepsilon } ^{*}=\mathbf {C} ^{-1}\mathbf {\varepsilon }$. In this model $\operatorname {Var} [\varepsilon ^{*}\mid \mathbf {X} ]=\mathbf {C} ^{-1}\mathbf {\Omega } \left(\mathbf {C} ^{-1}\right)^{\mathsf {T}}=\mathbf {I}$, where $\mathbf {I}$ is the identity matrix. Thus one can efficiently estimate $\mathbf {\beta }$ by applying ordinary least squares (OLS) to the transformed data, which requires minimizing

$\left(\mathbf {y} ^{*}-\mathbf {X} ^{*}\mathbf {b} \right)^{\mathsf {T}}(\mathbf {y} ^{*}-\mathbf {X} ^{*}\mathbf {b} )=(\mathbf {y} -\mathbf {X} \mathbf {b} )^{\mathsf {T}}\,\mathbf {\Omega } ^{-1}(\mathbf {y} -\mathbf {X} \mathbf {b} ).$ This has the effect of standardizing the scale of the errors and “de-correlating” them. Since the transformed model has homoscedastic, uncorrelated errors, the Gauss–Markov theorem applies to the OLS fit on the transformed data, and therefore the GLS estimate is the best linear unbiased estimator for β.
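The equivalence can be verified numerically. The sketch below (NumPy; the data and the AR(1)-style covariance are illustrative assumptions) whitens the data with a Cholesky factor of Ω, fits OLS to the transformed variables, and checks that the result matches the direct GLS formula.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Illustrative nonsingular covariance with correlated errors (AR(1)-like pattern)
Omega = 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
y = X @ np.array([1.0, 2.0]) + rng.multivariate_normal(np.zeros(n), Omega)

C = np.linalg.cholesky(Omega)          # Omega = C C'
y_star = np.linalg.solve(C, y)         # y* = C^{-1} y
X_star = np.linalg.solve(C, X)         # X* = C^{-1} X

beta_ols_star, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)   # OLS on transformed data
Oi = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)            # direct GLS formula
print(np.allclose(beta_ols_star, beta_gls))                       # True: the two coincide
```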

## Weighted least squares

 Main article: Weighted least squares

A special case of GLS called weighted least squares (WLS) occurs when all the off-diagonal entries of Ω are 0. This situation arises when the variances of the observed values are unequal (i.e., heteroscedasticity is present) but there is no correlation among the errors. The weight for unit i is proportional to the reciprocal of the variance of the response for unit i.
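As a sketch (NumPy; the variances and data are illustrative assumptions), WLS amounts to plugging a diagonal Ω into the GLS formula, i.e., weighting each observation by the reciprocal of its error variance.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
X = np.column_stack([np.ones(n), rng.normal(size=n)])
sigma2 = rng.uniform(0.5, 3.0, size=n)            # unequal error variances (heteroscedasticity)
y = X @ np.array([0.5, -1.0]) + rng.normal(scale=np.sqrt(sigma2))

W = np.diag(1.0 / sigma2)                         # Omega^{-1}: reciprocal-variance weights
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta_wls)
```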

## Feasible generalized least squares

If the covariance of the errors $\Omega$ is unknown, one can get a consistent estimate of $\Omega$, say ${\widehat {\Omega }}$, using an implementable version of GLS known as the feasible generalized least squares (FGLS) estimator.

In FGLS, modeling proceeds in two stages:

(1) the model is estimated by OLS or another consistent (but inefficient) estimator, and the residuals are used to build a consistent estimator of the errors' covariance matrix (to do so, one often needs to impose additional structure on the model; for example, if the errors follow a time series process, a statistician generally needs some theoretical assumptions on this process to ensure that a consistent estimator is available); and

(2) using the consistent estimator of the covariance matrix of the errors, one can implement GLS ideas.

Whereas GLS is more efficient than OLS under heteroscedasticity (also spelled heteroskedasticity) or autocorrelation, this is not true for FGLS. The feasible estimator is asymptotically more efficient (provided the errors' covariance matrix is consistently estimated), but for a small or medium-sized sample it can actually be less efficient than OLS. This is why some authors prefer to use OLS and reformulate their inferences by simply considering an alternative estimator for the variance of the estimator that is robust to heteroscedasticity or serial autocorrelation. However, for large samples FGLS is preferred over OLS under heteroskedasticity or serial correlation. A cautionary note is that the FGLS estimator is not always consistent. One case in which FGLS might be inconsistent is if there are individual-specific fixed effects.

In general this estimator has different properties than GLS. For large samples (i.e., asymptotically) all properties are (under appropriate conditions) common with respect to GLS, but for finite samples the properties of FGLS estimators are unknown: they vary dramatically with each particular model, and as a general rule their exact distributions cannot be derived analytically. For finite samples, FGLS may be less efficient than OLS in some cases. Thus, while GLS can be made feasible, it is not always wise to apply this method when the sample is small. A method used to improve accuracy of the estimators in finite samples is to iterate, i.e., to take the residuals from FGLS to update the errors' covariance estimator and then update the FGLS estimation, applying the same idea iteratively until the estimators vary less than some tolerance. But this method does not necessarily improve the efficiency of the estimator very much if the original sample was small. A reasonable option when samples are not too large is to apply OLS, but discard the classical variance estimator

$\sigma ^{2}(X'X)^{-1}$ (which is inconsistent in this framework) and instead use a HAC (Heteroskedasticity and Autocorrelation Consistent) estimator. For example, in the context of autocorrelation we can use the Bartlett estimator (often known as the Newey–West estimator, since these authors popularized its use among econometricians in their 1987 Econometrica article), and in heteroscedastic contexts we can use the Eicker–White estimator. This approach is much safer, and it is the appropriate path to take unless the sample is large, where "large" is sometimes a slippery issue (e.g. if the error distribution is asymmetric the required sample will be much larger).
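As a minimal sketch of this approach (NumPy; the data and the helper function white_robust_cov are illustrative assumptions), the Eicker–White heteroskedasticity-consistent covariance can be computed from OLS residuals; a Newey–West/Bartlett estimator would additionally include kernel-weighted autocovariance terms.

```python
import numpy as np

def white_robust_cov(X, y):
    """OLS coefficients with the Eicker-White (HC0) covariance:
    (X'X)^{-1} X' diag(u_hat^2) X (X'X)^{-1}."""
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    u_hat = y - X @ beta_ols                          # OLS residuals
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (u_hat[:, None] ** 2 * X)            # X' diag(u_hat^2) X
    return beta_ols, XtX_inv @ meat @ XtX_inv

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(scale=1 + np.abs(X[:, 1]))   # heteroscedastic errors
beta, V_robust = white_robust_cov(X, y)
print(beta, np.sqrt(np.diag(V_robust)))               # coefficients and robust standard errors
```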

The ordinary least squares (OLS) estimator is calculated by

${\widehat {\beta }}_{\text{OLS}}=(X'X)^{-1}X'y$ and estimates of the residuals ${\widehat {u}}_{j}=(y-X{\widehat {\beta }}_{\text{OLS}})_{j}$ are constructed.

For simplicity, consider the model for heteroscedastic and non-autocorrelated errors. Assume that the variance-covariance matrix $\Omega$ of the error vector is diagonal, or equivalently that errors from distinct observations are uncorrelated. Then each diagonal entry may be estimated from the fitted residuals ${\widehat {u}}_{j}$, so ${\widehat {\Omega }}_{\text{OLS}}$ may be constructed by

${\widehat {\Omega }}_{\text{OLS}}=\operatorname {diag} ({\widehat {\sigma }}_{1}^{2},{\widehat {\sigma }}_{2}^{2},\dots ,{\widehat {\sigma }}_{n}^{2}).$ It is important to notice that the squared residuals cannot be used in the previous expression; we need an estimator of the errors' variances. To do so, we can use a parametric heteroskedasticity model or a nonparametric estimator. Once this step is fulfilled, we can proceed:

Estimate ${\widehat {\beta }}_{FGLS1}$ using ${\widehat {\Omega }}_{\text{OLS}}$ by weighted least squares:

${\widehat {\beta }}_{FGLS1}=(X'{\widehat {\Omega }}_{\text{OLS}}^{-1}X)^{-1}X'{\widehat {\Omega }}_{\text{OLS}}^{-1}y$ The procedure can be iterated. The first iteration is given by

${\widehat {u}}_{FGLS1}=y-X{\widehat {\beta }}_{FGLS1}$

${\widehat {\Omega }}_{FGLS1}=\operatorname {diag} ({\widehat {\sigma }}_{FGLS1,1}^{2},{\widehat {\sigma }}_{FGLS1,2}^{2},\dots ,{\widehat {\sigma }}_{FGLS1,n}^{2})$

${\widehat {\beta }}_{FGLS2}=(X'{\widehat {\Omega }}_{FGLS1}^{-1}X)^{-1}X'{\widehat {\Omega }}_{FGLS1}^{-1}y$

This estimation of ${\widehat {\Omega }}$ can be iterated to convergence.
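A compact sketch of this iterated FGLS procedure follows (NumPy). The variance model used to turn residuals into variance estimates, regressing the log of squared residuals on the regressors, i.e. assuming $\operatorname{Var}(\varepsilon_i)=\exp(x_i'\gamma)$, is an illustrative parametric choice and not prescribed by the article; the function name fgls and the simulated data are likewise assumptions.

```python
import numpy as np

def fgls(X, y, n_iter=5):
    """Iterated FGLS under an assumed multiplicative variance model
    Var(eps_i) = exp(x_i' gamma), with gamma estimated by regressing
    log(u_hat^2) on X."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)          # stage 1: OLS
    for _ in range(n_iter):
        u = y - X @ beta                              # residuals from the current fit
        gamma = np.linalg.lstsq(X, np.log(u**2 + 1e-12), rcond=None)[0]
        sigma2_hat = np.exp(X @ gamma)                # fitted error variances (diagonal of Omega_hat)
        w = 1.0 / sigma2_hat                          # Omega_hat^{-1} as per-observation weights
        XtW = X.T * w                                 # X' diag(w)
        beta = np.linalg.solve(XtW @ X, XtW @ y)      # stage 2: weighted least squares
    return beta

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
sigma = np.exp(0.3 * X[:, 1])                         # heteroscedastic error scale
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=sigma)
print(fgls(X, y))
```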

Under regularity conditions, the FGLS estimator (or its iterated version, after a finite number of iterations) is asymptotically distributed as

${\sqrt {n}}({\hat {\beta }}_{FGLS}-\beta )\ \xrightarrow {d} \ {\mathcal {N}}\!\left(0,\,V\right),$ where n is the sample size and

$V=\operatorname {plim} \left(X'\Omega ^{-1}X/n\right)^{-1},$ where $\operatorname {plim}$ denotes a limit in probability.