Why does generalized least squares require a known set of variances for the error terms?

1 Answer
Apr 14, 2016

Simply, the mathematics of extending Ordinary Least Squares (OLS) to the generalized case (and likewise to non-linear extensions) requires the variance-covariance structure of the errors to be known in order for the estimator to be calculable: that structure enters the formula directly and, unlike a constant error variance, does not cancel out.

Explanation:
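
In symbols, for the linear model y = Xβ + ε with E[ε] = 0 and Var(ε) = M (the matrix the quote below refers to), GLS minimizes the weighted sum of squares

S(β) = (y − Xβ)′ M⁻¹ (y − Xβ),

and the minimizer has the closed form

β̂_GLS = (X′ M⁻¹ X)⁻¹ X′ M⁻¹ y.

Because M appears explicitly in this formula, its entries (the error variances and covariances) must be known, at least up to a common scale factor, before the estimator can be computed. Only in the special case M = σ²I does the matrix cancel, leaving the ordinary OLS solution β̂_OLS = (X′X)⁻¹X′y, which needs no knowledge of σ².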

“If the variance is not homoskedastic [homoskedastic = the variance of the errors is constant], the matrix M will have non-constant entries in the diagonal and hence the matrix M will not drop out during the minimization process.

Likewise, if the observations exhibit correlation (typically autocorrelation) the off-diagonal elements of M will be non-zero also. In linear models it is still possible to obtain a closed form estimate of the parameters in this case.”
http://www.statsref.com/HTML/index.html?least_squares.html
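
To unpack the quote's three cases in terms of the entries of M (a schematic; M = Var(ε) is the n × n variance-covariance matrix of the errors):

- Homoskedastic, uncorrelated errors: M = σ²I. Every diagonal entry is the same constant σ² and every off-diagonal entry is zero, so σ² factors out of S(β) and OLS never needs to know its value.
- Heteroskedastic errors: M = diag(σ₁², …, σₙ²). The diagonal entries differ, so M⁻¹ no longer factors out of the minimization and the individual variances must be supplied.
- Correlated (e.g. autocorrelated) errors: the off-diagonal entries are non-zero as well, so the full matrix, not just its diagonal, has to be known.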

Provided the error variance-covariance matrix is known (at least up to a scale factor), the GLS estimator is unbiased, consistent, efficient, and asymptotically normal. GLS is equivalent to applying ordinary least squares to a linearly transformed ("whitened") version of the data.
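
A minimal numerical sketch of that equivalence in NumPy (the simulated data, the diagonal covariance matrix, and all variable names are this example's assumptions, chosen only to illustrate the algebra):

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: y = X @ beta + e with heteroskedastic, uncorrelated errors.
    n = 200
    X = np.column_stack([np.ones(n), rng.uniform(1, 10, n)])
    beta_true = np.array([2.0, 0.5])
    sigma2 = 0.2 * X[:, 1] ** 2            # error variance grows with the regressor
    Omega = np.diag(sigma2)                # known error variance-covariance matrix (the "M" above)
    y = X @ beta_true + rng.normal(0.0, np.sqrt(sigma2))

    # Route 1 -- direct GLS: beta = (X' Omega^-1 X)^-1 X' Omega^-1 y
    Omega_inv = np.diag(1.0 / sigma2)
    beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)

    # Route 2 -- whiten the data with P = L^-1, where Omega = L L', then run plain OLS.
    L = np.linalg.cholesky(Omega)
    P = np.linalg.inv(L)
    beta_ols_transformed, *_ = np.linalg.lstsq(P @ X, P @ y, rcond=None)

    print(beta_gls)                        # both routes give the same coefficients
    print(beta_ols_transformed)

The two routes coincide because the whitening matrix satisfies P′P = Omega⁻¹, which is exactly the weighting the GLS formula applies; the transformation can only be constructed if Omega (equivalently, the set of error variances) is known.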

“Classical conditions need not hold in practice. Although these conditions have no effect on the OLS method per se, they do affect the properties of the OLS estimators and resulting test statistics. In particular, when the elements of 'y' have unequal variances and/or are correlated, var(y) is no longer a scalar variance-covariance matrix, and hence there is no guarantee that the OLS estimator is the most efficient within the class of linear unbiased (or the class of unbiased) estimators.

Moreover, hypothesis testing based on the standard OLS estimator of the variance-covariance matrix becomes invalid. In practice, we hardly know the true properties of 'y'.

A drawback of the GLS method is that it is difficult to implement. In practice, certain structures (assumptions) must be imposed on var(y) so that a feasible GLS estimator can be computed.”
http://homepage.ntu.edu.tw/~ckuan/pdf/et01/et_Ch4.pdf
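
To illustrate that last point, here is a hypothetical two-step feasible GLS sketch in NumPy. The imposed structure, that the error variance is proportional to the square of the regressor, is this example's own assumption (exactly the kind of structure the quote says must be imposed), not something taken from the sources:

    import numpy as np

    rng = np.random.default_rng(1)

    # The true error variance grows with x, but is NOT known to the analyst.
    n = 500
    x = rng.uniform(1, 10, n)
    X = np.column_stack([np.ones(n), x])
    y = X @ np.array([2.0, 0.5]) + rng.normal(0.0, 0.4 * x)   # error sd proportional to x

    # Step 1: plain OLS, ignoring the heteroskedasticity.
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols

    # Step 2: impose a structure on var(y).  Here we ASSUME var(e_i) = c * x_i^2
    # and estimate c from the squared OLS residuals; this assumption is what
    # makes the estimator "feasible" rather than exact GLS.
    c_hat = np.mean(resid**2 / x**2)
    sigma2_hat = c_hat * x**2

    # Step 3: weighted (feasible GLS) fit using the estimated variances.
    W = np.diag(1.0 / sigma2_hat)          # estimated M^-1
    beta_fgls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

    print("OLS :", beta_ols)
    print("FGLS:", beta_fgls)

The assumed variance structure is what makes the estimator computable at all; if that structure is badly wrong, the reweighted estimates can be less efficient than plain OLS and their usual standard errors unreliable.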

http://halweb.uc3m.es/esp/Personal/personas/durban/esp/web/notes/gls.pdf