Why is the ordinary least squares method used in a linear regression?
If the Gauss-Markov assumptions hold, then OLS has the lowest variance of any linear unbiased estimator, which makes it the best linear unbiased estimator (BLUE).
Given these assumptions:
1. The model is linear in its parameters. This just means that
#beta_0# and #beta_1# enter the model linearly; the #x# variables themselves do not have to be linear, so a regressor can be #x^2#, for example.
2. The data come from a random sample.
3. There is no perfect multicollinearity, so no #x# variable is an exact linear combination of the others.
4. Zero conditional mean: #E(u | x_j)=0#, meaning that the #x_j# variables provide no information about the mean of the unobserved error #u#.
5. Homoskedasticity: the variance of the error #u# is the same for any given level of the #x# variables.
Then OLS is the Best Linear Unbiased Estimator (BLUE): among all linear unbiased estimators, it has the smallest variance.
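As a concrete illustration, here is a minimal numpy sketch of OLS via the normal equations #hat(beta) = (X^T X)^(-1) X^T y#. The data-generating values (intercept 2, slope 3, standard normal errors) are made up for illustration:

```python
import numpy as np

# Simulated data satisfying the assumptions above:
# y = 2 + 3x + u, with u independent of x (zero conditional mean)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
u = rng.normal(0, 1, size=200)
y = 2.0 + 3.0 * x + u

# OLS minimises the sum of squared residuals ||y - Xb||^2;
# the closed-form solution is b = (X'X)^{-1} X'y
X = np.column_stack([np.ones_like(x), x])     # intercept column + regressor
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # [beta_0 hat, beta_1 hat]

print(beta_hat)  # estimates close to the true (2, 3)
```

With 200 observations the estimates land close to the true parameters, and because #u# is drawn independently of #x#, the zero-conditional-mean assumption holds by construction.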
If you have this additional assumption:
- The errors #u# are normally distributed
Then the OLS estimator becomes the best unbiased estimator overall, regardless of whether the competing estimator is linear or non-linear.
What this essentially means is that if assumptions 1-5 hold, then OLS has the lowest variance of any linear unbiased estimator, and if assumptions 1-6 hold, it has the lowest variance of any unbiased estimator.
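The "lowest variance" claim can be checked by simulation. The sketch below (setup values are assumed for illustration) compares the OLS slope against another linear unbiased estimator, the two-endpoint estimator #(y_n - y_1)/(x_n - x_1)#; both average out to the true slope, but Gauss-Markov says OLS must have the smaller spread:

```python
import numpy as np

# Monte Carlo comparison of two linear unbiased slope estimators
rng = np.random.default_rng(1)
n, reps = 50, 2000
x = np.sort(rng.uniform(0, 10, size=n))  # regressors held fixed across replications

ols, endpoint = [], []
for _ in range(reps):
    y = 2.0 + 3.0 * x + rng.normal(0, 1, size=n)
    # OLS slope: sample covariance over sample variance
    ols.append(np.cov(x, y, bias=True)[0, 1] / np.var(x))
    # Rival linear unbiased estimator: slope through the two endpoints
    endpoint.append((y[-1] - y[0]) / (x[-1] - x[0]))

print(np.mean(ols), np.mean(endpoint))  # both close to the true slope 3
print(np.std(ols) < np.std(endpoint))   # OLS has the smaller spread
```

Both estimators are unbiased (their averages sit near 3), but the OLS slope has a visibly smaller standard deviation, which is exactly what BLUE promises.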