What is the residual sum of squares?

Dec 5, 2016

It's the remaining variance unaccounted for by explainable sources of variation in the data.

Explanation:

All data sets have what's known as a "total sum of squares" (or perhaps a "corrected total sum of squares"), which is usually denoted something like $SS_{\text{Total}}$ or $SS_T$. This is the grand sum of all the squared data values (minus a "mean"-based correction factor, if you're using the corrected $SS_T$). $SS_T$ quantifies the total amount of variance for any given data set.
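As a minimal sketch (with made-up data values), here is how the uncorrected and corrected total sums of squares relate:

```python
# Assumed toy data set for illustration.
data = [2.0, 3.5, 4.0, 5.5]
n = len(data)
mean = sum(data) / n

# Grand sum of all the squared data values (uncorrected).
ss_uncorrected = sum(y ** 2 for y in data)

# Corrected total sum of squares: squared deviations from the mean.
ss_total = sum((y - mean) ** 2 for y in data)

# The "mean-based correction factor" is n * mean**2:
# ss_total == ss_uncorrected - n * mean**2
```

The correction simply re-centers the data on its mean before squaring, so $SS_T$ measures spread rather than raw magnitude.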

Using some formulas, $SS_T$ can be split into other sums of squares: the sources that attempt to explain where all that variance in $SS_T$ comes from. These sources may be:

• regression (line slopes, like how a server's tips increase with the price of a meal), denoted $SS_R$;
• main effects (category averages, like how women tip more than men, female servers get more tips than male servers, etc.), denoted $SS_A$, $SS_B$, etc.;
• interaction effects between two explanatory variables (like how men tip more than women if their server is female), denoted $SS_{AB}$;
• lack of fit (repeated observations when all explanatory variables are the same, like if a customer dines at a restaurant twice with the same server), denoted $SS_{\text{LOF}}$;
• and many others.

Most of the time, these explainable sources do not account for all of the total variance in the data. We certainly hope they come close, but there is almost always a little bit of variance left over that has no explainable source.

This leftover bit is called the residual sum of squares or the sum of squares due to error and is usually denoted by $SS_{\text{Error}}$ or $SS_E$. It's the remaining variance in the data that can't be attributed to any of the other sources in our model.

We usually write an equation like this:

$SS_T = SS_{\text{Source 1}} + SS_{\text{Source 2}} + \ldots + SS_E$

It's that last term, the $SS_E$, that contains all the variance in the data that has no explainable source. It's the sum of all the squared distances between each observed data point and the point the model predicts at the corresponding explanatory values. These distances are also called the residuals, hence the term "residual sum of squares". In this way, $SS_E$ is the best value to help us estimate $\sigma^2$, the variance of the residuals.
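A hedged sketch of the decomposition, using a simple least-squares line and made-up meal-price/tip numbers (so the only explaining source is regression, $SS_T = SS_R + SS_E$):

```python
# Assumed toy data: meal price (x) and tip (y).
x = [10.0, 20.0, 30.0, 40.0]
y = [2.0, 3.0, 5.0, 6.0]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# Fit the least-squares line by hand.
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar
yhat = [intercept + slope * xi for xi in x]  # model predictions

# Total variance, variance the line explains, and the leftover.
ss_total = sum((yi - ybar) ** 2 for yi in y)
ss_regression = sum((yhi - ybar) ** 2 for yhi in yhat)
ss_error = sum((yi - yhi) ** 2 for yi, yhi in zip(y, yhat))  # residuals, squared and summed
# ss_total == ss_regression + ss_error (up to floating-point rounding)
```

Each term of `ss_error` is one squared residual: the vertical distance between an observed point and the point the line predicts at that same $x$.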

Note: $SS_E$ on its own does not estimate $\sigma^2$; we must first divide $SS_E$ by its degrees of freedom, $\mathrm{df}_E$, to get our "mean squared error":

$MS_E = \frac{SS_E}{\mathrm{df}_E}$
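As a quick illustration with assumed numbers: for a simple two-parameter line fit (slope and intercept) to $n$ observations, the residuals have $n - 2$ degrees of freedom, so:

```python
# Assumed values for illustration only.
n = 4             # number of observations
ss_error = 0.2    # residual sum of squares from some fit
df_error = n - 2  # two parameters estimated (slope, intercept)

mse = ss_error / df_error  # the mean squared error, our estimate of sigma^2
```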

Unfortunately, explaining degrees of freedom would make this answer a lot longer, so I have left it out for the sake of keeping this response (relatively) short.