What causes the null hypothesis to be rejected in an F-test?

1 Answer
Mar 3, 2016

The null hypothesis is rejected if the computed value of #F# falls outside the range bounded by the critical value(s) of the F-distribution at the chosen significance level.

Explanation:

An F-test is used to test if the variances of two populations are equal. Thus, the null hypothesis is that the two variances are equal:

#H_0: sigma_1^2 = sigma_2^2#

The statistic we define to test this is the ratio of the two variances:

#F = s_1^2/s_2^2#

where #s_1^2# and #s_2^2# are the sample variances. The further this ratio deviates from 1, the more likely it is that the underlying population variances are actually different. The F-distribution is used to quantify this for different sample sizes and for the significance level we require.
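As a concrete illustration, here is a minimal Python sketch of computing the F statistic from two samples. The sample values, and the variable names, are made up purely for this example:

```python
import numpy as np

# Hypothetical samples from the two populations (values are made up for illustration)
sample_1 = np.array([4.2, 5.1, 3.8, 4.9, 5.3, 4.4, 4.7])
sample_2 = np.array([3.9, 4.0, 4.1, 3.8, 4.2, 4.0])

# Sample variances, using ddof=1 for the unbiased (N - 1) estimator
s1_sq = np.var(sample_1, ddof=1)
s2_sq = np.var(sample_2, ddof=1)

# F statistic: ratio of the two sample variances
F = s1_sq / s2_sq
print("F =", F)
```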

We define #F_(alpha, N_1-1, N_2-1)# as the critical value of the F-distribution with #N_1-1# and #N_2-1# degrees of freedom and a significance level of #alpha#. This test can be a two-tailed test or a one-tailed test. The two-tailed version tests against the alternative that the variances are not equal.

The two-tailed test is arranged as follows (a code sketch follows the criteria below). Reject the null hypothesis if:

#F < F_(1-alpha//2, N_1-1, N_2-1)#

or

#F > F_(alpha//2, N_1-1, N_2-1)#
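Continuing the earlier snippet, here is one way the two-tailed decision rule could be applied in Python, using scipy.stats.f for the F-distribution quantiles. The value of #alpha# is an assumption for the example:

```python
from scipy.stats import f

alpha = 0.05                 # chosen significance level (assumption for this example)
df1 = len(sample_1) - 1      # N_1 - 1 degrees of freedom
df2 = len(sample_2) - 1      # N_2 - 1 degrees of freedom

# f.ppf(q, df1, df2) returns the quantile with area q to its LEFT, so the
# upper critical value F_(alpha/2, N_1-1, N_2-1) (area alpha/2 to the right)
# is f.ppf(1 - alpha/2, ...), and the lower one F_(1-alpha/2, ...) is f.ppf(alpha/2, ...).
lower_crit = f.ppf(alpha / 2, df1, df2)
upper_crit = f.ppf(1 - alpha / 2, df1, df2)

reject_h0 = (F < lower_crit) or (F > upper_crit)
print(lower_crit, upper_crit, reject_h0)
```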

The one-tailed versions test in only one direction; that is, the alternative is that the variance of the first population is either greater than or less than (but not both) the variance of the second population.
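For example, a one-tailed test against the alternative #sigma_1^2 > sigma_2^2# puts all of #alpha# in the upper tail, continuing the same sketch:

```python
# One-tailed test against H_a: sigma_1^2 > sigma_2^2: all of alpha goes in the upper tail
upper_crit_one_sided = f.ppf(1 - alpha, df1, df2)
reject_h0_one_sided = F > upper_crit_one_sided
print(upper_crit_one_sided, reject_h0_one_sided)
```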

Taken from:
http://www.itl.nist.gov/div898/handbook/eda/section3/eda359.htm