How is a z-score useful in comparing two different distributions?

1 Answer
Aug 16, 2015

This is a somewhat complicated and nuanced question.

First, one must know which hypothesis test is being performed. Also note that if the true distributions are known, no test is needed at all: their parameters are fixed constants, so comparing means is simply a matter of comparing those known values.

Where z-scores become most helpful is in comparing two samples to see whether they come from the same distribution. The z-score works best when the samples are drawn from normally distributed populations, but the Central Limit Theorem guarantees that, for large enough samples, the distribution of the sample mean approaches a normal distribution, so the comparison remains valid.
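Before getting to two-sample tests, the headline question is worth illustrating directly: a z-score puts values from different distributions on a common scale of "standard deviations from the mean." A minimal sketch, using made-up exam numbers (not from the answer above):

```python
def z_score(x, mean, sd):
    """Standardize x relative to its own distribution."""
    return (x - mean) / sd

# Hypothetical scores: an 82 on an exam with mean 75, sd 5,
# versus a 90 on an exam with mean 80, sd 10.
z1 = z_score(82, 75, 5)    # 1.4 standard deviations above its mean
z2 = z_score(90, 80, 10)   # 1.0 standard deviations above its mean
# The 82 is the more exceptional score relative to its own distribution,
# even though its raw value is lower.
```

The raw scores are not comparable because the distributions differ; the z-scores are.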

The calculations differ depending on whether the two samples are matched (paired) or unmatched (independent). In both cases, you compare the difference between Sample 1 and Sample 2 to a normal distribution with mean 0 and a standard error based on the sample standard deviation(s) and size(s). The major difference is how you calculate the standard error.
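The two standard-error calculations can be sketched as follows; the sample data are made up, and in practice the samples should be large enough for the normal approximation to hold:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def se_unmatched(xs, ys):
    """Unmatched (independent) samples: combine both sample variances."""
    return math.sqrt(sample_sd(xs) ** 2 / len(xs)
                     + sample_sd(ys) ** 2 / len(ys))

def se_matched(xs, ys):
    """Matched (paired) samples: take pairwise differences first,
    then compute the standard error of the mean difference."""
    diffs = [x - y for x, y in zip(xs, ys)]
    return sample_sd(diffs) / math.sqrt(len(diffs))

# Hypothetical example data:
sample1 = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0]
sample2 = [4.7, 4.8, 5.0, 4.6, 4.9, 4.7, 4.8, 4.9]
```

Note that the paired version uses the spread of the differences, which can be much smaller than either sample's spread when the pairs are correlated.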

Once you have the mean difference between the two samples (#bar(X)#) and the standard error #SE#, your z-statistic is #z = bar(X)/(SE)#. You can use this to calculate a p-value: for example, if #|z| > 1.96#, then the two-sided p-value is less than 0.05.
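A minimal sketch of the final step, computing the z-statistic and its two-sided p-value from the standard normal CDF (the mean difference and standard error below are illustrative numbers, not from the text):

```python
import math

def p_value_two_sided(z):
    """Two-sided p-value under the standard normal, via the error function:
    Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical values computed as described above:
mean_diff = 0.25   # bar(X), the mean difference between the samples
se = 0.10          # standard error

z = mean_diff / se          # z = 2.5
p = p_value_two_sided(z)    # |z| > 1.96, so p < 0.05
```

With |z| = 2.5, the two-sided p-value comes out near 0.012, consistent with the 1.96 cutoff for p < 0.05 quoted above.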
