A z-score indicates how many standard deviations a data point is from the mean. It is calculated with the following formula:
#z = (X - μ) / σ#, where #X# is the value of the data point, #μ# is the mean, and #σ# is the standard deviation.
Besides telling us where a data point lies within its data set relative to the mean, a z-score also allows comparisons of data points across different normal distributions. (For example, we can compare the scores a student obtained on two exams whose scores are normally distributed.)
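A minimal sketch of that comparison in Python (the exam means, standard deviations, and the score of 82 are invented for illustration):

```python
def z_score(x, mu, sigma):
    """Number of standard deviations x lies from the mean mu."""
    return (x - mu) / sigma

# Same raw score of 82 on two hypothetical exams with different distributions
exam_a = z_score(82, 75, 5)   # exam A: mean 75, sd 5  -> 1.4
exam_b = z_score(82, 70, 10)  # exam B: mean 70, sd 10 -> 1.2
print(exam_a, exam_b)
```

Even though the raw scores are identical, the z-scores show the result on exam A is farther above its mean, so it is the stronger performance relative to its distribution.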
This is a somewhat complicated and nuanced question.
First, one must know which hypothesis test is being performed. If the true distribution is known, its parameters are fixed constants, so the question reduces to comparing the sample mean directly against that known mean.
Where z-scores become most helpful is in comparing two samples to see whether they come from the same distribution. The z-score is most appropriate when the samples come from normal distributions, but the Central Limit Theorem guarantees that, for large enough samples, the distribution of the sample mean approaches a normal distribution anyway.
The calculations differ depending on whether the two samples are matched (paired) or unmatched. In both cases, you compare the difference between Sample 1 and Sample 2 to a normal distribution with mean 0 and a standard error based on the sample standard deviation(s) and size(s); the major difference is how you calculate the standard error.
Once you have the mean difference between the two samples (#bar(X)#) and the standard error SE, your z-statistic is #z = bar(X)/(SE)#. You can use this to calculate a p-value. For example, if #|z| > 1.96#, then the two-sided p-value is less than 0.05.
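The two cases can be sketched in Python as follows. The summary statistics below are invented purely for illustration, and `unpaired_z` / `paired_z` are just names chosen here, not a library API:

```python
import math
from statistics import NormalDist

def unpaired_z(mean1, sd1, n1, mean2, sd2, n2):
    """Unmatched samples: SE combines each sample's variance and size."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

def paired_z(diffs):
    """Matched samples: SE comes from the per-pair differences."""
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d)**2 for d in diffs) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n))

# Made-up summary statistics for two unmatched samples
z = unpaired_z(103.5, 15.0, 100, 100.0, 14.0, 120)
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
print(z, p)
```

Here #|z| < 1.96#, so with these invented numbers the difference would not reach the 0.05 significance level.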