# Assessing Normality

by Brian Steffen


## Key Questions

• I expect that what you would like to know is the mathematical/statistical meaning of the word "normal."

A "normal distribution" of data generally follows the Gaussian curve.

That is, the statistically more common occurrences are toward the middle, with occurrences becoming rarer and rarer toward both ends.

In geometry "normal" simply means perpendicular. Two normal surfaces are perpendicular to one another.

To "normalise" a set of non-uniform data, you would first set your standard limits and then rescale all data to within those limits, keeping their relative differences unchanged.

I am sure there are other meanings of the word; these are the ones I could come up with.
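The rescaling described above is often called min-max normalization. Here is a minimal sketch in plain Python (the function name and default limits are illustrative, not from the original answer):

```python
def min_max_normalize(data, lower=0.0, upper=1.0):
    """Rescale values into [lower, upper], preserving relative differences."""
    lo, hi = min(data), max(data)
    if hi == lo:
        # Degenerate case: all values equal, map everything to the lower limit
        return [lower for _ in data]
    scale = (upper - lower) / (hi - lo)
    return [lower + (x - lo) * scale for x in data]

values = [3, 8, 10, 15]
print(min_max_normalize(values))  # smallest maps to 0.0, largest to 1.0
```

Because the transformation is linear, the relative spacing between values is unchanged, only the scale and offset differ.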

• You may use descriptive statistics such as kurtosis and skewness.

But it is more appropriate to use a formal statistical test such as the Kolmogorov-Smirnov test or the Shapiro-Wilk test. You may use statistical software packages such as SPSS and SAS for easy analyses.
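If you prefer open-source tooling over SPSS or SAS, the same descriptive statistics and tests are available in Python's `scipy.stats` (a sketch, assuming scipy and numpy are installed; the simulated sample is illustrative):

```python
import numpy as np
from scipy import stats

# Simulated data standing in for a real measurement sample
rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=500)

# Descriptive statistics: both should be near 0 for a normal sample
print("skewness:", stats.skew(sample))
print("excess kurtosis:", stats.kurtosis(sample))

# Formal tests: a large p-value means no evidence against normality
w_stat, p_shapiro = stats.shapiro(sample)
ks_stat, p_ks = stats.kstest(sample, "norm")
print("Shapiro-Wilk p =", p_shapiro)
print("Kolmogorov-Smirnov p =", p_ks)
```

Note that with a known-normal sample like this one, both tests should typically fail to reject normality, while heavily skewed data would produce very small p-values.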

• The Normal curve is a very important distribution pattern in that many, if not most, statistical data sets seem to follow an approximately "Normal" pattern. Examples are height and weight measurements in humans, blood pressure, price indices, stock market indices, and many others.

The normal curve describes a distribution pattern shaped like a bell, with a single central peak located at the mean of the data set. In the Standard Normal Curve, the mean is located at zero (0). Note that there are infinitely many normal curves; the Standard Normal Curve is just one, and one of the most important. It is defined by the probability density function:

$$f(x) = \frac{1}{\sqrt{2\pi}} \, e^{-x^2/2}$$

Looking at either side of the mean, 50% of the observations fall below it and 50% are above it. Thus, the normal curve is said to be symmetric about its mean.
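The 50/50 symmetry claim is easy to check empirically (a sketch assuming numpy; the sample size and parameters are illustrative):

```python
import numpy as np

# Draw a large sample from a normal distribution (mean 10, sd 2)
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=100_000)

# Fraction of observations below the sample mean: should be close to 0.50
below = np.mean(sample < sample.mean())
print(f"fraction below the mean: {below:.3f}")
```

With 100,000 draws the fraction lands very close to 0.50, illustrating the symmetry of the normal curve about its mean.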

Why is it important to know if a data set is approximately normally distributed? The following are several reasons:

1. In the statistical sciences, a data set that is verified to be approximately normally distributed allows a researcher to use classical, that is, parametric, techniques to analyze it. For example, two-sample mean tests and analyses of variance require that the data be normally distributed. And if the data are, then the so-called parametric tests become the most "powerful" tests to use.

2. In Inferential statistics, many of the familiar parametric tests such as t-tests, F-tests, Chi-square tests, etc., have test statistics that can be derived from the normal distribution.
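The workflow implied by points 1 and 2 can be sketched in Python: check normality first, then choose a parametric test if it passes, or a non-parametric fallback otherwise (assuming scipy and numpy; the groups, sample sizes, and 0.05 cutoff are illustrative):

```python
import numpy as np
from scipy import stats

# Two simulated measurement groups standing in for real data
rng = np.random.default_rng(1)
group_a = rng.normal(loc=5.0, scale=1.0, size=60)
group_b = rng.normal(loc=5.5, scale=1.0, size=60)

# Verify approximate normality before trusting a parametric test
if stats.shapiro(group_a).pvalue > 0.05 and stats.shapiro(group_b).pvalue > 0.05:
    # Parametric route: two-sample t-test
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"two-sample t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
else:
    # Non-parametric fallback: Mann-Whitney U test
    u_stat, p_value = stats.mannwhitneyu(group_a, group_b)
    print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_value:.4f}")
```

The 0.05 threshold is a common convention, not a rule; with large samples even trivial departures from normality can produce small p-values, so visual checks (histograms, Q-Q plots) are a useful complement.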

