# Question #81194

##### 1 Answer

#### Answer:

It really depends on the experimental procedures.

#### Explanation:

In most experiments, there are repeats, and we believe that each trial is *independent* from others.

From the Law of Large Numbers, we know that as the number of trials increases, the *sample mean* (the measured average) tends toward the *expected value* (the true value we want to find in the experiment). Therefore, as the sample size increases, the result becomes more **accurate**.
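We can sketch this convergence with a small simulation. This is a hypothetical example, not tied to any particular experiment: `TRUE_VALUE` plays the role of the expected value, and each measurement is the true value plus independent Gaussian noise.

```python
import random

random.seed(0)

TRUE_VALUE = 10.0  # the "expected value" the experiment is trying to measure

def sample_mean(n):
    """Average of n independent noisy measurements (Gaussian noise, sd = 2)."""
    return sum(random.gauss(TRUE_VALUE, 2.0) for _ in range(n)) / n

# The error of the sample mean shrinks as the number of trials grows.
for n in (10, 1000, 100000):
    print(n, abs(sample_mean(n) - TRUE_VALUE))
```

With 100,000 trials the sample mean typically lands within a few thousandths of the true value, while with 10 trials it can easily be off by half a unit or more.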

Moreover, from the Central Limit Theorem, when the number of trials is sufficiently large, the distribution of the sample mean is approximately Normal (bell-curve shaped). As the sample size increases, the *variance* of the sample mean decreases. This means that the "bell" becomes thinner and taller, and the probability of finding the sample mean within a given distance of the expected value is higher. Therefore, as the sample size increases, the result becomes more **precise**.
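The shrinking variance can be checked numerically. The sketch below (again a made-up setup, with noise of standard deviation `SIGMA`) repeats the whole "experiment" many times for each sample size and compares the empirical variance of the sample mean against the theoretical value, which is the population variance divided by the number of trials:

```python
import random
import statistics

random.seed(1)

SIGMA = 2.0  # standard deviation of a single measurement

def mean_of(n):
    """Sample mean of n independent measurements centred on 0."""
    return sum(random.gauss(0.0, SIGMA) for _ in range(n)) / n

# Repeat the experiment 2000 times per sample size and measure how much
# the sample mean itself spreads out; theory predicts SIGMA**2 / n.
for n in (5, 50, 500):
    means = [mean_of(n) for _ in range(2000)]
    print(n, statistics.variance(means), SIGMA**2 / n)
```

The empirical variance tracks `SIGMA**2 / n` closely: multiplying the sample size by 10 cuts the variance of the sample mean by roughly a factor of 10, which is exactly the "thinner, taller bell" described above.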

However, care must be taken: the above describes only experiments with independent trials. To illustrate this point, consider a titration experiment in which a chemist draws 5 samples of analyte from the same beaker.

Unfortunately, the beaker was already contaminated. In this case, the 5 samples were not prepared independently. Even if more samples were taken from the same contaminated beaker, the chemist would get no closer to the "correct value". Taking more samples would, however, make the result more likely to be close to the "contaminated value".
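This failure mode can be sketched as a systematic bias that every sample shares. In the hypothetical simulation below, `BIAS` stands in for the contamination offset: averaging more samples converges on the shifted value, not the true one.

```python
import random

random.seed(2)

TRUE_VALUE = 10.0  # what the chemist actually wants to measure
BIAS = 1.5         # hypothetical contamination offset shared by every sample

def contaminated_mean(n):
    """Average of n samples drawn from the same contaminated beaker."""
    return sum(random.gauss(TRUE_VALUE + BIAS, 0.5) for _ in range(n)) / n

# More samples converge on the contaminated value (11.5), not the true one.
for n in (10, 10000):
    print(n, contaminated_mean(n))
```

With 10,000 samples the average is very tightly clustered (high precision), but it sits near 11.5 rather than 10.0 (low accuracy), which is precisely the distinction drawn in the conclusion.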

In this case, increasing the number of trials does not increase the **accuracy** of the result, but it will nevertheless increase its **precision**.