# How does probability differ from odds?

Mar 12, 2018

Odds may be compared with each other using multiplication and division (as "odds multipliers" or "odds ratios"), whereas probabilities cannot.
See below for explanation.

#### Explanation:

Probability might be thought of as the number of ways that an event might happen divided by the number of observations (trials) made to see whether the event has happened.

It simplifies the discussion without loss of rigour to consider the individual outcomes as independent Bernoulli trials which feature two possible (binary categorical) mutually exclusive outcomes. The outcomes might be regarded for example as "positive or negative", or as "success or failure".

Consider a series of $n$ independent trials in which $a$ outcomes are considered to belong to the positive group. That implies there must have been $n - a$ outcomes in the negative group.

Denoting the number of negative outcomes by $b$, it follows that

$a + b = n$ (given that $b$ was defined as $n - a$).

Denoting the probability of a positive outcome by $p$

$p = \frac{a}{n}$

You might note that this is equivalent to

$p = \frac{a}{a + b}$

By symmetry, denoting the probability of failure by $q$

$q = \frac{b}{n} = \frac{b}{a + b}$

Note that

$p + q = \frac{a}{a + b} + \frac{b}{a + b} = \frac{a + b}{a + b} = 1$

That is, $p + q = 1$ which implies $q = 1 - p$.

That is, the probabilities of all of the possible mutually exclusive outcomes sum to $1$.
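As a minimal sketch of the definitions so far, using invented counts for illustration:

```python
# Hypothetical counts: a positive outcomes and b negative outcomes
# out of n = a + b independent trials.
a, b = 7, 3
n = a + b

p = a / n  # probability of a positive outcome, a / (a + b)
q = b / n  # probability of a negative outcome, b / (a + b)

# The probabilities of the two mutually exclusive outcomes sum to 1.
print(p, q, p + q)
```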

Odds are defined as the number of positive outcomes divided not by the total number of observations but rather by the number of negative outcomes.

So with the terminology shown above, and denoting the odds of a positive outcome by $\mathrm{odds}_p$,

$\mathrm{odds}_p = \frac{a}{b}$

Similarly, denoting the odds of a negative outcome by $\mathrm{odds}_q$,

$\mathrm{odds}_q = \frac{b}{a}$

The two odds are related, just as the two probabilities are, but this time each is the reciprocal of the other. That is,

$\mathrm{odds}_q = \frac{b}{a} = \frac{1}{\frac{a}{b}} = \frac{1}{\mathrm{odds}_p}$

Similarly,

$\mathrm{odds}_p = \frac{1}{\mathrm{odds}_q}$.
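The same invented counts illustrate the definition of odds and the reciprocal relationship:

```python
# Hypothetical counts again: odds divide the positive outcomes by the
# negative outcomes, rather than by the total number of trials.
a, b = 7, 3

odds_p = a / b  # odds of a positive outcome
odds_q = b / a  # odds of a negative outcome

# The two odds are reciprocals, so their product is 1.
print(odds_p, odds_q, odds_p * odds_q)
```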

Noting that

$\mathrm{odds}_p = \frac{p}{q}$

and

$q = 1 - p$,

it might be seen that

$\mathrm{odds}_p = \frac{p}{1 - p}$

Rearranging yields the inverse function, giving probability as a function of odds:

$\mathrm{odds}_p = \frac{p}{1 - p}$

implies

$\mathrm{odds}_p \left(1 - p\right) = p$

which implies

$\mathrm{odds}_p - \mathrm{odds}_p \, p = p$

which implies

$\mathrm{odds}_p = p + \mathrm{odds}_p \, p$

which implies

$\mathrm{odds}_p = p \left(1 + \mathrm{odds}_p\right)$

which implies

$p = \frac{\mathrm{odds}_p}{1 + \mathrm{odds}_p}$
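The two conversions derived above can be sketched as a pair of functions (the function names are illustrative, not from any particular library):

```python
def prob_to_odds(p):
    # odds = p / (1 - p), valid for 0 <= p < 1
    return p / (1 - p)

def odds_to_prob(odds):
    # p = odds / (1 + odds), the inverse function derived above
    return odds / (1 + odds)

p = 0.7
odds = prob_to_odds(p)           # 0.7 / 0.3, roughly 2.333
print(odds, odds_to_prob(odds))  # the round trip recovers p
```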

The more interesting question is
"why have two different measures of the chance of observing some particular outcome?"

It is because odds allow a quantitative comparison of the relative chance that one of two things might occur, in a way that cannot be done using probability.

The easiest way to see this is by considering what might be meant by the claim that "the chance that one thing (A) might happen is twice the chance that some other thing (B) might happen".

If you were to use probability, and the probability of B were, for example, $p \left(B\right) = 0.7$, you might note that $2 \times 0.7 = 1.4$, which is not a valid measure of probability (values of $p$ must lie in the interval of $0$ to $1$).

The same problem does not arise when using odds. If the odds of observing B were, for example, $x$, then the odds of observing A may reasonably be calculated as twice those of observing B (that is, $2 x$, even if $2 x > 1$).

Consider the example for the situation shown above where the probability of observing B is 0.7.

$\mathrm{odds}_B = \frac{0.7}{1 - 0.7} = \frac{0.7}{0.3} \approx 2.333$

$\mathrm{odds}_A = 2 \times \mathrm{odds}_B = 2 \times \frac{0.7}{0.3} = \frac{1.4}{0.3} \approx 4.667$

Note that this value is greater than $1$. Odds lie in the range $0$ to $\infty$.

It is then possible to work back to the probability of observing A

$p \left(A\right) = \frac{\mathrm{odds}_A}{1 + \mathrm{odds}_A}$

$= \frac{\frac{1.4}{0.3}}{1 + \frac{1.4}{0.3}}$

$= \frac{\frac{1.4}{0.3}}{\frac{0.3 + 1.4}{0.3}}$

$= \frac{\frac{1.4}{0.3}}{\frac{1.7}{0.3}}$

$= \frac{1.4}{0.3} \times \frac{0.3}{1.7}$

That is,

$p \left(A\right) = \frac{1.4}{1.7} \approx 0.824$

Note that this probability is back in the required range of $0$ to $1$ (also note that it is not $2 \times p \left(B\right)$).
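The worked example above can be reproduced numerically:

```python
# Reproduce the worked example: double the odds of B (an odds
# multiplier of 2), then convert back to a probability for A.
p_B = 0.7
odds_B = p_B / (1 - p_B)     # 0.7 / 0.3, roughly 2.333
odds_A = 2 * odds_B          # roughly 4.667, greater than 1 is fine for odds
p_A = odds_A / (1 + odds_A)  # back in the interval [0, 1]
print(round(p_A, 3))
```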

The value of "$2$" in the above example is an example of an "odds multiplier".

It is also possible to calculate ratios of odds.

Odds are typically used to estimate the strength of some "risk factor" that increases the chance that some particular outcome will be observed. They are frequently used in medicine, for example to quantify the strength of the influence that some medical condition or lifestyle choice (expressed as a binary categorical discriminating factor of "present" or "not present") has on developing some particular disease. An example might be the risk that having some particular occupation leads to some particular disease. Some risk factors might be considered "protective", in that the same technique might be used to assess the risk of catching some particular infectious disease if some particular immunisation is given (the risk might be less than that of the population that did not receive the immunisation).
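A ratio of odds, as mentioned above, can be sketched from a 2x2 table of counts. All the counts below are invented purely for illustration:

```python
# Hypothetical 2x2 table: rows are exposed / unexposed to a risk factor,
# columns are diseased / healthy. Counts are made up for illustration.
exposed_diseased, exposed_healthy = 30, 70
unexposed_diseased, unexposed_healthy = 10, 90

odds_exposed = exposed_diseased / exposed_healthy        # odds of disease if exposed
odds_unexposed = unexposed_diseased / unexposed_healthy  # odds of disease if not exposed

# The odds ratio: greater than 1 suggests the exposure is a risk factor,
# less than 1 suggests it is protective.
odds_ratio = odds_exposed / odds_unexposed
print(round(odds_ratio, 3))
```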