In statistical computations, intuition can be very misleading
Guess Again

Even hardened scientists can make mistakes when interpreting statistics. A little mathematical experimentation can help prevent this, and quick simulations in Perl nicely illustrate and support the learning process.
If you hand somebody a die in a game of Ludo [1], and they throw a one on each of their first three turns, they are likely to become suspicious and check the sides of the die. That's just relying on intuition – but when can you scientifically demonstrate that the die is loaded (Figure 1)? After five throws that all come up as ones? After ten throws?
Each experiment with dice is a game of probabilities: what exactly happens is a product of chance. It is not so much the result of a single throw that matters, but the overall tendency. A player could throw a one three times in succession out of pure bad luck. Although the odds are low – (1/6)³, or about 0.5 percent – it still happens, and you would be ill advised to jump to conclusions about the die based on such a small number of attempts.
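To get a feel for these numbers, a quick Monte Carlo simulation helps. The following minimal sketch (not one of the article's listings, just an illustration) throws a fair die three times, repeats the experiment a million times, and counts how often all three throws come up as ones; the result hovers around the theoretical value of (1/6)³.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Throw a fair die three times, repeat the experiment many times,
# and count how often all three throws come up as ones.
my $trials   = 1_000_000;
my $all_ones = 0;

for ( 1 .. $trials ) {
    my $ones = grep { int( rand 6 ) + 1 == 1 } 1 .. 3;
    $all_ones++ if $ones == 3;
}

printf "Three ones in a row: %d of %d trials (%.3f%%)\n",
  $all_ones, $trials, 100 * $all_ones / $trials;
printf "Theoretical value:   %.3f%%\n", 100 / 6**3;
```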
The Value of p
For this experiment, a scientist would start by defining a so-called null hypothesis (e.g., "The die is fair" or "The medication shows no effect in patients"). On the basis of the test results, this hypothesis is later either rejected or retained. The mistake of rejecting a null hypothesis that is actually true is known to statisticians as a "Type I error" or an "Error of the first kind." Experiments define up front the maximum acceptable probability of this happening; this value is known as the significance level of the experiment.
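The same idea can be turned into a quick Monte Carlo estimate of the p-value. The sketch below uses made-up numbers rather than data from the article: it assumes a hypothetical observation of five ones in ten throws, simulates many ten-throw series under the null hypothesis of a fair die, and checks how often a result at least that extreme occurs. If the estimate falls below a significance level of 5 percent, the null hypothesis is rejected.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Null hypothesis: the die is fair. Hypothetical observation:
# 5 ones in 10 throws. Estimate the p-value, i.e. the probability
# of a result at least this extreme under the null hypothesis.
my $throws   = 10;
my $observed = 5;
my $trials   = 1_000_000;
my $extreme  = 0;

for ( 1 .. $trials ) {
    my $ones = grep { int( rand 6 ) + 1 == 1 } 1 .. $throws;
    $extreme++ if $ones >= $observed;
}

my $p_value = $extreme / $trials;
printf "Estimated p-value: %.4f\n", $p_value;
print $p_value < 0.05
  ? "Significant at the 5% level: reject the null hypothesis\n"
  : "Not significant at the 5% level: the data do not contradict it\n";
```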
[...]