Created by Kristoffer Magnusson

Type I and Type II errors, β, α, *p*-values, power and effect sizes – the ritual of null hypothesis significance testing contains many strange concepts.

Much has been said about significance testing – most of it negative.
Methodologists constantly point out that researchers misinterpret *p*-values. Some say that significance testing is at best a meaningless exercise and at worst an impediment to scientific discovery. Consequently, I believe it is extremely important that students and researchers interpret statistical tests correctly. This visualization is meant as an aid for students learning about statistical hypothesis testing. It is based on a one-sample Z-test. You can vary the sample size, power, significance level, and effect size using the sliders to see how the sampling distributions change.
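The relationship the sliders trace out can be sketched in closed form. Below is a minimal sketch of the rejection probability of a two-sided one-sample Z-test, using only Python's standard library; the function name and the example values (d = 0.5, n = 32) are my own choices for illustration, not something the visualization itself exposes.

```python
from statistics import NormalDist

def z_test_power(d, n, alpha=0.05):
    """Rejection probability of a two-sided one-sample Z-test
    when the true standardized effect size is d (sigma known)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)   # critical value under H0
    shift = d * n ** 0.5                # mean of the Z statistic under Ha
    return z.cdf(-z_crit + shift) + z.cdf(-z_crit - shift)

print(round(z_test_power(d=0.5, n=32), 3))  # ≈ 0.807
```

Changing any one of d, n, or α and solving for the rest is exactly the trade-off the sliders let you explore interactively.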


The visualization shows "power" and "Type II error" as "–" when *d* is set to zero. The Type I error rate still implies that a certain proportion of tests will reject H_{0}. It is tempting to call this proportion the test's "power", and textbooks and software frequently do just that. Some sources instead say that power is zero when H_{0} is equal to H_{a}. Both claims are incorrect: power is not defined when the true effect is an element of H_{0}'s parameter space. In that case the rejection probability equals α, so even though 5 % of the tests will reject the null, it does not make sense to talk about "power" here. This also implies that as H_{a} approaches H_{0}, the rejection probability approaches α for small values of *d*. As a result, the slider for "power" is not allowed to be equal to or less than α.
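The claim that the rejection probability tends to α (rather than to zero) as *d* shrinks can be checked numerically. This is a sketch under the same two-sided Z-test setup; n = 50 is an arbitrary choice for the example.

```python
from statistics import NormalDist

z = NormalDist()
alpha = 0.05
z_crit = z.inv_cdf(1 - alpha / 2)
n = 50

# Rejection probability of the two-sided Z-test as d shrinks toward 0.
# At d = 0 it equals alpha exactly, by construction of the critical value.
for d in (0.3, 0.1, 0.01, 0.0):
    shift = d * n ** 0.5
    p_reject = z.cdf(-z_crit + shift) + z.cdf(-z_crit - shift)
    print(f"d = {d:4}: rejection probability = {p_reject:.4f}")
```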

Here are some recommended books that discuss the issues of NHST.

- Cumming, G. (2012). *Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis*.
- Kline, R. B. (2013). *Beyond Significance Testing: Statistics Reform in the Behavioral Sciences* (2nd ed.).
- Abelson, R. P. (1995). *Statistics as Principled Argument*.

The distributions in the visualization show the theoretical sampling distribution under the null hypothesis (H_{0}) and the sampling distribution under the alternative hypothesis (H_{a}). Although this site is not meant as a first introduction to NHST, here is a quick summary of the core concepts.

Term | Explanation |
---|---|
α | The conditional probability of incorrectly rejecting H_{0} when it is actually true. |
β | The conditional probability of failing to reject H_{0} when it is false. |
Power | The complement of β (i.e. 1 − β); the probability of correctly rejecting H_{0} when it is false. |
H_{0} | The null hypothesis, usually stated as the population mean being zero, or as there being no difference. It does not, however, have to be stated as a zero or no-difference hypothesis. |
H_{a} | The alternative hypothesis, usually stated as the population mean being non-zero, or greater than or less than zero. |
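These conditional probabilities can also be approximated by simulation. The sketch below assumes a true effect of d = 0.5, n = 25, and σ known and equal to 1; all of these values are arbitrary choices for the example.

```python
import random
from statistics import NormalDist

random.seed(1)
z_crit = NormalDist().inv_cdf(0.975)   # two-sided test at alpha = 0.05

def rejection_rate(mu, n=25, trials=20_000):
    """Fraction of simulated one-sample Z-tests (sigma = 1) that reject H0: mu = 0."""
    rejections = 0
    for _ in range(trials):
        xbar = sum(random.gauss(mu, 1) for _ in range(n)) / n
        if abs(xbar) * n ** 0.5 > z_crit:
            rejections += 1
    return rejections / trials

print(rejection_rate(0.0))   # Type I error rate: close to alpha
print(rejection_rate(0.5))   # power; beta is 1 minus this value
```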

Here are some common misinterpretations of *p*-values and significance tests:

- A *p*-value does not give us the probability that our results are due to chance.
- If we reject H_{0} with α = 0.05, this does not mean that we are 95 % sure that the alternative hypothesis is true.
- Rejecting H_{0} with α = 0.05 does not mean that the probability that we have made a Type I error is 5 %.
- A *p*-value does not tell us that our findings are relevant, clinically significant, or of any scientific value whatsoever.
- A small *p*-value does not tell us that our results will replicate.
- A small *p*-value does not indicate a large treatment effect.
- Failing to reject the null hypothesis is not evidence of it being true.
- If our test has 80 % power and we *fail* to reject the null hypothesis, this does not mean that the probability that the null is true is 20 %.
- If our test has 80 % power and we *do* reject the null hypothesis, this does not mean that the probability that the alternative hypothesis is true is 80 %.
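The last two points follow from a bit of arithmetic with Bayes' rule: the probability that H_{0} is true given a rejection depends on the base rate of true nulls among the hypotheses we test, which α and power alone do not determine. The base rates below (50 % and 90 %) are arbitrary assumptions for illustration.

```python
# Illustration: P(H0 true | rejection) is not alpha.
alpha, power = 0.05, 0.80

for p_h0 in (0.5, 0.9):  # assumed prior probability that H0 is true
    p_reject = p_h0 * alpha + (1 - p_h0) * power
    p_h0_given_reject = p_h0 * alpha / p_reject
    print(f"P(H0) = {p_h0}: P(H0 | reject) = {p_h0_given_reject:.3f}")
```

Even in the generous 50/50 case the answer is not 0.05, and when most tested nulls are true it is far larger.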

I am deeply skeptical about the current use of significance tests. The following quotes might spark your interest in the controversies surrounding NHST.

"What's wrong with [null hypothesis significance testing]? Well, among many other things, it does not tell us what we want to know, and we so much want to know what we want to know that, out of desperation, we nevertheless believe that it does!" – Cohen (1994)

"… surely the most bone-headedly misguided procedure ever institutionalized in the rote training of science students" – Rozeboom (1997)

"… despite the awesome pre-eminence this method has attained in our journals and textbooks of applied statistics, it is based upon a fundamental misunderstanding of the nature of rational inference, and is seldom if ever appropriate to the aims of scientific research" – Rozeboom (1960)

"… an instance of a kind of essential mindlessness in the conduct of research" – Bakan (1966)

"Statistical significance testing retards the growth of scientific knowledge; it never makes a positive contribution" – Schmidt & Hunter (1997)

"The textbooks are wrong. The teaching is wrong. The seminar you just attended is wrong. The most prestigious journal in your scientific field is wrong." – Ziliak & McCloskey (2008)