Created by Kristoffer Magnusson

Type I and Type II errors, β, α, *p*-values, power and effect sizes – the ritual of null hypothesis significance testing contains many strange concepts.

Much has been said about significance testing – most of it negative.
Methodologists constantly point out that researchers misinterpret *p*-values. Some say that significance testing is at best a meaningless exercise and at worst an impediment to scientific discovery. Consequently, I believe it is extremely important that students and researchers correctly interpret statistical tests. This visualization is meant as an aid for students when they are learning about statistical hypothesis testing. The visualization is based on a one-sample Z-test. You can vary the sample size, power, significance level, and effect size using the sliders to see how the sampling distributions change.
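The quantities linked by the sliders can also be computed directly. Below is a minimal sketch of the power function for a two-sided one-sample Z-test, assuming a standardized effect size *d* (Cohen's *d*) and known variance; the function name and use of SciPy are my own choices, not part of the visualization's code.

```python
from scipy.stats import norm

def z_test_power(d, n, alpha=0.05):
    """Power of a two-sided one-sample Z-test.

    d     : standardized effect size (difference in SD units)
    n     : sample size
    alpha : significance level
    """
    z_crit = norm.ppf(1 - alpha / 2)   # two-sided critical value
    shift = d * n ** 0.5               # noncentrality of the test statistic
    # Probability of falling in either rejection region
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)
```

For example, with *d* = 0.5, *n* = 50, and α = 0.05, the power is roughly 0.94; increasing any of the three inputs moves the alternative sampling distribution further past the critical value.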


The visualization shows "power" and "Type II error" as "–" when *d* is set to zero. The Type I error rate still implies that a certain proportion of tests will reject H_{0}, and it is tempting to call this proportion the test's "power"; many textbooks and software packages do just that. Other sources say that power is zero when H_{0} is equal to H_{a}. Both claims are incorrect: power is not defined when the true effect is an element of H_{0}'s parameter space. In that case the power function returns α, so even though the power function says 5 % of the tests will reject the null, it does not make sense to talk about "power" here. This also implies that as H_{a} approaches H_{0}, power approaches α for small values of *d*. As a result, the slider for "power" is not allowed to be equal to or less than α.
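The behavior described above can be checked numerically. This is a small sketch, assuming the same two-sided one-sample Z-test as the visualization; the helper name `power_fn` is mine.

```python
from scipy.stats import norm

def power_fn(d, n, alpha=0.05):
    """Power function of a two-sided one-sample Z-test."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = d * n ** 0.5
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

# As d shrinks toward 0, the power function decreases toward alpha,
# and at d = 0 it returns exactly alpha, the Type I error rate.
for d in [0.10, 0.01, 0.0]:
    print(f"d = {d:.2f}: power function = {power_fn(d, n=50):.4f}")
```

At *d* = 0 the rejection probability is exactly α = 0.05, which is why labeling that value "power" is misleading: it is the Type I error rate, not the probability of detecting a true effect.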

If you enjoy my work, or maybe even use it in your own teaching, please consider supporting me on Patreon. My visualizations will always be free to use, but you can show your support by donating a dollar or two for every new visualization I create.

Here are some recommended books that discuss the issues of NHST.

- Cumming, G. (2012). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis.
- Kline, R. B. (2013). Beyond Significance Testing: Statistics Reform in the Behavioral Sciences (2nd ed.).
- Abelson, R. P. (1995). Statistics as Principled Argument.

I am deeply skeptical about the current use of significance tests. The following quotes might spark your interest in the controversies surrounding NHST.

“What's wrong with [null hypothesis significance testing]? Well, among many other things, it does not tell us what we want to know, and we so much want to know what we want to know that, out of desperation, we nevertheless believe that it does!”

“… surely the most bone-headedly misguided procedure ever institutionalized in the rote training of science students”

“… despite the awesome pre-eminence this method has attained in our journals and textbooks of applied statistics, it is based upon a fundamental misunderstanding of the nature of rational inference, and is seldom if ever appropriate to the aims of scientific research”

“… an instance of a kind of essential mindlessness in the conduct of research” – Bakan (1966)

“Statistical significance testing retards the growth of scientific knowledge; it never makes a positive contribution”

“The textbooks are wrong. The teaching is wrong. The seminar you just attended is wrong. The most prestigious journal in your scientific field is wrong.”