The visualization shows a Bayesian two-sample *t* test; for simplicity, the variance is assumed to be known. It illustrates both Bayesian estimation, via the posterior distribution of the effect, and Bayesian hypothesis testing, via the Bayes factor. The frequentist *p*-value is also shown. The null hypothesis H_{0} is that the effect δ = 0, and the alternative is H_{1}: δ ≠ 0, just like a two-tailed *t* test. You can use the sliders to vary the observed effect (Cohen's *d*), the sample size (*n* per group), and the prior on δ.

The **prior** on the effect is a scaled unit-information prior. The black and red circles on the curves mark the density at δ = 0 under the prior and the posterior, respectively. The ratio of these two densities is the Savage-Dickey density ratio, which I use here to compute the Bayes factor. The **p-value** is the traditional two-sided *p* value from the corresponding frequentist test (a *z* test, since the variance is assumed known).
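To make the calculation above concrete, here is a minimal sketch in Python of the Savage-Dickey Bayes factor and the frequentist *p*-value for this setup. It assumes a normal prior N(0, `prior_sd`²) on δ as a stand-in for the scaled unit-information prior, and uses the known-variance result that the observed Cohen's *d* is distributed N(δ, 2/*n*) with *n* per group; the function names are my own, not from the visualization's code.

```python
from scipy.stats import norm

def savage_dickey_bf01(d, n, prior_sd=1.0):
    """BF01 (evidence for H0 over H1) via the Savage-Dickey density ratio.

    d        : observed standardized effect (Cohen's d)
    n        : sample size per group
    prior_sd : SD of the normal prior on delta (illustrative stand-in
               for the scaled unit-information prior)
    """
    se2 = 2.0 / n                              # sampling variance of d, sigma known
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se2)
    post_mean = post_var * (d / se2)           # conjugate normal update
    # Savage-Dickey: posterior density at delta = 0 over prior density at 0
    return norm.pdf(0, post_mean, post_var**0.5) / norm.pdf(0, 0, prior_sd)

def p_value(d, n):
    """Two-sided p-value from the known-variance z test."""
    z = d / (2.0 / n) ** 0.5
    return 2 * norm.sf(abs(z))
```

With `d = 0` the posterior piles up near zero and BF01 exceeds 1 (evidence for the null), while a large observed effect drives BF01 below 1; `1 / savage_dickey_bf01(...)` gives BF10 if you prefer evidence for the alternative.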

Check out Alexander Etz's blog series "Understanding Bayes" for a really good introduction to Bayes factors. Fabian Dablander also wrote a really good post, "Bayesian statistics: why and how", which introduces Bayesian inference in general. If you're interested in an easy way to perform a Bayesian *t* test, check out JASP, or BayesFactor if you use R.

I've created some posters inspired by my interactive visualizations. You can find them on my Etsy shop.

Have any suggestions, or found any bugs? Send them to me; my contact info can be found here.