Here's a new visualization that shows the p-curve distribution when comparing the means of two independent samples for varying effect sizes. Many know that the distribution is uniform when the null is true, but what about when it isn't?
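The idea behind the visualization can be sketched with a small simulation (this is not the post's actual code; the function name, sample sizes, and repetition count are illustrative):

```python
import numpy as np
from scipy import stats

def simulate_p_values(effect, n=50, reps=5000, seed=1):
    """Simulate p-values from two-sample t-tests where the
    standardized mean difference (Cohen's d) equals `effect`."""
    rng = np.random.default_rng(seed)
    pvals = np.empty(reps)
    for i in range(reps):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        pvals[i] = stats.ttest_ind(a, b).pvalue
    return pvals

# Under the null (d = 0) the p-value distribution is uniform,
# so about 5 % of p-values fall below 0.05. Under a true effect
# the distribution is right-skewed toward small p-values.
p_null = simulate_p_values(0.0)
p_alt = simulate_p_values(0.5)
print(np.mean(p_null < 0.05))  # close to the nominal 0.05
print(np.mean(p_alt < 0.05))   # the test's power, much larger
```

Plotting a histogram of `p_alt` for several values of `effect` reproduces the p-curves the visualization shows.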
The notion is fairly widespread that wait-lists can act as a nocebo condition in psychotherapy trials. In this post I write about some recent results from a network meta-analysis that investigated this.
In this post I will use the theoretical and empirical sampling distribution of Cohen’s d to show the expected overestimation due to selective publishing. I will look at the overestimation for various sample sizes when the population effect is 0, 0.2, 0.5 and 0.8. The conclusion is that you should be wary of effect sizes from small samples, and that the issue is one of type M (magnitude) errors rather than type I errors. At least in clinical psychology, the pervasive problem is overestimation of effects, not falsely rejecting null hypotheses.
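The mechanism behind the overestimation can be sketched in a few lines: if only significant results are "published", the published effect sizes are a truncated sample, and small studies truncate hardest. This is a minimal simulation, not the post's actual code; the function name and parameter values are illustrative:

```python
import numpy as np
from scipy import stats

def published_d(true_d, n, reps=5000, alpha=0.05, seed=2):
    """Mean observed Cohen's d among 'published' results, i.e.
    two-sample comparisons that reached p < alpha."""
    rng = np.random.default_rng(seed)
    ds = []
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n)       # control group
        b = rng.normal(true_d, 1.0, n)    # treatment group
        if stats.ttest_ind(b, a).pvalue < alpha:
            # Cohen's d with the pooled standard deviation
            sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            ds.append((b.mean() - a.mean()) / sp)
    return np.mean(ds)

# With a true d of 0.2, a small study must observe a large d to
# reach significance, so the published average is badly inflated;
# a large study's published average sits much closer to the truth.
print(published_d(0.2, n=20))   # inflated well above 0.2
print(published_d(0.2, n=200))  # much closer to 0.2
```

The same truncation logic applies at d = 0.5 and 0.8, just less dramatically, which is what the sample-size comparison in the post illustrates.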
Last week a group of Dutch scientists published a study providing further evidence of mindfulness’ ability to bolster creativity. Specifically, they examined whether open awareness differed from focused attention in increasing divergent thinking.
The practice of classifying treatments as empirically supported has been widely debated for a long time. In this post I write about a recent article that raises several concerns and suggestions regarding the current use of EST criteria, which can be summarized as the current criteria being too lenient, a view I wholeheartedly agree with.