In this post I will use the theoretical and empirical sampling distribution of Cohen’s d to show the expected overestimation due to selective publishing. I will look at the overestimation for various sample sizes when the population effect is 0, 0.2, 0.5 and 0.8. The conclusion is that you should be wary of effect sizes from small samples, and that the issue lies more with type M (magnitude) errors than with type I errors. At least in clinical psychology, the pervasive problem is overestimation of effects rather than falsely rejecting null hypotheses.
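The mechanism behind this overestimation can be illustrated with a quick simulation (my own sketch, not the post's code): when the true effect is small and samples are small, only the studies that happen to draw an inflated estimate reach significance, so the "published" estimates overshoot the truth.

```python
# Sketch: simulate many two-group studies with a true effect of d = 0.2,
# keep only the "significant" ones, and compare the mean published
# estimate with the truth. Numbers here are illustrative assumptions.
import math
import random
import statistics

random.seed(1)

def one_study(true_d, n_per_group):
    """Simulate one two-sample study; return (d_hat, significant)."""
    g1 = [random.gauss(true_d, 1) for _ in range(n_per_group)]
    g2 = [random.gauss(0, 1) for _ in range(n_per_group)]
    s1, s2 = statistics.stdev(g1), statistics.stdev(g2)
    sp = math.sqrt((s1**2 + s2**2) / 2)          # pooled SD (equal n)
    d_hat = (statistics.mean(g1) - statistics.mean(g2)) / sp
    se = math.sqrt(2 / n_per_group)              # approximate SE of d
    significant = abs(d_hat / se) > 1.96         # approximate z test
    return d_hat, significant

true_d, n = 0.2, 25
results = [one_study(true_d, n) for _ in range(5000)]
published = [d for d, sig in results if sig]
print("mean of all estimates:     ", round(statistics.mean(d for d, _ in results), 2))
print("mean of 'published' (sig.):", round(statistics.mean(published), 2))
```

With n = 25 per group and a true d of 0.2, the significant subset averages well above the population value, which is exactly the type M error the post is about.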
Last week a group of Dutch scientists published a study providing further evidence of mindfulness’ ability to bolster creativity. Specifically, they looked at whether open awareness differed from focused attention in increasing divergent thinking.
The dodo bird might be extinct in the real world, but in the world of psychotherapy research it refuses to die. However, a group of German researchers recently put forward an article where they had randomized patients to either a PDT or a CBT condition and compared the relative efficacy of the two orientations, and they found that their results delivered a convincing blow to the dodo bird verdict.
When talking about confidence intervals, Jacob Cohen famously said: “I suspect that the main reason they are not reported is that they are so embarrassingly large!” (Cohen, 1994). In this post I’ll take a look at the relationship between the 95% CI for Cohen’s d and its corresponding sample size.
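To get a feel for how embarrassing those intervals are, here is a minimal sketch (my own, not the post's code) using the common large-sample standard error for Cohen's d, se(d) ≈ sqrt((n1 + n2)/(n1·n2) + d²/(2(n1 + n2))):

```python
# Sketch: approximate 95% CI for Cohen's d at various per-group sample
# sizes, using the large-sample standard error formula. The example
# values (d = 0.5, these n's) are illustrative assumptions.
import math

def d_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d via the large-sample SE."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

for n in (10, 20, 50, 100, 200):
    lo, hi = d_ci(0.5, n, n)
    print(f"n = {n:>3} per group: 95% CI [{lo:.2f}, {hi:.2f}] (width {hi - lo:.2f})")
```

For a medium effect of d = 0.5 with 10 participants per group, the interval spans roughly from a small negative effect to well beyond a large one; it takes around 100 per group before the interval even excludes zero with room to spare.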