Non-randomized comparisons are common in RCTs. In this post I show some examples of confounding and collider bias, using treatment adherence as an example. I present a small simulation study showing that common regression models used in clinical psychology make little sense, and that Bayesian instrumental variable regression can be fit easily using the R package brms.
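For readers who want a head start, here is a minimal sketch of the two-equation approach, assuming a data frame `d` with hypothetical columns `treatment` (randomized assignment, the instrument), `adherence`, and `outcome`:

```r
library(brms)

# Stage 1: randomized assignment (the instrument) predicts adherence
f_adherence <- bf(adherence ~ treatment)
# Stage 2: adherence predicts the outcome
f_outcome <- bf(outcome ~ adherence)

# Letting the residuals of the two equations correlate (set_rescor)
# absorbs the unmeasured confounding between adherence and outcome
fit <- brm(f_adherence + f_outcome + set_rescor(TRUE), data = d)
summary(fit)
```

The residual correlation is what distinguishes this from naively regressing the outcome on adherence; the post works through why the naive models mislead.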
In this post I use the theoretical and empirical sampling distribution of Cohen’s d to show the expected overestimation due to selective publishing. I look at the overestimation for various sample sizes when the population effect is 0, 0.2, 0.5, and 0.8. The conclusion is that you should be wary of effect sizes from small samples, and that the issue lies with type M (magnitude) errors rather than type I errors. At least in clinical psychology, the pervasive problem is overestimation of effects, not falsely rejecting null hypotheses.
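The intuition is easy to reproduce: simulate many small two-group studies, "publish" only the significant ones, and compare the mean published d with the truth. A minimal sketch, with per-group n, true effect, and number of simulations chosen arbitrarily for illustration:

```r
set.seed(123)
n <- 20        # per-group sample size
delta <- 0.2   # true standardized effect
n_sim <- 10000 # number of simulated studies

sims <- replicate(n_sim, {
  x <- rnorm(n, 0, 1)
  y <- rnorm(n, delta, 1)
  # Cohen's d using the pooled standard deviation
  sd_pooled <- sqrt(((n - 1) * var(x) + (n - 1) * var(y)) / (2 * n - 2))
  d <- (mean(y) - mean(x)) / sd_pooled
  p <- t.test(y, x, var.equal = TRUE)$p.value
  c(d = d, p = p)
})

mean(sims["d", ])                   # all studies: close to the true 0.2
mean(sims["d", sims["p", ] < 0.05]) # "published" studies only: clearly inflated
```

Conditioning on significance filters out the small estimates, so the published mean lands well above the true effect, a type M error rather than a type I error.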
Last week a group of Dutch scientists published a study providing further evidence of mindfulness’s ability to bolster creativity. Specifically, they looked at whether open awareness differed from focused attention in increasing divergent thinking.