Recently there have been a couple of meta-analyses investigating heterogeneous treatment effects by analyzing the ratio of the outcome variances in the treatment and control groups. The argument made in these articles is that if individuals differ in their response to treatment, then the observed variances in the treatment and control groups of RCTs should differ. In this post I explore this argument and offer a counterargument.
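The core of the argument, and one version of the counterargument, can be illustrated with a toy simulation (a hypothetical sketch in Python rather than R; all numbers are invented). A constant additive effect leaves the variance ratio at 1, and independent individual-level effects inflate it, but effects that are negatively correlated with baseline can leave the ratio at 1 despite real heterogeneity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
baseline = rng.normal(0, 1, n)  # control-arm outcomes, variance 1

# 1) Homogeneous effect: everyone improves by exactly 0.5 -> ratio ~1
homog = baseline + 0.5
# 2) Heterogeneous effects independent of baseline (SD = 0.5) -> ratio ~1.25
heterog = baseline + rng.normal(0.5, 0.5, n)
# 3) Heterogeneous effects negatively correlated with baseline, scaled so
#    the treated-arm variance stays at 1: (1 - 0.3)^2 + 0.51 = 1
corr = baseline + (0.5 - 0.3 * baseline + rng.normal(0, np.sqrt(0.51), n))

for arm in (homog, heterog, corr):
    print(round(np.var(arm) / np.var(baseline), 2))
```

Scenario 3 is the crux: equal variances do not rule out heterogeneous treatment effects, because individual effects may covary with baseline levels.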
The term "treatment response" is easy to understand, yet it is often used when causal language is clearly unwarranted. In this post I present a non-technical example of how a naïve subgroup analysis leads to the wrong conclusion that a subgroup of patients consists of treatment non-responders.
In this post I show how to make marginal inferences on the untransformed scale when using multilevel models with a non-linear transformation applied to the dependent variable (using a log transformation as an example). Cluster-specific and population-average (conditional versus marginal) effects are compared, both as average effects on the untransformed scale and as relative (multiplicative) effects.
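The conditional-versus-marginal distinction can be seen without any model fitting (a minimal Monte Carlo sketch in Python rather than R; the parameter values are made up). Back-transforming the mean on the log scale recovers the typical (median-like) subject, while the population-average mean on the raw scale needs the variance components added in:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, n_per = 2000, 50
mu, sd_u, sd_e = 1.0, 0.5, 0.3  # intercept, cluster SD, residual SD (log scale)

u = rng.normal(0, sd_u, n_clusters)  # cluster random intercepts
log_y = mu + np.repeat(u, n_per) + rng.normal(0, sd_e, n_clusters * n_per)
y = np.exp(log_y)

# Naively exponentiating the mean of log(Y) gives the conditional/typical value
naive = np.exp(log_y.mean())
# The marginal (population-average) mean needs the lognormal correction
marginal = np.exp(mu + (sd_u**2 + sd_e**2) / 2)
print(round(naive, 2), round(marginal, 2), round(y.mean(), 2))
```

The Monte Carlo mean of `y` agrees with the analytic marginal mean, not with the back-transformed mean of `log_y`, which is why marginal inferences on the untransformed scale require more than exponentiating the fixed effects.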
The next version of powerlmm (0.4.0) will soon be released. Besides bug fixes, this version also includes several new simulation features. In this post I show two examples that cover the major new features.
Non-randomized comparisons are common in RCTs. In this post I show some examples of confounding and collider bias, using treatment adherence as an example. I present a small simulation study showing that common regression models used in clinical psychology make little sense, and that Bayesian instrumental variable regression can easily be fit using the R package brms.
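The basic problem with conditioning on adherence, and how randomization can serve as an instrument, can be sketched in a few lines (a hypothetical Python simulation, not the brms model from the post; the data-generating values are invented). Adherence depends on an unobserved confounder, so the "as-treated" regression is biased, while a simple Wald-type IV estimate recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
z = rng.integers(0, 2, n)                # randomized assignment (instrument)
u = rng.normal(0, 1, n)                  # unobserved confounder (e.g., severity)
# Adherence: only the treated arm can adhere, and adherence depends on u
a = ((z == 1) & (u + rng.normal(0, 1, n) > 0)).astype(float)
y = -1.0 * a + u + rng.normal(0, 1, n)   # true effect of treatment-as-taken = -1

# Naive "as-treated" regression of y on a is confounded by u
naive = np.polyfit(a, y, 1)[0]
# Wald/IV estimate: ITT effect divided by the difference in adherence rates
iv = (y[z == 1].mean() - y[z == 0].mean()) / (a[z == 1].mean() - a[z == 0].mean())
print(round(naive, 2), round(iv, 2))
```

Because adherers have higher `u` on average, the naive slope is pulled toward zero, while the IV estimate sits near the true effect of -1.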
Last week a group of Dutch scientists published a study providing further evidence of mindfulness’ ability to bolster creativity. Specifically, they looked at whether open awareness differed from focused attention in increasing divergent thinking.