## Articles in the R category

The next version of powerlmm (0.4.0) will soon be released. Besides bug fixes, this version also includes several new simulation features. In this post I show two examples that cover the major new features.

Read more
My R package 'powerlmm' has now been updated to version 0.3.0. It adds support for a more flexible effect size specification.

Read more
This post contains the slides from a talk I gave recently at Stockholm University.

Read more
My R package 'powerlmm' has now been updated to version 0.2.0. It contains several improvements and new features.

Read more
Non-randomized comparisons are common in RCTs. In this post I show some examples of confounding and collider bias, using treatment adherence as an example. I present a small simulation study showing that common regression models used in clinical psychology make little sense, and that Bayesian instrumental variable regression can easily be fit using the R package brms.
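As a taste of the approach in that post, an instrumental variable model can be expressed in brms as a multivariate formula: one equation predicting the endogenous variable from the instrument, and one predicting the outcome, with correlated residuals. The variable names below (`z` for randomization, `x` for adherence, `y` for outcome) are placeholders, not the ones used in the post.

```r
library(brms)

# Hypothetical variables: z = randomization (instrument),
# x = treatment adherence (endogenous), y = outcome.
# Correlated residuals (set_rescor) capture the confounding
# between adherence and outcome.
iv_formula <- bf(x ~ z) + bf(y ~ x) + set_rescor(TRUE)

# fit <- brm(iv_formula, data = d)  # fitting compiles a Stan model
```

Fitting is commented out here since `brm()` compiles and samples a Stan model; the formula object alone shows the model structure.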

Read more
In this post I compare the performance of Amazon EC2 instances against my HP workstation and my MacBook Pro when running Monte Carlo simulations.

Read more
Over the summer I've been working on finishing my new R package 'powerlmm', which is now almost complete. It provides flexible power calculations for typical two- and three-level longitudinal linear mixed models, with unbalanced treatment groups and cluster sizes, as well as missing data and random slopes at both the subject and cluster level.

Read more
This post explains how Cohen's d relates to the proportion of overlap between two normal distributions, and why I use a different measure than Cohen does in my Cohen's d visualization.
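For two unit-variance normal distributions whose means differ by d, the overlapping coefficient (the proportion of density the two distributions share) reduces to a one-line expression. A minimal sketch, using a helper name (`overlap`) of my own choosing:

```r
# Overlapping coefficient (OVL) for two normal distributions with
# equal (unit) variance whose means are separated by Cohen's d:
# the proportion of probability density the distributions share.
overlap <- function(d) 2 * pnorm(-abs(d) / 2)

overlap(0)    # identical distributions: overlap is 1
overlap(0.8)  # a "large" effect still overlaps about 69%
```

Even at d = 0.8, conventionally a large effect, roughly two-thirds of the two distributions overlap.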

Read more
I often get asked about how to fit different longitudinal models in lme/lmer. In this post I cover several different two-level, three-level and partially nested models.
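As an illustration of the kind of model specification covered there, here is a sketch of a two-level longitudinal model in lme4, with a random intercept and time slope per subject. The data are simulated with hypothetical variable names (`y`, `time`, `treatment`, `subject`), not taken from the post.

```r
library(lme4)

set.seed(1)
# Simulated long-format data: 50 subjects, 5 time points,
# half the subjects in the treatment group.
d <- expand.grid(subject = 1:50, time = 0:4)
d$treatment <- as.integer(d$subject > 25)
d$y <- 0.5 * d$time - 0.3 * d$time * d$treatment +
  rnorm(50)[d$subject] + rnorm(nrow(d))

# Two-level model: fixed time-by-treatment interaction,
# random intercept and time slope for each subject.
fit <- lmer(y ~ time * treatment + (time | subject), data = d)
```

Three-level and partially nested variants extend this by adding cluster-level random terms such as `(time | cluster)`.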

Read more
In this post I will use the theoretical and empirical sampling distribution of Cohen's d to show the expected overestimation due to selective publishing. I will look at the overestimation for various sample sizes when the population effect is 0, 0.2, 0.5 and 0.8. The conclusion is that you should be wary of effect sizes from small samples, and that the issue is one of Type M (magnitude) errors rather than Type I errors. At least in clinical psychology, the pervasive problem is overestimation of effects, not false rejection of the null hypothesis.

Read more