Estimating treatment effects and ICCs from (G)LMMs on the observed scale using Bayes, Part 1: lognormal models

When a multilevel model includes a non-linear transformation of the response variable (such as a log-transformation), or of the expectations via a GLM link function, the interpretation of the results differs from that of a standard Gaussian multilevel model in two ways: the estimates are on a transformed scale and not in the original units, and the effects no longer refer to the average effect in the population; instead they are conditional/cluster-specific. In this post, I will deal with linear mixed-effects models (LMMs) that use a log-transformed outcome variable. This is the first part of a three-part tutorial on some of the finer details of (G)LMMs, and on how Bayes can make your (frequentist) life easier. Across the posts I will focus on:

  • Log-transformation of the dependent variable in a multilevel model.
  • Calculating both multiplicative effects (% change) and differences on the untransformed scale.
  • Subject-specific/cluster-specific versus population-average effects (conditional versus marginal effects).
  • Intraclass correlations on the transformed/link/latent scale versus the response/data/original scale.

In part 2 I will cover a GLMM with a binary outcome, and part 3 will focus on semicontinuous (hurdle/two-part) models, where the outcome is a skewed continuous variable that includes zeros.

The aim of this post is also to show:

  • How simulation-based approaches like MCMC make it much easier to make inferences about transformed parameters.
  • How the estimates from a multilevel model can be transformed to answer the same questions as population-average models or fixed effects models.

Data-generating model: a lognormal LMM

As an example, we will use a simple hierarchical design with clusters nested within either a treatment or a control condition. This is a classic 2-level multilevel model:

log(y_ij) = β0 + β1·TX_j + u_j + e_ij,
u_j ~ N(0, σ_u²),
e_ij ~ N(0, σ_e²).

In this model, the natural logarithm of the outcome y_ij is normally distributed conditional on the cluster-specific effect u_j and the treatment indicator TX_j. Let's assume y_ij is the average money (in USD) lost gambling per week (since I do research on gambling disorder). Gambling expenditure is sometimes quite well described by a lognormal distribution, or a (generalized) Gamma distribution. Importantly, the methods in this post can quite easily be tweaked to work with other response distributions. Each treatment arm has several clusters, and each cluster contributes multiple observations. In this example, clusters refer to e.g. schools, centers, or groups, and the first level consists of subjects. However, it would be easy to adapt the contents of this post to a longitudinal model where clusters refer to subjects and the subjects have repeated observations.

Simulate data

Let’s create a simple function that generates data from this model
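The post's original R code is not preserved in this text version. As a stand-in, here is a rough equivalent generator sketched in Python (function and argument names are my own, not the post's):

```python
import numpy as np

def gen_data(n_clusters, n_subjects, b0, b1, sd_cluster, sd_log, rng):
    """Simulate the 2-level lognormal model:
    log(y_ij) = b0 + b1*TX_j + u_j + e_ij,
    with u_j ~ N(0, sd_cluster) and e_ij ~ N(0, sd_log)."""
    tx = np.repeat([0, 1], n_clusters * n_subjects)              # treatment indicator per row
    cluster = np.repeat(np.arange(2 * n_clusters), n_subjects)   # cluster ids
    u = rng.normal(0, sd_cluster, 2 * n_clusters)                # cluster effects (log scale)
    e = rng.normal(0, sd_log, tx.size)                           # residuals (log scale)
    y = np.exp(b0 + b1 * tx + u[cluster] + e)                    # back-transform to USD
    return tx, cluster, y

rng = np.random.default_rng(1)
tx, cluster, y = gen_data(n_clusters=500, n_subjects=30,
                          b0=np.log(500), b1=np.log(0.5),
                          sd_cluster=0.5, sd_log=0.5, rng=rng)
# The control-arm sample mean should be near exp(b0 + (0.5**2 + 0.5**2)/2) ≈ 642
print(round(y[tx == 0].mean(), 1))
```

With many clusters, the simulated control-arm mean lands close to the marginal mean derived later in the post.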

The model parameters are β0 = log(500), β1 = log(0.5), σ_u = 0.5, and σ_e = 0.5 (the same values as "Sim 1" in the simulation study at the end of this post).

Fit the model

We can fit this model using brms (or rstanarm, lme4, etc).

Using the notation of our 2-level model, we get these results

| Parameter | Estimate |
| --- | --- |
| β0 (intercept) | 6.21 |
| β1 (treatment) | -0.70 |
| σ_u (cluster SD, log scale) | 0.64 |
| σ_e (residual SD, log scale) | 0.51 |

Interpreting the parameters

All of the estimates are on the log scale, and they have no immediate interpretation on the natural scale. To complicate things further, we should remind ourselves that the effects from a (G)LMM have cluster-specific interpretations ("subject-specific" in a longitudinal model where clusters = subjects). Researchers who work mostly with LMMs sometimes forget this, since for LMMs the cluster-specific effects are equal to the population-average (marginal) effects. A non-linear transformation, whether of the data (like the log-transformation) or via the link function in a GLMM, means that back-transforming the model's estimates from e.g. the log scale does not give the expected values on the natural scale (cf. Jensen's inequality).

In the figure below I try to illustrate both the cluster-specific nature of the model and the transformation between the log scale and the natural data scale, without being too technical.

[Figure: lognormal multilevel LMM, visualizing cluster-specific and population-average values]

How to calculate marginal effects on the data scale

So how do we derive the numbers presented in the figure above? Well, for a lognormal distribution with log mean μ and log SD σ we have:

  • mean: exp(μ + σ²/2)
  • median: exp(μ)

If we want to get the marginal mean, we need to average this back-transformation over the random-effects distribution, since, as noted earlier, simply back-transforming β0 (the expected mean on the log scale) will not give the expected mean on the original scale (money lost).

The correct expectation is

E(y_ij | TX_j = 0) = ∫ exp(β0 + u + σ_e²/2) φ(u; 0, σ_u) du,

where φ(u; 0, σ_u) is the normal density of the cluster effects.

This integral is easily solved using numerical integration.

In this case, we can also derive an exact and simple solution using the moment-generating function of the normal distribution (see Proof 1: left as an exercise to the reader 🤓),

E(y_ij | TX_j = 0) = exp(β0 + (σ_u² + σ_e²)/2).

A third option is Monte Carlo integration, where we simply simulate a large number of cluster means on the log scale, back-transform them to the natural scale, and then take the average. For multivariate integrals (e.g. models with correlated random effects) this is often the simplest solution that also works well in practice, although the cubature package offers a fast solution using adaptive numerical integration.
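The post's R code for these three approaches is not preserved here; the same computation can be sketched in Python (using the true simulation values β0 = log 500, σ_u = σ_e = 0.5; variable names are mine):

```python
import numpy as np
from scipy import stats, integrate

b0, sd_u, sd_log = np.log(500), 0.5, 0.5

# 1. Exact: use the moment-generating function of the normal cluster effects
exact = np.exp(b0 + sd_u**2 / 2 + sd_log**2 / 2)

# 2. Numerical integration: average exp(b0 + u + sd_log^2/2) over u ~ N(0, sd_u)
f = lambda u: np.exp(b0 + u + sd_log**2 / 2) * stats.norm.pdf(u, 0, sd_u)
num, _ = integrate.quad(f, -10 * sd_u, 10 * sd_u)

# 3. Monte Carlo integration: simulate many cluster effects and average
rng = np.random.default_rng(2)
mc = np.exp(b0 + rng.normal(0, sd_u, 1_000_000) + sd_log**2 / 2).mean()

print(exact, num, mc)  # all approximately 642
```

All three land on the same marginal mean (up to Monte Carlo error), which is the value shown in the figure.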

We see that all three methods give the same result (within Monte Carlo error).

Can we interpret the cluster-specific values?

So, if the marginal expected value in the control group is 642 USD, while exp(β0 + σ_e²/2) = 567 USD, is there a meaningful interpretation of 567? As shown in the figure above, this is the expected value for the cluster at the center of the distribution of cluster means on the log scale. Since percentiles are invariant under monotonic transformations (and the mean equals the median for the normal distribution), this cluster (with u_j = 0) is also at the median on the natural scale.

So to summarize, the cluster-specific means are

E(y_ij | TX_j, u_j = 0) = exp(β0 + β1·TX_j + σ_e²/2),

and the expected marginal mean in each treatment group is

E(y_ij | TX_j) = exp(β0 + β1·TX_j + σ_u²/2 + σ_e²/2).

Average treatment effects on the data scale

Using the results above, the cluster-specific average treatment effect on the original scale is

exp(β0 + β1 + σ_e²/2) − exp(β0 + σ_e²/2),

which is (for u_j = 0) about −283 USD at the true parameter values. And the marginal treatment effect on the original scale is

exp(β0 + β1 + σ_u²/2 + σ_e²/2) − exp(β0 + σ_u²/2 + σ_e²/2) ≈ −321 USD.
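At the true simulation values, both treatment effects can be computed directly (a Python sketch; the original post did this in R):

```python
import numpy as np

b0, b1, sd_u, sd_log = np.log(500), np.log(0.5), 0.5, 0.5

# Cluster-specific ATE (for the median cluster, u_j = 0), on the data scale
ate_cluster = np.exp(b0 + b1 + sd_log**2 / 2) - np.exp(b0 + sd_log**2 / 2)

# Marginal (population-average) ATE, on the data scale
ate_marginal = (np.exp(b0 + b1 + sd_u**2 / 2 + sd_log**2 / 2)
                - np.exp(b0 + sd_u**2 / 2 + sd_log**2 / 2))

print(round(ate_cluster, 1), round(ate_marginal, 1))  # -283.3 -321.0
```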

Multiplicative effects (% change)

Most know that exponentiating a difference on the log scale gives the multiplicative effect (% change). Generally, these multiplicative effects are also cluster-specific. However, for our simple model exp(β1) gives both the cluster-specific and the population-average effect. We can see this if we look at the ratio of the expected conditional outcomes in the treatment and control groups,

exp(β0 + β1 + σ_e²/2) / exp(β0 + σ_e²/2) = exp(β1),

and similarly for the marginal outcomes,

exp(β0 + β1 + σ_u²/2 + σ_e²/2) / exp(β0 + σ_u²/2 + σ_e²/2) = exp(β1).

The cluster-specific and population-average effects are the same. However, this is not true in general for nonlinear transformations or GLMM link functions (as we shall see in part 2 on binary outcomes).
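A quick numeric check of this cancellation, at the true simulation values (Python sketch, names mine):

```python
import numpy as np

b0, b1, sd_u, sd_log = np.log(500), np.log(0.5), 0.5, 0.5

# Conditional (median-cluster) and marginal ratios: the variance terms cancel
ratio_cluster = np.exp(b0 + b1 + sd_log**2 / 2) / np.exp(b0 + sd_log**2 / 2)
ratio_marginal = (np.exp(b0 + b1 + sd_u**2 / 2 + sd_log**2 / 2)
                  / np.exp(b0 + sd_u**2 / 2 + sd_log**2 / 2))

print(ratio_cluster, ratio_marginal, np.exp(b1))  # all equal exp(b1) = 0.5
```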

95 % CIs for the backtransformed marginal effects

Getting back to our fitted model, let's see how we can transform the estimates into the posterior distribution of, e.g., the marginal mean outcome in the control group. We do this by applying the back-transformation and marginalization to all the posterior samples of β0, σ_u, and σ_e.
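The brms draws themselves aren't available in this text-only version, but the draw-by-draw transformation can be sketched as follows (in Python, with simulated stand-in "draws" that are NOT a real posterior; roughly centered on the estimates above):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
# Stand-ins for posterior draws of b0, sd_cluster, sd_log (not a real posterior!)
b0 = rng.normal(6.21, 0.3, n)
sd_u = np.abs(rng.normal(0.64, 0.2, n))
sd_log = np.abs(rng.normal(0.51, 0.03, n))

# Apply the back-transformation/marginalization to every draw
cond = np.exp(b0 + sd_log**2 / 2)               # median-cluster mean, control group
marg = np.exp(b0 + sd_u**2 / 2 + sd_log**2 / 2)  # marginal mean, control group

for name, x in [("conditional", cond), ("marginal", marg)]:
    lo, med, hi = np.percentile(x, [2.5, 50, 97.5])
    print(f"{name}: {med:.0f} [{lo:.0f}, {hi:.0f}]")
```

Summarizing the transformed draws then gives posterior medians and credible intervals for the marginal quantities directly.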

As one would expect, estimates of the population-average effect are less precise than those that condition on a specific cluster effect.

Summarizing the model

Let’s summarize all the results from the model, both the conditional and marginal, absolute and multiplicative treatment effects.

| Effect | Formula | Posterior median | Lower 95% CI | Upper 95% CI |
| --- | --- | --- | --- | --- |
| Average outcome for a median cluster in the control group | exp(β0 + σ_e²/2) | 570.5 | 374.5 | 847.9 |
| Average outcome for a median cluster in the treatment group | exp(β0 + β1 + σ_e²/2) | 281.8 | 186.7 | 435.2 |
| Average outcome in the control group | exp(β0 + σ_u²/2 + σ_e²/2) | 697.9 | 454.5 | 1123.4 |
| Average outcome in the treatment group | exp(β0 + β1 + σ_u²/2 + σ_e²/2) | 342.7 | 231.3 | 575.5 |
| Difference between the average outcome in a median cluster in the treatment versus control group | exp(β0 + β1 + σ_e²/2) − exp(β0 + σ_e²/2) | -286.1 | -580.2 | -37.9 |
| Difference between the average outcome in the treatment versus control group | exp(β0 + β1 + σ_u²/2 + σ_e²/2) − exp(β0 + σ_u²/2 + σ_e²/2) | -349.0 | -747.5 | -47.4 |
| Ratio of the average outcome in a median cluster in the treatment and the control group | exp(β0 + β1 + σ_e²/2) / exp(β0 + σ_e²/2) | 0.5 | 0.3 | 0.9 |
| Ratio of the average outcome in the treatment and the control group | exp(β0 + β1 + σ_u²/2 + σ_e²/2) / exp(β0 + σ_u²/2 + σ_e²/2) | 0.5 | 0.3 | 0.9 |
| Ratio of the average outcome in the treatment and the control group | exp(β1) | 0.5 | 0.3 | 0.9 |

Using brms::fitted

Let's compare our results to the predicted means we get with brms::fitted and brms::marginal_effects.

We see that this is exactly the same as the median (cluster-specific) outcome in the control group. I've seen people confused by this, since the function is called marginal_effects; here "marginal" does not mean marginalized over the cluster effects.

Limiting inference to only the included clusters

We can also use fitted to average over the included clusters, which is very similar to fitting the clusters as "fixed effects". So let's compare averaging over the included clusters in the multilevel model to fitting a fixed-effects (no pooling) model.

Let’s look at the predicted mean for the control group using the fixed effects model.

This is very similar to the control group's sample mean, which is 636. Now let's compare the posterior distribution from the fixed-effects model to using our multilevel model to average over only the included clusters (and not the estimated distribution of clusters).

[Figure: treating clusters as 'fixed' in a 'fixed effects' model versus a multilevel model]

We see that the results are highly similar, although the models behind them do differ. If we look at the estimates for the specific clusters, we can see the (in this model very weak) effect of partial pooling in the multilevel model.

[Figure: no pooling (fixed effects) versus partial pooling (multilevel) for the cluster means]

Intraclass correlations (ICC)

Since the variance components are on the log scale, ICCs calculated from them will also be on the log scale. This is not always what we want; for instance, if we are calculating agreement, reliability, or heritability, it can make a lot of sense to calculate ICCs on the data scale. If I am calculating ICCs for family members' reports of gambling losses, then I am interested in agreement on the data scale (USD) and not on the log scale.

ICCs on the data scale can be calculated using the same techniques we used to marginalize the outcomes. You can read more about this in the context of GLMMs in de Villemereuil et al. (2016; see also the related R package QGglmm). Here is an example (for the control group). First we calculate the marginal mean,

M = ∫ exp(β0 + u + σ_e²/2) φ(u; 0, σ_u) du,

then we use the marginal mean to calculate the expected variance between clusters on the data scale,

Var_B = ∫ (exp(β0 + u + σ_e²/2) − M)² φ(u; 0, σ_u) du,

and lastly we calculate the expected variance within clusters on the data scale,

Var_W = ∫ Var(y | u) φ(u; 0, σ_u) du,

where Var(y | u) = exp(2(β0 + u) + σ_e²)(exp(σ_e²) − 1) is the variance of the lognormal distribution evaluated at the cluster effect u.

The ICC on the response scale is then

ICC_data = Var_B / (Var_B + Var_W),

whereas the log-scale ICC is

ICC_log = σ_u² / (σ_u² + σ_e²).

Since this is also a one-dimensional integral, we can use R's integrate function.

If you find simulations more intuitive, the Monte Carlo version is just as easy: simulate many cluster effects, compute the variance of the cluster means and the average within-cluster variance, and form the ratio.
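The original R code is not preserved in this extraction; both versions can be sketched in Python, using the true simulation parameters (β0 = log 500, σ_u = σ_e = 0.5; names are mine, mirroring QGglmm-style calculations):

```python
import numpy as np
from scipy import integrate, stats

b0, sd_u, sd_log = np.log(500), 0.5, 0.5
pdf = lambda u: stats.norm.pdf(u, 0, sd_u)   # density of the cluster effects

# Marginal mean on the data scale
mean_data, _ = integrate.quad(lambda u: np.exp(b0 + u + sd_log**2 / 2) * pdf(u), -8, 8)

# Between-cluster variance on the data scale
var_b, _ = integrate.quad(
    lambda u: (np.exp(b0 + u + sd_log**2 / 2) - mean_data) ** 2 * pdf(u), -8, 8)

# Within-cluster variance: lognormal variance at cluster effect u, averaged over u
lnorm_var = lambda u: np.exp(2 * (b0 + u) + sd_log**2) * (np.exp(sd_log**2) - 1)
var_w, _ = integrate.quad(lambda u: lnorm_var(u) * pdf(u), -8, 8)

icc_data = var_b / (var_b + var_w)
icc_log = sd_u**2 / (sd_u**2 + sd_log**2)

# Monte Carlo version: simulate cluster effects instead of integrating
u = np.random.default_rng(4).normal(0, sd_u, 1_000_000)
var_b_mc = np.exp(b0 + u + sd_log**2 / 2).var()
var_w_mc = lnorm_var(u).mean()
icc_mc = var_b_mc / (var_b_mc + var_w_mc)

print(round(icc_data, 2), round(icc_mc, 2), icc_log)  # ≈ 0.44 0.44 0.5
```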

Let’s see if the two methods agree.

| | Data scale (integrate) | Data scale (simulate) | Log scale |
| --- | --- | --- | --- |
| ICC | 0.44 | 0.44 | 0.5 |
| Within-cluster SD | 387.71 | 387.69 | NA |
| Between-cluster SD | 342.15 | 342.22 | NA |
| Mean | 642.01 | 641.93 | NA |

Great! We also see that calculating the variances on the data scale gives a lower ICC. In this case the difference is small, but when the log-scale variance is large the difference can be very large. So when you see ICCs from lognormal models or GLMMs, you should ask whether they are on the transformed (latent) scale or on the observed (data) scale, and whether that scale is appropriate for the research question.

Simulation study

I began by promising that the methods in this post could make your frequentist life easier, and then proceeded to fit only Bayesian models. So let's do a small simulation study to evaluate the frequentist properties of this model and all the transformations and marginalizations. I will try two different scenarios, shown in the table below. Each scenario is evaluated using 5,000 replications, and I fit both the correct model using a lognormal response distribution and a standard LMM using a Gaussian response distribution.

| Parameter | Sim 1: less skew | Sim 2: more skew |
| --- | --- | --- |
| β0 | log(500) | log(500) |
| β1 | log(0.5) | log(0.5) |
| σ_u | 0.5 | 1 |
| σ_e | 0.5 | 1 |
| n (subjects per cluster) | 30 | 5 |
| n (clusters per arm) | 10 | 75 |

Simulation results

In the figures below we see that the posterior medians show mostly negligible bias, both for the original parameters and for the transformed marginal effects. The Gaussian model has the most trouble with the variance components, which show substantial bias.

[Figure: relative bias for models with skewed data, fit using either a normal or a lognormal multilevel model]

If we look at the coverage probabilities, we see that the lognormal model's posterior intervals have nominal coverage rates for all parameters. The Gaussian model has slightly worse coverage for the treatment effects, but not terribly so. However, the CIs from the Gaussian model do a poor job of capturing the true ICC and variance components.

[Figure: 95% CI coverage for models with skewed data, fit using either a normal or a lognormal multilevel model]

Power

Lastly, let's take a closer look at the marginalized treatment effect on the data scale. The figure below shows a sample of the confidence intervals for the difference in average money lost gambling between the two conditions, from Sim 2 with more skewed data. We can see that, although the normal model is quite robust and its estimated effect is unbiased, it is less precise, which leads to reduced power and CIs with worse coverage compared to the true lognormal model.

[Figure: lognormal multilevel LMM compared to a normal LMM with skewed data; average treatment effect shown, illustrating the effect of skewness on precision and CI coverage]

Summary

There is a lot of information in this post, so let’s summarize the key take-home messages:

  • Both cluster-specific (conditional) and population-average (marginal) treatment effects are useful; they just answer different questions, and we should be careful not to mix them up.
  • The multilevel model can be used to give both cluster-specific (conditional) and population-average effects on both the original and transformed scale while retaining all of the advantages of likelihood-based/Bayesian methods.
  • There is no free lunch. We saw that the calculations of the marginal effects on the data scale use estimates of both the lognormal variance, as well as the random intercept. Thus, model misspecification will have a large impact on the appropriateness of the back-transformation. In that sense, directly modeling the population-average effects using e.g. GEE and applying nonparametric retransformations (e.g. smearing) is more robust in some aspects. However, once we add complexities like MNAR missing data in combination with weak adherence (both by patients and therapists…) to complex interventions, then it can be argued that a carefully specified generative model that combines both statistical and clinical expertise will be more likely to give useful answers regarding the treatment effect. In my experience, such models will often have multilevel and cluster-specific components.
  • For outcomes that are highly meaningful on the original scale (such as expenditures), absolute effects on the data scale can be especially informative.
  • Using the default priors in brms resulted in frequentist CIs with nominal coverage probabilities.
  • It is useful to understand how the conditional and marginal effects relate to each other, to avoid misinterpreting the default cluster-specific effects (a very common mistake in the clinical studies I read).

Further reading

The issues covered in this post are dealt with in many articles; here is a selection of relevant ones:

  • Grömping, U. (1996). A Note on Fitting a Marginal Model to Mixed Effects Log-Linear Regression Data via GEE. Biometrics, 52(1), 280–285. https://doi.org/10.2307/2533162
  • Heagerty, P. J., & Zeger, S. L. (2000). Marginalized multilevel models and likelihood inference (with comments and a rejoinder by the authors). Statistical Science, 15(1), 1–26. https://doi.org/10.1214/ss/1009212671
  • Hedeker, D., Toit, S. H. C. du, Demirtas, H., & Gibbons, R. D. (2018). A note on marginalization of regression parameters from mixed models of binary outcomes. Biometrics, 74(1), 354–361. https://doi.org/10.1111/biom.12707
  • Villemereuil, P. de, Schielzeth, H., Nakagawa, S., & Morrissey, M. (2016). General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models. Genetics, 204(3), 1281–1294. https://doi.org/10.1534/genetics.115.186536

Simulation code

run_sim.R: perform the simulation

summarise_sim.R: Summarize results

functions.R: Functions used


Written by Kristoffer Magnusson, a researcher in clinical psychology. You should follow him on Bluesky or on Twitter.



Published August 05, 2018 (View on GitHub)


Questions & Comments

Please use GitHub Discussions for any questions related to this post, or open an issue on GitHub if you've found a bug or want to make a feature request.

Archived Comments (5)

J
João Veríssimo 2019-04-19

Thank you for the great post!
One question. How would the following calculation of population-average effects change if there were several random effects, and/or random slopes?
exp(B0 + (u0^2 + sd_log^2)/2)

i.e., if the model formula was, for example,
y ~ 1 + TX + (1 + TX | cluster)
or
y ~ 1 + TX + (1 | cluster1) + (1 | cluster2)

P
Patrick Wen 2018-09-07

Great work. But is there any reference on why the marginal mean should be exp(mu+sigma^2/2) (why it is 2 in the denominator) and on the integration formula for expected value of y_ij given TX=0. Are they just commen sense in statistics?

Kristoffer Magnusson 2018-09-07

Thanks! It's just the formula for the mean on the original scale, as a function of log(mean) and log(sd). See e.g. https://en.wikipedia.org/wi... under "Arithmetic moments".

B
blazej 2018-08-24

Great post and a really superb graphics! Thanks

J
Jayden Nord 2018-08-06

The timing of this series is impeccable as I'm currently trying to calculate ICCs for models with non-normal outcomes. Thank you.