Mediation, confounding, and measurement error

October 09, 2019

Mediation might be the ultimate example of how a method continues to be used despite a vast number of papers and textbooks describing the extremely strong assumptions required to estimate unbiased effects. My aim with this post is not to show some fancy method that could help reduce bias; rather, I just want to present a small simulation-based example of the underappreciated consequences of measurement error and confounding. Many others have made the same point; for instance, Dunn & Bentall (2007) expressed strong concerns about investigating mediators in psychological treatment studies:

“The assumptions concerning the lack of hidden confounding and measurement errors are very rarely stated, let alone their validity discussed. One suspects that the majority of investigators are oblivious of these two requirements. One is left with the unsettling thought that the thousands of investigations of mediational mechanisms in the psychological and other literatures are of unknown and questionable value.” (p. 4743)

The causal mediation model

In all examples, I assume that mediation is investigated in a randomized controlled trial where treatment allocation is randomized. The treatment is a cognitive-behavioral therapy (CBT), and we want to estimate the indirect effect of homework completion; the hypothesis is that a non-trivial share of the treatment effect is mediated by adherence to exposure-based homework. The figure below presents the three scenarios that I will simulate.

  • In (a), the relationship between the mediator and the outcome is confounded, but neither the mediator nor the confounder is measured with error.
  • In (b), the confounder is measured with error; I assume the error is independent and nondifferential (i.e., classical measurement error).
  • In (c), there’s no confounding, but now the mediator is measured with error.

"Mediation DAG with confounding and measurement error"

The causal estimands are most clearly expressed using the potential outcomes framework, where the indirect effect for a single patient (Imai, Keele, & Tingley, 2010) is written as

$$\delta_i(t) = Y_i(t, M_i(1)) - Y_i(t, M_i(0)),$$

and the direct effect of the treatment is

$$\zeta_i(t) = Y_i(1, M_i(t)) - Y_i(0, M_i(t)).$$

Here $M_i(1)$ is the level of the mediator under the treatment and $M_i(0)$ under the control, and $Y_i(1, M_i(1))$ is thus the outcome after treatment with the mediator at the natural level realized under the treatment. The subscript $i$ indicates that these effects can be different for each individual. Just as with treatment effects, we cannot observe all of these potential outcomes for any single patient, but we can estimate the average causal effects. The indirect effect tells us ”[w]hat change would occur to the outcome if one changes the mediator from the value that would be realized under the control condition, $M_i(0)$, to the value that would be observed under the treatment condition, $M_i(1)$, while holding the treatment status at $t$” (Imai, Keele, & Tingley, 2010, p. 311).

Generate the data

We’ll use the following packages. The simulations are performed using powerlmm, and the models are fit using brms.
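```r
library(powerlmm) # simulation machinery
library(brms)     # Bayesian model fitting via Stan
library(ggplot2)  # figures
```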

We need to create a custom function that simulates the data.
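Below is a minimal sketch of such a function; everything in it is an assumption chosen to match the scenarios above: a randomized treatment TX, a baseline confounder pre that affects both homework adherence M and the posttest outcome y, individual-level variation in the causal paths, and error-prone versions pre_me and M_me with classical measurement error. The parameter values are picked to reproduce the true effects used in this post.

```r
# Sketch of a data-generating function (all names and values are
# assumptions; average indirect effect = -3, average direct effect = -3)
sim_data <- function(n) {
  TX  <- rbinom(n, 1, 0.5)      # randomized treatment allocation
  pre <- rnorm(n, 10, 2)        # baseline severity (the confounder)
  a_i <- rnorm(n, 5, 1)         # individual effect of TX on adherence
  b_i <- rnorm(n, -0.6, 0.1)    # individual effect of adherence on outcome
  d_i <- rnorm(n, -3, 1)        # individual direct effect of TX
  e_M <- rnorm(n, 0, 2)
  e_y <- rnorm(n, 0, 2)
  # potential mediator values: fewer baseline problems -> more homework
  M0  <- 15 - pre + e_M         # adherence under control
  M1  <- M0 + a_i               # adherence under treatment
  # potential outcomes Y(t, m)
  Y   <- function(t, m) 10 + d_i * t + b_i * m + pre + e_y
  M   <- ifelse(TX == 1, M1, M0)
  y   <- Y(TX, M)
  # error-prone versions (classical measurement error)
  pre_me <- pre + rnorm(n, 0, 1.5)  # cor(pre, pre_me) = 0.8
  M_me   <- M + rnorm(n, 0, 3)      # cor(M, M_me) ~ 0.7 in the TX group
  data.frame(
    TX, pre, pre_me, M, M_me, y,
    # keep the individual-level causal effects for later use
    indirect = Y(1, M1) - Y(1, M0), # = b_i * a_i, mean -3
    direct   = Y(1, M1) - Y(0, M1)  # = d_i, mean -3
  )
}
```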

Let’s pass this function to powerlmm as a custom model.

Since this is a custom model, we need to define the true parameter values if we want to calculate the coverage of the CIs automatically.
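For instance, as a named list matching the effects above (the names here are illustrative):

```r
# True values of the causal estimands, used to compute
# relative bias and CI coverage
thetas <- list(
  indirect      = -3,  # transmitted via homework adherence
  direct        = -3,  # transmitted via all other mechanisms
  total         = -6,
  prop_mediated = 0.5
)
```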

Let’s generate a large data set to look at the values for the true causal mediation model.
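Using the individual-level effects returned by the sim_data() sketch above, the true averages can be recovered directly:

```r
set.seed(4445)
d_large <- sim_data(1e5)
round(colMeans(d_large[, c("indirect", "direct")]), 1)
#> indirect   direct
#>     -3.0     -3.0     (approximately)
mean(d_large$indirect + d_large$direct) # total effect, approx. -6
```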

We can see that the average indirect effect of exposure-based homework is -3, and that the average direct effect is also -3 (effects transmitted via other mechanisms). Thus, the total treatment effect is a 6-point reduction, and 50% of that effect is mediated by homework adherence.

We can also take a random sample of 100 participants and look at the individual-level effects. The figure below shows the direct, indirect, and total effects for these 100 participants. We see that the effects vary substantially at the individual level. In reality, we cannot know whether the individual-level effects vary or are constant across participants.

[Figure: individual-level direct, indirect, and total effects for 100 participants]
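A sketch of how such a figure could be drawn from the same simulated effects (using tidyr and ggplot2):

```r
library(tidyr)

set.seed(1)
d100 <- sim_data(100)
d100$id    <- seq_len(100)
d100$total <- d100$indirect + d100$direct

# long format: one row per participant and effect type
d_long <- pivot_longer(d100[, c("id", "indirect", "direct", "total")],
                       cols = -id, names_to = "effect", values_to = "estimate")

ggplot(d_long, aes(x = estimate, y = effect)) +
  geom_jitter(height = 0.1, alpha = 0.5)
```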

Run the simulation

Let’s first define the simulations for the scenarios with confounding, i.e., (a) and (b). The measurement error was already defined in the data-generating function: cor(pre, pre*) = 0.8.
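For classical measurement error, pre* = pre + e, this correlation is determined by the error SD; with sd(pre) = 2 as in the sketch above, an error SD of 1.5 gives exactly 0.8:

```r
# cor(pre, pre*) = sd_pre / sqrt(sd_pre^2 + sd_e^2)
sd_pre <- 2
sd_e   <- sd_pre * sqrt(1 / 0.8^2 - 1) # = 1.5
sd_pre / sqrt(sd_pre^2 + sd_e^2)       # = 0.8
```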

We’ll fit all the models using brms. Other packages can fit these models (e.g., mediation, which includes a bunch of useful tools), but I’ll use brms since powerlmm already has methods to extract its results.
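As a sketch, the mediator and outcome models can be fit jointly with brms’s multivariate syntax (variable names as in the sim_data() sketch above; the residual correlation is dropped with set_rescor(FALSE)):

```r
# outcome model and mediator model fit jointly as a multivariate model
fit <- brm(
  bf(y ~ TX + M + pre) +
    bf(M ~ TX + pre) +
    set_rescor(FALSE),
  data  = sim_data(150),
  cores = 4,
  seed  = 4445
)
```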

We also need to add a function that will calculate the indirect and direct effects.
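With linear models and no treatment-mediator interaction, the indirect effect is the product of the TX -> M and M -> y coefficients. A sketch, assuming brms’s b_<response>_<term> naming for multivariate models:

```r
mediation_effects <- function(fit) {
  draws <- as.data.frame(fit) # posterior draws of the coefficients
  eff <- data.frame(
    indirect = draws$b_M_TX * draws$b_y_M, # a * b
    direct   = draws$b_y_TX
  )
  eff$total         <- eff$indirect + eff$direct
  eff$prop_mediated <- eff$indirect / eff$total
  # posterior mean and 95% interval per causal estimand
  data.frame(
    parameter = names(eff),
    estimate  = sapply(eff, mean),
    lower     = sapply(eff, quantile, probs = 0.025),
    upper     = sapply(eff, quantile, probs = 0.975),
    row.names = NULL
  )
}
```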

We can then create three simulation formulas.
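For example, the unadjusted model, the model adjusted for the true pretest, and the model adjusted for the error-prone pretest (labeled M, M_pre, and M_pre_me in the tables below):

```r
f_M        <- bf(y ~ TX + M)          + bf(M ~ TX)          + set_rescor(FALSE)
f_M_pre    <- bf(y ~ TX + M + pre)    + bf(M ~ TX + pre)    + set_rescor(FALSE)
f_M_pre_me <- bf(y ~ TX + M + pre_me) + bf(M ~ TX + pre_me) + set_rescor(FALSE)
```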

Then we just run the simulation. This code can also be used to calculate power for a mediation study.
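powerlmm handles the bookkeeping for supported models; as a rough, self-contained stand-in, the loop could look like this, using brms’s update() to refit the already-compiled Stan model to each new data set:

```r
simulate_mediation <- function(fit, nsim = 1000, n = 150) {
  res <- lapply(seq_len(nsim), function(i) {
    # refit the compiled model to a fresh data set
    fit_i <- update(fit, newdata = sim_data(n), refresh = 0)
    cbind(sim = i, mediation_effects(fit_i))
  })
  do.call(rbind, res)
}
```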

The simulation for the scenario with measurement error in the mediator is performed in the same way. The correlation between the mediator measured with error (M* = M_me) and the true mediator (M) is about 0.7 in the treatment group.
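In the sim_data() sketch this corresponds to the error SD of 3 added to M; since sd(M) is 3 in the treatment group, cor(M, M*) = 3 / sqrt(9 + 9) ≈ 0.71, which can be verified empirically:

```r
d_check <- sim_data(1e5)
with(subset(d_check, TX == 1), cor(M, M_me)) # approx. 0.71
```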

Simulation results

Now we just have to summarize the results. First, we create two functions to extract the relevant quantities.
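As a rough stand-in for powerlmm’s summary methods, assuming the stacked per-replication estimates produced by simulate_mediation() above and the true values in thetas:

```r
summarise_sim <- function(res, thetas) {
  do.call(rbind, lapply(split(res, res$parameter), function(x) {
    theta <- thetas[[unique(x$parameter)]]
    data.frame(
      parameter = unique(x$parameter),
      M_est     = mean(x$estimate),
      theta     = theta,
      RB_pct    = 100 * (mean(x$estimate) - theta) / theta, # relative bias
      Power     = mean(x$lower > 0 | x$upper < 0),  # 95% CI excludes zero
      CI_cover  = mean(x$lower <= theta & theta <= x$upper)
    )
  }))
}
```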

Then we can plot the results for the indirect effects.

[Figure: estimated indirect effects for each model in the confounding and measurement error scenarios]

For the scenarios with confounding we see that:

  • failing to account for baseline values of the outcome variable in the mediation analysis leads to an overestimation of the indirect effect of homework adherence. Participants with fewer problems at baseline are more likely to complete more homework, and they are also likely to have fewer problems at posttest,
  • adjusting for a perfectly measured confounder yields unbiased estimates (assuming no other hidden confounding), whereas adjusting for a confounder measured with error is an improvement, but residual confounding still leads to bias.

When there’s measurement error in the mediator we see that:

  • the indirect effect is attenuated,
  • in this case, adjusting for pretest values does not reduce bias, but it does reduce the standard errors and leads to increased power.

Tables with the full results, including the direct and total effects, are shown below.

Results for the confounding scenarios, (a) and (b):

| Model     | Parameter     | Mean estimate | True value | Relative bias (%) | Power | CI coverage |
|-----------|---------------|--------------:|-----------:|------------------:|------:|------------:|
| M         | indirect      | -5.09         | -3.0       | 69.64             | 0.94  | 0.72        |
| M         | direct        | -0.90         | -3.0       | -69.98            | 0.08  | 0.71        |
| M         | total         | -5.99         | -6.0       | -0.17             | 1.00  | 0.96        |
| M         | prop_mediated | 0.86          | 0.5        | 71.83             | 0.94  | 0.72        |
| M_pre     | indirect      | -3.07         | -3.0       | 2.49              | 0.60  | 0.96        |
| M_pre     | direct        | -2.92         | -3.0       | -2.52             | 0.51  | 0.96        |
| M_pre     | total         | -6.00         | -6.0       | -0.01             | 1.00  | 0.95        |
| M_pre     | prop_mediated | 0.52          | 0.5        | 3.34              | 0.59  | 0.96        |
| M_pre_me  | indirect      | -3.84         | -3.0       | 27.84             | 0.77  | 0.92        |
| M_pre_me  | direct        | -2.17         | -3.0       | -27.76            | 0.31  | 0.93        |
| M_pre_me  | total         | -6.00         | -6.0       | 0.04              | 1.00  | 0.96        |
| M_pre_me  | prop_mediated | 0.64          | 0.5        | 28.96             | 0.76  | 0.92        |
Results for the scenario with measurement error in the mediator, (c):

| Model     | Parameter     | Mean estimate | True value | Relative bias (%) | Power | CI coverage |
|-----------|---------------|--------------:|-----------:|------------------:|------:|------------:|
| M         | indirect      | -2.94         | -3.0       | -1.94             | 0.45  | 0.94        |
| M         | direct        | -3.09         | -3.0       | 2.84              | 0.44  | 0.94        |
| M         | total         | -6.03         | -6.0       | 0.45              | 1.00  | 0.95        |
| M         | prop_mediated | 0.49          | 0.5        | -1.41             | 0.44  | 0.94        |
| M_me      | indirect      | -1.47         | -3.0       | -51.03            | 0.26  | 0.73        |
| M_me      | direct        | -4.56         | -3.0       | 51.91             | 0.94  | 0.75        |
| M_me      | total         | -6.03         | -6.0       | 0.44              | 1.00  | 0.95        |
| M_me      | prop_mediated | 0.25          | 0.5        | -50.61            | 0.25  | 0.74        |
| M_me_pre  | indirect      | -1.47         | -3.0       | -51.13            | 0.30  | 0.68        |
| M_me_pre  | direct        | -4.56         | -3.0       | 52.06             | 0.96  | 0.70        |
| M_me_pre  | total         | -6.03         | -6.0       | 0.47              | 1.00  | 0.95        |
| M_me_pre  | prop_mediated | 0.25          | 0.5        | -50.80            | 0.30  | 0.68        |

Summary

Measurement error and confounding are huge problems for mediation analysis, and there is no easy solution. In real life, we can expect both confounding and measurement error in the mediator and the confounders. There are likely to be multiple sources of confounding, related both to baseline variables and to post-randomization variables (i.e., things happening after treatment allocation). Assumptions regarding the absence of hidden confounding and measurement error are very hard to defend.

References

  • Dunn, G., & Bentall, R. (2007). Modelling treatment-effect heterogeneity in randomized controlled trials of complex interventions (psychological treatments). Statistics in Medicine, 26(26), 4719–4745. https://doi.org/10.1002/sim.2891
  • Imai, K., Keele, L., & Tingley, D. (2010). A general approach to causal mediation analysis. Psychological Methods, 15(4), 309–334. https://doi.org/10.1037/a0020761