How to tell when error bars correspond to a significant p-value

Introduction

Belia, Fidler, Williams, and Cumming (2005) found that researchers in psychology, behavioral neuroscience, and medicine are really bad at judging when error bars indicate that two means are significantly different (p = 0.05). They emailed a bunch of researchers and invited them to take a web-based test, which yielded 473 usable responses. The test consisted of an interactive plot with error bars for two independent groups, and the participants were asked to move the error bars to a position they believed would represent a significant t-test at p = 0.05. They did this both for error bars based on the 95 % CI and for error bars based on the groups’ standard errors. On average the participants set the 95 % CIs too far apart, with their mean placement corresponding to a p-value of 0.009. They did the opposite with the SE error bars, which they put too close together, yielding placements corresponding to p = 0.109. And if you’re wondering, there was no difference between the three disciplines.
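To make the idea of a placement “corresponding to” a p-value concrete, here is a minimal sketch of the underlying calculation with made-up numbers (two groups of n = 20 with SD = 10 whose means are 9 units apart); it uses the same two-sample t-test formula as the code further down:

m1 <- 100; m2 <- 91 # hypothetical group means
sd1 <- 10; sd2 <- 10 # hypothetical group SDs
n <- 20 # n per group
s <- sqrt(0.5 * (sd1^2 + sd2^2)) # pooled SD (equal group sizes)
t <- (m1 - m2) / (s * sqrt(2/n)) # two-sample t statistic
p <- 2 * pt(-abs(t), df = 2*n - 2) # two-sided p-value, roughly 0.007 with these numbers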

Plots

I wanted to pull my weight, so I created a few plots in R that show what error bars look like when the difference between two group means is significant at various p-values.

Figure 1. Error bars corresponding to a significant difference at p = .05 (equal group sizes and equal variances)

Figure 2. Error bars corresponding to a significant difference at p = .01 (equal group sizes and equal variances)

Figure 3. Error bars corresponding to a significant difference at p = .001 (equal group sizes and equal variances)

Based on the first plot, we see that the 95 % CIs overlap by about one third of their length when p = 0.05. For the SE error bars, we see that the bars are about 1 SE apart when p = 0.05.
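If you want to check these rules of thumb numerically rather than by eye, here is a small sketch (not part of the original code) using the same settings as the script below, n = 20 per group and a common SD of 10:

n <- 20 # per-group sample size (same as the script below)
sd_g <- 10 # common SD for both groups
se <- sd_g/sqrt(n) # standard error of each group mean
t_crit <- qt(0.975, df = 2*n - 2) # critical t for a two-sided test at p = .05
min_diff <- t_crit * sd_g * sqrt(2/n) # just-significant difference between the means
moe <- qt(0.975, df = n - 1) * se # margin of error of each 95 % CI
(2 * moe - min_diff) / (2 * moe) # CI overlap as a share of one bar: about 0.32
(min_diff - 2 * se) / se # gap between the SE bars in SE units: about 0.86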

R Code

Here’s the complete R code used to produce these plots:

library(ggplot2)
library(plyr)
m2 <- 100 # initial mean for group 2, starts equal to m1
p <- 1 # starting p-value
m1 <- 100 # mean group 1
sd1 <- 10 # sd group 1
sd2 <- 10 # sd group 2
n <- 20 # n per group
s <- sqrt(0.5 * (sd1^2 + sd2^2)) # pooled sd
while(p > 0.05) { # lower m2 until the difference is significant at p = 0.05
  t <- (min(c(m1,m2)) - max(c(m1,m2))) / (s * sqrt(2/n)) # t statistic
  df <- (n*2)-2 # degrees of freedom
  p <- pt(t, df)*2 # two-sided p-value
  m2 <- m2 - (m2/10000) # nudge the mean of group 2 down slightly
}
get_CI <- function(x, sd, CI) { # calculate error bars
  se <- sd/sqrt(n) # standard error
  lwr <- c(x - qt((1 + CI)/2, n - 1) * se, x - se) # 95 % CI and SE lower limit
  upr <- c(x + qt((1 + CI)/2, n - 1) * se, x + se) # 95 % CI and SE upper limit
  data.frame("lwr" = lwr, "upr" = upr, "se" = se) # result
}
plot_df <- data.frame("mu" = rep(c(m1,m2), each=2)) # means
plot_df$group <- gl(2,2, labels=c("group1", "group2")) # group factor
plot_df$type <- gl(2,1,4, labels=c("95 % CI", "se errorbars")) # type of errorbar
plot_df <- cbind(plot_df, rbind(get_CI(m1, sd1, .95), get_CI(m2, sd2, .95))) # put it all together

get_overlap <- function(arg) { # calculate overlap %
  x <- subset(plot_df, type == arg) # subset for this type of error bar
  x_range <- abs(mean(x$lwr - x$upr)) # average length of the error bars
  x_lwr <- max(x$lwr) # lower limit for the group with the highest lower limit
  x_upr <- min(x$upr) # upper limit for the group with the lowest upper limit
  overlap <- abs( (x_upr - x_lwr) / x_range) # overlap as a proportion of the error bar length
  data.frame("type"=arg, "range" = x_range, "lwr" = x_lwr, "upr" = x_upr, "overlap" = round(overlap, 2)) # result
}
overlap <- ldply(levels(plot_df$type), get_overlap) # get overlap for each error bar type and combine into a data frame
overlap$text <- paste(overlap$overlap * 100, "% of errorbar") # label text
overlap$text_y <- c(overlap[1,4], overlap[2,3]) # y-position for the labels (quick fix)

ggplot(plot_df, aes(group, mu, group=group)) + 
  geom_point(size=3) + # point for group mean
  geom_errorbar(aes(ymax=upr, ymin=lwr), width=0.2) + # error bars for means
  ggtitle(paste("Illustration of error bars for a significant 2-sample t-test, p =", round(p,3))) + # plot title (ggtitle replaces the defunct opts())
  facet_grid(. ~ type) + # one panel per error bar type
  geom_errorbar(data=overlap, aes(ymax=upr, ymin=lwr, x=1.5), width=0.1, color="red", inherit.aes=FALSE) + # add overlap error bar
  geom_text(data=overlap, aes(label=text, y=text_y, x=1.5), vjust=-1, inherit.aes=FALSE) + # annotate overlap
  ylab(expression(bar(x))) # change y label
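To get the p = .01 and p = .001 versions (Figures 2 and 3), the only change is the threshold in the while loop on line 10. A rerun for Figure 2 could look like this (reset the starting values first):

m2 <- 100 # reset the starting mean for group 2
p <- 1 # reset the starting p-value
while(p > 0.01) { # use 0.001 instead for the p = .001 plot
  t <- (min(c(m1,m2)) - max(c(m1,m2))) / (s * sqrt(2/n))
  df <- (n*2)-2
  p <- pt(t, df)*2
  m2 <- m2 - (m2/10000)
}
# then rebuild plot_df and overlap, and rerun the ggplot call exactly as above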


Belia, S., Fidler, F., Williams, J., & Cumming, G. (2005). Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10(4), 389–396. PMID: 16392994

Kristoffer Magnusson

I'm a PhD-student and a clinical psychologist from Sweden with a passion for research and statistics. This is my personal blog about psychological research and statistical programming with R.

Comments (12)

  1. great post.

    Is there a wee typo in this sentence: ‘Based on the first plot we see that an overlap of about one third of the 95 % CIs corresponds to p = 0.5.’ ?

    Otherwise, I have wondered about this question a lot. I was always told that CIs should not include the other group’s mean and SEs should not overlap. Turns out that this advice was actually correct, even though it was also conservative.


  2. One can alternatively calculate a Least Significant Difference (LSD) value (being the minimum difference between means that achieves a specified statistical significance level) and display this on the plot.

    The LSD info could either be displayed as a ‘floating’ error bar or perhaps as error bars on the means as above (1/2 the LSD on each side of the mean). In the latter case, overlap between two error bars means lack of statistical significance, which is a nice simple interpretation. This is not a plot that I have seen used before though – people are more used to seeing CI or SE bars, and you would need to explain carefully what the ‘LSD’ bars are and how to interpret them (& their limitations – e.g. LSDs don’t adjust for multiple comparisons…).
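    For concreteness, a minimal sketch of the half-LSD idea with made-up numbers (not from the post’s code):

      n <- 20 # hypothetical equal group sizes
      s <- 10 # hypothetical pooled SD
      sed <- s * sqrt(2/n) # standard error of the difference
      lsd <- qt(0.975, df = 2*n - 2) * sed # least significant difference at p = .05
      means <- c(group1 = 100, group2 = 93)
      data.frame(mean = means, lwr = means - lsd/2, upr = means + lsd/2) # half-LSD bars: no overlap means p < .05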


    • That’s a great point. I believe difference-adjusted CIs using the t distribution are similar to the LSD approach. Though, if I remember correctly, the LSD error bars will all be of equal width for the different groups regardless of their SE? Difference-adjusted CIs allow groups to have different widths of their error bars. But just as you say, I’ve never seen either of these error bars used in a publication.


      • Yeah, the LSD approach only works if you have one (possibly approximate) LSD for all comparisons, so equal sample sizes, essentially. But if you have unequal SEs, and hence different SEDs/LSDs for different comparisons, you are going to have problems coming up with any reliable method of graphically representing significance via error bar overlap, it seems to me. Do the guidelines suggested by your graphs above work when your confidence interval bars are of different widths for different means?

        I must look closely at that difference adjusted CI idea – thanks for that link!

        I have also encountered the suggestion of choosing a confidence level for a CI that produces error bars approximately equivalent to my aforementioned LSD bars. This is usually based on the assumption of equal variances and sample sizes for the two groups compared, and you end up with a confidence level of about 53% (I think – need to check this, so don’t quote me!). This trick is useful when you can readily calculate a CI, but not an LSD, for example when using Fieller’s theorem to obtain the CI of a ratio of means…


        • Cumming & Finch (2005) reported that an overlap of about one quarter will approximate p = 0.05 for 95 % CIs that differ in width by a factor of up to 2; however, they did not assume homogeneity of variances in their calculations. So the overlap is based on the average margin of error, which is not that practical to estimate visually.

          Cumming, G., & Finch, S. (2005). Inference by eye: Confidence intervals and how to read pictures of data. American Psychologist, 60, 170–180.


    • I’ve also used this ‘LSD-bar’ trick when I’ve transformed data prior to analysing it, but the client wants a graph they can interpret on the original scale. Given that it is feasible to produce an LSD-bar plot on the transformed scale, one can then back-transform to the original scale to obtain a plot in which the significance of differences is still interpretable in terms of error bar overlap. But again, not a trick I’ve seen elsewhere…
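      A tiny sketch of that back-transformation step, with made-up numbers for a log-transformed analysis:

        log_mean <- 4.6 # hypothetical group mean on the log scale
        lsd <- 0.25 # hypothetical LSD on the log scale
        c(mean = exp(log_mean), lwr = exp(log_mean - lsd/2), upr = exp(log_mean + lsd/2)) # half-LSD bar back-transformed to the original scale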


  3. Good post. I think there should be a convention that 95% CI bars are displayed differently from SE bars. Because at the moment they look the same, yet mean entirely different things. Maybe make one of them out of dots and the other solid lines. Or display both at the same time with a double-crossed line.


    • That’s a good point, it really irks me when it’s not clearly stated what kind of error bars are used in a plot. Personally, I think two-tiered error bars are an interesting idea that might be more informative than regular CIs or SE error bars.


      • Indeed. A former boss of mine used to get really irked when people (a) failed to understand the distinction between an SE and an SD, and (b) failed to report the base used when they calculated a logarithm…


  4. Hi Kristoffer
    Good point and cool visualisation.
    I would like to show these examples to my students. I tried to reproduce the plots, but I only get one of them. Could you also share the code for the other two plots?
    Many thanks


    • Hi! Sorry for the late reply. To get p-values other than 0.05, just change the numerical part of the conditional statement on line 10 (the while loop). Other than that, the code is identical for all the plots.

