A common way of illustrating the idea behind statistical power in null hypothesis significance testing is by plotting the sampling distributions of the null hypothesis and the alternative hypothesis. Typically, these illustrations highlight the regions that correspond to making a type I error, making a type II error, and correctly rejecting the null hypothesis (i.e. the test’s power). In this post I will show how to create such “power plots” using both ggplot and R’s base graphics.
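A minimal base-graphics sketch of the idea, assuming a one-sided z-test with an illustrative alternative mean of 3 and alpha = .05 (these values are my own assumptions, not from the post):

```r
# Sampling distributions under H0 (mean 0) and H1 (mean 3), unit variance
x    <- seq(-4, 7, length.out = 400)
crit <- qnorm(0.95)                      # one-sided critical value, alpha = .05

plot(x, dnorm(x, mean = 0), type = "l", xlab = "z", ylab = "Density")
lines(x, dnorm(x, mean = 3), lty = 2)    # alternative distribution
abline(v = crit, col = "grey")           # decision threshold

# Shade the power region: area under H1 to the right of the critical value
region <- x[x >= crit]
polygon(c(crit, region, max(x)),
        c(0, dnorm(region, mean = 3), 0),
        col = rgb(0, 0, 1, 0.3), border = NA)

# Analytic power for this illustrative setup
power <- 1 - pnorm(crit, mean = 3)
```

Shading the type I and type II error regions works the same way, using the tail of the null distribution beyond `crit` and the tail of the alternative distribution below it.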
In this post I show some R examples of how to perform power analyses for mixed-design ANOVAs. The first example is analytical, adapted from the formulas used in G*Power (Faul et al., 2007); the second is a Monte Carlo simulation.
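The Monte Carlo idea can be sketched with a much simpler design than the mixed ANOVA in the post: simulate many data sets under the assumed effect, run the test on each, and take the proportion of significant results as the estimated power. Here is a stand-in using a two-group t-test (the effect size, sample size, and number of replications are illustrative assumptions):

```r
# Monte Carlo power estimate for a two-sample t-test
set.seed(1)
n    <- 30     # per-group sample size (assumed)
d    <- 0.5    # standardized mean difference (assumed)
nsim <- 2000   # number of simulated experiments

pvals <- replicate(nsim, {
  g1 <- rnorm(n, mean = 0, sd = 1)
  g2 <- rnorm(n, mean = d, sd = 1)
  t.test(g1, g2)$p.value
})

power <- mean(pvals < 0.05)  # proportion of significant replications
```

For a mixed-design ANOVA the same loop applies; only the data-generating step (correlated repeated measures per subject) and the test (`aov()` or a mixed model) change.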
Can you tell when error bars based on 95 % CIs or standard errors correspond to a significant p-value? Don’t fret if you find it hard: a study from 2005 showed that researchers in psychology, behavioral neuroscience, and medicine had a hard time judging when error bars from two independent groups signified a significant difference.
Background: I believe there’s some information to be gained from looking at publication trends over time. But it’s really troublesome to do by hand; fortunately, it’s not so troublesome to do in R. Still, data like these should be interpreted with extreme caution …
Cognitive Behavioral Therapy is the psychological treatment of choice for many, if not all, mental disorders. Nonetheless, a majority of US clinical psychologists do not primarily identify as either cognitive or behavioral therapists. Looking at PubMed publication counts, a clear picture emerges: psychodynamic researchers might be research loafers.