An article recently published in PLOS ONE, “Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size”, by Kühberger, Fritz, and Scherndl, generated healthy discussion on social media.
Kühberger and colleagues, from the University of Salzburg in Austria, examine the relationship between effect size and sample size. These two quantities, they argue, should normally be unrelated. However, the authors found a clear negative correlation between the two, mostly driven by studies with sample sizes below 100. They also looked at the distribution of p values and found that barely-significant values were reported much more frequently than values that just failed to reach the conventional threshold for statistical significance. Both findings, the authors argue, reflect publication bias in psychology.
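Why would selective publication couple effect size to sample size? Small studies need large observed effects to reach significance, so if only significant results get published, small published studies will show inflated effects. A toy simulation sketches this; it is not the authors' method, and the true effect size, sample-size range, and significance cutoff are all illustrative choices:

```python
import math
import random
import statistics

random.seed(1)

def simulate_study(n, true_d=0.2):
    """Two-group study with n observations per group and a true effect of d = 0.2."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(true_d, 1) for _ in range(n)]
    pooled = math.sqrt((statistics.stdev(a) ** 2 + statistics.stdev(b) ** 2) / 2)
    d = (statistics.mean(b) - statistics.mean(a)) / pooled  # observed Cohen's d
    t = d * math.sqrt(n / 2)                                # two-sample t statistic
    significant = abs(t) > 1.96                             # rough large-sample cutoff
    return d, significant

# Run many studies; "publish" only the significant ones.
published = []
for _ in range(20000):
    n = random.randint(10, 100)  # per-group sample size
    d, sig = simulate_study(n)
    if sig:
        published.append((n, d))

def pearson_r(pairs):
    xs, ys = zip(*pairs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

# Among published studies, effect size correlates negatively with sample size.
print(pearson_r(published))
```

Even though every simulated study has the same true effect, the publication filter alone produces the negative correlation the authors report.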
“There seems to be agreement that small sample studies are at the center of the problems around publication bias. If so, neuroscience (especially traditional imaging studies with their sample sizes of about 20) could be especially affected”, Kühberger said.
“Publication bias is a problem in science (Everyone, 1950-2014).”
In an informative blog post, Daniël Lakens (of Eindhoven University of Technology, the Netherlands) takes the discussion further, especially with respect to p curves (plots of the distribution of p values across studies). There is a risk of misinterpreting these p curves as a sign of scientific misconduct (“p-hacking”); what they reflect instead is precisely publication bias, Lakens argues. He concludes that “[Kühberger et al’s] article is an excellent reminder of how publication bias is a huge waste of resources, and biases the effect size estimates based on the published literature.”
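Lakens' point can be made concrete: a sharp step in reported p values at .05 arises from a publication filter alone, with no p-hacking. In this sketch all numbers are made up for illustration, including the assumption that non-significant results are published only 10% of the time:

```python
import math
import random

random.seed(2)

def p_two_sided(t):
    """Two-sided p value for a t statistic, via the normal approximation."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

published_p = []
for _ in range(50000):
    n = random.randint(10, 100)                 # per-group sample size
    # Observed d is roughly Normal(true_d, sqrt(2/n)) in large samples.
    d = random.gauss(0.3, math.sqrt(2 / n))
    p = p_two_sided(d * math.sqrt(n / 2))
    # Publication filter: significant results always published,
    # non-significant results published with probability 0.1.
    if p < 0.05 or random.random() < 0.1:
        published_p.append(p)

just_below = sum(0.04 <= p < 0.05 for p in published_p)
just_above = sum(0.05 <= p < 0.06 for p in published_p)
print(just_below, just_above)
```

Before filtering, the two bins on either side of .05 hold similar numbers of studies; after filtering, the just-below bin dominates, reproducing the asymmetry Kühberger et al. observed without anyone tampering with their analyses.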
Kühberger, A., Fritz, A., & Scherndl, T. (2014). Publication bias in psychology: A diagnosis based on the correlation between effect size and sample size. PLoS ONE, 9(9). DOI: 10.1371/journal.pone.0105825