On the Reproducibility of Psychological Science. Academic Article

abstract

  • Investigators from a large consortium of scientists recently performed a multi-year study in which they replicated 100 psychology experiments. Although statistically significant results were reported in 97% of the original studies, statistical significance was achieved in only 36% of the replicated studies. This article presents a reanalysis of these data based on a formal statistical model that accounts for publication bias by treating outcomes from unpublished studies as missing data, while simultaneously estimating the distribution of effect sizes for those studies that tested nonnull effects. The resulting model suggests that more than 90% of tests performed in eligible psychology experiments tested negligible effects, and that publication biases based on p-values caused the observed rates of nonreproducibility. The results of this reanalysis provide a compelling argument for both increasing the threshold required for declaring scientific discoveries and for adopting statistical summaries of evidence that account for the high proportion of tested hypotheses that are false. Supplementary materials for this article are available online.
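  • Illustrative note: the sketch below is not the authors' model; it is a minimal simulation, assuming hypothetical values for the null proportion (pi0), per-arm sample size (n_per_arm), and effect-size spread (effect_sd), of the mechanism the abstract describes, namely a mixture of null and non-null effects combined with a p-value-based publication filter.

```python
# Minimal illustrative sketch, NOT the authors' model: pi0, n_per_arm,
# effect_sd, and two_sample_p are hypothetical choices used only to show how
# publishing only p < 0.05 results can produce low replication rates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

pi0 = 0.90          # assumed proportion of tested hypotheses that are truly null
n_tests = 100_000   # hypothetical number of original experiments
n_per_arm = 30      # hypothetical sample size per group
effect_sd = 0.6     # assumed spread of true standardized effects when non-null

# True standardized effects: exactly 0 for null tests, N(0, effect_sd^2) otherwise.
is_null = rng.random(n_tests) < pi0
true_effect = np.where(is_null, 0.0, rng.normal(0.0, effect_sd, n_tests))

def two_sample_p(delta, n, rng):
    """Two-sided p-value of a two-sample z-test when the true effect is delta."""
    x_bar = rng.normal(delta, 1.0, (n, delta.size)).mean(axis=0)
    y_bar = rng.normal(0.0, 1.0, (n, delta.size)).mean(axis=0)
    z = (x_bar - y_bar) / np.sqrt(2.0 / n)
    return 2.0 * stats.norm.sf(np.abs(z))

# Original studies: only results with p < 0.05 are "published".
p_orig = two_sample_p(true_effect, n_per_arm, rng)
published = p_orig < 0.05

# Replications of the published studies, with no selection applied.
p_rep = two_sample_p(true_effect[published], n_per_arm, rng)

print(f"original tests reaching p < 0.05 (and thus published): {published.mean():.1%}")
print(f"published studies whose true effect is null: {is_null[published].mean():.1%}")
print(f"replications of published studies reaching p < 0.05: {(p_rep < 0.05).mean():.1%}")
```

  • Under these assumed inputs the replication significance rate falls far below the (by construction) 100% significance rate of the published originals, qualitatively mirroring the 97% versus 36% gap reported above; the article itself fits a formal missing-data model to the replication data rather than simulating the mechanism.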

published proceedings

  • Journal of the American Statistical Association (J Am Stat Assoc)

altmetric score

  • 279.332

author list (cited authors)

  • Johnson, V. E., Payne, R. D., Wang, T., Asher, A., & Mandal, S.

citation count

  • 100

complete list of authors

  • Johnson, Valen E.; Payne, Richard D.; Wang, Tianying; Asher, Alex; Mandal, Soutrik

publication date

  • January 2017