The prevalence of statistical reporting errors in psychology (1985–2013)

Open Access
Authors
  • M.B. Nuijten
  • C.H.J. Hartgerink
  • M.A.L.M. van Assen
  • S. Epskamp
  • J.M. Wicherts
Publication date December 2016
Journal Behavior Research Methods
Volume 48, Issue 4
Pages 1205-1226
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Psychology Research Institute (PsyRes)
  • Faculty of Social and Behavioural Sciences (FMG)
Abstract
This study documents reporting errors in a sample of over 250,000 p-values reported in eight major psychology journals from 1985 to 2013, using the new R package “statcheck.” statcheck retrieved null-hypothesis significance testing (NHST) results from over half of the articles published in this period. In line with earlier research, we found that half of all published psychology papers that use NHST contained at least one p-value that was inconsistent with its test statistic and degrees of freedom. One in eight papers contained a grossly inconsistent p-value that may have affected the statistical conclusion. In contrast to earlier findings, we found that the average prevalence of inconsistent p-values has been stable over the years or has declined. The prevalence of gross inconsistencies was higher in p-values reported as significant than in p-values reported as nonsignificant, which could indicate a systematic bias in favor of significant results. Possible remedies for the high prevalence of reporting inconsistencies include encouraging data sharing, having co-authors check results in a so-called “co-pilot model,” and using statcheck to flag possible inconsistencies in one’s own manuscript or during the review process.
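
To illustrate the kind of check statcheck automates, the following is a minimal sketch in R (the language statcheck itself is written in): it recomputes the two-tailed p-value implied by a reported t statistic and its degrees of freedom, then compares it with the reported p-value. The reported result used here is hypothetical.

  # Minimal sketch of the consistency check that statcheck automates.
  # The reported result "t(28) = 2.20, p = .03" is hypothetical.
  t_value    <- 2.20   # reported test statistic
  df         <- 28     # reported degrees of freedom
  p_reported <- 0.03   # reported p-value

  # Two-tailed p-value implied by the reported statistic
  p_recomputed <- 2 * pt(abs(t_value), df = df, lower.tail = FALSE)
  p_recomputed  # ~0.036, so the reported p = .03 is inconsistent

  # statcheck itself also allows for rounding of reported values, and it
  # labels an inconsistency "gross" only when it changes the statistical
  # conclusion (here both p-values are below .05, so this error would be
  # inconsistent but not grossly inconsistent).

If the package is installed, passing the same result as text to its statcheck() function, e.g. statcheck("t(28) = 2.20, p = .03"), should flag the inconsistency automatically.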
Document type Article
Language English
Published at https://doi.org/10.3758/s13428-015-0664-2