Operations, Information and Decisions Papers

Document Type

Journal Article

Date of this Version

11-2014

Publication Source

Perspectives on Psychological Science

Volume

9

Issue

6

Start Page

666

Last Page

681

DOI

10.1177/1745691614553988

Abstract

Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many Labs Replication Project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the “choice overload” literature.
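The estimation idea summarized in the abstract can be sketched briefly: under a candidate effect size, each significant test statistic is converted to a "pp-value" (its probability of being at least that extreme, conditional on reaching significance), and the candidate effect whose pp-values look most uniform is taken as the estimate. The sketch below is a minimal illustration for two-sample t-tests only, not the authors' implementation; the function name, the equal-cell-size approximation from the degrees of freedom, and the search bounds are assumptions.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def pcurve_estimate(t_values, dfs, alpha=0.05):
    """Estimate a true effect size (Cohen's d) from significant
    two-sample t-tests only, via the pp-value / uniformity idea.

    t_values : observed (significant) t statistics
    dfs      : their degrees of freedom
    """
    t_values = np.asarray(t_values, dtype=float)
    dfs = np.asarray(dfs, dtype=float)

    def loss(d):
        # Approximate per-cell n from df, assuming equal cells: df = 2n - 2
        n = dfs / 2 + 1
        ncp = d * np.sqrt(n / 2)            # noncentrality for two-sample t
        t_crit = stats.t.ppf(1 - alpha / 2, dfs)
        power = 1 - stats.nct.cdf(t_crit, dfs, ncp)
        # pp-value: P(T >= t | significant) under candidate effect d
        pp = (1 - stats.nct.cdf(t_values, dfs, ncp)) / power
        pp = np.sort(pp)
        k = len(pp)
        # KS-type distance of the pp-values from Uniform(0, 1)
        return np.max(np.abs(pp - np.arange(1, k + 1) / k))

    res = minimize_scalar(loss, bounds=(0.0, 2.0), method="bounded")
    return res.x
```

Feeding the function only significant results recovers the underlying effect because the truncation at the critical value is built into the pp-value computation, which is the property the abstract highlights.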

Keywords

publication bias, p-hacking, p-curve

Date Posted: 27 November 2017

This document has been peer reviewed.