Significance Tests Harm Progress in Forecasting

Penn collection
Marketing Papers
Subject
accuracy measures
combining forecasts
confidence intervals
effect size
M-competition
meta-analysis
null hypothesis
practical significance
replications
Abstract

Based on a summary of prior literature, I conclude that tests of statistical significance harm scientific progress. Efforts to find exceptions to this conclusion have, to date, turned up none. Even when done correctly, significance tests are dangerous. I show that summaries of scientific research do not require tests of statistical significance. I illustrate the dangers of significance tests by examining an application to the M3-Competition. Although the authors of that reanalysis conducted a proper series of statistical tests, they suggest that the original M3-Competition was not justified in concluding that combined forecasts reduce errors and that the selection of the best method depends upon the choice of a proper error measure. I show that the original conclusions were justified and that they are correct. Authors should try to avoid tests of statistical significance, journals should discourage them, and readers should ignore them. Instead, to analyze and communicate findings from empirical studies, one should use effect sizes, confidence intervals, replications/extensions, and meta-analyses.
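
To make the recommended alternatives concrete, here is a minimal sketch in Python with NumPy of reporting an effect size together with a bootstrap confidence interval for the error reduction obtained by combining two forecasts. The data are simulated for illustration only; they are not M3-Competition results, and this is not the paper's own analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical absolute percentage errors (APEs) for two individual
# forecasting methods and their combination on the same 50 series.
# Simulated illustrative data, not M3-Competition results.
n = 50
errors_a = rng.gamma(shape=2.0, scale=7.0, size=n)   # method A APEs (%)
errors_b = rng.gamma(shape=2.0, scale=8.0, size=n)   # method B APEs (%)
errors_comb = 0.5 * (errors_a + errors_b) * rng.uniform(0.7, 1.0, size=n)

# Effect size: percentage reduction in mean APE from combining,
# relative to the average accuracy of the individual methods.
baseline = 0.5 * (errors_a.mean() + errors_b.mean())
effect = 100.0 * (baseline - errors_comb.mean()) / baseline

# 95% bootstrap confidence interval for that error reduction.
boot = np.empty(10_000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)
    base = 0.5 * (errors_a[idx].mean() + errors_b[idx].mean())
    boot[i] = 100.0 * (base - errors_comb[idx].mean()) / base
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Error reduction from combining: {effect:.1f}%")
print(f"95% bootstrap CI: [{lo:.1f}%, {hi:.1f}%]")
```

Reporting the estimate and its interval directly conveys the practical size and the precision of the improvement, which a p-value alone does not.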

Digital Object Identifier
10.1016/j.ijforecast.2007.03.004
Publication date
2007-04-01
Journal title
International Journal of Forecasting
Volume number
23
Issue number
2
Publisher
Elsevier
Publisher DOI
http://dx.doi.org/10.1016/j.ijforecast.2007.03.004
Comments
Postprint version. Published in International Journal of Forecasting, Volume 23, Issue 2, April 2007, pages 321-327. Publisher URL: http://dx.doi.org/10.1016/j.ijforecast.2007.03.004