In-Season Prediction of Batting Averages: A Field Test of Empirical Bayes and Bayes Methodologies
Subject
hierarchical Bayes
harmonic prior
variance stabilization
FDR
sports
hitting streaks
hot-hand

Discipline
Applied Statistics
Abstract
Batting average is one of the principal performance measures for an individual baseball player. It is natural to model it statistically as a binomial proportion, with a given (observed) number of qualifying attempts (called "at-bats"), an observed number of successes ("hits") distributed according to the binomial distribution, and a true (but unknown) value p_i that represents the player's latent ability. This is a common data structure in many statistical applications, so the methodological study here has implications for a wide range of applications. We look at the batting records of each Major League player over the course of a single season (2005). The primary focus is on using only the batting records from an earlier part of the season (e.g., the first 3 months) to estimate the batter's latent ability, p_i, and, consequently, to predict their batting-average performance for the remainder of the season. Since we use a season that has already concluded, we can then validate our estimation performance by comparing the estimated values to the actual values for the remainder of the season. The prediction methods investigated are motivated by empirical Bayes and hierarchical Bayes interpretations. A newly proposed nonparametric empirical Bayes procedure performs particularly well in the basic analysis of the full data set, though less well in analyses involving more homogeneous subsets of the data. In those more homogeneous situations, better performance is obtained from appropriate versions of more familiar methods. In all situations the poorest performing choice is the naïve predictor, which directly uses the current average to predict the future average. One feature of all the statistical methodologies here is the preliminary use of a new form of variance-stabilizing transformation to convert the binomial data problem into a more familiar structure involving (approximately) Normal random variables with known variances. This transformation technique is also used to construct a new empirical validation test of the binomial model assumption that is the conceptual basis for all our analyses.
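As a rough illustration of the prediction workflow the abstract describes, the sketch below (Python with NumPy) applies a standard arcsine-root variance-stabilizing transform and then a generic positive-part James-Stein shrinkage predictor on the transformed scale. This is a minimal sketch under stated assumptions, not the paper's procedure: the transform shown is the textbook version rather than the new form introduced in the paper, the shrinkage rule stands in for the empirical Bayes and hierarchical Bayes methods actually studied, and the function names and player records are invented for the example.

# Hypothetical sketch (not the paper's exact procedure): a standard
# arcsine-root variance-stabilizing transform followed by a simple
# positive-part James-Stein shrinkage predictor of future batting averages.
import numpy as np

def stabilize(hits, at_bats):
    # Arcsine-root transform; the result is approximately Normal with
    # variance 1 / (4 * at_bats) when at_bats is reasonably large.
    x = np.arcsin(np.sqrt((hits + 0.25) / (at_bats + 0.5)))
    var = 1.0 / (4.0 * at_bats)
    return x, var

def shrinkage_predict(hits1, ab1):
    # Shrink each player's transformed first-half average toward the
    # grand mean, then back-transform to the batting-average scale.
    x, var = stabilize(hits1, ab1)
    grand_mean = x.mean()
    k = x.size
    s2 = ((x - grand_mean) ** 2).sum()
    # Positive-part James-Stein factor, using a common-variance approximation.
    shrink = max(0.0, 1.0 - (k - 3) * var.mean() / s2)
    x_hat = grand_mean + shrink * (x - grand_mean)
    return np.sin(x_hat) ** 2

# Example with made-up first-half records for five players.
hits1 = np.array([50, 62, 38, 71, 45])
ab1 = np.array([180, 210, 150, 240, 170])
print(shrinkage_predict(hits1, ab1))

In this sketch the naïve predictor would correspond to shrink = 1 (use each player's current average unchanged); the point of the shrinkage-based alternatives compared in the paper is that pooling information across players typically predicts the remainder of the season better.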