Model Selection Using Information Theory and the MDL Principle
Subject
Bayes information criterion (BIC), risk inflation criterion (RIC), cross-validation, model selection, stepwise regression, regression tree

Disciplines
Applied Mathematics, Statistical Methodology, Statistical Theory
Abstract
Information theory offers a coherent, intuitive view of model selection. This perspective arises from thinking of a statistical model as a code, an algorithm for compressing data into a sequence of bits. The description length is the length of this code for the data plus the length of a description of the model itself. The length of the code for the data measures the fit of the model to the data, whereas the length of the code for the model measures its complexity. The minimum description length (MDL) principle picks the model with the smallest description length, balancing fit against complexity. The conversion of a model into a code is flexible: one can represent a regression model, for example, with codes that reproduce the AIC and BIC, as well as with codes that motivate other model selection criteria. Going further, information theory allows one to choose among various types of non-nested models, such as tree-based models and regressions identified from different sets of predictors. A running example comparing several models for the well-known Boston housing data illustrates these ideas.
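The two-part code described above lends itself to direct computation. The sketch below is a minimal illustration under stated assumptions, not the paper's own construction: it prices the data at the Gaussian negative log-likelihood in bits and charges (1/2) log2(n) bits per estimated parameter, the coding convention that makes the description length proportional to BIC, one of the codes the abstract mentions. The description_length helper and the synthetic data are hypothetical stand-ins, not the Boston housing example from the paper.

```python
# Minimal sketch of two-part MDL selection for Gaussian linear regression,
# assuming a BIC-style coding: data cost = negative log-likelihood in bits,
# model cost = (1/2) log2(n) bits per estimated parameter.
import numpy as np

def description_length(y, X):
    """Total two-part code length, in bits, for an OLS fit of y on X."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n            # ML estimate of the noise variance
    # Length of the code for the data: Gaussian negative log-likelihood in
    # bits; this term measures fit (a better fit gives a shorter code).
    data_bits = 0.5 * n * np.log2(2.0 * np.pi * np.e * sigma2)
    # Length of the code for the model: (1/2) log2(n) bits for each of the
    # k coefficients plus the variance; this term measures complexity.
    model_bits = 0.5 * (k + 1) * np.log2(n)
    return data_bits + model_bits

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 predictors
y = X[:, :3] @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)  # only 2 slopes matter

# MDL compares candidate models by total description length, keeping the shortest.
for k in (1, 3, 4):
    print(f"{k} columns: {description_length(y, X[:, :k]):8.1f} bits")
```

Under these conventions the smallest total should fall on the three-column model that generated the data. Charging instead a fixed number of bits per parameter, independent of n, yields an AIC-like criterion, which is one way the flexibility of the model-to-code conversion plays out.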