On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization

Penn collection
Statistics Papers
Subject
Statistics and Probability
Author
Kakade, Sham M
Sridharan, Karthik
Tewari, Ambuj
Abstract

This work characterizes the generalization ability of algorithms whose predictions are linear in the input vector. To this end, we provide sharp bounds for the Rademacher and Gaussian complexities of (constrained) linear classes, which directly lead to a number of generalization bounds. This derivation provides simplified proofs of a number of corollaries, including: risk bounds for linear prediction (including settings where the weight vectors are constrained by either L2 or L1 constraints), margin bounds (for both L2 and L1 margins, along with more general notions based on relative entropy), a proof of the PAC-Bayes theorem, and upper bounds on L2 covering numbers (with Lp norm constraints and relative entropy constraints). In addition to providing a unified analysis, the results herein provide some of the sharpest risk and margin bounds. Interestingly, our results show that the uniform convergence rates of empirical risk minimization algorithms tightly match the regret bounds of online learning algorithms for linear prediction, up to a constant factor of 2.
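
For context, the Rademacher complexity bounds referred to above take the following standard form (an illustrative sketch only: the symbols W_2, W_1, X_2, X_infinity and the exact constants are conventional textbook choices and may differ from the precise statements in the paper):

% L2-constrained linear class, assuming inputs satisfy \|x_i\|_2 <= X_2:
\[
  \mathcal{F}_2 = \{\, x \mapsto \langle w, x \rangle : \|w\|_2 \le W_2 \,\},
  \qquad
  \widehat{\mathfrak{R}}_n(\mathcal{F}_2) \;\le\; \frac{X_2\, W_2}{\sqrt{n}} .
\]
% L1-constrained linear class in d dimensions, assuming \|x_i\|_\infty <= X_\infty:
\[
  \mathcal{F}_1 = \{\, x \mapsto \langle w, x \rangle : \|w\|_1 \le W_1 \,\},
  \qquad
  \widehat{\mathfrak{R}}_n(\mathcal{F}_1) \;\le\; X_\infty\, W_1 \sqrt{\frac{2 \log (2d)}{n}} .
\]

Bounds of this type plug directly into standard uniform convergence and margin-based templates, which is how they yield the risk and margin bounds listed in the abstract.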

Date of presentation
2008-01-01
Conference name
Advances in Neural Information Processing Systems 21 (NIPS 2008)