Online Bounds for Bayesian Algorithms

Penn collection
Statistics Papers
Subject
Other Statistics and Probability
Author
Kakade, Sham
Ng, Andrew Y.
Abstract

We present a competitive analysis of Bayesian learning algorithms in the online learning setting and show that many simple Bayesian algorithms (such as Gaussian linear regression and Bayesian logistic regression) perform favorably when compared, in retrospect, to the single best model in the model class. The analysis does not assume that the Bayesian algorithms' modeling assumptions are "correct," and our bounds hold even if the data is adversarially chosen. For Gaussian linear regression (using log loss), our error bounds are comparable to the best bounds in the online learning literature, and we also provide a lower bound showing that Gaussian linear regression is optimal in a certain worst-case sense. We also give bounds for some widely used maximum a posteriori (MAP) estimation algorithms, including regularized logistic regression.
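As a rough illustration of the setting the abstract describes, the sketch below runs online Gaussian linear regression: at each round it predicts the next label from the current posterior over weights, accumulates the predictive log loss, and then performs the Bayesian update. This is a minimal sketch of the standard sequential-update recursion, not code from the paper; the function name, prior variance, and noise variance are illustrative choices.

```python
import numpy as np

def online_gaussian_regression(X, y, prior_var=1.0, noise_var=1.0):
    """Sequentially predict each y_t from the posterior over weights,
    then update the posterior with (x_t, y_t). Returns the cumulative
    predictive log loss and the final posterior mean.

    Illustrative sketch: prior N(0, prior_var * I), observation noise
    N(0, noise_var). These hyperparameters are assumptions, not values
    taken from the paper."""
    d = X.shape[1]
    Sigma = prior_var * np.eye(d)   # posterior covariance
    mu = np.zeros(d)                # posterior mean
    total_log_loss = 0.0
    for x, t in zip(X, y):
        # The predictive distribution for the next label is Gaussian.
        pred_mean = x @ mu
        pred_var = x @ Sigma @ x + noise_var
        total_log_loss += 0.5 * (np.log(2 * np.pi * pred_var)
                                 + (t - pred_mean) ** 2 / pred_var)
        # Rank-one Bayesian update of the posterior (Kalman-style gain).
        k = Sigma @ x / pred_var
        mu = mu + k * (t - pred_mean)
        Sigma = Sigma - np.outer(k, x @ Sigma)
    return total_log_loss, mu
```

The competitive analysis in the paper compares the cumulative log loss accumulated above against that of the single best fixed weight vector chosen in hindsight, without assuming the data were actually generated by this Gaussian model.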

Date of presentation
2008-01-01