
Statistics Papers
Document Type
Conference Paper
Date of this Version
2011
Publication Source
Advances in Neural Information Processing Systems
Volume
24
Abstract
We develop unified information-theoretic machinery for deriving lower bounds for passive and active learning schemes. Our bounds involve the so-called Alexander's capacity function. The supremum of this function was recently rediscovered by Hanneke in the context of active learning under the name of "disagreement coefficient." For passive learning, our lower bounds match the upper bounds of Giné and Koltchinskii up to constants and generalize analogous results of Massart and Nédélec. For active learning, we provide the first known lower bounds based on the capacity function rather than the disagreement coefficient.
Recommended Citation
Raginsky, M., & Rakhlin, A. (2011). Lower bounds for passive and active learning. Advances in Neural Information Processing Systems, 24. Retrieved from https://repository.upenn.edu/statistics_papers/123
Date Posted: 27 November 2017