Information-Based Complexity, Feedback and Dynamics in Convex Programming

Penn collection
Statistics Papers
Subject
convex programming
feedback
information theory
minimax techniques
sequential estimation
Shannon information
active learning problem
feedback information theory
information-based complexity
minimax lower bound
quantitative notion
sequential convex optimization
sequential optimization algorithm
signal-to-noise ratio
statistical literature
accuracy
complexity theory
convex functions
Markov processes
noise measurement
optimization
random variables
convex optimization
Fano's inequality
feedback information theory
hypothesis testing with controlled observations
information-based complexity
information-theoretic converse
minimax lower bounds
sequential optimization algorithms
statistical estimation
Computer Sciences
Statistics and Probability
Author
Raginsky, Maxim
Rakhlin, Alexander
Abstract

We study the intrinsic limitations of sequential convex optimization through the lens of feedback information theory. In the oracle model of optimization, an algorithm queries an oracle for noisy information about the unknown objective function, and the goal is to (approximately) minimize every function in a given class using as few queries as possible. We show that, in order for a function to be optimized, the algorithm must be able to accumulate enough information about the objective. This, in turn, puts limits on the speed of optimization under specific assumptions on the oracle and the type of feedback. Our techniques are akin to those used in the statistical literature to obtain minimax lower bounds on the risks of estimation procedures; the notable difference is that, unlike in the case of i.i.d. data, a sequential optimization algorithm can gather observations in a controlled manner, so that the amount of information at each step is allowed to change in time. In particular, we show that optimization algorithms often obey the law of diminishing returns: the signal-to-noise ratio drops as the optimization algorithm approaches the optimum. To underscore the generality of the tools, we use our approach to derive fundamental lower bounds for a certain active learning problem. Overall, the present work connects the intuitive notions of “information” in optimization, experimental design, estimation, and active learning to the quantitative notion of Shannon information.
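As a rough illustration of the flavor of bound the abstract refers to (a generic Fano-type minimax lower bound, sketched here with illustrative notation rather than the paper's own statement), suppose an algorithm makes T oracle queries and V is drawn uniformly from M candidate objectives whose near-minimizers are separated, so that no point can be δ-optimal for two of them. Then Fano's inequality gives

\[
\sup_{f \in \mathcal{F}} \mathbb{E}\!\left[ f(x_T) - \min_{x} f(x) \right]
\;\ge\; \delta \left( 1 - \frac{I(V;\, Y^T) + \log 2}{\log M} \right),
\]

where Y^T = (Y_1, \dots, Y_T) are the noisy oracle responses. Because the queries are chosen sequentially and adaptively, the mutual information I(V; Y^T) need not grow linearly in T, and controlling this growth is what yields the lower bounds on optimization speed.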

Publication date
2011-10-01
Journal title
IEEE Transactions on Information Theory