Information Complexity of Black-Box Convex Optimization: A New Look via Feedback Information Theory
Subject
information theory; learning (artificial intelligence); active learning problem; black box convex optimization; convex programming; feedback information theory; information complexity; minimax lower bound; feedback; history; iterative methods; large-scale systems; minimax techniques; signal processing; signal processing algorithms; statistics; stochastic resonance

Discipline
Computer Sciences; Statistics and Probability
Abstract
This paper revisits information complexity of black-box convex optimization, first studied in the seminal work of Nemirovski and Yudin, from the perspective of feedback information theory. These days, large-scale convex programming arises in a variety of applications, and it is important to refine our understanding of its fundamental limitations. The goal of black-box convex optimization is to minimize an unknown convex objective function from a given class over a compact, convex domain using an iterative scheme that generates approximate solutions by querying an oracle for local information about the function being optimized. The information complexity of a given problem class is defined as the smallest number of queries needed to minimize every function in the class to some desired accuracy. We present a simple information-theoretic approach that not only recovers many of the results of Nemirovski and Yudin, but also gives some new bounds pertaining to optimal rates at which iterative convex optimization schemes approach the solution. As a bonus, we give a particularly simple derivation of the minimax lower bound for a certain active learning problem on the unit interval.
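For concreteness, the abstract's verbal definition of information complexity can be written out as a formula. This is a sketch in our own notation; the symbols below (function class $\mathcal{F}$, domain $\mathsf{X}$, accuracy $\epsilon$, query budget $T$, method output $\hat{x}_T$) are assumptions for illustration, not the paper's notation:

% Assumed notation (ours, not the paper's): F = given class of convex functions,
% X = compact convex domain, eps = target accuracy, T = number of oracle queries,
% hat{x}_T = approximate solution returned after T queries.
\[
  \mathrm{Compl}(\epsilon;\mathcal{F})
    \;=\; \min\Bigl\{ T \in \mathbb{N} \;:\; \text{some $T$-query method outputs $\hat{x}_T$ with }
      \sup_{f \in \mathcal{F}} \Bigl[ f(\hat{x}_T) - \min_{x \in \mathsf{X}} f(x) \Bigr] \le \epsilon \Bigr\}
\]

In words, it is the smallest number of oracle queries that suffices to drive the worst-case optimization error over the class below $\epsilon$, matching the definition stated in the abstract.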