Date of Award

2013

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Graduate Group

Applied Mathematics

First Advisor

Alexander Rakhlin

Second Advisor

Abraham J. Wyner

Abstract

First, we study online learning with an extended notion of regret, defined with respect to a set of strategies. We develop tools for analyzing minimax rates and for deriving efficient learning algorithms in this setting. Although the standard methods for minimizing the usual notion of regret fail here, our analysis demonstrates the existence of regret-minimization methods that compete with such sets of strategies as autoregressive algorithms, strategies based on statistical models, regularized least squares, and follow-the-regularized-leader strategies. In several cases, we also obtain computationally efficient algorithms.

Next, we study how online linear optimization can compete with strategies while benefiting from a predictable sequence. We analyze the minimax value of the online linear optimization problem and develop algorithms that exploit the predictable sequence while guaranteeing performance comparable to the best fixed action. We then extend the analysis to a model-selection problem over multiple predictable sequences, and finally revisit the problem from the perspective of dynamic regret.

Finally, we study the relationship between Approximate Entropy and Shannon entropy, and propose adaptive Shannon-entropy approximation methods (e.g., the Lempel-Ziv sliding-window method) as an alternative approach to quantifying the regularity of data. The new approach has the advantage of adaptively choosing the order of regularity.
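For concreteness, the classical Approximate Entropy statistic (Pincus's ApEn, the baseline that the adaptive approach is compared against) can be sketched as follows. This is a minimal illustrative implementation of the standard definition, not the thesis's adaptive method; the function name `apen` and the default parameters m=2, r=0.2 are conventional choices, not taken from the dissertation.

```python
import math

def apen(x, m=2, r=0.2):
    """Approximate Entropy ApEn(m, r) of a sequence x.

    phi(m) is the average log-frequency with which length-m templates
    repeat within tolerance r (Chebyshev distance); ApEn is the drop
    phi(m) - phi(m+1), i.e., how much new "unpredictability" appears
    when the template length grows by one.
    """
    n = len(x)

    def phi(m):
        k = n - m + 1
        templates = [x[i:i + m] for i in range(k)]
        log_freqs = []
        for a in templates:
            # Count templates b matching a within tolerance r
            # (self-match included, as in the standard definition).
            c = sum(1 for b in templates
                    if max(abs(u - v) for u, v in zip(a, b)) <= r)
            log_freqs.append(math.log(c / k))
        return sum(log_freqs) / k

    return phi(m) - phi(m + 1)
```

A perfectly regular sequence yields ApEn near zero (a constant sequence gives exactly zero), while irregular data yields larger values; the drawback motivating the thesis is that m and r must be fixed in advance rather than chosen adaptively.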
