Control-Theoretic Methods in Analysis and Design of Optimization Algorithms

Degree type
Doctor of Philosophy (PhD)
Graduate group
Electrical & Systems Engineering
Subject
Iterative Algorithms
Numerical Optimization
Robust Control
Engineering
Copyright date
2018-09-28
Abstract

Recently, there has been a surge of interest in applying tools from dynamical systems and control theory to the analysis and design of iterative optimization algorithms. This perspective provides many insights and new directions of research: we can study robustness to uncertainties, provide nonconservative performance guarantees, and pursue principled algorithm design. In this thesis, we explore novel ideas that extend the literature in these directions. In the first part, we develop an interior-point method for solving a class of convex optimization problems with time-varying objective and constraint functions. The method takes the form of a dynamical system composed of two terms: (i) a correction term, consisting of a continuous-time version of Newton's method, and (ii) a prediction term, which tracks the drift of the optimal solution by taking into account the time-varying nature of the problem. We illustrate the applicability of the proposed method in two practical applications: a sparsity-promoting least squares problem and a collision-free robot navigation problem. In the second part, we shift focus to the analysis and design of iterative first-order optimization algorithms using tools from robust control. Specifically, we develop a semidefinite programming framework that certifies both exponential and subexponential convergence rates for a wide range of algorithms. We illustrate the utility of our results by analyzing the gradient method, proximal algorithms, and their accelerated variants for (strongly) convex problems. We also develop the continuous-time counterpart of this framework, which we use to analyze the gradient flow and the continuous-time limit of Nesterov's accelerated method. In the third part, we consider algorithm design: we propose a framework based on sum-of-squares programming to design iterative first-order optimization algorithms for smooth and strongly convex problems. Our starting point is a polynomial matrix inequality that serves as a sufficient condition for exponential convergence of a given algorithm; the entries of this matrix are polynomial functions of the unknown parameters (the exponential decay rate, stepsize, momentum coefficient, etc.). We then formulate a polynomial optimization problem that maximizes the exponential decay rate over the parameters of the algorithm, and finally use sum-of-squares (SOS) programming as a tractable relaxation of this polynomial optimization problem.
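
To make the prediction-correction structure concrete, consider an unconstrained time-varying problem $\min_x f(x,t)$. A common form of such dynamics (a sketch of the general idea; the interior-point method in the thesis additionally handles the time-varying constraints, e.g. through a barrier function) is
$$
\dot{x}(t) \;=\; -\big[\nabla_{xx} f(x,t)\big]^{-1}\Big(\alpha\,\nabla_x f(x,t) \;+\; \nabla_{tx} f(x,t)\Big), \qquad \alpha > 0,
$$
where the first term is a continuous-time Newton correction and the second term predicts the drift of the optimizer, as obtained by differentiating the optimality condition $\nabla_x f(x^\star(t),t) = 0$ with respect to time.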
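
For the robust-control analysis in the second part, the sketch below illustrates, in the simplest possible instance, the flavor of certificate such a semidefinite programming framework produces: gradient descent on an $m$-strongly convex, $L$-smooth function, a scalar Lyapunov function, and an S-procedure multiplier for the sector condition on the gradient. The use of cvxpy/SCS and all numerical values are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
import cvxpy as cp

m, L = 1.0, 10.0       # strong convexity / smoothness constants (illustrative)
eta = 2.0 / (m + L)    # classical stepsize for gradient descent

# Sector condition on the gradient, with x = x_k - x* and y = grad f(x_k):
# (y - m*x)(L*x - y) >= 0  <=>  [x, y] @ M @ [x, y]^T >= 0
M = np.array([[-m * L, (m + L) / 2.0],
              [(m + L) / 2.0, -1.0]])

def rate_certified(rho):
    """Feasibility of the LMI certifying |x_k - x*|^2 <= C * rho^(2k)."""
    P = cp.Variable(pos=True)        # Lyapunov weight: V(x) = P * (x - x*)^2
    lam = cp.Variable(nonneg=True)   # S-procedure multiplier
    # V(x_{k+1}) - rho^2 * V(x_k) + lam * sector <= 0 for all (x, y),
    # written as a 2x2 matrix inequality for the update x_{k+1} = x_k - eta*y_k
    lmi = cp.bmat([[P * (1 - rho ** 2), -eta * P],
                   [-eta * P, eta ** 2 * P]]) + lam * M
    prob = cp.Problem(cp.Minimize(0), [lmi << 0, P >= 1.0])
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

# Bisect for the smallest certifiable convergence rate
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if rate_certified(mid) else (mid, hi)
print(f"certified rate: {hi:.4f}  (known tight rate: {(L - m) / (L + m):.4f})")
```

For this stepsize the bisection recovers the known tight rate $(L-m)/(L+m)$; the framework in the thesis generalizes this template to proximal and accelerated methods and to subexponential rates.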
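
The sum-of-squares relaxation in the third part rests on a standard fact: nonnegativity of a polynomial $p$ is certified by a sum-of-squares decomposition, which is in turn a semidefinite constraint,
$$
p(x) \;=\; z(x)^\top Q\, z(x) \quad \text{for some } Q \succeq 0,
$$
where $z(x)$ is a vector of monomials in $x$. Replacing the nonnegativity conditions arising from the polynomial matrix inequality by SOS conditions of this form is what turns the proposed polynomial optimization into a tractable semidefinite relaxation.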

Advisor
Victor M. Preciado
Date of degree
2018