Discrete and Continuous Optimization for Collaborative and Multi-task Learning

Degree type
Doctor of Philosophy (PhD)
Graduate group
Electrical and Systems Engineering
Discipline
Electrical Engineering
Computer Sciences
Mathematics
Subject
Delay in Optimization and RL
Distributed Learning
Linear Bandit
Minimax Optimization
Robust Optimization
Submodular Maximization
Funder
Grant number
License
Copyright date
2023
Distributor
Related resources
Author
Adibi, Arman
Contributor
Abstract

This thesis is dedicated to addressing the challenges of robust collaborative learning and optimization in both discrete and continuous domains. With the ever-increasing scale of data and the growing demand for effective distributed learning, numerous obstacles emerge, including communication limitations, resilience to failures and corrupted data, limited information access, and collaboration in multi-task learning scenarios. The thesis consists of eight chapters, each targeting specific aspects of these challenges. The second chapter introduces novel algorithms for collaborative linear bandits, offering a comprehensive exploration of the benefits of collaboration in the presence of adversaries through thorough analyses and lower bounds. The third chapter addresses multi-agent min-max learning problems in the presence of Byzantine adversarial agents. The fourth chapter examines the effects of delays within stochastic approximation schemes, investigating non-asymptotic convergence rates under Markovian noise. The fifth chapter analyzes the performance of standard min-max optimization algorithms with delayed updates. The sixth chapter concentrates on robustness in discrete learning, specifically addressing convex-submodular problems in mixed continuous-discrete domains. The seventh chapter tackles the challenge of limited information access in collaborative problems with distributed constraints, developing optimal algorithms for submodular maximization under distributed partition matroid constraints. Lastly, the eighth chapter introduces a discrete variant of multi-task learning and meta-learning. In summary, this thesis contributes to the field of robust collaborative learning and decision-making by providing insights, algorithms, and theoretical guarantees in discrete and continuous optimization.
The advancements made across linear bandits, minimax optimization, distributed robust learning, delayed optimization, and submodular maximization pave the way for future developments in collaborative and multi-task learning.

Advisor
Hassani, Hamed
Date of degree
2023
Date Range for Data Collection (Start Date)
Date Range for Data Collection (End Date)
Digital Object Identifier
Series name and number
Volume number
Issue number
Publisher
Publisher DOI
Journal Issue
Comments
Recommended citation