damaru2/phd_thesis

Hi! I am David Martínez-Rubio and this is my PhD thesis, Acceleration in First-Order Optimization Methods: Promenading Beyond Convexity or Smoothness, and Applications, submitted for the degree of Doctor of Philosophy at the University of Oxford, Department of Computer Science. Handed in during Michaelmas Term 2021.

Beyond the content of my thesis, this document contains an implementation of a LaTeX idea: wrapping most of the math notation in non-intrusive hyperlinks, so that at any point you can click on a symbol and jump to its definition.
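
A minimal sketch of that idea, built on hyperref's `\hypertarget` and `\hyperlink` (the macro names `\newnotation` and `\usenotation` are hypothetical, not the ones used in the thesis):

```latex
\documentclass{article}
% colorlinks with a black link color keeps the links visually non-intrusive
\usepackage[colorlinks=true, linkcolor=black]{hyperref}

% Definition site: typeset the notation and mark it as a link target.
\newcommand{\newnotation}[2]{\hypertarget{notation:#1}{#2}}
% Every later use: typeset the notation as a click-through to its definition.
\newcommand{\usenotation}[2]{\hyperlink{notation:#1}{#2}}

\begin{document}
A function $f$ is \newnotation{smooth}{$L$-smooth} if $\nabla f$ is $L$-Lipschitz.

% Clicking the symbol below jumps back to the definition above.
Since $f$ is \usenotation{smooth}{$L$-smooth}, gradient descent with step size
$1/L$ decreases $f$ at every iteration.
\end{document}
```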

Abstract

Acceleration in optimization is a term generally applied to optimization algorithms that share a common methodology and enjoy convergence rates improving over those of simpler algorithms for the same problem. For example, Nesterov's Accelerated Gradient Descent improves over the gradient descent method for the optimization of smooth and convex functions. In this thesis, we investigate the acceleration phenomenon and its applications for three previously studied research questions in optimization and online learning: geodesically convex accelerated optimization in Riemannian manifolds, approximation of a proportionally fair solution under positive linear constraints, also known as the $1$-fair packing problem, and regret minimization in a decentralized cooperative stochastic multi-armed bandit problem with no reward collisions. We improve over previously studied approaches by using acceleration as a key element in all of our algorithms: we obtain an accelerated method for some non-convex problems in the first case; in the second case we exploit a local decrease condition of our problem, avoiding the use of poor and non-global smoothness; and in the third problem we design a solution for which we can effectively apply an accelerated gossip protocol to spread and use information.
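
As a point of reference (textbook form, not quoted from the thesis), Nesterov's Accelerated Gradient Descent for an $L$-smooth convex function $f$ iterates

```latex
% Textbook form of Nesterov's AGD for an L-smooth convex f.
\begin{align*}
  y_k     &= x_k + \tfrac{k-1}{k+2}\,\bigl(x_k - x_{k-1}\bigr), \\
  x_{k+1} &= y_k - \tfrac{1}{L}\,\nabla f(y_k),
\end{align*}
```

and attains $f(x_k) - f(x^\ast) = O(L\lVert x_0 - x^\ast\rVert^2 / k^2)$, compared with the $O(L\lVert x_0 - x^\ast\rVert^2 / k)$ rate of plain gradient descent.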

In particular, we provide an extensive overview of acceleration techniques that provides context for our proofs. In Riemannian optimization, we design the first globally accelerated algorithm for the optimization of smooth functions that are geodesically convex or geodesically strongly convex and that are defined on manifolds of constant sectional curvature. We reduce these problems to a Euclidean non-convex problem and design a global accelerated method, obtaining a solution with the same rates as Accelerated Gradient Descent in Euclidean space, up to logarithmic factors and constants depending on the initial distance to an optimum and on the curvature. We also provide some reductions. Regarding our fairness problem, we present a deterministic, accelerated, distributed and width-independent algorithm for the $1$-fair packing problem. We obtain a quadratic improvement in the number of iterations and remove polylog(width) factors with respect to the previous best solution. In our bandit problem, we design a decentralized upper confidence bound algorithm to minimize the expected regret. It allows for the use of delayed and approximate estimations of the arms' means, which we can obtain fast with an accelerated gossip communication protocol. We provide theoretical and empirical comparisons showing that we obtain lower regret than the previous state of the art.
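
To give a flavor of the communication step (a standard second-order, Chebyshev-style accelerated gossip iteration, not the exact protocol from the thesis): with a gossip matrix $W$ that respects the network topology, plain gossip averages local estimates via $x_{t+1} = W x_t$, while the accelerated variant adds a momentum term,

```latex
% A standard second-order (Chebyshev-style) accelerated gossip iteration;
% the mixing weight \omega is tuned from the spectral gap of W.
\begin{equation*}
  x_{t+1} = \omega\, W x_t + (1 - \omega)\, x_{t-1}, \qquad x_1 = W x_0,
\end{equation*}
```

which reduces the number of communication rounds needed to average across the network from roughly $O\bigl(1/(1-\lambda_2(W))\bigr)$ to roughly $O\bigl(1/\sqrt{1-\lambda_2(W)}\bigr)$, where $\lambda_2(W)$ is the second-largest eigenvalue of $W$.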
