Distributed Optimization and Control
Primal--Dual, Online, and Event-Triggered Algorithms
Time: Wed 2020-10-14 14.00
Location: F3, Lindstedtsvägen 26, KTH Campus, Stockholm
Subject area: Electrical Engineering
Doctoral student: Xinlei Yi, Automatic Control
Opponent: Professor Antonis Papachristodoulou, University of Oxford
Supervisor: Karl H. Johansson, Automatic Control, Signals, Sensors and Systems, ACCESS Linnaeus Centre; Dimos V. Dimarogonas, Automatic Control, Centre for Autonomous Systems (CAS), ACCESS Linnaeus Centre; Professor John S. Baras, University of Maryland, College Park
Abstract
In distributed optimization and control, each network node performs local computation based on its own information and information received from its neighbors through a communication network to achieve a global objective. Although many distributed optimization and control algorithms have been proposed, core theoretical problems of practical importance remain open. For example, what convergence properties can be obtained for nonconvex problems? How can time-varying cost and constraint functions be handled? Can these algorithms work under limited communication resources? This thesis contributes to answering these questions by providing new algorithms with better convergence rates and less information exchange than existing algorithms. It consists of three parts.
In the first part, we consider distributed nonconvex optimization problems. These problems are hard to solve in general, and often only stationary points can be found. We propose distributed primal--dual optimization algorithms under different information feedback settings. Specifically, when full-information feedback or deterministic zeroth-order oracle feedback is available, we show that the proposed algorithms converge sublinearly to a stationary point if each local cost function is smooth. They converge linearly to a global optimum if the global cost function also satisfies the Polyak--Łojasiewicz condition. This condition is weaker than strong convexity, which is the standard condition in the literature for proving linear convergence of distributed optimization algorithms. When stochastic gradient feedback or stochastic zeroth-order oracle feedback is available, we show that the proposed algorithms achieve linear speedup convergence rates, meaning that the convergence rate improves linearly with the number of computing nodes.
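The primal--dual mechanism can be illustrated in a few lines. The sketch below is a minimal, generic distributed primal--dual update for consensus optimization, not the thesis's exact algorithms: four nodes on a ring minimize a sum of local quadratics (strongly convex, hence satisfying the Polyak--Łojasiewicz condition), and the iterates converge linearly to the global optimum. The step sizes `eta` and `alpha` and the ring topology are illustrative assumptions.

```python
import numpy as np

# Ring of 4 nodes; Laplacian of the communication graph.
Lap = np.array([[ 2, -1,  0, -1],
                [-1,  2, -1,  0],
                [ 0, -1,  2, -1],
                [-1,  0, -1,  2]], dtype=float)

# Local costs f_i(x) = 0.5*(x - a_i)^2; the global optimum is mean(a).
a = np.array([1.0, 3.0, -2.0, 6.0])
grad = lambda x: x - a          # elementwise local gradients

eta, alpha = 0.2, 0.2           # illustrative step sizes (assumption)
x = np.zeros(4)                 # primal variable held by each node
v = np.zeros(4)                 # dual variable enforcing consensus

for _ in range(500):
    # primal step: local gradient + disagreement penalty + dual correction
    x = x - eta * (grad(x) + alpha * Lap @ x + v)
    # dual step: integrate the remaining disagreement
    v = v + eta * alpha * Lap @ x

print(x)  # all entries close to a.mean() == 2.0
```

The dual variable accumulates the consensus error, which is what drives all nodes to the same point rather than to their individual local minimizers.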
In the second part, distributed online convex optimization problems are considered. For such problems, the cost and constraint functions are revealed at the end of each time slot. We focus on time-varying coupled inequality constraints and time-varying directed communication networks. We propose one primal--dual dynamic mirror descent algorithm and two bandit primal--dual algorithms. It is shown that these distributed algorithms achieve the same sublinear regret and constraint violation bounds as existing centralized algorithms.
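To make the online setting concrete, here is a minimal single-node sketch, not the thesis's distributed bandit algorithms: an online projected gradient method plays a decision, only then observes that time slot's cost, and its average regret against the best fixed decision in hindsight vanishes sublinearly. The quadratic costs and the 1/sqrt(t) step-size schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
targets = rng.uniform(-1, 1, T)   # data defining f_t, revealed after acting
x, decisions = 0.0, []

for t in range(1, T + 1):
    decisions.append(x)
    # cost f_t(x) = (x - targets[t-1])^2 is revealed at the end of slot t
    g = 2 * (x - targets[t - 1])            # gradient feedback
    x = np.clip(x - g / np.sqrt(t), -1, 1)  # projected step, size ~ 1/sqrt(t)

# regret vs. the best fixed decision in hindsight (the sample mean
# minimizes the sum of squared losses)
best = targets.mean()
regret = sum((d - targets[t]) ** 2 - (best - targets[t]) ** 2
             for t, d in enumerate(decisions))
print(regret / T)  # average regret shrinks as T grows
```

The key feature of the online model is the information pattern: the decision at slot t is made before the cost of slot t is known, so performance is measured by regret rather than by distance to a fixed optimum.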
In the third and final part, in order to achieve a common control objective for a networked system, we propose distributed event-triggered algorithms to reduce the amount of information exchanged. Specifically, we propose dynamic event-triggered control algorithms to solve the average consensus problem for first-order systems, the global consensus problem for systems with input saturation, and the formation control problem with connectivity preservation for first- and second-order systems. We show that these algorithms do not exhibit Zeno behavior and that they achieve exponential convergence rates.
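The event-triggered idea can be sketched as follows. Instead of the dynamic triggers proposed in the thesis, this toy simulation uses a simple static trigger with an exponentially decaying threshold: each agent rebroadcasts its state only when it has drifted far enough from the last broadcast value, and the agents still reach approximate average consensus with far fewer transmissions than periodic broadcasting. The graph, threshold, and decay rate are illustrative assumptions.

```python
import numpy as np

# Ring of 4 agents; Laplacian of the communication graph.
Lap = np.array([[ 2, -1,  0, -1],
                [-1,  2, -1,  0],
                [ 0, -1,  2, -1],
                [-1,  0, -1,  2]], dtype=float)

dt, T = 0.01, 10.0
x = np.array([1.0, 3.0, -2.0, 6.0])  # initial states; average is 2.0
xhat = x.copy()                       # last broadcast values
events = 0

t = 0.0
while t < T:
    # each agent's control uses only broadcast information
    u = -Lap @ xhat
    x = x + dt * u
    # trigger rule: static threshold with exponential decay
    # (a simple stand-in for the dynamic triggers in the thesis)
    thresh = 0.5 * np.exp(-0.5 * t)
    trigger = np.abs(x - xhat) > thresh
    xhat[trigger] = x[trigger]        # broadcast only on trigger
    events += int(trigger.sum())
    t += dt

print(x, events)  # states near the average 2.0, with sparse broadcasts
```

Because the control is a weighted sum of broadcast disagreements, the state average is preserved, and the decaying threshold bounds how far the true states can drift from the broadcast ones between events.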