Analysis and Design of Algorithms for Dynamic Average Consensus and Convex Optimization

Efficient and robust algorithms are essential to meet today's increasing computational demands. In this thesis, we consider the analysis and design of both distributed algorithms for dynamic average consensus and centralized algorithms for convex optimization. In dynamic average consensus, each agent in a group has a local signal (such as the output of a sensor), and the goal is for each agent to estimate the global average of the signals using only communication with local neighbors. We analyze and design diffusion algorithms based on local averaging to solve the dynamic average consensus problem for two classes of signals. First, we consider signals whose frequency spectrum is nonzero only at a finite number of discrete frequencies. In this case, the local agent dynamics use feedback to converge to the exact average of the signals. Furthermore, we design estimators which (1) are scalable to a large number of agents, (2) are robust to both initialization errors and agents entering/leaving the group, (3) have internally stable dynamics, (4) use discrete-time local broadcast communication, and (5) have fast convergence rates. The second class consists of bandlimited signals; that is, signals whose frequency spectrum is nonzero over a continuous band of frequencies. The feedback estimators designed previously are shown to be inadequate in this case. Instead, we propose a feedforward estimator which can track the average with arbitrarily small error under very mild assumptions, including when the communication among the agents changes rapidly in time. Dynamic average consensus can be formulated as a convex optimization problem, and we show that the analysis and design of algorithms for the two problems are quite similar. In the centralized setting, we design and analyze a novel gradient-based algorithm for unconstrained convex optimization.
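The feedback idea described above can be illustrated with a generic first-order dynamic consensus scheme (this is a textbook-style sketch, not the specific estimators designed in the thesis; the ring graph, step size `eps`, and signal values are illustrative assumptions): each agent averages with its neighbors through the graph Laplacian and injects the change in its own local signal, which preserves the global average at every step.

```python
import numpy as np

def dynamic_consensus(phi, A, eps=0.3, iters=500):
    """Generic first-order dynamic average consensus sketch.
    phi: function k -> array of local signals at step k (one entry per agent).
    A:   symmetric adjacency matrix of the undirected communication graph.
    eps: consensus step size (must satisfy eps < 2 / lambda_max(Laplacian))."""
    Lap = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    x = phi(0).astype(float)                  # initialize to the local signals
    for k in range(iters):
        # Laplacian mixing drives agents together; the signal-increment
        # input keeps the sum of the estimates equal to the sum of signals.
        x = x - eps * (Lap @ x) + (phi(k + 1) - phi(k))
    return x

# Ring of 5 agents with constant local signals: every agent's estimate
# should converge to the global average of the signals.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
signals = np.array([2.0, -1.0, 4.0, 0.5, 7.0])
x = dynamic_consensus(lambda k: signals, A)
```

For constant signals this scheme converges to the exact average; for time-varying signals it tracks the average with an error that depends on how fast the signals change, which is the limitation that motivates the thesis's frequency-domain estimator designs.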
When the objective function is strongly convex with strong convexity parameter $m$, and its gradient is Lipschitz continuous with Lipschitz constant $L$, the iterates and function values converge linearly to the optimum at rates $\rho$ and $\rho^2$, respectively, where $\rho = 1-\sqrt{m/L}$. These are the fastest known guaranteed linear convergence rates for globally convergent first-order methods, and for high desired accuracies the corresponding iteration complexity is within a factor of two of the theoretical lower bound. We use a simple graphical design procedure based on integral quadratic constraints to derive closed-form expressions for the algorithm parameters. The new algorithm, which we call the triple momentum method, can be seen as an extension of gradient descent, Nesterov's accelerated gradient descent, and the heavy-ball method.
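A minimal sketch of the triple momentum recursion follows, using the published closed-form parameters with $\rho = 1-\sqrt{m/L}$ (the quadratic test function and the constants $m$, $L$ are illustrative assumptions, not from the thesis):

```python
import numpy as np

def triple_momentum(grad, x0, m, L, iters=300):
    """Triple momentum method for minimizing an m-strongly convex
    function with L-Lipschitz gradient. Converges linearly at rate
    rho = 1 - sqrt(m/L)."""
    rho = 1.0 - np.sqrt(m / L)
    alpha = (1 + rho) / L                       # gradient step size
    beta = rho**2 / (2 - rho)                   # momentum on the state
    gamma = rho**2 / ((1 + rho) * (2 - rho))    # momentum on the gradient query
    delta = rho**2 / (1 - rho**2)               # momentum on the output
    xi_prev = xi = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = (1 + gamma) * xi - gamma * xi_prev  # point where the gradient is queried
        xi, xi_prev = (1 + beta) * xi - beta * xi_prev - alpha * grad(y), xi
    return (1 + delta) * xi - delta * xi_prev   # output iterate

# Ill-conditioned quadratic f(x) = 0.5 * (m*x1^2 + L*x2^2), minimized at 0.
m, L = 1.0, 100.0
diag = np.array([m, L])
x_star = triple_momentum(lambda x: diag * x, [1.0, 1.0], m, L)
```

With $L/m = 100$ the rate is $\rho = 0.9$, versus roughly $1 - m/L = 0.99$ for plain gradient descent, which is the practical content of the accelerated rate: the iteration count scales with $\sqrt{L/m}$ rather than $L/m$.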

Last modified
  • 04/16/2018