A Proximal Atomic Coordination Algorithm for Distributed Optimization
Related papers
2016
We consider a general class of convex optimization problems over time-varying, multi-agent networks that naturally arise in many application domains, such as energy systems and wireless networks. In particular, we focus on programs with separable objective functions, local (possibly different) constraint sets, and a coupling inequality constraint expressed as the non-negativity of the sum of convex functions, each corresponding to one agent. We propose a novel distributed algorithm for such problems based on a combination of dual decomposition and proximal minimization. Our approach relies on an iterative scheme that enables agents to reach consensus on the dual variables while preserving information privacy. Specifically, agents are not required to disclose information about their local objective and constraint functions, nor to assume knowledge of the coupling constraint. Our analysis can be thought of as a generalization of dual gradient/subgradient algorithms to a distributed setup. We show convergence of the proposed algorithm to an optimal dual solution of the centralized problem counterpart, while the primal iterates generated by the algorithm converge to the set of optimal primal solutions. A numerical example demonstrating the efficacy of the proposed algorithm is also provided.
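As a concrete toy illustration of this family of methods (an assumed simplification with hypothetical data and step size, not the authors' exact iteration): two agents with private quadratic costs share the coupling constraint x1 + x2 ≤ 1; each keeps a local multiplier estimate, averages it with its neighbor's, and takes a dual subgradient step using only its own constraint function.

```python
# Toy sketch of dual decomposition with consensus on the dual variable.
# Two agents, private costs f_i(x) = (x - b_i)^2, coupling constraint
# (x_1 - 0.5) + (x_2 - 0.5) <= 0, i.e. x_1 + x_2 <= 1.

b = [1.0, 2.0]
lam = [0.0, 0.0]          # local dual estimates; primal data is never shared
c = 0.1                   # dual step size (assumed constant for simplicity)

for k in range(500):
    ell = [(lam[0] + lam[1]) / 2.0] * 2          # consensus on the duals
    # each agent minimizes f_i(x) + ell_i * (x - 0.5) in closed form:
    x = [b[i] - ell[i] / 2.0 for i in range(2)]
    # local dual subgradient step using only the agent's own constraint term:
    lam = [max(0.0, ell[i] + c * (x[i] - 0.5)) for i in range(2)]

print(x)  # approaches [0, 1], the optimum of the coupled problem
```

Note that agents exchange only dual estimates, never their cost functions or decisions, which matches the privacy point made in the abstract.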
Proximal minimization based distributed convex optimization
2016
We provide a novel iterative algorithm for distributed convex optimization over time-varying multi-agent networks, in the presence of heterogeneous agent constraints. We adopt a proximal minimization perspective and show that this setup allows us to bypass the difficulties of existing algorithms while simplifying the underlying mathematical analysis. At every iteration each agent makes a tentative decision by solving a local optimization program, and then communicates this decision with neighboring agents. We show that following this scheme agents reach consensus on a common decision vector, and in particular that this vector is an optimizer of the centralized problem.
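A minimal sketch of a proximal-minimization consensus step of this flavor, under assumed toy data (scalar decisions, quadratic private costs f_i(x) = (x - b_i)^2 so the local program has a closed form, a fixed doubly stochastic mixing matrix, and a vanishing proximal coefficient; all of these choices are illustrative, not the paper's):

```python
# Each agent minimizes its private cost plus a proximal term that pulls it
# toward the weighted average of its neighbors' tentative decisions:
#   x_i <- argmin_x f_i(x) + ||x - sum_j a_ij x_j||^2 / (2 c_k)

b = [1.0, 2.0, 6.0]                      # private data of the three agents
A = [[0.5, 0.25, 0.25],                  # doubly stochastic mixing weights
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
x = [0.0, 0.0, 0.0]                      # initial local decisions

for k in range(5000):
    c = 1.0 / (k + 1)                    # diminishing proximal coefficient
    z = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    # closed-form minimizer of (x - b_i)^2 + (x - z_i)^2 / (2c):
    x = [(2 * c * b[i] + z[i]) / (2 * c + 1) for i in range(3)]

print(x)  # all three agents approach the centralized optimum mean(b) = 3
```

As the proximal coefficient vanishes, the consensus term dominates and the common limit minimizes the sum of the private costs.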
A Flexible Framework Of First-Order Primal-Dual Algorithms for Distributed Optimization
2019
In this paper, we study the problem of minimizing a sum of convex objective functions, where each of the summands is locally available to an agent in the network. Distributed optimization algorithms make it possible for the agents to cooperatively solve the problem through local computations and communications with neighbors. Lagrangian-based distributed optimization algorithms have received significant attention in recent years, due to their exact convergence property. However, many of these algorithms have slow convergence speed or are expensive to execute. In this work, we develop a flexible framework of first-order primal-dual algorithms (FlexPD), which allows for multiple primal steps per iteration and can be customized for various applications with different computation and communication restrictions. For strongly convex and Lipschitz gradient objective functions, we establish linear convergence to the optimal solution for our proposed algorithms. Simulation results confirm the superior performance of our algorithms compared to existing methods.
A General Framework of Exact Primal-Dual First-Order Algorithms for Distributed Optimization
2019 IEEE 58th Conference on Decision and Control (CDC), 2019
We study the problem of minimizing a sum of local convex objective functions over a network of processors/agents. This problem naturally calls for distributed optimization algorithms, in which the agents cooperatively solve the problem through local computations and communications with neighbors. While many of the existing distributed algorithms with constant stepsize can only converge to a neighborhood of the optimal solution, some recent methods based on the augmented Lagrangian and the method of multipliers can achieve exact convergence with a fixed stepsize. However, these methods either suffer from slow convergence speed or require minimization at each iteration. In this work, we develop a class of distributed first-order primal-dual methods which allows for multiple primal steps per iteration. This general framework makes it possible to control the trade-off between performance and execution complexity in primal-dual algorithms. We show that for strongly convex and Lipschitz gradient objective functions, this class of algorithms converges linearly to the optimal solution under appropriate constant stepsize choices. Simulation results confirm the superior performance of our algorithm compared to existing methods.
FlexPD: A Flexible Framework of First-Order Primal-Dual Algorithms for Distributed Optimization
IEEE Transactions on Signal Processing, 2021
In this paper, we study the problem of minimizing a sum of convex objective functions, which are locally available to agents in a network. Distributed optimization algorithms make it possible for the agents to cooperatively solve the problem through local computations and communications with neighbors. Lagrangian-based distributed optimization algorithms have received significant attention in recent years, due to their exact convergence property. However, many of these algorithms have slow convergence or are expensive to execute. In this paper, we develop a flexible framework of first-order primal-dual algorithms (FlexPD), which allows for multiple primal steps per iteration. This framework includes three algorithms, FlexPD-F, FlexPD-G, and FlexPD-C, which can be used for various applications with different computation and communication limitations. For strongly convex and Lipschitz gradient objective functions, we establish linear convergence of our proposed framework to the optimal solution. Simulation results confirm the superior performance of our framework compared to existing methods.
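The "multiple primal steps per iteration" idea can be sketched on a toy consensus problem: minimize sum_i (x_i - b_i)^2 subject to L x = 0, where L is the graph Laplacian, so feasibility means agreement. The augmented-Lagrangian form, step sizes, and inner-step count T below are illustrative assumptions, not the paper's prescriptions.

```python
# Inexact method of multipliers: T gradient steps on the augmented
# Lagrangian f(x) + y'Lx + (rho/2) x'Lx, then a dual ascent step on y.

b = [1.0, 2.0, 6.0]
L = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]      # Laplacian of a triangle
x = [0.0] * 3
y = [0.0] * 3                                    # multipliers for L x = 0
alpha, rho, T = 0.2, 1.0, 10

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

for k in range(200):
    for _ in range(T):                           # T primal gradient steps
        g = matvec(L, [y[i] + rho * x[i] for i in range(3)])
        x = [x[i] - alpha * (2 * (x[i] - b[i]) + g[i]) for i in range(3)]
    Lx = matvec(L, x)
    y = [y[i] + rho * Lx[i] for i in range(3)]   # dual ascent step

print(x)  # consensus on the minimizer of the sum, mean(b) = 3
```

Increasing T trades extra local computation for fewer dual updates, which is the kind of computation/communication knob the framework exposes.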
On distributed optimization under inequality constraints via Lagrangian primal-dual methods
American Control Conference (ACC), …, 2010
We consider a multi-agent convex optimization problem where agents are to minimize a sum of local objective functions subject to a global inequality constraint and a global constraint set. To deal with this, we devise a distributed primal-dual subgradient algorithm which is based on the characterization of the primal-dual optimal solutions as the saddle points of the Lagrangian function. This algorithm allows the agents to exchange information over networks with time-varying topologies and asymptotically agree on a pair of primal-dual optimal solutions and the optimal value. Each local objective function is convex and known only to one particular agent. On the other hand, the inequality constraint is given by a convex function and known to all agents. Each node has its own convex constraint set, and the global constraint set is defined as their intersection. This convex optimization problem arises in many practical scenarios, such as distributed parameter estimation or network utility maximization. An important feature of the problem is that the local objective functions and constraint functions depend upon a global decision vector. This requires the design of distributed algorithms where, on the one hand, agents can align their decisions through a local information exchange and, on the other hand, the common decisions coincide with an optimal solution and the optimal value.
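A rough sketch of a consensus-based primal-dual subgradient iteration of this kind, on an assumed toy instance (quadratic private objectives, the globally known constraint g(x) = x - 2 <= 0, the box X = [0, 10], and a diminishing step size; none of these choices come from the paper):

```python
# Each agent averages neighbors' primal and dual estimates, then takes a
# projected subgradient step on its local Lagrangian f_i(x) + mu * g(x).

b = [1.0, 2.0, 6.0]
W = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]]
x = [0.0] * 3          # local primal estimates
mu = [0.0] * 3         # local dual estimates for the shared constraint

for k in range(20000):
    a = 1.0 / (k + 1) ** 0.75                       # diminishing step size
    v = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]
    lam = [sum(W[i][j] * mu[j] for j in range(3)) for i in range(3)]
    # primal descent on f_i + mu * g (g'(x) = 1), projected onto [0, 10]:
    x = [min(10.0, max(0.0, v[i] - a * (2 * (v[i] - b[i]) + lam[i])))
         for i in range(3)]
    # dual ascent on g(x) = x - 2, projected onto mu >= 0:
    mu = [max(0.0, lam[i] + a * (v[i] - 2.0)) for i in range(3)]

print(x)   # agents agree on the constrained optimum x* = 2
```

The unconstrained consensus optimum would be mean(b) = 3; the shared multiplier estimates pull the agents onto the constraint boundary x = 2.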
A proximal center-based decomposition method for multi-agent convex optimization
2008 47th IEEE Conference on Decision and Control, 2008
In this paper we develop a new dual decomposition method for optimizing a sum of convex objective functions corresponding to multiple agents but with coupled constraints. In our method we define a smooth Lagrangian, using a smoothing technique developed by Nesterov, which preserves separability of the problem. With this approach we propose a new decomposition method (the proximal center method) for which efficiency estimates are derived and which improves the bounds on the number of iterations of the classical dual gradient scheme by an order of magnitude. In the method, every agent optimizes an objective function that is the sum of its own objective function and a smoothing term, while coordination between agents is performed via the Lagrange multipliers corresponding to the coupled constraints. Applications of the new method to distributed model predictive control and network optimization problems are also illustrated.
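The smoothing idea can be illustrated on a hypothetical two-agent instance: adding a small proximal term (mu/2) x^2 to each agent's subproblem makes the dual function differentiable with a Lipschitz gradient, so the coordinator can run a plain gradient step on the multiplier. All data and parameters below are assumptions for illustration, not the paper's setup.

```python
# Two agents, private costs (x - b_i)^2, coupling constraint x1 + x2 = 1.
# The smoothing term keeps the subproblems separable and strongly convex.

b = [1.0, 2.0]          # private targets of the two agents
mu = 0.01               # smoothing parameter (accuracy/speed trade-off)
lam = 0.0               # Lagrange multiplier of the coupling constraint
s = 0.5                 # dual gradient step size

for k in range(200):
    # each agent solves min (x - b_i)^2 + lam*x + (mu/2) x^2 in closed form:
    x = [(2 * bi - lam) / (2 + mu) for bi in b]
    lam += s * (x[0] + x[1] - 1.0)      # coordinator's dual gradient step

print(x)  # near the coupled optimum [0, 1], within O(mu)
```

Smaller mu gives a more accurate solution but a larger Lipschitz constant for the dual gradient, which is the trade-off behind the improved iteration bounds.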
Distributed Constrained Optimization by Consensus-Based Primal-Dual Perturbation Method
IEEE Transactions on Automatic Control, 2014
Various distributed optimization methods have been developed for solving problems which have simple local constraint sets and whose objective function is the sum of local cost functions of distributed agents in a network. Motivated by emerging applications in smart grid and distributed sparse regression, this paper studies distributed optimization methods for solving general problems which have a coupled global cost function and inequality constraints. We consider a network scenario where each agent has no global knowledge and can access only its local mapping and constraint functions. To solve this problem in a distributed manner, we propose a consensus-based distributed primal-dual perturbation (PDP) algorithm. In the algorithm, agents employ the average consensus technique to estimate the global cost and constraint functions via exchanging messages with neighbors, and meanwhile use a local primal-dual perturbed subgradient method to approach a global optimum. The proposed PDP method can handle not only smooth inequality constraints but also non-smooth constraints, such as some sparsity-promoting constraints arising in sparse optimization. We prove that the proposed PDP algorithm converges to an optimal primal-dual solution of the original problem, under standard problem and network assumptions. Numerical examples illustrating the performance of the proposed algorithm for a sparse regression problem and a demand response control problem in smart grid are also presented.
49th IEEE Conference on Decision and Control (CDC), 2010
In this paper we study the constrained consensus problem, i.e., the problem of reaching a common point from the estimates generated by multiple agents that are constrained to lie in different constraint sets. First, we provide a novel formulation of this problem as a convex optimization problem with coupling constraints. Then, we propose a primal-dual decomposition method for solving this type of coupled convex optimization problem in a distributed fashion, given restrictions on the communication topology. The proposed algorithm is based on consensus principles (as an efficient strategy for information fusion in networks) in combination with local subgradient updates for the primal-dual variables. We show, for the first time, that the nonnegative weights corresponding to the consensus process can be interpreted as dual variables and thus can be updated using arguments from duality theory. Therefore, in our algorithm the weights are updated following precise rules, while in most of the existing distributed algorithms based on consensus principles the weights have to be tuned. Preliminary simulation results show that our algorithm works, on average, ten times faster than some existing methods.
A Distributed Proximal Method for Composite Convex Optimization
We propose a distributed first-order augmented Lagrangian (FAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private function known only to one of the nodes, and only nodes connected by an edge can communicate to each other. This model of computation applies to a number of applications in distributed machine learning. We show that any limit point of FAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(1/ε)) FAL iterations, which require O(ψ_1^{1.5} / (d_min ε)) communication steps, where ψ_1 is the largest eigenvalue of the graph Laplacian and d_min is the degree of the smallest-degree node. For a connected graph with N nodes, the second smallest eigenvalue of the Laplacian, ψ_{N−1} > 0, measures the strength of connectivity and is called the spectral gap. Since ψ_{N−1} ≤ d_min, our result also implies that the number of communication steps can be bounded above by O(ψ_1^{1.5} ψ_{N−1}^{−1} ε^{−1}). We also propose an asynchronous version of FAL by incorporating randomized block coordinate descent methods, and demonstrate the efficiency of FAL on large-scale sparse-group LASSO problems.