Quoc Tran Dinh | University of North Carolina at Chapel Hill

Papers by Quoc Tran Dinh

Composite convex optimization with global and local inexact oracles

Computational Optimization and Applications, 2020

We introduce new global and local inexact oracle concepts for a wide class of convex functions in composite convex minimization. Such inexact oracles naturally arise in many situations, including primal-dual frameworks, barrier smoothing, and inexact evaluations of gradients and Hessians. We also provide examples showing that the class of convex functions equipped with the new inexact oracles is larger than the standard self-concordant and Lipschitz-gradient function classes. Further, we investigate several properties of convex and/or self-concordant functions under our inexact oracles which are useful for algorithmic development. Next, we apply our theory to develop inexact proximal Newton-type schemes for minimizing general composite convex optimization problems equipped with such inexact oracles. Our theoretical results consist of new optimization algorithms accompanied by global convergence guarantees for a wide class of composite convex optimization problems. When the first objective term is additionally self-concordant, we establish different local convergence results for our method. In particular, we prove that, depending on the choice of accuracy levels of the inexact second-order oracles, we obtain local convergence rates ranging from linear and superlinear to quadratic. In special cases where convergence bounds are known, our theory recovers the best known rates. We also apply our settings to derive a new primal-dual method for composite convex minimization problems involving linear operators. Finally, we present some representative numerical examples to illustrate the benefit of the new algorithms. Keywords: Self-concordant functions · composite convex minimization · local and global inexact oracles · inexact proximal Newton-type method · primal-dual second-order method.
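
As a rough illustration of the kind of step such schemes take (a minimal sketch, not the paper's method), one inexact proximal Newton iteration for min f(x) + λ‖x‖₁ minimizes the local quadratic model approximately, here by a few proximal-gradient passes; the early-stopped inner solve is what makes the second-order oracle inexact. The function names and the ℓ1 choice of regularizer are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (illustrative choice of g)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_prox_newton_step(x, grad_f, hess_f, lam, inner_iters=50):
    """One inexact proximal Newton step for min f(x) + lam * ||x||_1.

    The quadratic model  <g, d> + 0.5 d^T H d + lam * ||x + d||_1
    is minimized only approximately by a few proximal-gradient
    iterations, mimicking an inexact second-order oracle."""
    g, H = grad_f(x), hess_f(x)
    L = np.linalg.eigvalsh(H).max()      # Lipschitz constant of the model gradient
    z = x.copy()
    for _ in range(inner_iters):         # inner solver, stopped early => inexactness
        z = soft_threshold(z - (g + H @ (z - x)) / L, lam / L)
    return z                             # next iterate x_{k+1}
```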

REGULARIZATION ALGORITHMS FOR SOLVING MONOTONE EQUILIBRIUM PROBLEMS

We make use of the Banach contraction mapping principle to prove linear convergence of a regularization algorithm for strongly monotone Ky Fan inequalities that satisfy a Lipschitz-type condition recently introduced by Mastroeni. We then modify the proposed algorithm to obtain a linesearch-free algorithm which does not require the Lipschitz-type condition. We apply the proposed algorithms to implement inexact proximal methods for solving monotone (not necessarily strongly monotone) Ky Fan inequalities. Applications to variational inequality and complementarity problems are discussed. As a consequence, a linearly convergent, derivative-free algorithm without linesearch for strongly monotone nonlinear complementarity problems is obtained. An application to a Nash-Cournot equilibrium model is discussed, and some preliminary computational results are reported.
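
The contraction principle behind the linear-convergence argument can be seen in the classical special case of a strongly monotone variational inequality, where the projected fixed-point map is a Banach contraction for a suitable step size. The sketch below is that textbook special case, not the paper's Ky Fan algorithm; the operator, set, and step size are assumptions.

```python
import numpy as np

def solve_strongly_monotone_vi(F, proj_C, x0, lam, tol=1e-8, max_iter=1000):
    """Fixed-point iteration x <- P_C(x - lam * F(x)).

    For mu-strongly monotone, L-Lipschitz F and 0 < lam < 2*mu/L**2,
    this map is a Banach contraction, hence converges linearly
    (the same principle the paper exploits for Ky Fan inequalities)."""
    x = x0
    for _ in range(max_iter):
        x_new = proj_C(x - lam * F(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy example: F(x) = A x + b with A positive definite, C = nonnegative orthant.
A = np.array([[2.0, 0.5], [0.5, 1.5]])
b = np.array([-1.0, 0.5])
x_star = solve_strongly_monotone_vi(lambda x: A @ x + b,
                                    lambda x: np.maximum(x, 0.0),
                                    np.zeros(2), lam=0.4)
```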

Implementable quadratic regularization methods for solving pseudo-monotone equilibrium problems

Sequential Convex Programming and Decomposition Approaches for Nonlinear Optimization

A new splitting method for solving composite monotone inclusions involving parallel-sum operators

We propose a new primal-dual splitting method for solving composite inclusions involving Lipschitzian and parallel-sum-type monotone operators. Our approach extends the method proposed in \cite{Siopt4} to a more general class of monotone inclusions. The main idea is to represent the solution set of both the primal and dual problems using their associated Kuhn-Tucker set, and then to develop an iterative projection method that successively approximates a feasible point of the Kuhn-Tucker set. We propose a primal-dual splitting algorithm that features the resolvent of each operator separately. We then prove the weak convergence of this algorithm to a solution of both the primal and dual problems. Applications to systems of monotone inclusions as well as composite convex minimization problems are also investigated.
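
The building block such splitting schemes evaluate separately is the resolvent J_{γA} = (I + γA)^{-1} of each monotone operator A. Two standard examples (illustrative, not taken from the paper) are the subdifferential of the ℓ1 norm, whose resolvent is soft-thresholding, and a monotone linear operator, whose resolvent is a linear solve.

```python
import numpy as np

def resolvent_l1_subdiff(x, gamma, lam=1.0):
    """Resolvent of A = subdifferential of lam * ||.||_1:
    (I + gamma * A)^{-1}(x) is elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

def resolvent_linear(x, gamma, M):
    """Resolvent of the maximally monotone linear operator A = M
    (M positive semidefinite): solve (I + gamma * M) z = x."""
    n = len(x)
    return np.linalg.solve(np.eye(n) + gamma * M, x)
```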

Adaptive Smoothing Algorithms for Nonsmooth Composite Convex Minimization

We propose a novel adaptive smoothing algorithm based on Nesterov's smoothing technique in [26] for solving nonsmooth composite convex optimization problems. Our method combines Nesterov's accelerated proximal gradient scheme with a new homotopy strategy for the smoothness parameter. By an appropriate choice of smoothing functions, we develop a new algorithm that has the optimal $\mathcal{O}\left(\frac{1}{\varepsilon}\right)$ worst-case iteration-complexity while allowing one to automatically update the smoothness parameter at each iteration. We then further exploit the structure of problems to select smoothing functions and develop suitable algorithmic variants that reduce the complexity per iteration while preserving the optimal worst-case iteration-complexity. We also specialize our algorithm to solve constrained convex optimization problems and show its convergence guarantee on the primal sequence of iterates. Our preliminary numerical tests verify the efficiency of our algorithms. Keywords: Nesterov's smoothing...
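
The mechanism can be illustrated on ‖Ax − b‖₁: smooth the absolute value by a Huber-type function with parameter μ, run accelerated gradient steps on the surrogate, and shrink μ as iterations proceed. The simple schedule μ_k = μ₀/(k+1) below is a stand-in assumption, not the paper's homotopy update rule.

```python
import numpy as np

def huber_grad(r, mu):
    """Gradient of the mu-smoothed absolute value, applied elementwise."""
    return np.clip(r / mu, -1.0, 1.0)

def adaptive_smoothing(A, b, iters=500, mu0=1.0):
    """Minimize ||Ax - b||_1 by accelerated gradient on a Huber-smoothed
    surrogate, shrinking mu_k = mu0/(k+1): a simplified stand-in for the
    paper's adaptive update of the smoothness parameter."""
    x = y = np.zeros(A.shape[1])
    t = 1.0
    normA2 = np.linalg.norm(A, 2) ** 2
    for k in range(iters):
        mu = mu0 / (k + 1)
        L = normA2 / mu                  # Lipschitz constant of the smoothed gradient
        x_new = y - (A.T @ huber_grad(A @ y - b, mu)) / L
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x
```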

A primal-dual framework for mixtures of regularizers

2015 23rd European Signal Processing Conference (EUSIPCO), 2015

Universal Primal-Dual Proximal-Gradient Methods

We propose a new primal-dual algorithmic framework for a prototypical constrained convex optimization template. The algorithmic instances of our framework are universal since they can automatically adapt to the unknown Hölder continuity degree and constant within the dual formulation. They are also guaranteed to have optimal convergence rates in the objective residual and the feasibility gap for each Hölder smoothness degree. In contrast to existing primal-dual algorithms, our framework avoids the proximity operator of the objective function. We instead leverage computationally cheaper, Fenchel-type operators, which are the main workhorses of generalized conditional gradient (GCG)-type methods. In contrast to GCG-type methods, our framework does not require the objective function to be differentiable, and can also process additional general linear inclusion constraints, while guaranteeing the convergence rate on the primal problem.
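
The "universal" adaptation to unknown Hölder smoothness is typically achieved by a backtracking test in the spirit of Nesterov's universal gradient methods. The sketch below shows that test on a plain primal gradient step (an assumption for illustration; the paper applies the idea within the dual formulation): double the local smoothness estimate M until an ε-relaxed quadratic upper bound holds.

```python
import numpy as np

def universal_gradient_step(f, grad_f, x, M, eps):
    """One backtracking step adapting to unknown Hoelder smoothness:
    increase the local estimate M until the eps-relaxed descent test
    holds. This is the adaptation mechanism behind 'universal' methods,
    shown here on a primal gradient step rather than the paper's
    primal-dual scheme."""
    g = grad_f(x)
    while True:
        x_new = x - g / M
        gap = f(x) + g @ (x_new - x) + 0.5 * M * np.sum((x_new - x) ** 2)
        if f(x_new) <= gap + 0.5 * eps:
            return x_new, M
        M *= 2.0
```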

Splitting the Smoothed Primal-Dual Gap: Optimal Alternating Direction Methods

We develop rigorous alternating direction optimization methods for a prototype constrained convex optimization template, which has broad applications in computational sciences. We build upon our earlier work on the model-based gap reduction (MGR) technique, which revolves around a smoothed estimate of the primal-dual gap. MGR allows us to simultaneously update a sequence of primal and dual variables as well as primal- and dual-smoothness parameters so that the smoothed gap function converges to the true gap, which in turn converges to zero, both at optimal rates. In contrast, this paper introduces a new split-gap reduction (SGR) technique as a natural counterpart of MGR in order to take advantage of additional splitting structures present in the prototype template. We illustrate the SGR technique using the forward-backward and Douglas-Rachford splittings on the smoothed gap function and derive new alternating direction methods. The new methods obtain optimal convergence rates without heur...
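
For reference, one of the two splittings named above, Douglas-Rachford, is sketched here in its textbook form for min f(x) + g(x) with both proximal operators available (the paper applies the splitting to the smoothed gap function, not directly to f + g as done here).

```python
def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, iters=200):
    """Douglas-Rachford splitting for min f(x) + g(x):
        x = prox_f(z);  y = prox_g(2x - z);  z <- z + y - x.
    The x-sequence converges to a minimizer under standard assumptions.
    prox_f and prox_g take (point, gamma) and return the proximal point."""
    z = z0
    for _ in range(iters):
        x = prox_f(z, gamma)
        y = prox_g(2 * x - z, gamma)
        z = z + (y - x)
    return prox_f(z, gamma)
```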

An optimal first-order primal-dual gap reduction framework for constrained convex optimization

We introduce an analysis framework for constructing optimal first-order primal-dual methods for the prototypical constrained convex optimization template. While this class of methods offers scalability advantages in obtaining numerical solutions, it has the disadvantage of producing sequences that are only approximately feasible for the problem constraints. As a result, it is theoretically challenging to compare the efficiency of different methods. To this end, we rigorously prove that, in the worst case, the convergence of the primal objective residual in first-order primal-dual algorithms must compete with their constraint feasibility convergence, and we mathematically summarize this fundamental trade-off. We then provide a heuristic-free analysis recipe for constructing optimal first-order primal-dual algorithms that can obtain a desirable trade-off between the primal objective residual and the feasibility gap and whose iteration convergence rates cannot be improved. Our technique obtains a...

Combining Convex-Concave

A new decomposition algorithm for globally solving mathematical programs with affine equilibrium constraints

Acta Mathematica Vietnamica

This paper proposes a new decomposition method for globally solving mathematical programming problems with affine equilibrium constraints (AMPEC). First, we view AMPEC as a bilevel programming problem where the lower-level problem is a parametric affine variational inequality. Then, we use a regularization technique to formulate the resulting problem as a mathematical program with an additional constraint defined by the difference of two convex functions (a DC function). A main feature of this DC decomposition is that the second component depends only upon the parameter in the lower-level problem. This property allows us to develop branch-and-bound algorithms for globally solving AMPEC where the adaptive rectangular bisection takes place only in the space of the parameters. As an example, we use the proposed algorithm to solve a bilevel Nash-Cournot equilibrium market model. Computational results show the efficiency of the proposed algorithm.
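
Why the DC structure helps globally: since the second component h is convex, it lies below its chord on any interval, so g(x) minus the chord of h is a convex underestimator of g − h, yielding cheap lower bounds for branch-and-bound. The one-dimensional, grid-based sketch below is a caricature under these assumptions, not the paper's rectangular bisection in parameter space; g and h are assumed to be vectorized convex callables.

```python
import numpy as np

def dc_branch_and_bound(g, h, a, b, tol=1e-4):
    """Global minimization of the DC function g - h on [a, b].

    h convex => h lies below its chord, so g(x) - chord_h(x) is a convex
    underestimator of g - h; minimizing it (here on a grid) gives a lower
    bound, and interval bisection shrinks the gap to zero."""
    def bounds(lo, hi):
        xs = np.linspace(lo, hi, 200)
        chord = h(lo) + (h(hi) - h(lo)) * (xs - lo) / (hi - lo)
        lb = np.min(g(xs) - chord)            # bound from the convex relaxation
        i = np.argmin(g(xs) - h(xs))          # incumbent from the true function
        return lb, g(xs[i]) - h(xs[i]), xs[i]
    best_ub, best_x = np.inf, None
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        lb, ub, x = bounds(lo, hi)
        if ub < best_ub:
            best_ub, best_x = ub, x
        if lb < best_ub - tol:                # cannot prune: bisect the interval
            mid = 0.5 * (lo + hi)
            stack += [(lo, mid), (mid, hi)]
    return best_x, best_ub
```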

A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions

We propose an algorithmic framework for convex minimization problems of a composite function with two terms: a self-concordant function and a possibly nonsmooth regularization term. Our method is a new proximal Newton algorithm that features a local quadratic convergence rate. As a specific instance of our framework, we consider the sparse inverse covariance matrix estimation problem in graph learning. Via a careful dual formulation and a novel analytic step-size selection procedure, our approach for graph learning avoids Cholesky decompositions and matrix inversions in its iterations, making it attractive for parallel and distributed implementations.
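
In the graph-learning instance, the smooth self-concordant term is f(Θ) = −log det Θ + ⟨S, Θ⟩ with gradient S − Θ⁻¹. The plain snippet below evaluates both directly; note that the explicit inversion used here is exactly the operation the paper's dual formulation is designed to avoid.

```python
import numpy as np

def graph_learning_objective(Theta, S):
    """Smooth part of sparse inverse covariance estimation:
    f(Theta) = -log det(Theta) + <S, Theta>, gradient S - inv(Theta).
    A plain implementation; the paper's scheme avoids the explicit
    inversion / Cholesky factorization performed below."""
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    value = -logdet + np.sum(S * Theta)
    grad = S - np.linalg.inv(Theta)
    return value, grad
```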

A Primal-Dual Algorithmic Framework for Constrained Convex Minimization

We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all.
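
One special case named above, ADMM, is sketched here in its simplest consensus form for min f(x) + g(z) subject to x = z, with scaled dual variable u (a textbook instance for orientation, not the framework itself; the prox callables are assumed to take a point and a step size).

```python
import numpy as np

def consensus_admm(prox_f, prox_g, n, rho=1.0, iters=300):
    """Scaled-dual ADMM on min f(x) + g(z) s.t. x = z:
        x = prox_{f/rho}(z - u);  z = prox_{g/rho}(x + u);  u += x - z."""
    z, u = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = prox_f(z - u, 1.0 / rho)
        z = prox_g(x + u, 1.0 / rho)
        u = u + x - z
    return x
```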

Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization

In this paper we analyze several inexact fast augmented Lagrangian methods for solving linearly constrained convex optimization problems. Our methods rely mainly on the combination of the excessive-gap-like smoothing technique developed in [15] and the newly introduced inexact oracle framework from [4]. We analyze several algorithmic instances with constant and adaptive smoothing parameters and derive total computational complexity results in terms of projections onto a simple primal set. For the basic inexact fast augmented Lagrangian algorithm we obtain an overall computational complexity of order $\mathcal{O}\left(\frac{1}{\epsilon^{5/4}}\right)$, while for the adaptive variant we get $\mathcal{O}\left(\frac{1}{\epsilon}\right)$ projections onto a primal set in order to obtain an $\epsilon$-optimal solution to our original problem.
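
A bare-bones skeleton of the inexact augmented Lagrangian idea for min f(x) s.t. Ax = b: minimize the augmented Lagrangian only approximately (here, a fixed budget of gradient steps), then update the multipliers. The acceleration and adaptive smoothing that give the rates above are omitted, and the default step size is a crude assumption.

```python
import numpy as np

def inexact_augmented_lagrangian(grad_f, A, b, x0, rho=10.0,
                                 outer=50, inner=100, step=None):
    """Inexact augmented Lagrangian method for min f(x) s.t. Ax = b.
    Each subproblem min_x f(x) + <y, Ax - b> + (rho/2)||Ax - b||^2 is
    solved only approximately, mirroring the inexact-oracle viewpoint."""
    x, y = x0.copy(), np.zeros(A.shape[0])
    if step is None:
        step = 1.0 / (1.0 + rho * np.linalg.norm(A, 2) ** 2)  # crude stepsize
    for _ in range(outer):
        for _ in range(inner):                 # inexact inner solve
            r = A @ x - b
            x = x - step * (grad_f(x) + A.T @ (y + rho * r))
        y = y + rho * (A @ x - b)              # multiplier update
    return x, y
```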

Fast inexact decomposition algorithms for large-scale separable convex optimization

Optimization, 2015

In this paper we propose a new inexact dual decomposition algorithm for solving separable convex optimization problems. This algorithm combines three techniques: dual Lagrangian decomposition, smoothing, and the excessive gap. The algorithm requires only one primal step and two dual steps at each iteration and allows one to solve the subproblem of each component inexactly and in parallel. Moreover, the algorithmic parameters are updated automatically, without any tuning strategy of the kind used in augmented Lagrangian approaches. We analyze the convergence of the algorithm and estimate its $O(\frac{1}{\varepsilon})$ worst-case complexity. Numerical examples are implemented to verify the theoretical results.
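
The separable structure being exploited: for min Σᵢ fᵢ(xᵢ) s.t. Σᵢ Aᵢxᵢ = b, the Lagrangian splits at fixed multipliers y into independent subproblems xᵢ = argmin fᵢ(x) + ⟨y, Aᵢx⟩, solvable in parallel. The sketch below is the plain dual subgradient scheme under that assumption; the paper replaces it with a smoothed, excessive-gap variant with the O(1/ε) guarantee.

```python
def dual_decomposition(argmin_local, A_list, b, y0, step=0.1, iters=200):
    """Plain dual (sub)gradient method for min sum_i f_i(x_i)
    s.t. sum_i A_i x_i = b. argmin_local[i] maps multipliers y to the
    minimizer of f_i(x) + <y, A_i x>; these solves are independent,
    hence parallelizable."""
    y = y0
    for _ in range(iters):
        xs = [argmin_i(y) for argmin_i in argmin_local]   # parallel subproblems
        residual = sum(A @ x for A, x in zip(A_list, xs)) - b
        y = y + step * residual                            # dual ascent
    return xs, y
```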

Structured Sparsity: Discrete and Convex Approaches

Applied and Numerical Harmonic Analysis, 2015

Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, increasing the interpretability of the results and leading to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group-sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications.
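
The simplest convex proxy for the group-sparse model is the group (ℓ1/ℓ2) norm, whose proximal operator shrinks each group as a block so that entire groups vanish together. A minimal sketch for non-overlapping groups (an assumption; overlapping and hierarchical structures need more machinery):

```python
import numpy as np

def prox_group_lasso(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2 for non-overlapping
    groups: each block is shrunk toward zero as a unit, the basic
    convex relaxation of group-sparse models."""
    out = x.copy()
    for g in groups:                       # g is an index array for one group
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= t else (1 - t / nrm) * x[g]
    return out
```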

An inner convex approximation algorithm for BMI optimization and applications in control

2012 IEEE 51st IEEE Conference on Decision and Control (CDC), 2012

In this work, we propose a new local optimization method to solve a class of nonconvex semidefinite programming (SDP) problems. The basic idea is to approximate the feasible set of the nonconvex SDP problem by inner positive semidefinite convex approximations via a parameterization technique. This leads to an iterative procedure to search for a local optimum of the nonconvex problem. The convergence of the algorithm is analyzed under mild assumptions. Applications in static output feedback control are benchmarked, and numerical tests are implemented based on data from the COMPLeib library.
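
A scalar caricature of inner convex approximation: for a concave constraint function g, the tangent at the current iterate overestimates g, so the linearized feasible set sits inside the true one; solving the convexified subproblem and re-linearizing gives the iterative procedure. The toy problem below is an assumption for illustration (requires x0 > 0), not the paper's positive semidefinite parameterization of BMI constraints.

```python
def inner_convex_approximation(x0, iters=20):
    """Minimize x**2 subject to the nonconvex constraint x**2 >= 1,
    written as g(x) = 1 - x**2 <= 0 with g concave. The tangent of g at
    x_k overestimates g, so {tangent <= 0} is an inner approximation of
    the true feasible set. Each convex subproblem
        min x**2  s.t.  1 - x_k**2 - 2*x_k*(x - x_k) <= 0
    has the closed-form solution below; iterates converge to x* = 1."""
    x = x0                                # requires x0 > 0
    for _ in range(iters):
        x = (1.0 + x * x) / (2.0 * x)     # solve the convexified subproblem
    return x

print(inner_convex_approximation(2.0))    # -> 1.0000...
```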

An application of sequential convex programming to time optimal trajectory planning for a car motion

Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 2009 28th Chinese Control Conference, 2009

This paper proposes an iterative method for solving nonconvex optimization problems, which we call sequential convex programming (SCP), and an application of our method to a time-optimal trajectory planning problem for car motion. First, we formulate the motion of a car along a reference trajectory as an optimal control problem, using a special convexity-based formulation employing path coordinates and a change-of-variables technique. Second, we use a direct transcription to transform this optimal control problem into a large-scale optimization problem which is "nearly" convex. Then, the SCP method is applied to solve the resulting problem. Finally, numerical results and comparisons with sequential quadratic programming (SQP) and interior point (IP) methods are reported.
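
The SCP idea in miniature, under toy assumptions rather than the paper's optimal control setting: minimize a convex objective subject to a nonconvex constraint by linearizing the constraint at the current iterate, solving the resulting convex subproblem in closed form, and damping the update as a crude trust-region surrogate.

```python
import numpy as np

def scp_project_to_sphere(p, x0, alpha=0.5, iters=50):
    """Toy SCP loop for the nonconvex problem
        minimize ||x - p||^2  subject to  ||x||^2 = 1.
    Linearizing the constraint at x_k gives 2 x_k^T x = 1 + ||x_k||^2;
    the equality-constrained least-squares subproblem is solved in
    closed form via its KKT conditions, followed by a damped step."""
    x = x0.astype(float)
    for _ in range(iters):
        a = 2.0 * x
        c = 1.0 + x @ x
        x_sub = p + a * (c - a @ p) / (a @ a)   # KKT solution of the subproblem
        x = x + alpha * (x_sub - x)             # damped update
    return x

x_opt = scp_project_to_sphere(np.array([2.0, 1.0]), np.array([1.0, 0.0]))
# x_opt is approximately p / ||p|| = (0.894, 0.447)
```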

An Inexact Proximal Path-Following Algorithm for Constrained Convex Minimization

SIAM Journal on Optimization, 2014

Many scientific and engineering applications feature large-scale non-smooth convex minimization problems over convex sets. In this paper, we address an important instance of this broad class where we assume that the non-smooth objective is equipped with a tractable proximity operator and that the convex constraints afford a self-concordant barrier. We provide a new joint treatment of proximal and self-concordant barrier concepts and illustrate that such problems can be efficiently solved without lifting problem dimensions. We propose an inexact path-following algorithmic framework and theoretically characterize its worst-case convergence and computational complexity, and we also analyze its behavior when the proximal subproblems are solved inexactly. To illustrate our framework, we apply its instances to both synthetic and real-world applications and illustrate their accuracy and scalability in large-scale settings. As an added bonus, we describe how our framework can obtain points on the Pareto frontier of regularized problems with self-concordant objectives in a tuning-free fashion.
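
A minimal path-following sketch under strong simplifying assumptions: min cᵀx s.t. x ≥ 0 with c > 0, handled by damped Newton on the barrier subproblem t·cᵀx − Σ log xᵢ while increasing t geometrically, which traces the central path xᵢ(t) = 1/(t·cᵢ). The paper's framework additionally carries a nonsmooth term through its proximity operator and allows inexact subproblem solves; both are omitted here.

```python
import numpy as np

def barrier_path_following(c, x0, t0=1.0, mult=4.0, outer=12, newton_steps=10):
    """Minimal log-barrier path-following for min c^T x s.t. x >= 0
    (c > 0 so the problem is bounded below). Damped Newton on
    t*c^T x - sum(log x), then increase t along the central path."""
    x, t = x0.copy(), t0
    for _ in range(outer):
        for _ in range(newton_steps):
            g = t * c - 1.0 / x              # gradient of the barrier subproblem
            d = -g * x * x                   # Newton step (Hessian = diag(1/x^2))
            neg = d < 0.0
            alpha = 1.0
            if np.any(neg):                  # damping: stay strictly feasible
                alpha = min(1.0, 0.99 * np.min(x[neg] / -d[neg]))
            x = x + alpha * d
        t *= mult                            # move along the central path
    return x
```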
