A proximal point method for nonsmooth convex optimization problems in Banach spaces

The prox-Tikhonov regularization method for the proximal point algorithm in Banach spaces

Journal of Global Optimization, 2011

It is known, by Rockafellar (SIAM J Control Optim 14:877-898, 1976), that the proximal point algorithm (PPA) converges weakly to a zero of a maximal monotone operator in a Hilbert space, but in general it fails to converge strongly. Lehdili and Moudafi (Optimization 37:239-252, 1996) introduced the prox-Tikhonov regularization method for the PPA to generate a strongly convergent sequence and established a convergence property for it by using the technique of variational distance in the same space setting. In this paper, the prox-Tikhonov regularization method for the proximal point algorithm for finding a zero of an accretive operator in the framework of a Banach space is proposed. Conditions which guarantee the strong convergence of this algorithm to a particular element of the solution set are provided. An inexact variant of this method with an error sequence is also discussed.

Keywords: Accretive operator • Maximal monotone operator • Metric projection mapping • Proximal point algorithm • Regularization method • Resolvent identity • Strong convergence • Uniformly Gâteaux differentiable norm

Mathematics Subject Classification (2000): 47J20 • 49J40 • 65J15

1 Introduction. Let X be a real Banach space with dual space X*. We denote by J the normalized duality mapping from X into 2^{X*} defined by J(x) := {f* ∈ X* : ⟨x, f*⟩ = ||x||² = ||f*||²}, x ∈ X.
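For orientation, here is a minimal sketch of the two iterations in the Hilbert-space setting referenced above (the resolvent notation is standard; the parameter sequences λ_k and μ_k are illustrative):

```latex
% Classical PPA (Rockafellar), A maximal monotone with resolvent J_\lambda^A := (I + \lambda A)^{-1}:
x_{k+1} = J_{\lambda_k}^{A}(x_k).
% Prox-Tikhonov variant (Lehdili--Moudafi): run the PPA on the Tikhonov-regularized
% operator A_k := \mu_k I + A, with \mu_k \downarrow 0:
x_{k+1} = \bigl(I + \lambda_k(\mu_k I + A)\bigr)^{-1}(x_k).
```

In the Banach-space setting of the paper, A is accretive and the resolvent is defined analogously.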

On a generalized proximal point method for solving equilibrium problems in Banach spaces

Nonlinear Analysis: Theory, Methods & Applications, 2012

We introduce a regularized equilibrium problem in Banach spaces, involving generalized Bregman functions. For this regularized problem, we establish the existence and uniqueness of solutions. These regularizations yield a proximal-like method for solving equilibrium problems in Banach spaces. We prove that the proximal sequence is an asymptotically solving sequence when the dual space is uniformly convex. Moreover, we prove that all weak accumulation points are solutions if the equilibrium function is lower semicontinuous in its first variable. We prove, under additional assumptions, that the proximal sequence converges weakly to a solution.
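As a point of reference (schematic only, not the authors' exact formulation with generalized Bregman functions), a Bregman proximal step for the equilibrium problem of finding x ∈ C with g(x, y) ≥ 0 for all y ∈ C typically reads:

```latex
% Bregman distance of a differentiable convex function f:
D_f(x, y) := f(x) - f(y) - \langle \nabla f(y),\, x - y \rangle .
% Proximal subproblem at iterate x_k with stepsize \lambda_k > 0: find x_{k+1} \in C such that
g(x_{k+1}, y) + \tfrac{1}{\lambda_k}\,\langle \nabla f(x_{k+1}) - \nabla f(x_k),\, y - x_{k+1} \rangle \ \ge\ 0
\quad \text{for all } y \in C .
```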

Convergence rate of inexact proximal point methods with relative error criteria for convex optimization

In this paper, we consider a class of inexact proximal point methods for convex optimization which allows a relative error tolerance in the approximate solution of each proximal subproblem. By exploiting the special structure of convex optimization problems, we are able to derive stronger complexity bounds for the aforementioned class. As a consequence, we show that the best size of the projected gradients (resp., gradients) generated by the projected gradient (resp., steepest descent) method up to iteration k is O(1/k) in the context of smooth convex optimization problems.
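To make the final claim concrete, the following small sketch (the quadratic objective, box constraint, and step size are illustrative and not from the paper) runs the projected gradient method and tracks the best projected-gradient norm up to iteration k, the quantity the abstract bounds by O(1/k):

```python
import numpy as np

# Illustrative data: f(x) = 0.5 * ||A x - b||^2 over the box C = [0, 1]^n.
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((40, n))
b = rng.standard_normal(40)

def grad(x):
    return A.T @ (A @ x - b)

def proj(x):                              # projection onto [0, 1]^n
    return np.clip(x, 0.0, 1.0)

t = 1.0 / np.linalg.norm(A, 2) ** 2       # step size 1/L, L = ||A||_2^2
x = np.zeros(n)
best = np.inf
for k in range(1, 201):
    g = (x - proj(x - t * grad(x))) / t   # projected gradient at x
    best = min(best, np.linalg.norm(g))   # best projected-gradient norm up to k
    x = proj(x - t * grad(x))
    # The abstract's bound says best = O(1/k); e.g. print(k, best) to inspect.
```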

On Dual Convergence of the Generalized Proximal Point Method with Bregman Distances

Mathematics of Operations Research, 2000

The use of generalized distances (e.g. Bregman distances), instead of the Euclidean one, in the proximal point method for convex optimization, allows for elimination of the inequality constraints from the subproblems. In this paper we consider the proximal point method with Bregman distances applied to linearly constrained convex optimization problems, and study the behavior of the dual sequence obtained from the optimal multipliers of the linear constraints of each subproblem. Under rather general assumptions, which cover most Bregman distances of interest, we obtain an ergodic convergence result, namely that a sequence of weighted averages of the dual sequence converges to the centroid of the dual optimal set. As an intermediate result, we prove under the same assumptions that the dual central path generated by a large class of barriers, including the generalized Bregman distances, converges to the same point.
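Schematically (the notation here is illustrative and not copied from the paper), each subproblem attaches multipliers to the linear constraints, and the ergodic result concerns a weighted average of those multipliers:

```latex
% Subproblem with Bregman distance D_h and stepsize \lambda_k > 0:
x^{k+1} \in \arg\min_{x} \Bigl\{ f(x) + \tfrac{1}{\lambda_k} D_h(x, x^k) \ :\ Ax = b \Bigr\},
\qquad y^{k+1} := \text{optimal multipliers of } Ax = b .
% Ergodic (weighted-average) dual sequence considered in results of this type:
\bar{y}^{\,k} := \Bigl(\sum_{j=0}^{k} \lambda_j\Bigr)^{-1} \sum_{j=0}^{k} \lambda_j\, y^{j+1} .
```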

On the projected subgradient method for nonsmooth convex optimization in a Hilbert space

Mathematical Programming, 1998

We consider the method for constrained convex optimization in a Hilbert space consisting of a step in the direction opposite to an ε_k-subgradient of the objective at the current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes α_k are exogenously given, satisfying ∑_{k=0}^∞ α_k = ∞, ∑_{k=0}^∞ α_k² < ∞, and ε_k is chosen so that ε_k ≤ μα_k for some μ > 0. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, and much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We also present convergence rate results.
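A minimal runnable sketch of this scheme (the test problem, the subgradient oracle, and the choices α_k = 1/(k+1), ε_k = μα_k are illustrative; one common variant also normalizes the subgradient direction, as done here):

```python
import numpy as np

# Minimize the nonsmooth convex function f(x) = ||x - c||_1
# over the feasible set C = {x : ||x|| <= 1}.
rng = np.random.default_rng(1)
n = 10
c = rng.standard_normal(n)

def subgrad(x):                          # a subgradient of ||. - c||_1 at x
    return np.sign(x - c)

def proj(x):                             # orthogonal projection onto the unit ball
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

mu = 0.1
x = np.zeros(n)
for k in range(2000):
    alpha = 1.0 / (k + 1)                # satisfies sum alpha_k = inf, sum alpha_k^2 < inf
    # eps_k = mu * alpha_k is the allowed tolerance; using an exact subgradient,
    # which is an eps_k-subgradient for every eps_k >= 0, keeps the sketch simple.
    g = subgrad(x)
    x = proj(x - alpha * g / max(np.linalg.norm(g), 1e-12))
```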

Variational Analysis Perspective on Linear Convergence of Some First Order Methods for Nonsmooth Convex Optimization Problems

Set-Valued and Variational Analysis

We study the linear convergence of some first-order methods, such as the proximal gradient method (PGM), the proximal alternating linearized minimization (PALM) algorithm, and the randomized block coordinate proximal gradient method (R-BCPGM), for minimizing the sum of a smooth convex function and a nonsmooth convex function from a variational analysis perspective. We introduce a new analytic framework based on notions from variational analysis such as error bounds, calmness, metric subregularity, and bounded metric subregularity. This variational analysis perspective enables us to provide concrete sufficient conditions for checking linear convergence and applicable approaches for calculating linear convergence rates of these first-order methods for a class of structured convex problems in which the smooth part of the objective function is a composite of a smooth strongly convex function and a linear function. In particular, we show that these conditions are satisfied automatically, and the modulus for the calmness/metric subregularity is practically computable, in important applications such as the LASSO, the fused LASSO, and the group LASSO.
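As a concrete instance of the structured problems mentioned (the smooth part is a strongly convex function composed with a linear map), here is a minimal LASSO example solved by the proximal gradient method; the data, regularization weight, and iteration count are illustrative only:

```python
import numpy as np

# LASSO: min_x 0.5 * ||A x - b||^2 + tau * ||x||_1.
# The smooth part is the strongly convex function 0.5 * ||. - b||^2
# composed with the linear map x -> A x, as in the abstract.
rng = np.random.default_rng(2)
m, n = 50, 100
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
tau = 0.1

def soft_threshold(v, t):                # prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(n)
for k in range(500):
    g = A.T @ (A @ x - b)                # gradient of the smooth part
    x = soft_threshold(x - g / L, tau / L)   # proximal gradient step
# Under the calmness/metric subregularity conditions analyzed in the paper,
# iterations of this type converge linearly to a solution.
```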

Convergence rates of inexact proximal-gradient methods for convex optimization

arXiv preprint arXiv:1109.2415, 2011

We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.
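To illustrate the error model (only as a sketch; the data, the problem, and the error magnitude 1/k² are illustrative), the proximal-gradient iteration shown above can be perturbed by a gradient error whose norm decreases fast enough, which is the regime in which the abstract recovers the error-free rate:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 50, 100
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
tau, L = 0.1, np.linalg.norm(A, 2) ** 2

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for k in range(1, 501):
    e = rng.standard_normal(n)
    e *= 1.0 / (k ** 2 * np.linalg.norm(e))   # ||e_k|| = 1/k^2, a summable error
    g = A.T @ (A @ x - b) + e                 # inexact gradient of the smooth term
    x = soft_threshold(x - g / L, tau / L)    # proximal gradient step
```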

A General Approach to Convergence Properties of Some Methods for Nonsmooth Convex Optimization

Applied Mathematics and Optimization, 1998

Based on the notion of the ε-subgradient, we present a unified technique to establish convergence properties of several methods for nonsmooth convex minimization problems. Starting from the technical results, we obtain the global convergence of: (i) the variable metric proximal methods presented by Bonnans, Gilbert, Lemaréchal, and Sagastizábal, (ii) some algorithms proposed by Correa and Lemaréchal, and (iii) the proximal point algorithm given by Rockafellar. In particular, we prove that the Rockafellar-Todd phenomenon does not occur for each of the above mentioned methods. Moreover, we explore the convergence rate of {x_k} and {f(x_k)} when {x_k} is unbounded and {f(x_k)} is bounded for the nonsmooth minimization methods (i), (ii), and (iii).
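For reference, the ε-subdifferential on which this unified technique is based is the standard one:

```latex
\partial_\varepsilon f(x) := \{\, g \ :\ f(y) \ \ge\ f(x) + \langle g,\, y - x \rangle - \varepsilon
\ \ \text{for all } y \,\}, \qquad \varepsilon \ge 0 ,
```

which reduces to the usual subdifferential when ε = 0.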

Perturbed Optimization in Banach Spaces I: A General Theory Based on a Weak Directional Constraint Qualification

SIAM Journal on Control and Optimization, 1996

Using a directional form of constraint qualification weaker than Robinson's, we derive an implicit function theorem for inclusions and use it for first- and second-order sensitivity analyses of the value function in perturbed constrained optimization. We obtain Hölder and Lipschitz properties and, under a no-gap condition, first-order expansions for exact and approximate solutions. As an application, differentiability properties of metric projections in Hilbert spaces are obtained, using a condition generalizing polyhedricity. We also present in the appendix a short proof of a generalization of the convex duality theorem in Banach spaces.

Dual convergence of the proximal point method with Bregman distances for linear programming

Optimization Methods and Software, 2007

In this article, we consider the proximal point method with Bregman distance applied to linear programming problems, and study the dual sequence obtained from the optimal multipliers of the linear constraints of each subproblem. We establish the convergence of this dual sequence, as well as convergence rate results for the primal sequence, for a suitable family of Bregman distances. These results are obtained by studying first the limiting behavior of a certain perturbed dual path and then the behavior of the dual and primal paths.