A version of the bundle method with linear programming

A bundle-filter method for nonsmooth convex constrained optimization

Mathematical Programming, 2009

For solving nonsmooth convex constrained optimization problems, we propose an algorithm which combines the ideas of proximal bundle methods with the filter strategy for evaluating candidate points. The resulting algorithm inherits attractive features from both approaches. On the one hand, it allows effective control of the size of the quadratic programming subproblems via the compression and aggregation techniques of proximal bundle methods. On the other hand, the filter criterion for accepting a candidate point as the new iterate is sometimes easier to satisfy than the usual descent condition in bundle methods. Some encouraging preliminary computational results are also reported.

A new method based on the bundle idea and gradient sampling technique for minimizing nonsmooth convex functions

arXiv: Optimization and Control, 2019

In this paper, we combine the positive aspects of the Gradient Sampling (GS) and bundle methods, two of the most efficient methods in nonsmooth optimization, to develop a robust method for solving unconstrained nonsmooth convex optimization problems. The main aim of the proposed method is to take advantage of both GS and bundle methods while avoiding their drawbacks. At each iteration of this method, to find an efficient descent direction, the GS technique is utilized to construct a local polyhedral model of the objective function. If necessary, this initial polyhedral model is improved via an iterative process using techniques inspired by the bundle and GS methods. The convergence of the method is studied, which reveals the following positive features: (i) the convergence of our method is independent of the number of gradient evaluations required to establish and improve the initial polyhedral models; thus, the presented method needs much fewer gradient evaluati...
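As a toy illustration of the gradient sampling idea in the abstract above, the following one-dimensional sketch (our own simplification, not the paper's combined GS/bundle algorithm) samples gradients of f(x) = |x| in a small ball around the current point and uses the minimum-norm element of their convex hull to form a search direction; the paper's bundle-style model improvement is omitted.

```python
import random

random.seed(0)

def grad(x):
    # gradient of f(x) = |x| wherever f is differentiable
    return 1.0 if x > 0 else -1.0

def gs_direction(x, eps=0.1, m=10):
    # sample gradients in an eps-ball around x; in 1-D their convex hull is
    # an interval, whose minimum-norm element gives the negated direction
    gs = [grad(x + random.uniform(-eps, eps)) for _ in range(m)]
    lo, hi = min(gs), max(gs)
    g = 0.0 if lo <= 0.0 <= hi else (lo if lo > 0.0 else hi)
    return -g

x = 2.0
for _ in range(50):
    d = gs_direction(x)
    if d == 0.0:       # 0 lies in the sampled hull: approximately stationary
        break
    x += 0.5 * d       # fixed stepsize for simplicity (no line search)
```

Once the iterate is within the sampling radius of the kink at 0, both gradient signs appear among the samples, the minimum-norm element becomes 0, and the loop stops near the minimizer.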

A unified analysis of a class of proximal bundle methods for solving hybrid convex composite optimization problems

2021

This paper presents a proximal bundle (PB) framework based on a generic bundle update scheme for solving the hybrid convex composite optimization (HCCO) problem and establishes a common iteration-complexity bound for any variant belonging to it. As a consequence, iteration-complexity bounds for three PB variants based on different bundle update schemes are obtained in the HCCO context for the first time and in a unified manner. While two of the PB variants are universal (i.e., their implementations do not require parameters associated with the HCCO instance), the other, newly proposed one (as far as the authors are aware) is not, but has the advantage that it generates simple, namely one-cut, bundle models. The paper also presents a universal adaptive PB variant (which is not necessarily an instance of the framework) based on one-cut models and shows that its iteration-complexity is the same as that of the two aforementioned universal PB variants.

A proximal bundle variant with optimal iteration-complexity for a large range of prox stepsizes

arXiv: Optimization and Control, 2020

This paper presents a proximal bundle variant, namely, the RPB method, for solving convex nonsmooth composite optimization problems. Like other proximal bundle variants, RPB solves a sequence of prox bundle subproblems whose objective functions are regularized composite cutting-plane models. Moreover, RPB uses a novel condition to decide whether to perform a serious or null iteration which does not necessarily yield a function value decrease. Optimal (possibly up to a logarithmic term) iteration-complexity bounds for RPB are established for a large range of prox stepsizes, both in the convex and strongly convex settings. To the best of our knowledge, this is the first time that a proximal bundle variant is shown to be optimal for a large range of prox stepsizes. Finally, iteration-complexity results for RPB to obtain iterates satisfying practical termination criteria, rather than near optimal solutions, are also derived.
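The prox bundle subproblems mentioned above minimize a regularized cutting-plane model, i.e., the maximum of collected linearizations plus a proximal term. The following one-dimensional sketch (our own simplification with a standard descent test, not the RPB method or its novel serious/null condition) illustrates the idea on f(x) = |x|:

```python
def f(x):
    return abs(x)

def subgrad(x):
    return 1.0 if x > 0 else -1.0   # a subgradient of |x| (pick -1 at 0)

def model(x, bundle):
    # cutting-plane model: max of the linearizations collected so far
    return max(fi + gi * (x - xi) for xi, fi, gi in bundle)

def prox_step(center, bundle, t, lo=-10.0, hi=10.0, iters=200):
    # minimize model(x) + (1/(2t)) (x - center)^2 by ternary search,
    # valid because the objective is convex in one dimension
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if model(m1, bundle) + (m1 - center) ** 2 / (2 * t) < \
           model(m2, bundle) + (m2 - center) ** 2 / (2 * t):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

x = 5.0                              # prox center (last serious iterate)
bundle = [(x, f(x), subgrad(x))]
for _ in range(30):
    y = prox_step(x, bundle, t=1.0)
    pred = f(x) - model(y, bundle)   # predicted decrease of the model
    bundle.append((y, f(y), subgrad(y)))
    if f(y) <= f(x) - 0.5 * pred:    # classical descent test
        x = y                        # serious step; otherwise null step
```

Each serious step moves the prox center toward the minimizer at 0, while null steps only enrich the model with a new cut.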

Level Bundle-Like Algorithms for Convex Optimization

2012

We propose two restricted memory level bundle-like algorithms for minimizing a convex function over a convex set. If the memory is restricted to one linearization of the objective function, then both algorithms are variations of the projected subgradient method. The first algorithm, proposed in Hilbert space, is a conceptual one. It is shown to be strongly convergent to the solution that lies closest to the initial iterate. Furthermore, the entire sequence of iterates generated by the algorithm is contained in a ball with diameter equal to the distance between the initial point and the solution set. The second algorithm is an implementable version. It mimics the conceptual one as closely as possible in order to inherit its convergence properties. The implementable algorithm is validated by numerical results on several two-stage stochastic linear programs.
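In the one-linearization limit described above, the algorithms reduce to projected subgradient iterations. A minimal sketch of that scheme (our own toy example with an assumed interval constraint, not the paper's algorithms) minimizes f(x) = |x - 3| over the feasible set [-1, 1]:

```python
def proj(x):
    # Euclidean projection onto the feasible interval [-1, 1]
    return max(-1.0, min(1.0, x))

def subgrad(x):
    # a subgradient of |x - 3|
    return 1.0 if x >= 3.0 else -1.0

x = -1.0
for k in range(1, 200):
    x = proj(x - subgrad(x) / k)    # diminishing stepsize 1/k
print(x)  # → 1.0, the feasible point closest to the unconstrained minimizer 3
```

The iterates increase toward the boundary and are then clamped by the projection, settling at the constrained minimizer x = 1.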

New variants of bundle methods

Mathematical Programming, 1995

In this paper we describe a number of new variants of bundle methods for nonsmooth unconstrained and constrained convex optimization, convex-concave games and variational inequalities. We outline the ideas underlying these methods and present rate-of-convergence estimates.

A splitting bundle approach for non-smooth non-convex minimization

Optimization, 2013

We present a bundle-type method for minimizing non-convex non-smooth functions. Our approach is based on the partition of the bundle into two sets, taking into account the local convex or concave behaviour of the objective function. Termination at a point satisfying an approximate stationarity condition is proved and numerical results are provided.

Rate of Convergence of the Bundle Method

Journal of Optimization Theory and Applications

We prove that the bundle method for nonsmooth optimization achieves solution accuracy ε in at most O(ln(1/ε)/ε) iterations, if the function is strongly convex. The result is true for the versions of the method with multiple cuts and with cut aggregation.

A Nonmonotone Proximal Bundle Method with (Potentially) Continuous Step Decisions

SIAM Journal on Optimization, 2013

We present a convex nondifferentiable minimization algorithm of proximal bundle type that does not rely on measuring descent of the objective function to declare the so-called serious steps; rather, a merit function is defined which is decreased at each iteration, leading to a (potentially) continuous choice of the stepsize between zero (the null step) and one (the serious step). By avoiding the discrete choice, the convergence analysis is simplified, and we can more easily obtain efficiency estimates for the method. Some choices for the step selection actually reproduce the dichotomic behavior of standard proximal bundle methods, but shed new light on the rationale behind the process and ultimately suggest different rules; furthermore, using nonlinear upper models of the function in the step selection process can lead to actual fractional steps.

A Bundle Algorithm Applied to Bilevel Programming Problems with Non-Unique Lower Level Solutions

2000

In the paper, it is investigated whether a bundle algorithm can be used to compute approximate solutions for bilevel programming problems in which the lower level optimal solution is, in general, not uniquely determined. To give a positive answer to this question, an appropriate regularization approach is used in the lower level. In the general case, the resulting algorithm computes an approximate solution. If the problem proves to have strongly stable lower level solutions for all parameter values in a certain neighborhood of the stationary solutions of the bilevel problem, convergence to stationary solutions can be shown.