A Derivative-Free Algorithm for Linearly Constrained Finite Minimax Problems
Related papers
A derivative-free method for linearly constrained nonsmooth optimization
This paper develops a new derivative-free method for solving linearly constrained nonsmooth optimization problems. The objective functions in these problems are, in general, non-regular locally Lipschitz continuous functions, and the computation of their generalized subgradients is a difficult task. In this paper we suggest an algorithm for computing subgradients of a broad class of non-regular locally Lipschitz continuous functions. This algorithm is based on the notion of a discrete gradient. An algorithm for solving linearly constrained nonsmooth optimization problems based on discrete gradients is then developed. We report preliminary results of numerical experiments, which demonstrate that the proposed algorithm is efficient for solving linearly constrained nonsmooth optimization problems.
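The discrete gradient construction itself is not reproduced in this abstract; as a rough, hedged illustration of the underlying idea of estimating first-order information from function values only, the sketch below forms a simple forward-difference vector along the coordinate directions. The function `f`, the step `h`, and the test point are placeholders, and this is a simplified stand-in rather than the discrete gradient defined in the paper.

```python
import numpy as np

def approx_gradient(f, x, h=1e-6):
    """Forward-difference estimate of first-order information of f at x.

    This is only a simplified stand-in for the discrete gradient of the
    paper: it uses function values alone, one coordinate direction at a
    time, which is the same spirit but not the same construction.
    """
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# Example on a nonsmooth function: f(x) = |x1| + 2*|x2|
f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
print(approx_gradient(f, np.array([1.0, -0.5])))   # roughly [1, -2]
```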
A derivative-free algorithm for linearly constrained optimization problems
Computational Optimization and Applications, 2014
A derivative-free algorithm for systems of nonlinear inequalities
Optimization Letters, 2008
Recently a new derivative-free algorithm has been proposed for the solution of linearly constrained finite minimax problems. This derivative-free algorithm is based on a smoothing technique that allows one to take into account the non-smoothness of the max function. In this paper, we investigate, both from a theoretical and a computational point of view, the behavior of the minimax algorithm when used to solve systems of nonlinear inequalities when derivatives are unavailable. In particular, we show an interesting property of the algorithm: under some mild conditions regarding the regularity of the functions defining the system, it is possible to prove that the algorithm locates a solution of the problem after a finite number of iterations. Furthermore, under a weaker regularity condition, it is possible to show that the sequence generated by the algorithm has an accumulation point which is a solution of the system. Moreover, we carry out numerical experiments and compare the method with a standard pattern search minimization method. The results confirm that the good theoretical properties of the method correspond to interesting numerical performance. The algorithm compares favorably with a standard derivative-free method, and this seems to indicate that extending the smoothing technique to pattern search algorithms can be beneficial.
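The abstract does not spell out the smoothing technique here; a common choice for smoothing the max function in the minimax literature is the exponential (log-sum-exp) approximation, sketched below as an assumed illustration rather than the paper's exact formula. The component values and the smoothing parameter `mu` are made up for the example.

```python
import numpy as np

def smooth_max(fvals, mu):
    """Log-sum-exp smoothing of max_i f_i.

    As mu -> 0 this tends to the exact max; shifting by the largest
    value keeps the exponentials numerically stable.
    """
    fvals = np.asarray(fvals, dtype=float)
    m = fvals.max()
    return m + mu * np.log(np.sum(np.exp((fvals - m) / mu)))

# The smooth value upper-bounds the true max and converges to it:
fvals = [1.0, 3.0, 2.5]
for mu in (1.0, 0.1, 0.01):
    print(mu, smooth_max(fvals, mu))   # approaches 3.0 as mu shrinks
```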
Necessary Optimality Conditions for Approximate Minima in Unconstrained Finite Minmax Problem
Mathematical optimization is the process of minimizing (or maximizing) a function. The minimum of a function is attained at a critical point, where the gradient (or derivative) is 0. The research presented in this paper deals with the unconstrained minimax problem in which the objective function is the maximum of a finite number of smooth convex functions. Such a function is convex but not necessarily differentiable, so gradient methods cannot be applied directly. When gradient information about the objective function is unavailable, unreliable, or expensive in terms of computation time, approximate optimization is appropriate. More precisely, we focus on necessary optimality conditions for approximate minima of the minimax problem. We first review convex optimization, optimality conditions, subgradients and subdifferentials, and approximate optimization. We then present an unbiased and sharp result using standard theorems and references; Carathéodory's theorem plays a very important role in obtaining it.
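As a small illustration of the subdifferential machinery the abstract alludes to, the sketch below returns one valid subgradient of F(x) = max_i f_i(x): any convex combination of gradients of the components active at x belongs to the subdifferential (Carathéodory's theorem bounds how many such gradients are needed). The component functions and the tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Component functions f_i and their gradients (illustrative choices).
fs    = [lambda x: x[0]**2 + x[1]**2,           # f1
         lambda x: (x[0] - 1.0)**2 + x[1]]      # f2
grads = [lambda x: np.array([2*x[0], 2*x[1]]),
         lambda x: np.array([2*(x[0] - 1.0), 1.0])]

def one_subgradient(x, tol=1e-9):
    """Return a subgradient of F(x) = max_i f_i(x).

    The gradient of any component active at x (f_i(x) = F(x)) is a valid
    subgradient; convex combinations of active gradients fill out the
    whole subdifferential.
    """
    vals = np.array([f(x) for f in fs])
    active = np.where(vals >= vals.max() - tol)[0]
    # Average the active gradients (one particular convex combination).
    return sum(grads[i](x) for i in active) / len(active)

print(one_subgradient(np.array([0.5, 0.25])))
```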
A derivative-free algorithm for bound constrained optimization
2002
In this work, we propose a new globally convergent derivative-free algorithm for the minimization of a continuously differentiable function in the case where some (or all) of the variables are bounded. This algorithm investigates the local behaviour of the objective function on the feasible set by sampling it along the coordinate directions. Whenever a "suitable" feasible descent coordinate direction is detected, a new point is produced by performing a line search along this direction. The information progressively gathered during the iterations of the algorithm can be used to build an approximation model of the objective function; the minimizer of such a model is accepted if it improves the objective function value. We also derive a bound for the limit accuracy of the algorithm in the minimization of noisy functions. Finally, we report the results of preliminary numerical experiments.
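A minimal sketch, under simplifying assumptions, of the kind of scheme described above: the objective is sampled along the coordinate directions, trial points are kept feasible by projecting onto the bounds, and the sampling step shrinks when no direction yields sufficient decrease. The step-control constants and the test problem are illustrative, and the approximation-model acceleration mentioned in the abstract is omitted.

```python
import numpy as np

def coord_search(f, x, lb, ub, alpha=1.0, gamma=1e-6, tol=1e-8, max_iter=500):
    """Derivative-free coordinate search for min f(x) s.t. lb <= x <= ub.

    For each coordinate direction +-e_i a trial step is projected onto
    the box; a step is accepted if it gives sufficient decrease, and the
    step size shrinks when no direction succeeds.
    """
    x = np.clip(np.asarray(x, dtype=float), lb, ub)
    for _ in range(max_iter):
        if alpha < tol:
            break
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(x[i] + sign * alpha, lb[i], ub[i])
                if f(trial) <= f(x) - gamma * alpha**2:
                    x, improved = trial, True
        if not improved:
            alpha *= 0.5     # no descent direction found: refine the step
    return x

# Illustrative bound-constrained quadratic.
f  = lambda x: (x[0] - 2.0)**2 + (x[1] + 1.0)**2
lb = np.array([0.0, 0.0]); ub = np.array([1.5, 3.0])
print(coord_search(f, np.array([0.5, 2.0]), lb, ub))   # near [1.5, 0.0]
```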
An Algorithm for the Global Optimization of a Class of Continuous Minimax Problems
Journal of Optimization Theory and Applications, 2009
We propose an algorithm for the global optimization of continuous minimax problems involving polynomials. The method can be described as a discretization approach to the well-known semi-infinite formulation of the problem. Financial support of EPSRC Grant GR/T02560/01 is gratefully acknowledged.
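As a hedged sketch of the discretization idea only (not the paper's algorithm or its treatment of polynomials), the inner maximization over a continuous parameter set can be replaced by a maximization over a finite grid, leaving a finite minimax problem for the outer minimization; the function `phi`, the grid, and the choice of outer solver below are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Continuous minimax: min_x max_{y in [0,1]} phi(x, y); phi is illustrative.
phi = lambda x, y: (x[0] - y)**2 - 0.5 * y * x[1]

# Discretize the inner problem over a finite grid of y values.
grid = np.linspace(0.0, 1.0, 21)
F = lambda x: max(phi(x, y) for y in grid)   # finite minimax surrogate

# Solve the (nonsmooth) outer problem with a derivative-free method.
res = minimize(F, x0=np.array([0.3, 0.0]), method="Nelder-Mead")
print(res.x, res.fun)
```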
An Interior-Point Algorithm for Nonlinear Minimax Problems
Journal of Optimization Theory and Applications, 2010
We present a primal-dual interior point method for constrained nonlinear, discrete minimax problems where the objective functions and constraints are not necessarily convex. The algorithm uses two merit functions to ensure progress towards the points satisfying the first order optimality conditions of the original problem. Convergence properties are described and numerical results provided.
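The primal-dual machinery itself is beyond a short sketch, but interior-point treatments of discrete minimax typically start from the epigraph reformulation min_{x,t} t subject to f_i(x) <= t. The sketch below illustrates that reformulation with an off-the-shelf SQP solver rather than the paper's interior-point method; the component functions are chosen only as an example.

```python
import numpy as np
from scipy.optimize import minimize

# Discrete minimax: min_x max{f1(x), f2(x)}; the f_i are illustrative.
f1 = lambda x: x[0]**2 + x[1]**2
f2 = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2

# Epigraph form: variables z = (x1, x2, t); minimize t with f_i(x) - t <= 0.
obj  = lambda z: z[2]
cons = [{"type": "ineq", "fun": lambda z: z[2] - f1(z[:2])},
        {"type": "ineq", "fun": lambda z: z[2] - f2(z[:2])}]

res = minimize(obj, x0=np.array([0.0, 0.0, 10.0]), constraints=cons, method="SLSQP")
print("x* =", res.x[:2], "  minimax value =", res.x[2])
```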
On the rate of convergence of two minimax algorithms
Journal of Optimization Theory and Applications, 1991
We show that the sequences of function values constructed by two versions of a minimax algorithm converge linearly to the minimum values. Both versions use the Pironneau-Polak-Pshenichnyi search direction subprocedure; the first uses an exact line search to determine the step size, while the second uses an Armijo-type step size rule. The proofs depend on a second-order sufficiency condition, but not on strict complementary slackness. Minimax problems in which each function appearing in the max is a composition of a twice continuously differentiable function with a linear function typically do not satisfy second-order sufficiency conditions. Nevertheless, we show that, on such minimax problems, the two algorithms do converge linearly when the outer functions are convex and strict complementary slackness holds at solutions.
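The Pironneau-Polak-Pshenichnyi direction subprocedure is not reproduced here; as a hedged illustration of an Armijo-type step size rule of the kind the second version uses, the backtracking loop below accepts the largest step beta^k whose actual decrease is at least a fraction alpha of the predicted decrease theta. The direction, the value of theta, and the constants are assumptions made for the example.

```python
def armijo_step(F, x, d, theta, alpha=0.5, beta=0.5, max_backtracks=30):
    """Armijo-type step size rule for a minimax objective F.

    d is a descent direction and theta < 0 an estimate of the attainable
    decrease along d (in the PPP framework this comes from the direction
    subproblem).  The largest step s = beta**k achieving
        F(x + s*d) - F(x) <= alpha * s * theta
    is returned.
    """
    Fx, s = F(x), 1.0
    for _ in range(max_backtracks):
        if F([xi + s * di for xi, di in zip(x, d)]) - Fx <= alpha * s * theta:
            return s
        s *= beta
    return s

# Example: F(x) = max(x1, -x1) = |x1|, descent direction d = -1 at x = 2.
F = lambda x: max(x[0], -x[0])
print(armijo_step(F, [2.0], [-1.0], theta=-1.0))   # full step s = 1 accepted
```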
A derivative-free algorithm for nonlinear programming
In this paper we consider nonlinear constrained optimization problems in the case where the first-order derivatives of the objective function and the constraints cannot be used. To date, only a few approaches have been proposed for tackling such a class of problems. In this work we propose a new algorithm. The starting point of the proposed approach is the possibility of transforming the original constrained problem into the unconstrained or linearly constrained minimization of a nonsmooth exact penalty function. This approach presents two main difficulties: the first is the nonsmoothness of this class of exact penalty functions; the second is that the equivalence between stationary points of the constrained problem and those of the exact penalty function can be established only when the penalty parameter is smaller than a threshold value that is not known a priori. In this paper we propose a derivative-free algorithm which overcomes these difficulties and produces a sequence of points admitting a subsequence that converges to Karush-Kuhn-Tucker points of the constrained problem. In particular, the proposed algorithm includes an updating rule for the penalty parameter which, after at most a finite number of updates, is able to determine a "right" value of the penalty parameter. Numerical results on a set of test problems are reported which show the viability of the proposed algorithm.
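A minimal sketch, assuming an l-infinity-type exact penalty (the abstract does not specify the form), of how the constrained problem can be traded for the minimization of a nonsmooth penalty function, together with a naive rule that shrinks the penalty parameter while the current point is clearly infeasible; the true threshold value mentioned above is problem-dependent and is not computed here.

```python
import numpy as np

def penalty(f, g_list, x, eps):
    """Nonsmooth exact penalty P(x; eps) = f(x) + (1/eps) * max_i max(0, g_i(x)).

    The constrained problem min f(x) s.t. g_i(x) <= 0 shares its stationary
    points with P once eps is below an (unknown) threshold value.
    """
    viol = max((max(0.0, g(x)) for g in g_list), default=0.0)
    return f(x) + viol / eps

def update_eps(g_list, x, eps, feas_tol=1e-4, shrink=0.1):
    """Shrink the penalty parameter if x is still clearly infeasible."""
    viol = max((max(0.0, g(x)) for g in g_list), default=0.0)
    return eps * shrink if viol > feas_tol else eps

# Illustrative problem: min x1 + x2  s.t.  x1^2 + x2^2 - 1 <= 0.
f = lambda x: x[0] + x[1]
g = [lambda x: x[0]**2 + x[1]**2 - 1.0]
x, eps = np.array([2.0, 2.0]), 1.0
print(penalty(f, g, x, eps), update_eps(g, x, eps))
```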
Discrete gradient method: derivative-free method for nonsmooth optimization
2008
A new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that the discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that the discrete gradients can be applied to find descent directions of nonsmooth functions.