A derivative-free algorithm for linearly constrained optimization problems
Related papers
Derivative-free optimization: a review of algorithms and comparison of software implementations
Journal of Global Optimization, 2013
This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied, and it has found renewed interest in recent years. Along with many derivative-free algorithms, many software implementations have also appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 related implementations using a test set of 502 problems. The test bed includes convex and nonconvex problems, smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, improve a given starting point, and refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB/MULTIMIN, TOMLAB/GLCCLUSTER, MCS and TOMLAB/LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2,500 function evaluations. These global solvers outperform local solvers even for convex problems. Finally, TOMLAB/OQNLP, NEWUOA, and TOMLAB/MULTIMIN show superior performance in terms of refining a near-optimal solution.
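To make the comparison criterion concrete, the following is a minimal sketch (not from the paper) of the standard budget-based convergence test used in this kind of benchmarking: a problem counts as solved by a solver if, within the evaluation budget, its best value f satisfies f <= f_best + tau*(f(x0) - f_best). The solver names and numbers in the usage example are placeholders, not results from the paper.

```python
def fraction_solved(results, f_start, f_best, tau=1e-2):
    """results[solver][problem]: best value the solver found within the budget
       f_start[problem]: objective value at the common starting point
       f_best[problem]: best value found by any solver (proxy for the optimum)
       Returns, for each solver, the fraction of problems 'solved' to tolerance
       tau using the test  f <= f_best + tau * (f_start - f_best)."""
    scores = {}
    for solver, vals in results.items():
        ok = [vals[p] <= f_best[p] + tau * (f_start[p] - f_best[p]) for p in vals]
        scores[solver] = sum(ok) / len(ok)
    return scores

# toy usage with made-up placeholder numbers (not the paper's data)
results = {"solverA": {"p1": 0.01, "p2": 1.2}, "solverB": {"p1": 0.30, "p2": 1.05}}
f_start = {"p1": 5.0, "p2": 4.0}
f_best = {"p1": 0.0, "p2": 1.0}
print(fraction_solved(results, f_start, f_best))
```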
A derivative-free algorithm for bound constrained optimization
2002
In this work, we propose a new globally convergent derivative-free algorithm for the minimization of a continuously differentiable function in the case where some (or all) of the variables are bounded. This algorithm investigates the local behaviour of the objective function on the feasible set by sampling it along the coordinate directions. Whenever a "suitable" feasible descent coordinate direction is detected, a new point is produced by performing a linesearch along this direction. The information progressively obtained during the iterations of the algorithm can be used to build an approximation model of the objective function. The minimum of such a model is accepted if it produces an improvement of the objective function value. We also derive a bound for the limit accuracy of the algorithm in the minimization of noisy functions. Finally, we report the results of preliminary numerical experiments.
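A minimal sketch of the coordinate-direction sampling with a derivative-free linesearch described in the abstract, assuming a simple sufficient-decrease test with step expansion and projection onto the bounds; this is an illustrative simplification, not the authors' exact algorithm.

```python
import numpy as np

def coordinate_linesearch_dfo(f, x0, lb, ub, alpha0=1.0, gamma=1e-6,
                              theta=0.5, tol=1e-8, max_iter=500):
    """Illustrative coordinate search with a derivative-free linesearch for
    min f(x) subject to lb <= x <= ub. Not the paper's exact algorithm."""
    x = np.clip(np.asarray(x0, float), lb, ub)
    n = x.size
    alpha = np.full(n, alpha0)            # per-coordinate stepsizes
    fx = f(x)
    for _ in range(max_iter):
        if alpha.max() < tol:
            break
        for i in range(n):
            improved = False
            for sign in (+1.0, -1.0):
                d = np.zeros(n); d[i] = sign
                step = alpha[i]
                y = np.clip(x + step * d, lb, ub)
                # sufficient decrease along the feasible coordinate direction
                if f(y) <= fx - gamma * step**2:
                    # expansion: keep doubling the step while it still pays off
                    while True:
                        y2 = np.clip(x + 2 * step * d, lb, ub)
                        if f(y2) <= fx - gamma * (2 * step)**2 and not np.allclose(y2, y):
                            step *= 2; y = y2
                        else:
                            break
                    x, fx, alpha[i], improved = y, f(y), step, True
                    break
            if not improved:
                alpha[i] *= theta          # shrink the stepsize on failure
    return x, fx

# toy usage on a smooth bound-constrained problem
f = lambda x: (x[0] - 1.5)**2 + (x[1] + 0.5)**2
print(coordinate_linesearch_dfo(f, np.zeros(2), lb=-1.0, ub=1.0))
```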
A derivative-free algorithm for nonlinear programming
In this paper we consider nonlinear constrained optimization problems in the case where the first-order derivatives of the objective function and the constraints cannot be used. To date, only a few approaches have been proposed for tackling this class of problems. In this work we propose a new algorithm. The starting point of the proposed approach is the possibility of transforming the original constrained problem into an unconstrained or linearly constrained minimization of a nonsmooth exact penalty function; this approach presents two main difficulties: the first is the nonsmoothness of this class of exact penalty functions; the second is the fact that the equivalence between stationary points of the constrained problem and those of the exact penalty function can be stated only when the penalty parameter is smaller than a threshold value which is not known a priori. In this paper we propose a derivative-free algorithm which overcomes the preceding difficulties and produces a sequence of points that admits a subsequence converging towards Karush-Kuhn-Tucker points of the constrained problem. In particular, the proposed algorithm includes an updating rule for the penalty parameter which, after at most a finite number of updates, is able to determine a "right value" of the penalty parameter. Numerical results on a set of test problems are reported which show the viability of the proposed algorithm.
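The sketch below illustrates the exact-penalty idea referred to above: the general constraints are folded into a nonsmooth penalty term, the penalized problem is solved derivative-free, and the penalty parameter is decreased whenever the approximate solution is still noticeably infeasible. The specific penalty form, the infeasibility test, and the use of Nelder-Mead as the inner solver are assumptions made for illustration, not the paper's updating rule.

```python
import numpy as np
from scipy.optimize import minimize

def exact_penalty(f, g_list, eps):
    """Nonsmooth exact penalty: f(x) + (1/eps) * max_i max(0, g_i(x))."""
    def P(x):
        viol = max([0.0] + [gi(x) for gi in g_list])
        return f(x) + viol / eps
    return P

def penalty_dfo(f, g_list, x0, eps=1.0, shrink=0.1, tol=1e-6, outer=20):
    """Illustrative sequential exact-penalty loop (not the paper's rule):
    solve the penalized problem derivative-free, then shrink eps while the
    approximate solution is still noticeably infeasible."""
    x = np.asarray(x0, float)
    for _ in range(outer):
        P = exact_penalty(f, g_list, eps)
        # Nelder-Mead as a stand-in derivative-free inner solver
        x = minimize(P, x, method="Nelder-Mead",
                     options={"xatol": 1e-8, "fatol": 1e-8}).x
        viol = max(0.0, max(gi(x) for gi in g_list))
        if viol <= tol:
            break
        eps *= shrink           # penalty parameter deemed too large; decrease it
    return x, eps

# toy usage: min x1 + x2  s.t.  x1**2 + x2**2 - 2 <= 0  (solution near (-1, -1))
f = lambda x: x[0] + x[1]
g = [lambda x: x[0]**2 + x[1]**2 - 2.0]
print(penalty_dfo(f, g, x0=[0.0, 0.0]))
```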
2011
In many engineering applications it is common to find optimization problems where the cost function and/or constraints require complex simulations. Though it is often, but not always, theoretically possible in these cases to extract derivative information efficiently, the associated implementation procedures are typically non-trivial and time-consuming (e.g., adjoint-based methodologies). Derivative-free (non-invasive, black-box) optimization has lately received considerable attention within the optimization community, including the establishment of solid mathematical foundations for many of the methods considered in practice. In this chapter we describe some of the most prominent derivative-free optimization techniques. Our description concentrates first on local optimization methods, such as pattern search techniques and other methods based on interpolation/approximation. We then survey a number of global search methodologies, and finally give guidelines on constraint-handling approaches.
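As a small illustration of the interpolation/approximation class of methods surveyed in the chapter, the sketch below fits a quadratic model to sampled function values by least squares and then minimizes the model; this is a generic textbook construction (assuming enough well-poised sample points are available), not a specific method from the chapter.

```python
import numpy as np

def fit_quadratic_model(X, fvals):
    """Fit m(x) = c + g.x + 0.5 x.H x to samples (rows of X, values fvals)
    by least squares. Generic illustration of model-based DFO."""
    X = np.asarray(X, float); n = X.shape[1]
    cols = [np.ones(len(X))]                       # constant term
    cols += [X[:, i] for i in range(n)]            # linear terms
    cols += [0.5 * X[:, i] * X[:, j] * (1 if i == j else 2)
             for i in range(n) for j in range(i, n)]   # quadratic terms
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, fvals, rcond=None)
    c, g = coef[0], coef[1:n+1]
    H = np.zeros((n, n)); k = n + 1
    for i in range(n):
        for j in range(i, n):
            H[i, j] = H[j, i] = coef[k]; k += 1
    return c, g, H

# toy usage: recover the model of a quadratic function from random samples
rng = np.random.default_rng(0)
f = lambda x: 1.0 + 2*x[0] - x[1] + x[0]**2 + 0.5*x[1]**2 + x[0]*x[1]
X = rng.normal(size=(12, 2))
c, g, H = fit_quadratic_model(X, np.array([f(x) for x in X]))
x_model_min = np.linalg.solve(H, -g)              # minimizer of the fitted model
print(np.round(x_model_min, 3))                   # expect about [-3, 4]
```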
A derivative-free method for linearly constrained nonsmooth optimization
This paper develops a new derivative-free method for solving linearly constrained nonsmooth optimization problems. The objective functions in these problems are, in general, non-regular locally Lipschitz continuous functions, and the computation of generalized subgradients of such functions is a difficult task. In this paper we propose an algorithm for the computation of subgradients of a broad class of non-regular locally Lipschitz continuous functions. This algorithm is based on the notion of a discrete gradient. An algorithm for solving linearly constrained nonsmooth optimization problems based on discrete gradients is then developed. We report preliminary results of numerical experiments, which demonstrate that the proposed algorithm is efficient for solving linearly constrained nonsmooth optimization problems.
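The sketch below shows the simplest finite-difference surrogate for a subgradient of a locally Lipschitz function, evaluated at a point slightly shifted along a direction d so that a kink at x is approached from one side. The actual discrete gradient used in the paper is a more careful construction; treat this only as an illustration of the idea.

```python
import numpy as np

def approx_subgradient(f, x, d, h=1e-6, step=1e-3):
    """Crude finite-difference surrogate for a (sub)gradient of a locally
    Lipschitz f, evaluated near x + step*d rather than at x itself (so kinks
    at x are 'seen from one side'). Only an illustration of the
    discrete-gradient idea, not the paper's definition."""
    x = np.asarray(x, float)
    z = x + step * np.asarray(d, float)           # shifted evaluation point
    g = np.zeros_like(x)
    fz = f(z)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(z + e) - fz) / h                # forward differences
    return g

# toy usage on the nonsmooth function f(x) = |x1| + 2*|x2|
f = lambda x: abs(x[0]) + 2 * abs(x[1])
print(approx_subgradient(f, x=[0.0, 0.0], d=[1.0, 1.0]))   # roughly [1, 2]
```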
A derivative-free algorithm for non-linear optimization with linear equality constraints
Optimization, 2019
We propose a derivative-free algorithm for solving linear equality constrained non-linear optimization problems, named LECOA. In each iteration of the algorithm, the objective function is approximated by a quadratic model constructed from interpolation points. The choice of the points leaves some degrees of freedom in the model, which are taken up by minimizing the Frobenius norm of the change to the Hessian matrix of the model. The new iterate is generally generated by minimizing the model in a trust region using a null-space truncated conjugate gradient method. Numerical results are presented which show that the proposed algorithm is competitive with some algorithms in the literature. Experiments show that starting with the point that minimizes the infinity norm subject to the linear equality constraints gives excellent results. A limit-type global convergence of the proposed algorithm is proved under some reasonable assumptions.
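A small sketch of the starting-point choice the abstract reports to work well: minimizing the infinity norm subject to the linear equality constraints, rewritten as a linear program with an auxiliary variable t. The LP reformulation and the use of scipy.optimize.linprog are our assumptions for illustration, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def min_infnorm_point(A, b):
    """Solve  min ||x||_inf  s.t.  A x = b  as the LP
       min t  s.t.  A x = b,  -t <= x_i <= t,  t >= 0,
    over the variables (x, t). Illustrative reconstruction of the starting
    point mentioned in the abstract."""
    A = np.asarray(A, float); b = np.asarray(b, float)
    m, n = A.shape
    c = np.zeros(n + 1); c[-1] = 1.0                      # objective: t
    A_eq = np.hstack([A, np.zeros((m, 1))])               # A x = b
    # x_i - t <= 0  and  -x_i - t <= 0
    A_ub = np.vstack([np.hstack([np.eye(n), -np.ones((n, 1))]),
                      np.hstack([-np.eye(n), -np.ones((n, 1))])])
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)]             # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=bounds, method="highs")
    return res.x[:n]

# toy usage: one equation x1 + 2*x2 = 4 in R^2
print(min_infnorm_point([[1.0, 2.0]], [4.0]))   # expect roughly [4/3, 4/3]
```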
A Linesearch-based Derivative-free Approach for Nonsmooth Constrained Optimization
In this paper, we propose new linesearch-based methods for nonsmooth constrained optimization problems when first-order information on the problem functions is not available. In the first part, we describe a general framework for bound-constrained problems and analyze its convergence toward stationary points, using the Clarke-Jahn directional derivative. In the second part, we consider inequality constrained optimization problems where both the objective function and the constraints can possibly be nonsmooth. In this case, we first split the constraints into two subsets: difficult general nonlinear constraints and simple bound constraints on the variables. Then, we use an exact penalty function to tackle the difficult constraints and we prove that the original problem can be reformulated as the bound-constrained minimization of the proposed exact penalty function. Finally, we use the framework developed for the bound-constrained case to solve the penalized problem. Moreover, we prove that, under standard assumptions on the search directions, every accumulation point of the generated sequence of iterates is a stationary point of the original constrained problem. In the last part of the paper, we report extended numerical results on both bound-constrained and nonlinearly constrained problems, showing that our approach is promising when compared to some state-of-the-art codes from the literature.
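The sketch below illustrates the constraint split described above: the difficult general inequality constraints are folded into an exact penalty term while the simple bounds are kept explicit, so the penalized problem can be handed to any bound-constrained derivative-free solver. The l1-type penalty and the use of SciPy's Powell method (which accepts bounds in recent SciPy versions) as a stand-in inner solver are illustrative assumptions, not the linesearch framework of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_bound_problem(f, gen_constraints, eps):
    """Fold the 'difficult' general inequality constraints g_i(x) <= 0 into an
    exact penalty term while the simple bounds stay explicit. The l1-type
    penalty here is an illustrative choice, not necessarily the paper's."""
    def Z(x):
        return f(x) + sum(max(0.0, g(x)) for g in gen_constraints) / eps
    return Z

# toy usage: min (x1-2)**2 + (x2-2)**2  s.t.  x1 + x2 <= 3,  0 <= x <= 5
f = lambda x: (x[0] - 2)**2 + (x[1] - 2)**2
gens = [lambda x: x[0] + x[1] - 3.0]
Z = penalized_bound_problem(f, gens, eps=0.1)
# Powell is only a stand-in bound-constrained DFO solver for this sketch
res = minimize(Z, x0=[0.0, 0.0], method="Powell", bounds=[(0, 5), (0, 5)])
print(np.round(res.x, 3))        # expect roughly [1.5, 1.5]
```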
Algorithms for Constrained Optimization
2011
This thesis is divided into two parts. The first part is concerned with the development of efficient algorithms for solving general mathematical programming problems. Such problems frequently arise in optimization-based design of control systems or circuits. The thesis concentrates on a specific class of algorithms which obtain a search direction by solving a quadratic approximation to the original problem and then utilize an exact penalty function to compute the step length. One of the defects of existing algorithms in this category is that the quadratic approximation to the problem is such that no feasible solution exists. A new quadratic approximation is therefore introduced with the following properties: a feasible solution always exists, and the solution of the programme yields a search direction which is a descent direction for the exact penalty function used to compute the step length. Using this, a new algorithm for constrained optimization is developed and shown to be globally convergent. A simplified version of the algorithm for the convex case is also developed. Finally, the special structure of the quadratic approximation is exploited to obtain a simple procedure for updating the penalty parameter. Global convergence of this version of the algorithm is also established. It is then shown that if a secant procedure is used to generate the approximation to the Hessian of the Lagrangian (employed in the quadratic approximation), the resultant algorithm is globally convergent and has a Q-superlinear rate of convergence. A new version of the B.F.G.S. procedure for approximating the Hessian is proposed and the resultant algorithm is shown to be globally convergent and have an R-superlinear rate of convergence. Numerical examples illustrate the results. In the second part, a modified version of the differential dynamic programming (DDP) method for discrete unconstrained control problems is introduced and is proven to be a 'step-by-step' implementation of Newton's method for solving systems of nonlinear equations. Finally, an extension of this method is developed to cope with control constraints. The resultant algorithm is shown, under some assumptions, to be globally convergent and to have a quadratic rate of convergence.
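As an illustration of the step-length mechanism described in the first part of the thesis, the sketch below evaluates an l1 exact penalty (merit) function and performs a backtracking linesearch along a search direction assumed to come from a quadratic programming subproblem. The penalty form, the sufficient-decrease test, and the predicted-decrease argument are generic SQP ingredients, not the thesis's exact construction.

```python
import numpy as np

def l1_merit(f, g_list, h_list, c):
    """Exact l1 penalty (merit) function used to select the step length:
    phi(x) = f(x) + c * ( sum |h_j(x)| + sum max(0, g_i(x)) )."""
    def phi(x):
        viol = sum(abs(h(x)) for h in h_list) + sum(max(0.0, g(x)) for g in g_list)
        return f(x) + c * viol
    return phi

def armijo_step(phi, x, d, pred_decrease, beta=0.5, sigma=1e-4, max_halvings=30):
    """Backtracking step length on the merit function: accept the largest
    alpha = beta**k with phi(x + alpha*d) <= phi(x) - sigma*alpha*pred_decrease.
    'pred_decrease' stands in for the decrease predicted by the QP model; its
    exact form in the thesis is not reproduced here."""
    phi_x = phi(x)
    alpha = 1.0
    for _ in range(max_halvings):
        if phi(x + alpha * np.asarray(d)) <= phi_x - sigma * alpha * pred_decrease:
            return alpha
        alpha *= beta
    return alpha

# toy usage: one step on  min x1**2 + x2**2  s.t.  x1 + x2 - 2 = 0
f = lambda x: x[0]**2 + x[1]**2
h = [lambda x: x[0] + x[1] - 2.0]
phi = l1_merit(f, g_list=[], h_list=h, c=5.0)
x = np.array([0.0, 0.0])
d = np.array([1.0, 1.0])            # pretend this came from a QP subproblem
alpha = armijo_step(phi, x, d, pred_decrease=phi(x) - phi(x + d))
print(alpha, x + alpha * d)          # expect a full step to (1, 1)
```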
A linear programming-based optimization algorithm for solving nonlinear programming problems
European Journal of Operational Research, 2010
In this paper a linear programming-based optimization algorithm called the Sequential Cutting Plane algorithm is presented. The main features of the algorithm are described, convergence to a Karush-Kuhn-Tucker stationary point is proved, and numerical experience on some well-known test sets is reported. The algorithm is based on an earlier version for convex inequality constrained problems, but here the algorithm is extended to general continuously differentiable nonlinear programming problems containing both nonlinear inequality and equality constraints. A comparison with some existing solvers shows that the algorithm is competitive with these solvers. Thus, this new method based on solving linear programming subproblems is a good alternative method for solving nonlinear programming problems efficiently. The algorithm has been used as a subsolver in a mixed integer nonlinear programming algorithm, where the linear problems conveniently provide lower bounds on the optimal solutions of the convex, inequality constrained nonlinear programming subproblems in the branch and bound tree; such lower bounds are needed in the branch and bound procedure for efficiency reasons (see [21] for more details). Very promising results are reported in [20] for a special set of difficult block optimization problems: the MINLP version of the algorithm found better solutions in one minute than other commercial solvers found in 12 hours.
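The sketch below shows one generic sequential-linear-programming step, a schematic relative of the Sequential Cutting Plane subproblem: the objective and inequality constraints are linearized at the current point and the resulting LP is solved over an infinity-norm trust region. The actual algorithm adds cutting planes and a linesearch; the formulation and the use of scipy.optimize.linprog here are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def slp_step(grad_f, cons, cons_jac, x, delta=0.5):
    """One generic sequential-linear-programming step: linearize the objective
    and the inequality constraints g(x) <= 0 at x and solve
        min  grad_f(x) . d   s.t.  g(x) + J(x) d <= 0,  ||d||_inf <= delta.
    Only a schematic relative of the Sequential Cutting Plane method, which
    refines the LP with additional cuts and a linesearch."""
    g = np.atleast_1d(cons(x))
    J = np.atleast_2d(cons_jac(x))
    n = x.size
    res = linprog(grad_f(x), A_ub=J, b_ub=-g,
                  bounds=[(-delta, delta)] * n, method="highs")
    return x + res.x

# toy usage: one step for  min x1 + x2  s.t.  x1**2 + x2**2 - 2 <= 0, from (0, 0)
grad_f = lambda x: np.array([1.0, 1.0])
cons = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])
cons_jac = lambda x: np.array([[2 * x[0], 2 * x[1]]])
print(slp_step(grad_f, cons, cons_jac, np.zeros(2), delta=0.5))   # about (-0.5, -0.5)
```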