An Algorithm Using Trust Region Strategy for Minimization of a Nondifferentiable Function

A trust region method using the subgradient for minimizing a nondifferentiable function

Yugoslav Journal of Operations Research, 2009

The minimization of a particular nondifferentiable function is considered. The first- and second-order necessary conditions are given. A trust region method for minimization of this form of objective function is presented. The algorithm uses a subgradient in place of the gradient. It is proved that the sequence of points generated by the algorithm has an accumulation point which satisfies the first- and second-order necessary conditions.
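
The abstract does not reproduce the algorithm itself; as an illustration of the general idea only, a minimal trust-region loop that substitutes a subgradient for the gradient might look like the sketch below. The objective `f`, the subgradient oracle `subgrad`, and all control parameters are assumptions for the example, not the paper's actual choices.

```python
import numpy as np

def trust_region_subgradient(f, subgrad, x, delta=1.0, tol=1e-8, max_iter=200):
    """Illustrative trust-region iteration using a subgradient g in place of
    the gradient: the model m(d) = f(x) + g.d is minimized over the
    Euclidean ball ||d|| <= delta, and the step is accepted or rejected by
    comparing actual to predicted decrease."""
    for _ in range(max_iter):
        g = subgrad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        d = -delta * g / gnorm          # minimizer of the linear model on the ball
        pred = gnorm * delta            # predicted decrease: m(0) - m(d)
        ared = f(x) - f(x + d)          # actual decrease
        rho = ared / pred
        if rho > 0.1:                   # accept step, possibly expand the region
            x = x + d
            if rho > 0.75:
                delta *= 2.0
        else:                           # reject step, shrink the region
            delta *= 0.5
    return x

# Example: f(x) = ||x||_1 with subgradient sign(x) (zero at the kinks)
x_star = trust_region_subgradient(lambda x: np.abs(x).sum(), np.sign,
                                  x=np.array([3.0, -2.0]))
```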

A trust region method for minimization of nonsmooth functions with linear constraints

1997

We introduce a trust region algorithm for minimization of nonsmooth functions with linear constraints. At each iteration, the objective function is approximated by a model function that satisfies a set of assumptions stated recently by Qi and Sun in the context of unconstrained nonsmooth optimization. The trust region iteration begins with the resolution of an "easy problem", as in recent works of Martínez and Santos and Friedlander, Martínez and Santos, for smooth constrained optimization. In practical implementations we use the infinity norm for defining the trust region, which fits well with the domain of the problem. We prove global convergence and report numerical experiments.
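
To illustrate why the infinity norm "fits well with the domain" for linearly constrained problems, the hedged sketch below treats the simplest linear constraints, bounds: the infinity-norm trust region and the bound constraints are both boxes, so their intersection is again a box, and a linear model can be minimized coordinatewise. The function name and data are illustrative only, not taken from the paper.

```python
import numpy as np

def linear_model_step_inf_norm(g, x, lower, upper, delta):
    """Minimize the linear model g.d over the box formed by intersecting the
    infinity-norm trust region {||d||_inf <= delta} with the bound
    constraints lower <= x + d <= upper. The minimizer moves each
    coordinate as far as allowed against the sign of its model slope."""
    lo = np.maximum(-delta, lower - x)
    hi = np.minimum(delta, upper - x)
    return np.where(g > 0, lo, np.where(g < 0, hi, 0.0))

d = linear_model_step_inf_norm(np.array([2.0, -1.0, 0.0]),
                               x=np.zeros(3),
                               lower=np.full(3, -0.3), upper=np.full(3, 1.0),
                               delta=0.5)
# d == [-0.3, 0.5, 0.0]: the first coordinate hits the bound, the second
# hits the trust-region radius, the third has zero slope and stays put.
```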

A trust-region strategy for minimization on arbitrary domains

Mathematical Programming, 1995

We present a trust region method for minimizing a general differentiable function restricted to an arbitrary closed set. We prove a global convergence theorem. The trust region method defines difficult subproblems that are solvable in some particular cases. We analyze in detail the case where the domain is a Euclidean ball. For this case we present numerical experiments where we consider different Hessian approximations.
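
For the Euclidean-ball case analyzed in the paper, the key primitive is projection onto the ball, which has the closed form sketched below; this is the standard formula, not code taken from the paper.

```python
import numpy as np

def project_to_ball(x, center, radius):
    """Euclidean projection onto the ball {y : ||y - center|| <= radius}.
    Points inside the ball are left unchanged; points outside are pulled
    back to the boundary along the ray from the center."""
    v = x - center
    norm = np.linalg.norm(v)
    if norm <= radius:
        return x
    return center + (radius / norm) * v

p = project_to_ball(np.array([3.0, 4.0]), center=np.zeros(2), radius=1.0)
# p == [0.6, 0.8]: the point at distance 5 is scaled back to the unit sphere
```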

Global convergence of trust-region algorithms for convex constrained minimization without derivatives

Applied Mathematics and Computation, 2013

In this work we propose a trust-region algorithm for the problem of minimizing a function within a closed convex domain. We assume that the objective function is differentiable but that no derivatives are available. The algorithm has a very simple structure and allows a great deal of freedom in the choice of the models. Under reasonable assumptions for derivative-free schemes, we prove global convergence for the algorithm, that is, every accumulation point of the sequence generated by the algorithm is stationary.
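
As a rough illustration of the derivative-free setting, the sketch below builds a linear model from function values alone, by sampling along coordinate directions at the trust-region scale. The sampling scheme and names are assumptions for the example, not the paper's model choice.

```python
import numpy as np

def sampled_linear_model(f, x, delta):
    """Build a linear model of f around x from n + 1 function values only
    (no derivatives): sample f along each coordinate direction scaled by
    the trust-region radius and solve the interpolation conditions, which
    here reduce to forward-difference slopes."""
    fx = f(x)
    g = np.array([(f(x + delta * e) - fx) / delta
                  for e in np.eye(x.size)])
    return fx, g                       # model m(d) = fx + g.d

fx, g = sampled_linear_model(lambda x: (x ** 2).sum(), np.array([1.0, 2.0]), 0.1)
# g approximates the true gradient [2, 4] using function values alone
```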

A new modified trust region algorithm for solving unconstrained optimization problems

Journal of Mathematical Extension, 2018

Iterative methods for optimization can be classified into two categories: line search methods and trust region methods. In this paper, we propose a modified regularized Newton method without line search for minimizing nonconvex functions whose Hessian matrix may be singular. The proposed method is proved to converge globally if the gradient and Hessian of the objective function are Lipschitz continuous. Moreover, we report numerical results showing that the proposed algorithm is competitive with existing methods.
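
A minimal sketch of the regularized-Newton idea, assuming the common rule mu = c * ||g||: adding mu * I to the Hessian keeps the Newton system solvable even when the Hessian is singular. The constant c and the regularization rule here are illustrative, not the paper's exact method.

```python
import numpy as np

def regularized_newton_step(grad, hess, c=1.0):
    """Regularized Newton direction: solve (H + mu * I) d = -g with
    mu = c * ||g||, so the linear system remains solvable even when the
    Hessian H is singular, and mu vanishes as the gradient does."""
    g, H = np.asarray(grad), np.asarray(hess)
    mu = c * np.linalg.norm(g)
    return np.linalg.solve(H + mu * np.eye(len(g)), -g)

# Singular Hessian example: H has a zero eigenvalue, so a plain Newton
# step (solving H d = -g) would fail here.
d = regularized_newton_step(grad=[1.0, 1.0], hess=[[1.0, 0.0], [0.0, 0.0]])
```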

Convergence to a second-order point of a trust-region algorithm with a nonmonotonic penalty parameter for constrained optimization

Journal of Optimization Theory and Applications, 1996

In a recent paper (Ref. 1), the author proposed a trust-region algorithm for solving the problem of minimizing a nonlinear function subject to a set of equality constraints. The main feature of the algorithm is that the penalty parameter in the merit function can be decreased whenever it is warranted. He studied the behavior of the penalty parameter and proved several global and local convergence results. One of these results is that there exists a subsequence of the iterates generated by the algorithm that converges to a point that satisfies the first-order necessary conditions. In the current paper, we show that, for this algorithm, there exists a subsequence of iterates that converges to a point that satisfies both the first-order and the second-order necessary conditions.
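
For readers unfamiliar with merit functions, the hedged sketch below shows a generic penalty merit function for equality-constrained problems and the role the penalty parameter plays; the specific algorithm and update rule of Ref. 1 are not reproduced here.

```python
import numpy as np

def merit(f, c, x, rho):
    """Generic penalty merit function for equality-constrained problems:
    phi(x; rho) = f(x) + rho * ||c(x)||. Larger rho weights feasibility
    more heavily; classical schemes only ever increase rho, whereas the
    cited algorithm's feature is letting rho decrease when warranted."""
    return f(x) + rho * np.linalg.norm(c(x))

# minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0
phi = merit(lambda x: (x ** 2).sum(),
            lambda x: np.array([x[0] + x[1] - 1.0]),
            x=np.array([0.5, 0.5]), rho=10.0)
# at the feasible point (0.5, 0.5) the penalty term vanishes: phi == 0.5
```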

A version of bundle method with linear programming

Bundle methods have been intensively studied for solving both convex and nonconvex optimization problems. In most of the bundle methods developed thus far, at least one quadratic programming (QP) subproblem must be solved in each iteration. In this paper, we investigate the feasibility of a bundle algorithm that solves only linear subproblems. We start from the minimization of a convex function and show that the sequence of major iterations converges to a minimizer. For nonconvex functions we consider functions that are locally Lipschitz continuous and prox-regular on a bounded level set, and minimize the cutting-plane model over a trust region in the infinity norm. The para-convexity of such functions allows us to use the locally convexified model and its convexity properties. Under suitable conditions and assumptions, we study the convergence of the proposed algorithm through the outer semicontinuity of the proximal mapping. Encouraging results of preliminary numerical experiments on standard test sets are provided.
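
The core linear subproblem can be made concrete: minimizing a cutting-plane model over an infinity-norm trust region is a linear program in the original variables plus one epigraph variable. The sketch below, with illustrative data and scipy's linprog, shows that LP; it is not the paper's full algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def lp_bundle_step(points, fvals, subgrads, center, delta):
    """One LP subproblem of a bundle method: minimize the cutting-plane
    model  max_j [ f(x_j) + g_j.(x - x_j) ]  over ||x - center||_inf <= delta.
    With an epigraph variable t this is the LP
        min t  s.t.  t >= f_j + g_j.(x - x_j)  for every bundle element j,
    and the infinity-norm ball becomes simple bounds on x."""
    n = center.size
    c = np.r_[np.zeros(n), 1.0]                    # objective: minimize t
    A = np.hstack([subgrads, -np.ones((len(fvals), 1))])
    b = np.array([g @ xj - fj for xj, fj, g in zip(points, fvals, subgrads)])
    bounds = [(ci - delta, ci + delta) for ci in center] + [(None, None)]
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    return res.x[:n], res.x[n]                     # trial point, model value

# Bundle for f(x) = |x| in 1D: two cuts, taken at x = 1 and x = -1
x_new, t = lp_bundle_step(points=np.array([[1.0], [-1.0]]),
                          fvals=np.array([1.0, 1.0]),
                          subgrads=np.array([[1.0], [-1.0]]),
                          center=np.array([1.0]), delta=2.0)
# the model minimum is at x = 0 with model value 0
```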

Survey of trust-region derivative free optimization methods

2007

In this survey article we give a basic description of interpolation-based derivative-free optimization methods and their variants. We review recent contributions dealing with maintaining the geometry of the interpolation set, the management of the trust region radius, and stopping criteria. Derivative-free algorithms developed for problems with special structure, such as partially separable functions, are also discussed.
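
The basic building block of the interpolation-based methods surveyed here is fitting a model to function values on an interpolation set. A minimal 1D sketch, assuming three sample points (which determine a quadratic exactly):

```python
import numpy as np

def quadratic_interpolation_model(f, sample_points):
    """Fit a quadratic model to function values on an interpolation set.
    In 1D three points determine the quadratic exactly; in n dimensions
    (n+1)(n+2)/2 well-poised points are needed, and the surveyed methods
    spend much of their effort keeping that set well poised."""
    xs = np.asarray(sample_points, dtype=float)
    return np.polyfit(xs, [f(x) for x in xs], deg=2)  # [a, b, c] of a*x^2 + b*x + c

coeffs = quadratic_interpolation_model(np.cosh, sample_points=[-0.5, 0.0, 0.5])
model_min = -coeffs[1] / (2 * coeffs[0])  # minimizer of the fitted quadratic, here ~0
```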

An efficient nonmonotone trust-region method for unconstrained optimization

Numerical Algorithms, 2012

Monotone trust-region methods are well-known techniques for solving unconstrained optimization problems. While it is known that nonmonotone strategies can improve both the likelihood of finding the global optimum and the numerical performance of an approach, the traditional nonmonotone strategy has some disadvantages. To overcome these drawbacks, we introduce a variant nonmonotone strategy and incorporate it into a trust-region framework to construct a more reliable approach. The new nonmonotone strategy is a convex combination of the maximum function value over some prior successful iterates and the current function value. It is proved that the proposed algorithm possesses global convergence to first-order and second-order stationary points under some classical assumptions. Preliminary numerical experiments indicate that the new approach is quite promising for solving unconstrained optimization problems.
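
The reference value described in the abstract can be written down directly. A minimal sketch, with eta and the history length as illustrative parameters rather than the paper's exact choices:

```python
from collections import deque

def nonmonotone_reference(history, f_current, eta=0.8):
    """Nonmonotone reference value as described in the abstract: a convex
    combination of the maximum function value over some prior successful
    iterates and the current value,
        R_k = eta * max(history) + (1 - eta) * f_k.
    A trial step is judged against R_k instead of f_k alone, so moderate
    increases in the objective can still be accepted."""
    return eta * max(history) + (1.0 - eta) * f_current

history = deque([5.0, 3.5, 4.2], maxlen=5)   # f-values of recent successful iterates
R_k = nonmonotone_reference(history, f_current=3.0)
# R_k = 0.8 * 5.0 + 0.2 * 3.0 = 4.6, so a trial point with f = 4.0 could be
# accepted even though it increases the current value f = 3.0
```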