A nonmonotone trust-region line search method for large-scale unconstrained optimization
Related papers
Two globally convergent nonmonotone trust-region methods for unconstrained optimization
Journal of Applied Mathematics and Computing, 2015
This paper addresses some trust-region methods equipped with nonmonotone strategies for solving nonlinear unconstrained optimization problems. More specifically, the importance of using nonmonotone techniques in nonlinear optimization is motivated, then two new nonmonotone terms are proposed, and their combinations with the traditional trust-region framework are studied. Global convergence to first- and second-order stationary points and local superlinear and quadratic convergence rates are established for both algorithms. Numerical experiments on the CUTEst test collection of unconstrained problems and some highly nonlinear test functions are reported, where a comparison with state-of-the-art nonmonotone trust-region methods shows the efficiency of the proposed nonmonotone schemes.
An efficient nonmonotone trust-region method for unconstrained optimization
Numerical Algorithms, 2012
Monotone trust-region methods are well-known techniques for solving unconstrained optimization problems. Although nonmonotone strategies can improve both the likelihood of finding the global optimum and the numerical performance of these approaches, the traditional nonmonotone strategy has some drawbacks. To overcome them, we introduce a variant nonmonotone strategy and incorporate it into the trust-region framework to construct a more reliable approach. The new nonmonotone strategy is a convex combination of the maximum function value over some prior successful iterates and the current function value. It is proved that the proposed algorithm possesses global convergence to first-order and second-order stationary points under some classical assumptions. Preliminary numerical experiments indicate that the new approach is considerably promising for solving unconstrained optimization problems.
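The nonmonotone term described above admits a straightforward reading: the reference value compared against the trial point is a convex combination of the largest objective value over a window of recent successful iterates and the current value. The sketch below illustrates this idea in Python; the weight eta, the memory length, and the surrounding trust-region logic are illustrative assumptions, not the paper's exact definitions.

```python
from collections import deque

def nonmonotone_reference(f_history, f_current, eta=0.7):
    """Return R_k = eta * max(recent successful f-values) + (1 - eta) * f_k."""
    f_max = max(f_history) if f_history else f_current
    return eta * f_max + (1.0 - eta) * f_current

# Schematic use inside a trust-region loop:
history = deque(maxlen=10)      # f-values of the last successful iterates
# at iteration k, with trial step d_k and predicted reduction pred_k:
#   R_k   = nonmonotone_reference(history, f_k)
#   rho_k = (R_k - f(x_k + d_k)) / pred_k    # nonmonotone trust-region ratio
#   if rho_k >= mu: accept the step and history.append(f_k)
```

Replacing f_k by R_k in the ratio makes occasional increases of the objective acceptable, which is precisely what the nonmonotone strategy is meant to allow.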
A hybrid of adjustable trust-region and nonmonotone algorithms for unconstrained optimization
Applied Mathematical Modelling, 2014
This study is devoted to incorporating a nonmonotone strategy with an automatically adjusted trust-region radius in order to propose a more efficient hybrid trust-region approach for unconstrained optimization. The primary objective of the paper is to introduce a more relaxed trust-region approach based on a novel extension of the trust-region ratio and radius. The next aim is to employ stronger nonmonotone strategies, i.e. bigger trust-region ratios, far from the optimizer and weaker nonmonotone strategies, i.e. smaller trust-region ratios, close to the optimizer. Global convergence to first-order stationary points as well as local superlinear and quadratic convergence rates are also proved under some reasonable conditions. Some preliminary numerical results and comparisons are also reported.
An adaptive nonmonotone trust-region method with curvilinear search for minimax problem
In this paper we propose an adaptive nonmonotone algorithm for the minimax problem. Unlike traditional nonmonotone methods, the nonmonotone technique applied to our method is based on the one proposed by Zhang and Hager [H.C. Zhang, W.W. Hager, A nonmonotone line search technique and its application to unconstrained optimization, SIAM J. Optim. 14 (2004) 1043-1056] instead of that presented by Grippo et al. [L. Grippo, F. Lampariello, S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal. 23(4) (1986) 707-716]. Meanwhile, by using an adaptive technique, the method can adaptively perform a nonmonotone trust-region step or a nonmonotone curvilinear search step when the solution of the subproblem is unacceptable. Global and superlinear convergence of the method are obtained under suitable conditions. Preliminary numerical results are reported to show the effectiveness of the proposed algorithm.
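For context, the two nonmonotone reference values contrasted in this abstract can be sketched as follows: the Grippo-style term takes the maximum objective value over a sliding window, while the Zhang-Hager term maintains a weighted average of all past values. The fixed weight eta_k and window length used here are illustrative choices only.

```python
def grippo_reference(f_values, M=10):
    """Grippo et al. (1986): max objective value over the last min(len, M) iterates."""
    return max(f_values[-M:])

def zhang_hager_update(C_k, Q_k, f_next, eta_k=0.85):
    """Zhang & Hager (2004) averaged reference value:
       Q_{k+1} = eta_k*Q_k + 1,
       C_{k+1} = (eta_k*Q_k*C_k + f_{k+1}) / Q_{k+1},  with C_0 = f_0, Q_0 = 1."""
    Q_next = eta_k * Q_k + 1.0
    C_next = (eta_k * Q_k * C_k + f_next) / Q_next
    return C_next, Q_next
```

The averaged term C_k typically changes more smoothly than the max-based term, which is the property the abstract leans on when preferring the Zhang-Hager technique.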
A Nonmonotone trust region method with adaptive radius for unconstrained optimization problems
Computers & Mathematics with Applications, 2010
In this paper, we incorporate a nonmonotone technique with the newly proposed adaptive trust-region radius (Shi and Guo, 2008) [4] in order to propose a new nonmonotone trust-region method with an adaptive radius for unconstrained optimization. Both nonmonotone techniques and adaptive trust-region radius strategies can improve trust-region methods in the sense of global convergence. Global convergence to first- and second-order critical points, together with local superlinear and quadratic convergence of the new method, is established under some suitable conditions. Numerical results show that the new method is very efficient and robust for unconstrained optimization problems.
An Improved Adaptive Trust-Region Method for Unconstrained Optimization
Mathematical Modelling and Analysis, 2014
In this study, we propose a trust-region-based procedure to solve unconstrained optimization problems that takes advantage of the nonmonotone technique to introduce an efficient adaptive radius strategy. In our approach, the adaptive technique decreases the total number of iterations, while utilizing the structure of the nonmonotone formula helps us to handle large-scale problems. The new algorithm preserves global convergence and has quadratic convergence under suitable conditions. Preliminary numerical experiments on standard test problems indicate the efficiency and robustness of the proposed approach for solving unconstrained optimization problems.
A new modified trust region algorithm for solving unconstrained optimization problems
Journal of Mathematical Extension, 2018
Iterative methods for optimization can be classified into two categories: line search methods and trust region methods. In this paper, we propose a modified regularized Newton method without line search for minimizing nonconvex functions whose Hessian matrix may be singular. The proposed method is proved to converge globally if the gradient and Hessian of the objective function are Lipschitz continuous. Moreover, we report numerical results showing that the proposed algorithm is competitive with existing methods.
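A regularized Newton step of the general kind this abstract refers to replaces the possibly singular Hessian system with a shifted one. The sketch below shows one common variant in which the shift is tied to the gradient norm; the paper's specific regularization rule and globalization mechanism may differ.

```python
import numpy as np

def regularized_newton_step(g, H, c=1.0):
    """Solve (H + mu*I) d = -g with mu proportional to ||g||, so the linear
    system remains well-posed even when H is singular or indefinite."""
    g = np.asarray(g, dtype=float)
    H = np.asarray(H, dtype=float)
    mu = c * np.linalg.norm(g)                # illustrative regularization rule
    return np.linalg.solve(H + mu * np.eye(H.shape[0]), -g)
```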
An inexact line search approach using modified nonmonotone strategy for unconstrained optimization
Numerical Algorithms, 2014
This paper is concerned with a new nonmonotone strategy and its application to line search approaches for unconstrained optimization. It is believed that nonmonotone techniques can improve the possibility of finding the global optimum and increase the convergence rate of algorithms. We first introduce a new nonmonotone strategy consisting of a convex combination of the maximum function value over some preceding successful iterates and the current function value. We then incorporate the proposed nonmonotone strategy into an inexact Armijo-type line search approach to construct a more relaxed line search procedure. Global convergence to first-order stationary points is subsequently proved and an R-linear convergence rate is established under suitable assumptions. Preliminary numerical results finally show the efficiency and robustness of the proposed approach for solving unconstrained nonlinear optimization problems.
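An Armijo-type line search relaxed by a nonmonotone reference value R_k (for example, the convex combination described above) can be sketched as follows; the parameters and the exact acceptance rule are assumptions for illustration, not the paper's.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, R_k, sigma=1e-4, beta=0.5, alpha=1.0, max_backtracks=50):
    """Backtrack until f(x + alpha*d) <= R_k + sigma*alpha*g^T d, where R_k is a
    nonmonotone reference value used in place of the usual f(x)."""
    slope = float(g @ d)          # negative for a descent direction
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= R_k + sigma * alpha * slope:
            break
        alpha *= beta
    return alpha
```

Because R_k is at least as large as f(x_k), this test accepts every step the classical Armijo rule accepts, which is why such searches tend to terminate with fewer backtracks.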
A limited memory adaptive trust-region approach for large-scale unconstrained optimization
Bulletin of the Iranian Mathematical Society
This study concerns a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited-memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique reduces the number of subproblems to be solved, while utilizing the structure of limited-memory quasi-Newton formulas helps to handle large-scale problems. Theoretical analysis indicates that the new approach preserves global convergence to first-order stationary points under classical assumptions. Moreover, superlinear and quadratic convergence rates are also established under suitable conditions. Preliminary numerical experiments on some standard test problems show the effectiveness of the proposed approach for solving large-scale unconstrained optimization problems. MSC (2010): Primary 90C30; Secondary 65K05, 65K10.
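The compact limited-memory BFGS representation mentioned here expresses the Hessian approximation as a low-rank correction of a scaled identity, which keeps matrix-vector products cheap inside the trust-region subproblem. The sketch below follows the standard Byrd-Nocedal-Schnabel compact form; the scaling sigma and the storage layout are assumptions and not necessarily the paper's choices.

```python
import numpy as np

def compact_lbfgs_matvec(v, S, Y, sigma):
    """Compute B_k @ v with the compact representation
         B_k = sigma*I - W M^{-1} W^T,   W = [sigma*S, Y],
         M = [[sigma*S^T S, L], [L^T, -D]],
       where the columns of S, Y are the m stored (s_i, y_i) pairs, L is the
       strictly lower-triangular part of S^T Y, and D = diag(s_i^T y_i)."""
    StY = S.T @ Y
    L = np.tril(StY, k=-1)
    D = np.diag(np.diag(StY))
    W = np.hstack([sigma * S, Y])
    M = np.block([[sigma * (S.T @ S), L],
                  [L.T,               -D]])
    return sigma * v - W @ np.linalg.solve(M, W.T @ v)
```

With m pairs stored, the cost per product is O(mn) plus the solve with the small 2m-by-2m matrix M, which is what makes this representation attractive for large-scale problems.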
A derivative-free nonmonotone line-search technique for unconstrained optimization
Journal of Computational …, 2008
A tolerant derivative-free nonmonotone line search technique is proposed and analyzed. Several consecutive increases in the objective function and also non-descent directions are allowed for unconstrained minimization. To exemplify the power of this new line search, we describe a direct search algorithm in which the directions are chosen randomly. The convergence properties of this random method rely exclusively on the line search technique. We present numerical experiments, to illustrate the advantages of using a derivative-free nonmonotone globalization strategy, with approximate-gradient-type methods and also with the inverse SR1 update, which could produce non-descent directions. In all cases we use a local-variation finite-difference approximation to the gradient.
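The finite-difference gradient approximation mentioned at the end can be pictured with a plain central-difference scheme, as in the sketch below; the paper's "local variation" stepsize rule is not reproduced here, so the stepsize choice is only a generic stand-in.

```python
import numpy as np

def fd_gradient(f, x, h=None):
    """Central-difference gradient approximation; a generic stand-in for the
    local-variation finite-difference scheme used in the paper."""
    x = np.asarray(x, dtype=float)
    if h is None:
        h = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x, np.inf))
    g = np.zeros_like(x)
    step = np.zeros_like(x)
    for i in range(x.size):
        step[i] = h
        g[i] = (f(x + step) - f(x - step)) / (2.0 * h)
        step[i] = 0.0
    return g
```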