A New Modified Conjugate Gradient Method and Its Global Convergence Theorem

Modified new conjugate gradient method for unconstrained optimization

Tikrit Journal of Pure Science, 2019

The current paper presents a modified conjugate gradient method for solving unconstrained optimization problems. Convergence of the modified method is established under some hypotheses. The numerical results demonstrate that the modified method is efficient for solving unconstrained nonlinear optimization problems in comparison with the FR and HS methods. http://dx.doi.org/10.25130/tjps.24.2019.095
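
For context (these are the standard definitions, not the paper's modified formula): every method in this list builds on the same iteration. A nonlinear conjugate gradient method updates

x_{k+1} = x_k + \alpha_k d_k, \qquad d_0 = -g_0, \qquad d_{k+1} = -g_{k+1} + \beta_k d_k,

where g_k = \nabla f(x_k), \alpha_k is the line-search step size, and \beta_k is the coefficient that defines each method. The FR and HS methods used for comparison above correspond to the classical choices

\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad \beta_k^{HS} = \frac{g_{k+1}^\top y_k}{d_k^\top y_k}, \qquad y_k = g_{k+1} - g_k.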

Convergence analysis of a new conjugate gradient method for unconstrained optimization

Applied Mathematical Sciences, 2015

In this paper, the convergence analysis of a proposed new conjugate gradient method for unconstrained optimization problems is considered. The method inherits an important property of the Polak-Ribiere-Polyak (PRP) method. Under the exact line search condition, we establish both the descent condition and the global convergence of the method. Numerical results show that our formula is effective compared with some existing formulas.
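
For reference (the abstract does not reproduce the new formula itself), the standard Polak-Ribiere-Polyak coefficient is

\beta_k^{PRP} = \frac{g_{k+1}^\top (g_{k+1} - g_k)}{\|g_k\|^2}.

Unlike \beta_k^{FR}, it tends toward zero when consecutive gradients are nearly identical, so the direction automatically resets toward steepest descent; this built-in restart behavior is the property most often cited as the practical advantage of PRP.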

Two modified conjugate gradient methods for solving unconstrained optimization and application

RAIRO - Operations Research

Conjugate gradient methods are a popular class of iterative methods for solving linear systems of equations and nonlinear optimization problems, as they do not require the storage of any matrices. In order to obtain a theoretically effective and numerically efficient method, two modified conjugate gradient methods (called the MCB1 and MCB2 methods) are proposed, in which the coefficient βk is inspired by the structure of the conjugate gradient parameters in some existing conjugate gradient methods. Under the strong Wolfe line search, the sufficient descent property and global convergence of the MCB1 method are proved. Moreover, the MCB2 method generates a descent direction independently of any line search and produces good convergence properties when the strong Wolfe line search is employed. Preliminary numerical results show that the MCB1 and MCB2 methods are effective and robust in minimizing some unconstrained optimization problems and each of these mod...
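
As a reminder of the standard definition (the MCB1/MCB2 formulas themselves are not reproduced here), the strong Wolfe line search referred to throughout these abstracts requires the step size \alpha_k to satisfy

f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^\top d_k, \qquad |g(x_k + \alpha_k d_k)^\top d_k| \le \sigma |g_k^\top d_k|,

for constants 0 < \delta < \sigma < 1: sufficient decrease plus a two-sided curvature bound.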

The Convergence Properties of a New Kind of Conjugate Gradient Method for Unconstrained Optimization

Conjugate gradient (CG) methods are the most prominent technique for solving large-scale unconstrained optimization problems, due to their robustness, low memory requirement, and global convergence properties. Numerous studies and modifications have been carried out recently to improve these methods. In this paper, a new modification of a CG coefficient that possesses the global convergence properties is presented. The global convergence result is validated using exact line search. Several numerical experiments show that the proposed formula is robust and efficient when compared with other CG coefficients.
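
For reference, the exact line search used to validate the convergence result chooses

\alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha d_k),

which implies the orthogonality g_{k+1}^\top d_k = 0; convergence proofs of this type typically rely on exactly this identity.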

Two new conjugate gradient methods in unconstrained optimization problems

Annals of Mathematics and Computer Science

Zheng Y. and Zheng B. [14] modified the Dai-Liao conjugate gradient method to obtain two new Dai-Liao-type conjugate gradient methods. These methods were shown to satisfy the descent condition under the strong Wolfe line search, and convergence for objective functions was also guaranteed. In this work, two new conjugate gradient methods are introduced in line with the work of Zheng Y. and Zheng B. [14] by changing the first term in the AyO-CG method [3] to solve unconstrained nonlinear optimization problems. The descent properties of these methods are established, and convergence analyses under the strong Wolfe conditions show that they are globally convergent. A comparison of the numerical strength of these methods with the two modified Dai-Liao-type methods, based on the Dolan-Moré performance profile, shows that our methods compare favorably with them.
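
For background (this is the standard Dai-Liao construction, not the two new methods of this paper): Dai-Liao-type methods replace the classical conjugacy condition with

d_{k+1}^\top y_k = -t\, g_{k+1}^\top s_k, \qquad t \ge 0, \qquad s_k = x_{k+1} - x_k,

which, combined with d_{k+1} = -g_{k+1} + \beta_k d_k, yields the coefficient

\beta_k^{DL} = \frac{g_{k+1}^\top y_k}{d_k^\top y_k} - t\, \frac{g_{k+1}^\top s_k}{d_k^\top y_k}.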

A modified nonlinear conjugate gradient method for unconstrained optimization

Applied Mathematical Sciences, 2015

The nonlinear conjugate gradient method holds an important role in solving large-scale unconstrained optimization problems. Its simplicity, low memory requirement, and global convergence have stimulated a massive study of the method. Numerous modifications have been made recently to improve its performance. In this paper, we propose a new formula for the conjugate gradient coefficient βk that generates a descent search direction. In addition, we establish the global convergence result under exact line search. The outcomes of our numerical experiments show that the proposed formula is very efficient and more reliable when compared to other conjugate gradient methods.
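
Since several abstracts in this list describe essentially the same loop, a minimal sketch may be useful. This is a generic nonlinear conjugate gradient implementation under illustrative assumptions: a PRP+ coefficient and a simple Armijo backtracking line search, not the βk formula proposed in this paper nor the exact line search its proof assumes.

import numpy as np

def cg_minimize(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG sketch with a PRP+ coefficient (illustrative)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                      # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Armijo backtracking line search (the papers above typically
        # assume an exact or strong Wolfe search in their proofs).
        alpha, c1, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PRP+ coefficient: nonnegative part of Polak-Ribiere-Polyak.
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:   # restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: minimize f(x) = ||x||^2 from the all-ones start.
x_star = cg_minimize(lambda x: x.dot(x), lambda x: 2.0 * x, np.ones(5))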

New Hybrid Conjugate Gradient Method with Global Convergence Properties for Unconstrained Optimization

The nonlinear conjugate gradient (CG) method holds an important role in solving large-scale unconstrained optimization problems. In this paper, we suggest a new modification of the CG coefficient βk that satisfies the sufficient descent condition and possesses the global convergence property under the strong Wolfe line search. The numerical results show that our new method is more efficient compared with the other CG formulas tested.
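
For reference (the modified βk itself is not reproduced here), the sufficient descent condition mentioned in this and several abstracts above requires a constant c > 0 such that

g_k^\top d_k \le -c \|g_k\|^2 \quad \text{for all } k,

i.e., the search direction remains a uniformly good descent direction, not merely one with g_k^\top d_k < 0.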

A modified nonlinear conjugate gradient algorithm for unconstrained optimization

2009

Nonlinear conjugate gradient methods are among the most popular and straightforward methods for solving smooth optimization problems. Due to their simplicity and low memory requirements, they are especially attractive for solving large-scale smooth problems. Conjugate gradient methods make use of the gradient and the previous direction to determine the next search direction, and they require no numerical linear algebra. However, nonlinear conjugate gradient methods have not been widely employed in solving nonsmooth optimization problems. In this paper, a modified nonlinear conjugate gradient method, which achieves both the global convergence property and numerical efficiency, is proposed to solve large-scale nonsmooth convex problems. The search direction of the new method satisfies the sufficient descent property and belongs to a trust region. Under some suitable conditions, the global convergence of the proposed algorithm is established for nonsmooth convex problems. The numerical efficiency of the proposed algorithm is tested and compared with some existing methods on large-scale nonsmooth academic test problems. The numerical results show that the new algorithm performs very well in solving large-scale nonsmooth problems.
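
One common way this "sufficient descent plus trust region" direction property is formalized in the CG literature (a sketch of the general pattern, not a quotation of this paper's exact conditions) is the pair of bounds

g_k^\top d_k \le -c_1 \|g_k\|^2 \qquad \text{and} \qquad \|d_k\| \le c_2 \|g_k\|,

for constants c_1, c_2 > 0, so the direction is both a genuine descent direction and automatically bounded in norm.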

A new conjugate gradient algorithm using the conjugacy condition for solving unconstrained optimization

2021

The primary objective of this paper, which lies in the field of conjugate gradient algorithms for unconstrained optimization problems, is to show the advantage of the newly proposed algorithm in comparison with the standard Hestenes-Stiefel (HS) method. Since the conjugate gradient parameter is crucial, we propose a simple modification of it, which is used to derive the new formula for the conjugate gradient update parameter described in this paper. Our modification is based on the conjugacy condition for nonlinear conjugate gradient methods, with a nonnegative parameter added to suggest the new extension of the method. Under mild Wolfe conditions, the global convergence theorem and the supporting lemmas are stated and proved. The proposed method was implemented, and its efficiency is demonstrated by numerical examples, with very encouraging results.
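
For background, the classical conjugacy condition referred to here is d_{k+1}^\top y_k = 0 with y_k = g_{k+1} - g_k. Imposing it on d_{k+1} = -g_{k+1} + \beta_k d_k and solving for \beta_k recovers the Hestenes-Stiefel baseline

\beta_k^{HS} = \frac{g_{k+1}^\top y_k}{d_k^\top y_k};

the paper's own extension with its added nonnegative parameter is not reproduced here.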

A New Conjugate Gradient for Unconstrained Optimization Based on Step Size of Barzilai and Borwein

Science Journal of University of Zakho, 2016

In this paper, a new formula for the coefficient βk is suggested for the conjugate gradient method for solving unconstrained optimization problems, based on the step size of Barzilai and Borwein. Our newly proposed CG method satisfies the descent condition, the sufficient descent condition, and global convergence properties. Numerical comparisons with a standard conjugate gradient algorithm show that this algorithm is very effective with respect to the number of iterations and the number of function evaluations.
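
For reference (the paper's derived βk formula is not reproduced here), the two Barzilai-Borwein step sizes on which the construction is based are

\alpha_k^{BB1} = \frac{s_{k-1}^\top s_{k-1}}{s_{k-1}^\top y_{k-1}}, \qquad \alpha_k^{BB2} = \frac{s_{k-1}^\top y_{k-1}}{y_{k-1}^\top y_{k-1}},

with s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}; they approximate a quasi-Newton scaling of the gradient step without storing any matrix.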