The local modified extrapolated Gauss–Seidel (LMEGS) method
Related papers
Analysis of Convergence of Jacobi and Gauss-Seidel Method and Error Minimization
In this research we show that neither iterative method always converges: applied to an arbitrary system of linear equations, the Jacobi and Gauss-Seidel methods often fail to converge, yielding a divergent sequence of approximations; in such cases the method is termed divergent. Therefore, diagonal dominance of the coefficient matrix should be verified before applying either iterative method, since strict diagonal dominance is a sufficient condition for convergence. The error reduction factor at each iteration of the Jacobi and Gauss-Seidel methods is also discussed.
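The convergence precondition discussed in this abstract can be sketched as follows; the matrix, right-hand side, and tolerances below are illustrative assumptions, not data from the paper:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Sufficient convergence condition for Jacobi and Gauss-Seidel:
    each diagonal entry exceeds the sum of the other entries in its row."""
    d = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - d
    return bool(np.all(d > off))

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Plain Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Illustrative strictly diagonally dominant system
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])

assert is_strictly_diagonally_dominant(A)  # safe to iterate
x, iters = jacobi(A, b)
```

Dominance is checked before iterating, as the abstract recommends; on a matrix failing the check, the same loop may diverge.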
The m-order Jacobi, Gauss–Seidel and symmetric Gauss–Seidel methods
Pesquisa e Ensino em Ciências Exatas e da Natureza, 2022
Here, m-order methods are developed that preserve the form of the first-order methods. The m-order methods have a higher rate of convergence than their first-order versions. These m-order methods are subsequences of their precursor methods, in which some benefits of vector and parallel processors can be exploited. Numerical results obtained with vector implementations show computational advantages over the first-order versions.
Generalized Jacobi and Gauss-Seidel Methods for Solving Linear System of Equations
2007
The Jacobi and Gauss-Seidel algorithms are among the stationary iterative methods for solving linear systems of equations. They are now mostly used as preconditioners for the popular iterative solvers. In this paper a generalization of these methods is proposed and its convergence properties are studied. Some numerical experiments are given to show the efficiency of the new methods.
A Parallel Multiparametric Gauss-Seidel Method
Numerical Mathematics and Advanced Applications, 2006
In this paper we present the convergence analysis of the Local Modified Extrapolated Gauss-Seidel (LMEGS) method. The related theory of convergence is developed. Convergence ranges and optimum values for the parameters involved in the LMEGS method are obtained. It is proved that even if λ, the smallest-in-absolute-value eigenvalue of the iteration matrix of the Jacobi method, becomes larger than unity, LMEGS will converge. In fact, the larger λ, the faster the convergence of LMEGS.
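LMEGS itself uses local, per-equation extrapolation parameters whose optimal values are derived in the paper; as a rough illustration of the extrapolation idea only, here is a plain extrapolated Gauss-Seidel sweep with a single global parameter omega (the matrix and omega value below are illustrative assumptions):

```python
import numpy as np

def extrapolated_gauss_seidel(A, b, omega=1.0, tol=1e-10, max_iter=1000):
    """One Gauss-Seidel sweep per step, then extrapolate:
    x_{k+1} = (1 - omega) * x_k + omega * x_GS."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        # full Gauss-Seidel sweep, using freshest values within the sweep
        x_gs = x.copy()
        for i in range(n):
            s = b[i] - A[i, :i] @ x_gs[:i] - A[i, i + 1:] @ x_gs[i + 1:]
            x_gs[i] = s / A[i, i]
        # extrapolate between the old iterate and the sweep result
        x_new = (1 - omega) * x + omega * x_gs
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
x, iters = extrapolated_gauss_seidel(A, b, omega=1.1)
```

With omega = 1 this reduces to ordinary Gauss-Seidel; tuning omega is what the paper's local parameters do per equation rather than globally.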
Jacobi and Gauss-Seidel Iterative Methods for the Solution of Systems of Linear Equations Comparison
In our review paper we compare two iterative methods for solving systems of linear equations; these iterative methods are used for solving both sparse and dense systems. The methods considered here are the Jacobi method and the Gauss-Seidel method. The results show that the Gauss-Seidel method is more efficient than the Jacobi method, requiring fewer iterations to converge and achieving higher accuracy.
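The comparison the abstract describes can be reproduced in a few lines: run both methods on the same system and count iterations. The test matrix below is an illustrative assumption, not one from the paper:

```python
import numpy as np

def solve(A, b, method="jacobi", tol=1e-10, max_iter=10000):
    """Run Jacobi or Gauss-Seidel; return (solution, iteration count)."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            if method == "jacobi":
                # Jacobi: every update uses only last sweep's values
                s = b[i] - A[i] @ x_old + A[i, i] * x_old[i]
            else:
                # Gauss-Seidel: use the freshest values already computed
                s = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = s / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

A = np.array([[10.0, 2.0, 1.0],
              [2.0, 10.0, 1.0],
              [1.0, 1.0, 10.0]])
b = np.array([13.0, 13.0, 12.0])
x_j, n_j = solve(A, b, "jacobi")
x_gs, n_gs = solve(A, b, "gs")
```

On diagonally dominant systems like this one, Gauss-Seidel typically needs noticeably fewer sweeps than Jacobi, consistent with the paper's conclusion.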
Local convergence analysis of the Gauss-Newton-Kurchatov method
2019
We present a local convergence analysis of the Gauss-Newton-Kurchatov method for solving nonlinear least squares problems with a decomposition of the operator. The method uses the sum of the derivative of the differentiable part of the operator and the divided difference of the nondifferentiable part instead of computing the full Jacobian. A theorem establishing the convergence conditions, radius, and convergence order of the method was proved in (Shakhno 2017); however, that radius of convergence is small in general, limiting the choice of initial points. Using tighter estimates on the distances, under weaker hypotheses (Argyros et al. 2013), we provide an analysis of the Gauss-Newton-Kurchatov method with the following advantages over the corresponding results of (Shakhno 2017): an extended convergence region, finer error distances, and at least as precise information on the location of the solution. Numerical examples illustrate the theoretical results.
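The Kurchatov divided-difference part of the method is beyond a short sketch, but the differentiable skeleton it builds on is the classical Gauss-Newton iteration, shown below on a hypothetical one-parameter fitting problem (problem data and starting point are illustrative assumptions):

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Plain Gauss-Newton for min ||r(x)||^2:
    x_{k+1} = x_k - argmin_s ||J(x_k) s - r(x_k)||.
    The Kurchatov variant would add a divided difference of any
    nondifferentiable part to J; only the smooth part is shown here."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # least-squares solve of the linearized subproblem
        step = np.linalg.lstsq(J(x), r(x), rcond=None)[0]
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy zero-residual problem: recover a in y = exp(a * t)
t = np.array([0.0, 1.0, 2.0])
y = np.exp(0.5 * t)                              # data generated with a = 0.5
r = lambda x: np.exp(x[0] * t) - y               # residual vector
J = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)  # its Jacobian
a = gauss_newton(r, J, [0.0])
```

The local-convergence results in the abstract concern exactly how far the starting point `x0` may sit from the solution while still guaranteeing this iteration converges.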
A New Modified Version Of Gauss-Seidel Iterative Method Using Grouping Relaxation Approach
INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH (ISSN 2277-8616), 2020
Systems of linear equations appear in many areas, either directly, as in modeling physical situations, or indirectly, as in the numerical solution of other mathematical models. Solving such systems is probably the most important task in numerical methods such as the finite element method (FEM), whose use in modeling structures with either simple or complicated configurations of elements became prevalent in structural engineering many years ago. Solvers fall into two main types depending on whether the underlying method is direct or iterative (indirect). In contrast to iterative techniques, direct techniques provide almost exact solutions; however, they are not convenient in some situations, including but not limited to huge systems of equations. In such situations iterative solvers are favored, as they offer advantages in solving speed and storage requirements; in addition, indirect solvers are simpler to program. This research focuses on classical (stationary) iterative techniques for solving linear systems of equations. Its main objective is to develop a new modified version of the well-known Gauss-Seidel (GS) iterative technique adapted to solving problems in structural engineering. The proposed technique remarkably outperforms the GS technique in the required number of iterations and the convergence speed. In this paper the differences between the direct and iterative approaches are discussed, along with a quick overview of some of the methods underlying these two classes. The idea and algorithm of the proposed "Modified Gauss-Seidel" (MGS) technique are then elucidated. Afterward, the algorithm is programmed and used to solve some 2D practical examples, alongside the conventional Jacobi and GS techniques. Finally, the obtained results are compared to assess the proposed MGS; it outperformed both Jacobi and GS.
Mathematics
We study an iterative differential-difference method for solving nonlinear least squares problems which uses, instead of the Jacobian, the sum of the derivative of the differentiable part of the operator and the divided difference of the nondifferentiable part. Moreover, we introduce a method that uses the derivative of the differentiable part instead of the Jacobian. Results from earlier work establishing the convergence conditions, radius, and convergence order of the proposed methods are presented. Numerical examples illustrate the theoretical results.
A Modified Precondition in the Gauss-Seidel Method
Advances in Linear Algebra & Matrix Theory, 2012
In recent years, a number of preconditioners have been applied to solve linear systems with the Gauss-Seidel method (see [1-7,10-12,14-16]). In this paper we use S_l instead of (S + S_m) and compare with M. Morimoto's preconditioner [3] and H. Niki's preconditioner [5] to obtain a better convergence rate. A numerical example is given which shows the advantage of our method.
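The paper's S_l preconditioner is not reproduced here, but the family it belongs to is easy to illustrate: apply Gauss-Seidel to the preconditioned system (I + S) A x = (I + S) b, where S carries the negated first superdiagonal of A after scaling the diagonal to the identity (the classical (I + S) approach the abstract's references build on; the matrix below is an illustrative assumption):

```python
import numpy as np

def preconditioned_gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    """Gauss-Seidel on (I + S) A x = (I + S) b, where S holds the
    negated first superdiagonal of the diagonally-scaled A.
    Sketch of the classical (I + S) preconditioner, not the paper's S_l."""
    n = len(b)
    # scale so that diag(A) = I, as the (I + S) construction assumes
    d = np.diag(A).copy()
    A = A / d[:, None]
    b = b / d
    # S: negated first superdiagonal of the scaled matrix
    S = np.zeros((n, n))
    for i in range(n - 1):
        S[i, i + 1] = -A[i, i + 1]
    P = np.eye(n) + S
    A_hat, b_hat = P @ A, P @ b
    # ordinary Gauss-Seidel sweeps on the preconditioned system
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = b_hat[i] - A_hat[i] @ x + A_hat[i, i] * x[i]
            x[i] = s / A_hat[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([5.0, 6.0, 5.0])
x, iters = preconditioned_gauss_seidel(A, b)
```

The preconditioners compared in the paper differ in which entries of A are folded into S; the iteration itself stays a plain Gauss-Seidel sweep.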