The influence of a matrix condition number on iterative methods' convergence
Related papers
Jacobi and Gauss-Seidel Iterative Methods for the Solution of Systems of Linear Equations Comparison
In our review paper we compare two iterative methods for solving systems of linear equations; such iterative methods are used for solving both sparse and dense systems. The methods considered here are the Jacobi method and the Gauss-Seidel method. The results show that the Gauss-Seidel method is more efficient than the Jacobi method, judged by the maximum number of iterations required to converge and by the accuracy achieved.
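The comparison described above can be reproduced with a short sketch (not the paper's code; the test matrix and tolerance are illustrative choices): both methods are run on the same diagonally dominant system and their iteration counts compared.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (L+U) x_k)."""
    D = np.diag(A)
    R = A - np.diagflat(D)          # off-diagonal part L + U
    x = np.zeros_like(b, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Gauss-Seidel: sweep through the rows using the freshest values."""
    n = len(b)
    x = np.zeros_like(b, dtype=float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k
    return x, max_iter

# Strictly diagonally dominant test system: both methods converge.
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([2., 4., 10.])
xj, kj = jacobi(A, b)
xg, kg = gauss_seidel(A, b)
```

On systems like this one, Gauss-Seidel typically needs roughly half as many iterations as Jacobi, consistent with the paper's conclusion.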
Generalized Jacobi and Gauss-Seidel Methods for Solving Linear System of Equations
2007
The Jacobi and Gauss-Seidel algorithms are among the stationary iterative methods for solving linear systems of equations. They are now mostly used as preconditioners for the popular iterative solvers. In this paper a generalization of these methods is proposed and its convergence properties are studied. Some numerical experiments are given to show the efficiency of the new methods.
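One way to generalize Jacobi along these lines (a sketch of the idea, not necessarily the paper's exact construction) is to replace the diagonal in the splitting with a banded part of A of half-bandwidth m, so that m = 0 recovers classical Jacobi:

```python
import numpy as np

def banded_part(A, m):
    """Keep entries within m diagonals of the main diagonal; zero the rest."""
    n = A.shape[0]
    i, j = np.indices((n, n))
    return np.where(np.abs(i - j) <= m, A, 0.0)

def generalized_jacobi(A, b, m=1, tol=1e-10, max_iter=500):
    """Iterate x_{k+1} = T_m^{-1} (b - (A - T_m) x_k), with T_m the
    banded part of A.  m = 0 is the classical Jacobi method."""
    T = banded_part(A, m)
    R = A - T
    x = np.zeros_like(b, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = np.linalg.solve(T, b - R @ x)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

A = np.array([[6., -2., 1., 0.],
              [-2., 6., -2., 1.],
              [1., -2., 6., -2.],
              [0., 1., -2., 6.]])
b = A @ np.ones(4)                      # exact solution: all ones
x0, k0 = generalized_jacobi(A, b, m=0)  # classical Jacobi
x1, k1 = generalized_jacobi(A, b, m=1)  # tridiagonal splitting
```

In practice the banded solve would use a dedicated banded factorization rather than a dense `np.linalg.solve`; the dense call is kept here only for brevity.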
Convergence of Preconditioned Gauss-Seidel Iterative Method For −Matrices
Communication in Physical Sciences, 6(1): 803-808, 2020
A great many real-life situations are often modeled as linear systems of equations, Ax = b. Direct methods of solving such systems are not always practical, especially when the coefficient matrix is very large and sparse, hence the recourse to iterative solution methods. The Gauss-Seidel method, a basic iterative method for linear systems, is one such method. Although convergence is not guaranteed in general, it is established that the method converges in certain situations depending on properties of the entries of the coefficient matrix and, by implication, on the algebraic structure of the method. However, as with all basic iterative methods, even when it does converge, convergence can be slow. In this research, a preconditioned version of the Gauss-Seidel method is proposed in order to improve its convergence and robustness. For this purpose, convergence theorems are advanced and established, and numerical experiments are undertaken to validate the proved theorems.
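The convergence condition alluded to above can be checked directly: Gauss-Seidel converges for every starting vector if and only if the spectral radius of its iteration matrix T = -(D + L)^{-1} U is below one. A small sketch (illustrative matrices, not from the paper):

```python
import numpy as np

def gs_iteration_matrix(A):
    """Gauss-Seidel iteration matrix T = -(D + L)^{-1} U, where
    A = D + L + U (diagonal, strict lower, strict upper parts)."""
    DL = np.tril(A)        # D + L: lower triangle including the diagonal
    U = np.triu(A, k=1)    # strict upper triangle
    return -np.linalg.solve(DL, U)

def gs_converges(A):
    """True iff the spectral radius of the iteration matrix is < 1."""
    T = gs_iteration_matrix(A)
    return np.max(np.abs(np.linalg.eigvals(T))) < 1.0

good = np.array([[4., 1.], [1., 3.]])   # SPD: Gauss-Seidel must converge
bad = np.array([[1., 2.], [3., 1.]])    # no dominance: the check fails
```

This is exactly the kind of entry-dependent criterion the abstract refers to: `good` is symmetric positive-definite and diagonally dominant, while `bad` satisfies neither property.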
Iterative methods for the numerical solution of linear systems
2012
The objective of this dissertation is the design and analysis of iterative methods for the numerical solution of large, sparse linear systems. This type of system emerges from the discretization of Partial Differential Equations. Two special types of linear systems are studied. The first type deals with systems whose coefficient matrix is two-cyclic, whereas the second concerns augmented linear systems. Initially, the Preconditioned Simultaneous Displacement (PSD) method, which is a generalized version of the Symmetric SOR (SSOR) method, is studied when the Jacobi iteration matrix is weakly cyclic and its eigenvalues are all real (“real case”) or all imaginary (“imaginary case”). The first result is that the PSD method has a better convergence rate than the SSOR method. In particular, in the “imaginary case” its convergence is increased by an order of magnitude compared to the SSOR method. In an attempt to further increase the convergence rate of the PSD method, more parameters we...
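For readers unfamiliar with the SSOR baseline the dissertation builds on: one SSOR iteration is a forward SOR sweep followed by a backward sweep. A minimal sketch (illustrative system and relaxation parameter, not from the dissertation):

```python
import numpy as np

def ssor(A, b, omega=1.0, tol=1e-10, max_iter=500):
    """Symmetric SOR: forward SOR sweep, then a backward sweep."""
    n = len(b)
    x = np.zeros_like(b, dtype=float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):               # forward sweep
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        for i in range(n - 1, -1, -1):   # backward sweep
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k
    return x, max_iter

# SPD system: SSOR converges for any 0 < omega < 2.
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = A @ np.ones(3)
x, k = ssor(A, b, omega=1.2)
```

The symmetric double sweep makes the iteration matrix similar to a symmetric one for SPD systems, which is what PSD-type accelerations exploit.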
On Some Iterative Methods for Solving Systems of Linear Equations
This paper presents some iterative methods for solving systems of linear equations, namely the Jacobi method and the modified Jacobi method. The Jacobi method is an algorithm for solving systems of linear equations in which the largest absolute values in each row and column are dominated by the diagonal elements. The modified Jacobi method, also known as the Gauss-Seidel method or the method of successive displacements, is likewise useful for the solution of systems of linear equations. A comparative analysis of the results of the two methods was carried out. We also discuss the rate of convergence of the Jacobi method and the modified Jacobi method. Finally, the results showed that the modified Jacobi method is more efficient and accurate, and converges faster, than its counterpart, the Jacobi method.
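The rate-of-convergence comparison can be made quantitative through the spectral radii of the two iteration matrices; for consistently ordered matrices (e.g. tridiagonal ones) the classical theory gives ρ(T_GS) = ρ(T_J)², so Gauss-Seidel is asymptotically twice as fast. A sketch verifying this on an illustrative tridiagonal matrix:

```python
import numpy as np

A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
D = np.diagflat(np.diag(A))
L = np.tril(A, k=-1)
U = np.triu(A, k=1)

# Stationary iteration x_{k+1} = T x_k + c converges iff rho(T) < 1,
# and the smaller rho(T), the faster the asymptotic convergence.
T_jacobi = np.linalg.solve(D, -(L + U))
T_gs = np.linalg.solve(D + L, -U)

rho = lambda T: np.max(np.abs(np.linalg.eigvals(T)))
rho_j, rho_gs = rho(T_jacobi), rho(T_gs)
# For this consistently ordered tridiagonal matrix, rho_gs == rho_j**2.
```

Here ρ(T_J) = √2/4 ≈ 0.354 and ρ(T_GS) = 1/8, matching the squared relationship.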
Preconditioned Gauss-Seidel type iterative method for solving linear systems
Applied Mathematics and Mechanics, 2006
The preconditioned Gauss-Seidel type iterative method for solving linear systems, with a proper choice of the preconditioner, is presented. Convergence of the preconditioned method applied to Z-matrices is discussed, and the optimal parameter is presented. Numerical results show that a proper choice of the preconditioner leads to effective preconditioned Gauss-Seidel type iterative methods for solving linear systems.
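A commonly studied preconditioner in this literature is P = I + S, where S carries the scaled first superdiagonal of A; applying it eliminates those entries before the Gauss-Seidel sweep. The sketch below uses that classical choice on an illustrative Z-matrix (it is not necessarily the paper's specific preconditioner, which involves a tunable parameter):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    """Plain Gauss-Seidel sweep; returns the iterate and iteration count."""
    n = len(b)
    x = np.zeros_like(b, dtype=float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k
    return x, max_iter

def precondition(A, b):
    """P = I + S with S[i, i+1] = -A[i, i+1] / A[i, i]: a classical
    left preconditioner for Z-matrices."""
    n = A.shape[0]
    S = np.zeros_like(A)
    for i in range(n - 1):
        S[i, i + 1] = -A[i, i + 1] / A[i, i]
    P = np.eye(n) + S
    return P @ A, P @ b

# Z-matrix example (nonpositive off-diagonal entries).
A = np.array([[3., -1., -1.], [-1., 3., -1.], [-1., -1., 3.]])
b = A @ np.ones(3)
x1, k1 = gauss_seidel(A, b)
x2, k2 = gauss_seidel(*precondition(A, b))
```

For this system the preconditioned sweep reaches the same solution in fewer iterations, which is the qualitative effect the paper's convergence analysis formalizes.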
On a new iterative method for solving linear systems and comparison results
Journal of Computational and Applied Mathematics, 2008
In Ujević [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered a modification of the Gauss-Seidel method. In this paper, we show that it is a special case from the point of view of projection techniques. A different approach is then established, which is shown both theoretically and numerically to be at least as good as, and generally better than, Ujević's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujević's method.
Comparison of Convergence of General Stationary Iterative Methods for Singular Matrices
SIAM Journal on Matrix Analysis and Applications, 2002
New comparison theorems are presented comparing the asymptotic convergence factor of iterative methods for the solution of consistent (as well as inconsistent) singular systems of linear equations. The asymptotic convergence factor of the iteration matrix T is the quantity γ(T) = max{|λ| : λ ∈ σ(T), λ ≠ 1}, where σ(T) is the spectrum of T. In the new theorems, no restrictions are imposed on the projections associated with the two iteration matrices being compared. The splittings of the well-known example of Kaufman [SIAM J. Sci. Statist. Comput., 4 (1983), pp. 525-552] satisfy the hypotheses of the new theorems.
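The quantity γ(T) differs from the ordinary spectral radius only in that the eigenvalue 1, which is permitted for singular systems, is excluded from the maximum. A short sketch of how one would compute it numerically (illustrative matrix, not from the paper):

```python
import numpy as np

def acf(T, tol=1e-12):
    """Asymptotic convergence factor: the largest |lambda| over the
    eigenvalues of T with lambda != 1 (eigenvalue 1 is allowed in the
    singular, consistent case and is therefore excluded)."""
    lams = np.linalg.eigvals(T)
    lams = lams[np.abs(lams - 1.0) > tol]   # drop eigenvalues equal to 1
    return np.max(np.abs(lams)) if lams.size else 0.0

# Iteration matrix with eigenvalue 1, as arises for singular systems:
T = np.diag([1.0, 0.5, -0.25])
```

Here γ(T) = 0.5 even though the spectral radius of T is 1, so the iteration still contracts on the relevant complement subspace.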
The Jacobi method for iteratively solving a set of linear algebraic equations is well known. However, it suffers from the drawback that it does not converge for all linear algebraic systems. In science and engineering, one often encounters algebraic systems of the form Ax = b where the matrix A is symmetric and positive-definite. In this paper we show that a slight variant of the Jacobi method, namely the Jacobi overrelaxation (JOR) method, can be made to ensure convergence of the iterative scheme for such matrices. Thus we expand the applicability of JOR iterative methods to all symmetric, positive-definite matrices in a manner which is computationally convenient and simple. In particular, we show that this variant scheme can be explicitly determined without the need for any eigenvalue computations. We show that if eigenvalue computations are employed, one can obtain the maximum rate of convergence by a proper choice of parameters for the JOR scheme. This maximum rate of convergence is also explicitly obtained.
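A brief sketch of the JOR scheme described above (the test matrix and the eigenvalue-free parameter choice are illustrative, not taken from the paper): for SPD A, the matrix D^{-1}A has unit diagonal, so its positive eigenvalues sum to n and each is strictly below n; hence ω = 2/n already guarantees convergence without any eigenvalue computation, even on matrices where plain Jacobi (ω = 1) diverges.

```python
import numpy as np

def jor(A, b, omega, tol=1e-10, max_iter=10_000):
    """Jacobi overrelaxation: x_{k+1} = x_k + omega * D^{-1} (b - A x_k).
    omega = 1 recovers the classical Jacobi method."""
    d = np.diag(A)
    x = np.zeros_like(b, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = x + omega * (b - A @ x) / d
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# SPD matrix for which plain Jacobi (omega = 1) diverges
# (the Jacobi iteration matrix has spectral radius 1.8):
A = np.array([[1., 0.9, 0.9],
              [0.9, 1., 0.9],
              [0.9, 0.9, 1.]])
b = A @ np.ones(3)
x, k = jor(A, b, omega=2 / len(b))   # safe eigenvalue-free choice
```

If the extreme eigenvalues of D^{-1}A are available, the classical optimum ω = 2/(λ_min + λ_max) gives the maximum rate of convergence, matching the paper's eigenvalue-based refinement.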