Solving Ill-Posed Linear Systems with GMRES and a Singular Preconditioner
Related papers
2020
We introduce a new class of preconditioners to enable flexible GMRES to find a least-squares solution, and potentially the pseudoinverse solution, of large-scale sparse, asymmetric, singular, and potentially inconsistent systems. We develop the preconditioners based on a new observation that generalized inverses (i.e., matrices G satisfying AGA = A) enable the preconditioned Krylov subspaces (KSP) to converge in a single step. We then compute an approximate generalized inverse (AGI) efficiently using a hybrid incomplete factorization (HIF), which combines multilevel incomplete LU with rank-revealing QR on its final Schur complement. We define the criteria of ε-accuracy and stability of AGI to guarantee the convergence of preconditioned GMRES for consistent systems. For inconsistent systems, we fortify HIF with iterative refinement to obtain HIFIR, which effectively mitigates the potential breakdowns of KSP and allows accurate computations of the null-space vectors. By combining the two technique...
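The one-step convergence observation can be checked numerically: if G satisfies AGA = A and b lies in the range of A, then x = Gb already solves the system. A minimal NumPy sketch, using the Moore-Penrose pseudoinverse as one particular generalized inverse (the paper's HIF-based AGI is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a rank-deficient 6x6 matrix (rank 4) and a consistent right-hand side.
B = rng.standard_normal((6, 4))
C = rng.standard_normal((4, 6))
A = B @ C                        # singular, rank 4
b = A @ rng.standard_normal(6)   # b lies in range(A): consistent system

# The Moore-Penrose pseudoinverse is one member of {G | A G A = A}.
G = np.linalg.pinv(A)
print(np.allclose(A @ G @ A, A))         # True

# For a consistent system, the first preconditioned step x = G b
# already attains a (near-)zero residual.
x = G @ b
print(np.linalg.norm(A @ x - b))         # ~ machine precision
```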
On GMRES for Singular EP and GP Systems
SIAM Journal on Matrix Analysis and Applications, 2018
In this contribution, we study the numerical behavior of the Generalized Minimal Residual (GMRES) method for solving singular linear systems. It is known that GMRES determines a least squares solution without breakdown if the coefficient matrix is range-symmetric (EP), or if its range and nullspace are disjoint (GP) and the system is consistent. We show that the accuracy of GMRES iterates may deteriorate in practice due to three distinct factors: (i) the inconsistency of the linear system; (ii) the distance of the initial residual to the nullspace of the coefficient matrix; (iii) the extremal principal angles between the ranges of the coefficient matrix and its transpose. These factors lead to poor conditioning of the extended Hessenberg matrix in the Arnoldi decomposition and affect the accuracy of the computed least squares solution. We also compare GMRES with the range-restricted GMRES (RR-GMRES) method. Numerical experiments show typical behaviors of GMRES for small problems with EP and GP matrices.
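Factor (iii), the principal angles between the ranges of A and its transpose, is directly computable; SciPy's subspace_angles makes the range-symmetry (EP) property easy to check. A small illustration (the matrices below are our own toy examples, not taken from the paper):

```python
import numpy as np
from scipy.linalg import orth, subspace_angles

# EP example: every symmetric matrix is range-symmetric, even if singular.
A_ep = np.diag([3.0, 1.0, 0.0])

# Non-range-symmetric example: a nilpotent Jordan block, where
# range(A) = span(e1) but range(A^T) = span(e2).
A_ns = np.array([[0.0, 1.0],
                 [0.0, 0.0]])

angles_ep = subspace_angles(orth(A_ep), orth(A_ep.T))
angles_ns = subspace_angles(orth(A_ns), orth(A_ns.T))
print(np.degrees(angles_ep.max()), np.degrees(angles_ns.max()))  # ~0.0, 90.0
```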
Approximate inverse preconditioning in the parallel solution of sparse eigenproblems
Numerical Linear Algebra with Applications, 2000
A preconditioned scheme for solving sparse symmetric eigenproblems is proposed. The solution strategy relies upon the DACG algorithm, which is a Preconditioned Conjugate Gradient algorithm for minimizing the Rayleigh Quotient. A comparison with the well established ARPACK code, shows that when a small number of the leftmost eigenpairs is to be computed, DACG is more efficient than ARPACK. Effective convergence acceleration of DACG is shown to be performed by a suitable approximate inverse preconditioner (AINV). The performance of such a preconditioner is shown to be safe, i.e. not highly dependent on a drop tolerance parameter. On sequential machines, AINV preconditioning proves a practicable alternative to the effective incomplete Cholesky factorization, and is more efficient than Block Jacobi. Due to its parallelizability, the AINV preconditioner is exploited for a parallel implementation of the DACG algorithm. Numerical tests account for the high degree of parallelization attainable on a Cray T3E machine and confirm the satisfactory scalability properties of the algorithm. A final comparison with PARPACK shows the (relative) higher efficiency of AINV-DACG.
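The core of DACG, minimizing the Rayleigh quotient by a preconditioned descent method, can be sketched as follows. This simplified version uses plain preconditioned gradient directions with an exact line search for the standard eigenproblem (B = I), omitting the conjugate-direction updates of the actual DACG algorithm; the function name is ours:

```python
import numpy as np

def rayleigh_min(A, M_inv, x0, iters=300):
    """Smallest eigenpair of a symmetric matrix A by preconditioned
    gradient descent on the Rayleigh quotient q(x) = x'Ax / x'x with
    exact line search (simplified, DACG-like sketch with B = I)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        Ax = A @ x
        q = x @ Ax
        g = Ax - q * x              # gradient of q, up to a factor 2/x'x
        d = -(M_inv @ g)            # preconditioned descent direction
        # q(x + t d) is a ratio of quadratics; dq/dt = 0 is quadratic in t.
        a11, a12, a22 = q, d @ Ax, d @ (A @ d)
        m11, m12, m22 = 1.0, d @ x, d @ d
        c2 = a22 * m12 - a12 * m22
        c1 = a22 * m11 - a11 * m22
        c0 = a12 * m11 - a11 * m12
        roots = np.roots([c2, c1, c0])
        roots = roots[np.isreal(roots)].real
        if roots.size == 0:
            break                   # gradient vanished: converged
        qt = lambda t: ((x + t * d) @ A @ (x + t * d)) / ((x + t * d) @ (x + t * d))
        t = min(roots, key=qt)      # pick the root with smaller quotient
        x = x + t * d
        x /= np.linalg.norm(x)
    return x @ A @ x, x
```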
Residual Algorithm with Preconditioner for Linear System of Equations
Applied Mathematical Sciences
Iterative methods are among the most powerful tools for solving large and sparse systems of linear equations. Their significant advantages, such as low memory requirements and good approximation properties, make them very popular, and they are widely used in applications throughout science and engineering. A residual algorithm for solving large-scale nonsymmetric linear systems of equations whose symmetric part is positive (or negative) definite is evaluated. It uses the residual vector in a systematic way as a search direction, together with a spectral steplength. The global convergence is analyzed. A preliminary numerical experimentation is included to show that the new algorithm is a robust method for solving nonsymmetric linear systems and is competitive with the well-known GMRES and BICGSTAB in the number of computed residuals and CPU time. The new method has been successfully examined on a sparse matrix with 10^12 entries using the two preconditioning strategies ILU and SSOR.
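The iteration described, a residual search direction combined with a spectral steplength, can be sketched as below. This is a bare-bones version assuming the symmetric part of A is positive definite; the globalization strategy analyzed in the paper is omitted, and the function name is ours:

```python
import numpy as np

def residual_spectral(A, b, x0, iters=500, tol=1e-10):
    """Residual-direction iteration with a spectral (Barzilai-Borwein
    type) steplength for Ax = b. Sketch only: assumes the symmetric
    part of A is positive definite and omits line-search safeguards."""
    x = x0.copy()
    r = b - A @ x
    alpha = 1.0                    # initial steplength
    for _ in range(iters):
        if np.linalg.norm(r) < tol:
            break
        s = alpha * r              # step taken: x_new - x = alpha * r
        x = x + s
        r_new = b - A @ x
        y = r - r_new              # equals A s for a linear residual
        # Spectral steplength: alpha = (s's)/(s'y); s'y = s'As > 0
        # whenever the symmetric part of A is positive definite.
        denom = s @ y
        alpha = (s @ s) / denom if denom > 0 else 1.0
        r = r_new
    return x
```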
MPGMRES: a generalized minimum residual method with multiple preconditioners
Standard Krylov subspace methods only allow the user to choose a single preconditioner, although in many situations there may be a number of possibilities. Here we describe an extension of GMRES that allows the use of more than one preconditioner. We make some theoretical observations, propose a practical algorithm, and present numerical results from problems in domain decomposition and PDE-constrained optimization. Our results illustrate the applicability and potential of the proposed approach.
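Full MPGMRES enlarges the search space with every preconditioner at each iteration. A much simpler (and weaker) way to use several preconditioners is to cycle through them inside flexible GMRES, one per step; the sketch below does exactly that and is our own illustration, not the paper's algorithm:

```python
import numpy as np

def fgmres_multi(A, b, preconditioners, maxiter=50, tol=1e-10):
    """Flexible GMRES (x0 = 0) that cycles through a list of
    preconditioners, applying one per iteration. A cheap alternative
    to full MPGMRES, which instead enlarges the search space with
    every preconditioner at every step."""
    n = len(b)
    m = maxiter
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))              # preconditioned vectors (flexible part)
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        M = preconditioners[j % len(preconditioners)]
        Z[:, j] = M(V[:, j])          # apply the j-th preconditioner
        w = A @ Z[:, j]
        for i in range(j + 1):        # Arnoldi, modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            V[:, j + 1] = w / H[j + 1, j]
        # Small least-squares problem for the quasi-minimal residual.
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if np.linalg.norm(H[:j + 2, :j + 1] @ y - e1) < tol:
            break
    return Z[:, :j + 1] @ y           # x = x0 + Z_m y with x0 = 0
```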
An improved preconditioned LSQR for discrete ill-posed problems
Mathematics and Computers in Simulation, 2006
We present a modified version of the two-level iterative method proposed in [M. Hanke, R. Vogel, Two-level preconditioners for regularized inverse problems. I: Theory, Numerische Mathematik 83 (1999) 385-402]. Here, we propose the application of the two-level Schur complement CG on the unregularized problem and the introduction of the regularization process for solving only one of the linear systems produced by the algorithm. The modified algorithm is substantially cheaper and numerical examples show similar approximations in both cases. A novel basis for the coarse subspace is incorporated in the analysis. Numerical experiments for some test problems and a practical scattering problem are presented.
Computers & Mathematics with Applications, 2003
The main idea of this paper is the determination of the pattern of nonzero elements of the LU factors of a given matrix A. The idea is based on taking powers of the Boolean matrix derived from A. This powers-of-a-Boolean-matrix strategy (PBS) is an efficient, effective, and inexpensive approach. The construction of an ILU preconditioner using PBS is described and used in solving large nonsymmetric sparse linear systems. The effectiveness of the proposed ILU preconditioner in solving large nonsymmetric sparse linear systems by the GMRES method is also shown. Numerical experiments are performed which show that it is possible to considerably reduce the number of GMRES iterations when the ILU preconditioner constructed here is used. In the numerical examples, the influence of lc, the dimension of the Krylov subspace, on the performance of the GMRES method using the ILU preconditioner is tested. For all the tests carried out, the best value for lc is found to be 10.
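The PBS idea itself is easy to sketch: take the Boolean pattern B of A and use the nonzero positions of B + B^2 + ... + B^p as the predicted fill-in pattern for the ILU factors. A small NumPy illustration (function name ours; a practical implementation would use sparse matrices):

```python
import numpy as np

def pbs_pattern(A, p=2):
    """Predict the nonzero pattern of the ILU factors of A from the
    positions where B + B^2 + ... + B^p is nonzero, with B the Boolean
    pattern of A (sketch of the PBS strategy)."""
    B = (A != 0).astype(int)
    P = B.copy()
    Bk = B.copy()
    for _ in range(p - 1):
        Bk = ((Bk @ B) > 0).astype(int)   # Boolean matrix power B^(k+1)
        P = ((P + Bk) > 0).astype(int)    # accumulate the union of patterns
    return P.astype(bool)
```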
DQGMRES: a Direct Quasi‐minimal Residual Algorithm Based on Incomplete Orthogonalization
Numerical Linear Algebra with Applications, 1996
We describe a Krylov subspace technique, based on incomplete orthogonalization of the Krylov vectors, which can be considered as a truncated version of GMRES. Unlike GMRES(m), the restarted version of GMRES, the new method does not require restarting. Like GMRES, it does not break down. Numerical experiments show that DQGMRES(k) often performs as well as the restarted GMRES using a subspace of dimension m = 2k. In addition, the algorithm is flexible to variable preconditioning, i.e., it can accommodate variations in the preconditioner at every step. In particular, this feature allows the use of any iterative solver as a right-preconditioner for DQGMRES(k). This inner-outer iterative combination often results in a robust approach for solving indefinite non-Hermitian linear systems.
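The incomplete orthogonalization underlying DQGMRES(k) replaces the full Arnoldi recurrence with one that orthogonalizes each new vector only against the previous k basis vectors, producing a banded Hessenberg matrix. A sketch of that building block (the quasi-minimization and direct update steps of DQGMRES itself are omitted; function name ours):

```python
import numpy as np

def incomplete_arnoldi(A, v0, m, k):
    """Arnoldi process with incomplete orthogonalization: each new
    Krylov vector is orthogonalized only against the previous k basis
    vectors, as in DQGMRES(k). H is banded: H[i, j] = 0 for
    i < j - k + 1, and A V[:, :m] = V @ H holds exactly even though
    V is no longer fully orthonormal."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(max(0, j - k + 1), j + 1):   # truncated window
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```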
FQMR: A Flexible Quasi-Minimal Residual Method with Inexact Preconditioning
SIAM Journal on Scientific Computing, 2001
A flexible version of the QMR algorithm is presented which allows for the use of a different preconditioner at each step of the algorithm. In particular, inexact solutions of the preconditioned equations are allowed, as well as the use of an (inner) iterative method as a preconditioner. Several theorems are presented relating the norm of the residual of the new method with the norm of the residual of other methods, including QMR and flexible GMRES (FGMRES). In addition, numerical experiments are presented which illustrate the convergence of flexible QMR (FQMR), and show that in certain cases FQMR can produce approximations with lower residual norms than QMR.
On Least-Squares Approximate Inverse-Based Preconditioners
Numerical Mathematics and Advanced Applications, 2008
We discuss approximate inverse preconditioners based on Frobenius-norm minimization. We introduce a novel adaptive algorithm based on truncated Neumann matrix expansions for selecting the sparsity pattern of the preconditioner. The construction of the approximate inverse is based on a dual dropping strategy, namely a threshold to drop small entries and a maximum number of nonzero entries per column. We introduce a post-processing stabilization technique to deflate some of the smallest eigenvalues in the spectrum of the preconditioned matrix, which can potentially disturb the convergence. Results of preliminary experiments are reported on a set of linear systems arising from different application fields to illustrate the potential of the proposed algorithm for effectively preconditioning iterative Krylov solvers.
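For a fixed sparsity pattern, Frobenius-norm minimization decouples into one small least-squares problem per column of the approximate inverse, since ||AM - I||_F^2 is the sum of the column residuals ||A m_j - e_j||_2^2. A static-pattern sketch (the paper's adaptive pattern selection and dual dropping strategy are omitted; function name ours):

```python
import numpy as np

def spai(A, pattern):
    """Approximate inverse M minimizing ||A M - I||_F over a fixed
    sparsity pattern, solved one column at a time as a small dense
    least-squares problem (static-pattern sketch)."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = np.flatnonzero(pattern[:, j])   # allowed nonzeros in column j
        if J.size == 0:
            continue
        e_j = np.zeros(n)
        e_j[j] = 1.0
        # Minimize ||A[:, J] m_J - e_j||_2 over the free entries m_J.
        m_J, *_ = np.linalg.lstsq(A[:, J], e_j, rcond=None)
        M[J, j] = m_J
    return M
```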