Pseudoinverse preconditioners and iterative methods for large dense linear least-squares problems
Related papers
Ssai: A Symmetric Sparse Approximate Inverse Preconditioner for the Conjugate Gradient Method
2019
We propose a method for solving a Hermitian positive definite linear system Ax = b, where A is an explicit sparse matrix (real or complex). A sparse approximate right inverse M is computed and replaced by M̃ = (M + MH)/2, which is used as a left-right preconditioner in a modified version of the preconditioned conjugate gradient (PCG) method. M is formed column by column and can therefore be computed in parallel. PCG requires only matrix-vector multiplications with A and M̃ (not solving a linear system with the preconditioner), and so too can be carried out in parallel. We compare it with incomplete Cholesky factorization (the gold standard for PCG) and with MATLAB’s backslash operator (sparse Cholesky) on matrices from various applications.
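The distinctive feature of this approach is that the preconditioning step is a single matrix-vector product with the symmetrized approximate inverse M̃ = (M + Mᴴ)/2, never a triangular or linear solve. A minimal dense NumPy sketch of such a PCG loop (the function name and all implementation details are illustrative, not the paper's code):

```python
import numpy as np

def pcg_approx_inverse(A, M, b, tol=1e-8, maxiter=200):
    """Preconditioned CG where the preconditioner is applied by a
    matrix-vector product with the symmetrized approximate inverse
    Mt = (M + M^H) / 2, rather than by solving a linear system."""
    Mt = (M + M.conj().T) / 2
    x = np.zeros_like(b)
    r = b - A @ x
    z = Mt @ r                      # preconditioning: one matvec, no solve
    p = z.copy()
    rz = np.vdot(r, z)
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = Mt @ r                  # preconditioning: one matvec again
        rz_new = np.vdot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Because both A and M̃ are only ever applied as matvecs, every step of the loop parallelizes in the same way an unpreconditioned CG does.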
Parallel and Systolic Solution of Normalized Explicit Approximate Inverse Preconditioning
The Journal of Supercomputing, 2000
A new class of normalized approximate inverse matrix techniques, based on the concept of sparse normalized approximate factorization procedures, is introduced for solving sparse linear systems derived from the finite difference discretization of partial differential equations. Normalized explicit preconditioned conjugate gradient type methods in conjunction with normalized approximate inverse matrix techniques are presented for the efficient solution of sparse linear systems. Theoretical results on the rate of convergence of the normalized explicit preconditioned conjugate gradient scheme and estimates of the required computational work are presented. Application of the new proposed methods to two-dimensional initial/boundary value problems is discussed and numerical results are given. The parallel and systolic implementation of the dominant computational part is also investigated.
Approximate Inverse Preconditioners via Sparse-Sparse Iterations
SIAM Journal on Scientific Computing, 1998
The standard incomplete LU (ILU) preconditioners often fail for general sparse indefinite matrices because they give rise to 'unstable' factors L and U. In such cases, it may be attractive to approximate the inverse of the matrix directly. This paper focuses on approximate inverse preconditioners based on minimizing ‖I − AM‖_F, where AM is the preconditioned matrix. An iterative descent-type method is used to approximate each column of the inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., with 'sparse-matrix by sparse-vector' operations. Numerical dropping is applied to maintain sparsity; compared to previous methods, this is a natural way to determine the sparsity pattern of the approximate inverse. This paper describes Newton, 'global', and column-oriented algorithms, and discusses options for initial guesses, self-preconditioning, and dropping strategies. Some limited theoretical results on the properties and convergence of approximate inverses are derived. Numerical tests on problems from the Harwell-Boeing collection and the FIDAP fluid dynamics analysis package show the strengths and limitations of approximate inverses. Finally, some ideas and experiments with practical variations and applications are presented.
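The objective ‖I − AM‖_F decouples into n independent least-squares problems, one per column of M, which is what makes the column-oriented algorithms above natural to parallelize. A minimal dense sketch of that decoupling (the helper name is hypothetical, and it assumes the sparsity pattern of each column is prescribed in advance rather than grown adaptively by the paper's dropping strategy):

```python
import numpy as np

def spai_columns(A, pattern):
    """Approximate right inverse M minimizing ||I - A M||_F column by
    column: for column j, solve min ||e_j - A[:, J] m||_2 over the
    prescribed nonzero rows J = pattern[j].  (Dense sketch; a real SPAI
    code also restricts the rows of A to keep each problem small.)"""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = np.asarray(pattern[j])          # allowed nonzeros of column j
        ej = np.zeros(n)
        ej[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, J], ej, rcond=None)
        M[J, j] = m
    return M
```

Since the columns are independent, the loop over j is embarrassingly parallel; with the full pattern this recovers the exact inverse, and sparser patterns trade accuracy for cheaper application.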
ROBUST APPROXIMATE INVERSE PRECONDITIONING FOR THE CONJUGATE GRADIENT METHOD
SIAM Journal on Scientific Computing, 2001
We present a variant of the AINV factorized sparse approximate inverse algorithm which is applicable to any symmetric positive definite matrix. The new preconditioner is breakdown- free and, when used in conjunction with the conjugate gradient method, results in a reliable solver for highly ill-conditioned linear systems. We also investigate an alternative approach to a stable approximate inverse algorithm, based
A sparse approximate inverse preconditioner for parallel preconditioning of general sparse matrices
Applied Mathematics and Computation, 2002
This paper is concerned with a new approach to preconditioning for large, sparse linear systems. A procedure for computing an incomplete factorization of the inverse of a nonsymmetric matrix is developed, and the resulting factorized sparse approximate inverse is used as an explicit preconditioner for conjugate gradient-type methods. Some theoretical properties of the preconditioner are discussed, and numerical experiments on test matrices from the Harwell-Boeing collection and from Tim Davis's collection are presented. Our results indicate that the new preconditioner is cheaper to construct than other approximate inverse preconditioners. Furthermore, the new technique ensures convergence rates of the preconditioned iteration which are comparable with those obtained with standard implicit preconditioners.
On approximate-inverse preconditioners
1995
We investigate the use of sparse approximate-inverse preconditioners for the iterative solution of unsymmetric linear systems of equations. Such methods are of particular interest because of the considerable scope for parallelization. We propose a number of enhancements which may improve their performance. When run in a sequential environment, these methods can perform unfavourably when compared with other techniques. However, they can be successful when other methods fail and simulations indicate that they can be competitive when considered in a parallel environment.
Numerical Linear Algebra with Applications, 2017
Summary: Two classes of methods for approximate matrix inversion with convergence orders p = 3·2^k + 1 (Class 1) and p = 5·2^k − 1 (Class 2), k ≥ 1 an integer, are given based on matrix multiplication and matrix addition. These methods require fewer matrix multiplications than the known hyperpower method, or pth-order method, of the same order, and can be used to construct approximate inverse preconditioners for solving linear systems. Convergence, error, and stability analyses of the proposed classes of methods are provided. Theoretical results are justified with numerical results obtained by using the proposed methods of orders p = 7, 13 from Class 1 and the methods with orders p = 9, 19 from Class 2 to obtain polynomial preconditioners for preconditioning the biconjugate gradient (BiCG) method for solving well- and ill-posed problems. From the literature, methods with orders p = 8, 16 belonging to a family developed by the effective representation of the pth-order method for orders p = 2^k, ...
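For context, the classical pth-order hyperpower iteration that these classes improve upon can be sketched directly: with R_k = I − A·X_k, the update X_{k+1} = X_k (I + R_k + ... + R_k^(p−1)) gives residual I − A·X_{k+1} = R_k^p. This plain Horner-form sketch does not achieve the reduced multiplication counts the paper derives; the function name and starting guess are illustrative assumptions:

```python
import numpy as np

def hyperpower(A, p=3, iters=8):
    """Plain p-th order hyperpower iteration for an approximate inverse.
    With R = I - A X, updates X <- X (I + R + ... + R^(p-1)), so each
    sweep raises the residual I - A X to the p-th power."""
    n = A.shape[0]
    I = np.eye(n)
    # Standard starting guess guaranteeing convergence: X0 = A^T / (||A||_1 ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        R = I - A @ X
        S = I.copy()
        for _ in range(p - 1):      # Horner: S = I + R S, builds I + R + ... + R^(p-1)
            S = I + R @ S
        X = X @ S
    return X
```

Evaluated this way, one sweep of order p costs p matrix multiplications; the paper's point is that clever factorizations of the same residual polynomial reach the same order with fewer multiplications, which matters when each product is a large sparse-times-dense operation.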
Sparse Approximate-Inverse Preconditioners Using Norm-Minimization Techniques
SIAM Journal on Scientific Computing, 1998
We investigate the use of sparse approximate-inverse preconditioners for the iterative solution of unsymmetric linear systems of equations. We consider the approximations proposed by Cosgrove, Diaz, and Griewank [Internat. J. Comput. Math., 44 (1992), pp. 91-110] and Huckle and Grote [A New Approach to Parallel Preconditioning with Sparse Approximate Inverses, Tech. report SCCM-94-03, Stanford University, 1994] which are based on norm-minimization techniques. Such methods are of particular interest because of the considerable scope for parallelization. We propose a number of enhancements which may improve their performance. When run in a sequential environment, these methods can perform unfavorably when compared with other techniques. However, they can be successful when other methods fail and simulations indicate that they can be competitive when considered in a parallel environment.
Explicit approximate inverse preconditioning techniques
Archives of Computational Methods in Engineering, 2002
Summary: The numerical treatment of, and the production of related software for, solving large sparse linear systems of algebraic equations, derived mainly from the discretization of partial differential equations, by preconditioning techniques has attracted the attention of many researchers. In this paper we give an overview of explicit approximate inverse matrix techniques for computing explicitly various families of approximate inverses based on
On Least-Squares Approximate Inverse-Based Preconditioners
2000
We discuss approximate inverse preconditioners based on Frobenius-norm minimization. We introduce a novel adaptive algorithm based on truncated Neumann matrix expansions for selecting the sparsity pattern of the preconditioner. The construction of the approximate inverse is based on a dual dropping strategy, namely a threshold to drop small entries and a maximum number of nonzero entries per column. We introduce
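The dual dropping strategy above can be illustrated on a single column: first discard entries below a relative threshold, then keep at most a fixed number of the largest remaining entries. This hypothetical helper is one reasonable reading of such a rule under assumed parameter conventions (relative threshold against the column's largest entry), not the paper's code:

```python
import numpy as np

def dual_drop(col, tau=1e-2, lfil=5):
    """Dual dropping on one column of an approximate inverse:
    (1) zero entries smaller than tau times the largest magnitude,
    (2) keep only the lfil largest-magnitude survivors."""
    out = np.where(np.abs(col) >= tau * np.linalg.norm(col, np.inf), col, 0.0)
    nz = np.flatnonzero(out)
    if nz.size > lfil:
        keep = nz[np.argsort(np.abs(out[nz]))[-lfil:]]   # lfil largest entries
        mask = np.zeros(out.shape, dtype=bool)
        mask[keep] = True
        out = np.where(mask, out, 0.0)
    return out
```

The threshold controls accuracy of the approximate inverse while the per-column fill limit bounds memory and the cost of applying the preconditioner, which is why the two are used together.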