A unifying approach to the construction of circulant preconditioners

Addendum to “A note on construction of circulant preconditioners from kernels”

Applied Mathematics and Computation, 1998

A unified treatment of the construction of block circulant preconditioners for block Toeplitz systems from the viewpoint of kernels was given in [Appl. Math. Comput. 83 (1997) 3]. It was shown there that some well-known block circulant preconditioners can be derived by convolving the generating functions of the systems with some classical kernels, and a convergence analysis was also given: for a large class of block Toeplitz systems, a linear convergence rate is obtained by the preconditioned conjugate gradient (PCG) method with a block circulant preconditioner. In this addendum, by using a known convergence result [Linear Algebra Appl. 232 (1996) 1], a superlinear convergence rate is obtained. Numerical results are given to illustrate the rate of convergence.
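For readers who want to experiment with this family, the following sketch (Python with NumPy/SciPy, not taken from the paper) builds T. Chan's optimal circulant preconditioner, the member of the family usually associated with the Fejér kernel, for a scalar (non-block) Toeplitz system and runs PCG with it. The test symbol f(x) = x^2 + 1 and all sizes are illustrative choices.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

n = 256
k = np.arange(1, n)
# Fourier coefficients of the (illustrative) symbol f(x) = x^2 + 1
t = np.concatenate(([np.pi**2 / 3 + 1.0], 2.0 * (-1.0)**k / k**2))
T = toeplitz(t)                                  # symmetric positive definite Toeplitz matrix

# first column of T. Chan's optimal circulant: c_j = ((n - j) t_j + j t_{n-j}) / n
j = np.arange(n)
c = ((n - j) * t[j] + j * t[(n - j) % n]) / n
lam = np.fft.fft(c).real                         # eigenvalues of the circulant preconditioner

def apply_Cinv(v):
    # apply C^{-1} with two FFTs, O(n log n) per PCG iteration
    return np.fft.ifft(np.fft.fft(v) / lam).real

M = LinearOperator((n, n), matvec=apply_Cinv)
b = np.ones(n)
x, info = cg(T, b, M=M, maxiter=100)
print(info, np.linalg.norm(T @ x - b))
```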

Toeplitz Preconditioners Constructed from Linear Approximation Processes

SIAM Journal on Matrix Analysis and Applications, 1998

Preconditioned conjugate gradient (PCG) methods are widely and successfully used to solve Toeplitz linear systems An(f)x = b. Here we consider preconditioners belonging to trigonometric matrix algebras and to the band Toeplitz class, and we analyze them from the viewpoint of function theory in the case where f is assumed continuous and strictly positive. First we prove that the necessary (and sufficient) condition for devising a superlinear PCG method is that the spectrum of the preconditioners is described by a sequence of approximation operators "converging" to f. The other important conclusion is that, while the matrix algebra approach is essentially insensitive to the approximation features of the underlying approximation operators, the band Toeplitz approach is sensitive to them. Therefore, the only class of methods for which we may obtain impressive evidence of superlinear convergence behavior is the one [S. Serra, Math. Comp., 66 (1997), pp. 651-665] based on band Toeplitz matrices with weakly increasing bandwidth.
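As a companion to the circulant sketch above, the following illustrates the band Toeplitz alternative on the same test symbol: the preconditioner is the banded Toeplitz matrix generated by the Fejér (Cesàro) mean of f, i.e. the Fourier coefficients damped by 1 - |k|/(m+1) and truncated at bandwidth m. The bandwidth, the symbol, and the use of a sparse LU for the banded solves are illustrative choices, not the specific construction analyzed in the paper.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse import diags
from scipy.sparse.linalg import cg, splu, LinearOperator

n, m = 256, 8                                    # problem size and (illustrative) bandwidth
k = np.arange(1, n)
t = np.concatenate(([np.pi**2 / 3 + 1.0], 2.0 * (-1.0)**k / k**2))   # symbol f(x) = x^2 + 1
T = toeplitz(t)

# band Toeplitz preconditioner generated by the Fejer (Cesaro) mean of f:
# keep offsets -m..m with coefficients damped by (1 - |d| / (m + 1))
offs = list(range(-m, m + 1))
coef = [(1.0 - abs(d) / (m + 1.0)) * t[abs(d)] for d in offs]
B = diags(coef, offs, shape=(n, n)).tocsc()
lu = splu(B)                                     # banded sparse LU, cheap to factor and apply

M = LinearOperator((n, n), matvec=lu.solve)
b = np.ones(n)
x, info = cg(T, b, M=M, maxiter=100)
print(info, np.linalg.norm(T @ x - b))
```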

A new preconditioner for indefinite and asymmetric matrices

Applied Mathematics and Computation, 2013

We present a novel preconditioner for the numerical solution of large sparse linear systems with indefinite and asymmetric matrices. This new preconditioner, called the product preconditioner (PS), is constructed from two fairly simple preconditioners. The distribution of the eigenvalues and the form of the eigenvectors of the preconditioned matrix are analyzed. Moreover, an upper bound on the degree of the minimal polynomial is also derived. Numerical experiments with several examples show that the proposed PS performs better than the block diagonal preconditioner (BD), the block triangular preconditioner (BT) and the constraint preconditioner (SC) in terms of the number of iterations and computational time.
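The abstract does not spell out the product preconditioner PS, so the hedged sketch below shows the block triangular baseline (BT) it is compared against, applied to a small synthetic saddle-point system K = [[A, B^T], [B, 0]]. With the exact Schur complement, the preconditioned matrix has a minimal polynomial of degree two, which is the kind of bound on the minimal polynomial the abstract alludes to, so GMRES converges in two steps up to roundoff. All data and sizes are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n, p = 80, 20
A = 2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # nonsymmetric, well-conditioned (1,1) block
B = rng.standard_normal((p, n))                           # full row rank coupling block
K = np.block([[A, B.T], [B, np.zeros((p, p))]])           # indefinite, asymmetric system matrix
S = B @ np.linalg.solve(A, B.T)                           # exact Schur complement

def apply_BT_inv(r):
    # solve the block triangular system [[A, B.T], [0, -S]] y = r by back-substitution
    y2 = np.linalg.solve(-S, r[n:])
    y1 = np.linalg.solve(A, r[:n] - B.T @ y2)
    return np.concatenate((y1, y2))

M = LinearOperator((n + p, n + p), matvec=apply_BT_inv)
b = rng.standard_normal(n + p)
x, info = gmres(K, b, M=M, restart=2, maxiter=1)          # two Arnoldi steps
print(np.linalg.norm(K @ x - b) / np.linalg.norm(b))      # should be at roundoff level
```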

A class of preconditioners based on the (I + S(a))-type preconditioning matrices for solving linear systems

Applied Mathematics and Computation, 2007

The purpose of this paper is to present a class of preconditioners based on the (I + S(a))-type preconditioning matrices provided by Evans et al. [D.J. Evans, M.M. Martins, M.E. Trigo, The AOR iterative method for new preconditioned linear systems, J. Comput. Appl. Math. 132 (2001) 461-466] and Zhang et al. [Y. Zhang, T.Z. Huang, X.P. Liu, Modified iterative methods for nonnegative matrices and M-matrices linear systems, Comput. Math. Appl. 50 (2005) 1587-1602].
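The sketch below assumes the classical Gunawardena-type variant, in which S(a) holds the a-scaled negatives of the first superdiagonal of A; the precise definitions in the cited papers differ in their choice of entries and parameters. It forms the preconditioned matrix PA and compares the spectral radii of the Gauss-Seidel iteration matrices for A and PA on a tridiagonal M-matrix with unit diagonal.

```python
import numpy as np

def gauss_seidel_rho(A):
    # spectral radius of the Gauss-Seidel iteration matrix (D - L)^{-1} U,
    # for the splitting A = D - L - U
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return np.max(np.abs(np.linalg.eigvals(np.linalg.solve(D - L, U))))

n, a = 100, 1.0
A = np.eye(n) - 0.3 * np.eye(n, k=-1) - 0.4 * np.eye(n, k=1)   # unit-diagonal tridiagonal M-matrix

# assumed variant: S(a) holds the a-scaled negatives of the first superdiagonal of A
S = np.zeros((n, n))
S[np.arange(n - 1), np.arange(1, n)] = -a * A[np.arange(n - 1), np.arange(1, n)]
P = np.eye(n) + S

print(gauss_seidel_rho(A), gauss_seidel_rho(P @ A))            # the second value should be smaller
```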

C. G. preconditioning for Toeplitz matrices

Computers & Mathematics with Applications, 1993

We consider the problem of solving a Toeplitz system of equations by the conjugate gradient method. When a sequence of nested Toeplitz matrices is associated with a function, the spectral behaviour of the matrices involved is closely related to the analytical properties of this generating function. It is therefore possible to devise efficient preconditioning techniques by using various functional approximation strategies. This approach leads to attractive results in the case of ill-conditioned matrices, for which a wide class of preconditioners is proposed.
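The practical core of the conjugate gradient approach is a fast Toeplitz matrix-vector product. The sketch below (symmetric case, illustrative symbol) embeds the n x n Toeplitz matrix in a 2n x 2n circulant, multiplies it with two FFTs, and feeds the resulting operator to CG; it is a standard construction, not code from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

n = 512
k = np.arange(1, n)
t = np.concatenate(([np.pi**2 / 3 + 1.0], 2.0 * (-1.0)**k / k**2))   # test symbol f(x) = x^2 + 1

# first column of the 2n x 2n circulant in which the symmetric Toeplitz T(t) embeds
c = np.concatenate((t, [0.0], t[:0:-1]))
fc = np.fft.fft(c)

def toep_matvec(v):
    # T v = (C [v; 0])[:n], two FFTs of length 2n
    w = np.fft.fft(np.concatenate((v, np.zeros(n))))
    return np.fft.ifft(fc * w).real[:n]

A = LinearOperator((n, n), matvec=toep_matvec)
b = np.ones(n)
x, info = cg(A, b, maxiter=500)
print(info, np.linalg.norm(toeplitz(t) @ x - b))   # check against the dense Toeplitz matrix
```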

A collection of new preconditioners for solving linear systems

Scientific Research and Essays

In this paper, new preconditioners for solving linear systems are developed, and the preconditioned accelerated overrelaxation (AOR) method is applied to the resulting systems. The improvement of the convergence rate obtained with the new preconditioners is also shown, and a numerical example is given to illustrate the results. 2000 Mathematics Subject Classification: 65F10, 15A06. Key words and phrases: linear systems, preconditioner, AOR iterative method, spectral radius, Z-matrix, M-matrix.
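As an illustration of preconditioned AOR (using the same Gunawardena-type I + S stand-in as in the earlier sketch, not the preconditioners introduced in this paper), the code below implements the AOR iteration and compares iteration counts on Ax = b and on the preconditioned system PAx = Pb; the relaxation parameters and the test matrix are illustrative.

```python
import numpy as np
from scipy.linalg import solve_triangular

def aor_solve(A, b, r=0.8, w=0.9, tol=1e-10, maxiter=5000):
    # AOR iteration: (D - rL) x_new = [(1 - w)D + (w - r)L + wU] x + w b,  with A = D - L - U
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    Mmat = D - r * L
    Nmat = (1.0 - w) * D + (w - r) * L + w * U
    x = np.zeros_like(b)
    for it in range(1, maxiter + 1):
        x = solve_triangular(Mmat, Nmat @ x + w * b, lower=True)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return x, it
    return x, maxiter

n = 100
A = np.eye(n) - 0.3 * np.eye(n, k=-1) - 0.4 * np.eye(n, k=1)   # unit-diagonal M-matrix
b = A @ np.ones(n)

S = np.zeros((n, n))                                           # Gunawardena-type I + S stand-in
S[np.arange(n - 1), np.arange(1, n)] = -A[np.arange(n - 1), np.arange(1, n)]
P = np.eye(n) + S

x1, it_plain = aor_solve(A, b)
x2, it_prec = aor_solve(P @ A, P @ b)
print("AOR iterations without / with preconditioning:", it_plain, it_prec)
```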

A classification scheme for regularizing preconditioners, with application to Toeplitz systems

Linear Algebra and its Applications, 2005

Preconditioning techniques for linear systems are widely used in order to speed up the convergence of iterative methods. If the linear system is generated by the discretization of an ill-posed problem, preconditioning may lead to wrong results, since components related to noise in the input data are amplified. Using basic concepts from the theory of inverse problems, we identify a class of preconditioners which acts as a regularizing tool. In this paper we study relationships between this class and previously known circulant preconditioners for ill-conditioned Hermitian Toeplitz systems. In particular, we deal with the low-pass filtered optimal preconditioners and with a recent family of superoptimal preconditioners. We go on to describe a set of preconditioners endowed with particular regularization properties, whose effectiveness is supported by several numerical tests.
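The filtering idea can be sketched as follows, in the spirit of the low-pass filtered optimal preconditioner (the blur-like symbol, the threshold tau and the noise level are all illustrative): eigenvalues of the optimal circulant that fall below the threshold are reset to 1, so PCG accelerates the well-conditioned signal subspace while leaving the noise-dominated subspace essentially untouched.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

n, tau = 256, 1e-2
j = np.arange(n)
t = np.exp(-0.2 * j**2)                      # smooth, blur-like symbol: T is severely ill-conditioned
T = toeplitz(t)

c = ((n - j) * t[j] + j * t[(n - j) % n]) / n      # optimal circulant, first column
lam = np.fft.fft(c).real
lam_filt = np.where(np.abs(lam) > tau, lam, 1.0)   # low-pass filter: small eigenvalues replaced by 1

def apply_Minv(v):
    return np.fft.ifft(np.fft.fft(v) / lam_filt).real

M = LinearOperator((n, n), matvec=apply_Minv)
x_true = np.sin(2 * np.pi * j / n)
b = T @ x_true + 1e-4 * np.random.default_rng(1).standard_normal(n)   # noisy right-hand side
x, info = cg(T, b, M=M, maxiter=30)                # early stopping provides the regularization
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```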

The update of sequences of some incomplete decompositions matrices for preconditioning

Simulations with models based on partial differential equations very often require the solution of (sequences of) large sparse algebraic linear systems. In multidimensional domains, preconditioned Krylov iterative solvers are often appropriate for this task, so the search for efficient preconditioners for Krylov subspace methods is a crucial theme. Recent developments, especially in computing hardware, have renewed interest in approximate inverse preconditioners in factorized form, because their application during the solution process can be more efficient. We present some ideas for updating approximate inverse preconditioners in factorized form. Computational costs, reorderings and implementation issues are considered.
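To make the object under discussion concrete, here is a minimal dense sketch of a factorized approximate inverse in the AINV style (biconjugation with a drop tolerance) together with its application inside GMRES, where applying the preconditioner reduces to matrix-vector products. The updating strategies that are the subject of this work are not shown, and the function name, drop tolerance and test data are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

def ainv_factorized(A, drop=1e-2):
    # biconjugation: build Z, W, d with W.T @ A @ Z ~ diag(d), so A^{-1} ~ Z diag(1/d) W.T;
    # entries below the drop tolerance are discarded to keep the factors sparse
    n = A.shape[0]
    Z, W, d = np.eye(n), np.eye(n), np.zeros(n)
    for i in range(n):
        for j in range(i):
            Z[:, i] -= (W[:, j] @ A @ Z[:, i]) / d[j] * Z[:, j]
            W[:, i] -= (W[:, i] @ A @ Z[:, j]) / d[j] * W[:, j]
        Z[np.abs(Z[:, i]) < drop, i] = 0.0
        W[np.abs(W[:, i]) < drop, i] = 0.0
        d[i] = W[:, i] @ A @ Z[:, i]
    return Z, W, d

rng = np.random.default_rng(0)
n = 80
A = 4.0 * np.eye(n) + rng.uniform(-0.5, 0.5, (n, n)) * (rng.random((n, n)) < 0.05)

Z, W, d = ainv_factorized(A)
M = LinearOperator((n, n), matvec=lambda v: Z @ ((W.T @ v) / d))   # application is matvecs only
b = np.ones(n)
x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```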

On Updating Preconditioners for the Iterative Solution of Linear Systems

The main topic of this thesis is updating preconditioners for solving large sparse linear systems Ax = b with Krylov iterative methods. Two types of problems are considered. The first is the iterative solution of nonsingular, nonsymmetric linear systems whose coefficient matrix A has a skew-symmetric part of low rank, or can be well approximated by a matrix with a low-rank skew-symmetric part. Such systems arise from the discretization of PDEs with certain Neumann boundary conditions, from the discretization of integral equations, and from path-following methods; examples include the Bratu problem and Love's integral equation. The second type is least squares (LS) problems that are solved via the equivalent normal equations; more precisely, we consider modified and rank-deficient LS problems. In a modified LS problem, the set of linear relations is updated with new information, a new variable is added or, conversely, some information or variable is removed. Rank-deficient LS problems have a coefficient matrix without full rank, which makes the computation of an incomplete factorization of the normal equations difficult. LS problems arise in many large-scale applications in science and engineering, for instance neural networks, linear programming, exploration seismology and image processing. Usually, incomplete LU or incomplete Cholesky factorizations are used as preconditioners for iterative methods. The main contribution of this thesis is a technique for updating preconditioners by bordering: an approximate decomposition of an equivalent augmented linear system is computed and used as a preconditioner for the original problem. The theoretical study and the numerical experiments presented in this thesis show the performance of the proposed preconditioning technique and its competitiveness with other methods available in the literature for the problems studied.
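The bordering idea can be illustrated with a generic low-rank identity (an illustration under simplifying assumptions, not the thesis's specific update scheme): if A = A0 - U V^T, the augmented system [[A0, U], [V^T, I]] [x; y] = [b; 0] has the same x-component, so an approximate factorization of A0, bordered by the small extra block, can serve as a preconditioner for the original problem. The check below verifies the equivalence on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 3
A0 = 5.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)   # the part we can precondition well
U = rng.standard_normal((n, r)) / np.sqrt(n)
V = rng.standard_normal((n, r)) / np.sqrt(n)
A = A0 - U @ V.T                                                  # low-rank modification of A0
b = rng.standard_normal(n)

# bordered (augmented) system: same x-component, but the leading block is A0
K = np.block([[A0, U], [V.T, np.eye(r)]])
xy = np.linalg.solve(K, np.concatenate((b, np.zeros(r))))
x_aug = xy[:n]
x_dir = np.linalg.solve(A, b)
print(np.linalg.norm(x_aug - x_dir))                              # agreement up to roundoff
```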

On the iterative computation of a 2-norm scaling based preconditioner

In this paper we consider the Krylov subspace based method introduced in , for iteratively solving the symmetric and possibly indefinite linear system Ax = b. We emphasize the application of the latter method to compute a diagonal preconditioner. The proposed approach is based on the approximate computation of the 2-norm of the rows (columns) of the matrix A and on its use to equilibrate A. The distinguishing feature of this approach is that the 2-norms are computed without requiring knowledge of the entries of the matrix A, using only a routine that provides the product of A with a vector.
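The paper's estimator is Krylov-based; as a simpler matrix-free stand-in that conveys the same "matvec-only" constraint, the sketch below estimates the squared row 2-norms of A by averaging (Av)_i^2 over random +-1 vectors v and builds the diagonal equilibration from the result. The sample count, seed and test matrix are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import aslinearoperator

def estimate_row_norms(Aop, n, samples=200, seed=None):
    # E[(A v)_i^2] = sum_j A_ij^2 for random v with independent +-1 entries,
    # so averaging over a few hundred matvecs estimates the squared row 2-norms
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(samples):
        v = rng.choice([-1.0, 1.0], size=n)
        acc += Aop.matvec(v) ** 2
    return np.sqrt(acc / samples)

rng = np.random.default_rng(0)
n = 300
A = rng.standard_normal((n, n)) * rng.lognormal(sigma=2.0, size=(n, 1))   # badly scaled rows
Aop = aslinearoperator(A)                    # from here on, only products with vectors are used

est = estimate_row_norms(Aop, n, seed=1)
exact = np.linalg.norm(A, axis=1)
print(np.max(np.abs(est - exact) / exact))   # relative accuracy of the matrix-free estimate
d = 1.0 / est                                # diag(d) A is the row-equilibrated system
```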