Novel Algorithms Based on the Conjugate Gradient Method for Inverting Ill-Conditioned Matrices, and a New Regularization Method to Solve Ill-Posed Linear Systems

Regularization of inverse problems by an approximate matrix-function technique

Numerical Algorithms

In this work, we introduce and investigate a class of matrix-free regularization techniques for discrete linear ill-posed problems based on the approximate computation of a special matrix function. To produce a regularized solution, the proposed strategy employs a regular approximation of the Heaviside step function computed in a small Krylov subspace. This feature makes our proposal independent of the structure of the underlying matrix. On the one hand, the use of the Heaviside step function prevents the amplification of the noise by suitably filtering the responsible components of the spectrum of the discretization matrix; on the other hand, it permits the correct reconstruction of the signal by inverting the remaining part of the spectrum. Numerical tests on a gallery of standard benchmark problems are included to show the efficacy of our approach even for problems affected by a high level of noise.
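The filtering idea can be sketched with a dense SVD in place of the paper's matrix-free Krylov computation: apply a smooth approximation of the Heaviside step function to the singular values, so that only the well-conditioned part of the spectrum is inverted. This is an illustrative sketch, not the authors' algorithm; the particular filter h(s) = s^p / (s^p + tau^p), the threshold tau, and the sharpness p are assumptions chosen for the demo.

```python
import numpy as np

def heaviside_filtered_solution(A, b, tau, p=4):
    # Smooth step h(s) = s^p / (s^p + tau^p): ~1 for s >> tau, ~0 for s << tau.
    # Spectral components above tau are inverted; the rest are damped,
    # which blocks noise amplification from the tiny singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    h = s**p / (s**p + tau**p)
    return Vt.T @ ((h / s) * (U.T @ b))

# Ill-conditioned test problem (Hilbert matrix) with mildly noisy data.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

x_filt = heaviside_filtered_solution(A, b, tau=1e-3)
x_naive = np.linalg.solve(A, b)
# The filtered solution stays close to x_true; the naive solve is destroyed
# by the noise amplified through the smallest singular values.
```

With p = 2 this filter reduces to the classical Tikhonov filter; larger p makes it a sharper approximation of the step.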

Robust Approximate Inverse Preconditioning for the Conjugate Gradient Method

SIAM Journal on Scientific Computing, 2001

We present a variant of the AINV factorized sparse approximate inverse algorithm which is applicable to any symmetric positive definite matrix. The new preconditioner is breakdown-free and, when used in conjunction with the conjugate gradient method, results in a reliable solver for highly ill-conditioned linear systems. We also investigate an alternative approach to a stable approximate inverse algorithm, based ...
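As a sketch of how an approximate inverse enters the solver, here is a textbook preconditioned CG in which the preconditioner is applied as an operator r → M⁻¹r. A simple Jacobi (diagonal) approximate inverse stands in for the AINV factorization; the test matrix and preconditioner choice are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for a symmetric positive
    definite A. apply_Minv(r) applies an approximate inverse of A,
    e.g. a factorized sparse approximate inverse."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_next = r @ z
        p = z + (rz_next / rz) * p
        rz = rz_next
    return x

# SPD test matrix; a diagonal (Jacobi) approximate inverse stands in
# for the AINV preconditioner of the paper.
rng = np.random.default_rng(2)
B = rng.standard_normal((50, 50))
A = B.T @ B + 50.0 * np.eye(50)
b = rng.standard_normal(50)
dinv = 1.0 / np.diag(A)
x = pcg(A, b, lambda r: dinv * r)
```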

On the regularizing behavior of the SDA and SDC gradient methods in the solution of linear ill-posed problems

Journal of Computational and Applied Mathematics, 2016

We analyze the regularization properties of two recently proposed gradient methods, SDA and SDC, applied to discrete linear inverse problems. By studying their filter factors, we show that the tendency of these methods to eliminate first the eigencomponents of the gradient corresponding to large singular values makes it possible to reconstruct the most significant part of the solution, thus yielding a useful filtering effect. This behavior is confirmed by numerical experiments performed on some image restoration problems. Furthermore, the experiments show that, for severely ill-conditioned problems and high noise levels, the SDA and SDC methods can be competitive with the Conjugate Gradient (CG) method: although they are slightly slower than CG, they exhibit a better semiconvergence behavior.
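The filter-factor view can be illustrated with the simplest gradient method, the Landweber iteration, whose factors have the closed form φ_i(k) = 1 − (1 − tσ_i²)^k: large-σ components reach φ ≈ 1 after a few steps while small-σ (noise-dominated) components remain filtered. This is a generic illustration, not the SDA/SDC filter factors the paper derives; the step size and the model spectrum are made up for the demo.

```python
import numpy as np

# Landweber filter factors phi_i(k) = 1 - (1 - t * sigma_i^2)^k:
# the iteration count k plays the role of the regularization parameter.
sigma = np.logspace(0, -8, 9)        # model spectrum: 1, 1e-1, ..., 1e-8
t = 1.0 / sigma[0] ** 2              # step size with 0 < t*sigma_i^2 <= 1
phi_10 = 1.0 - (1.0 - t * sigma**2) ** 10
phi_1000 = 1.0 - (1.0 - t * sigma**2) ** 1000
# After 10 steps only the leading components are reconstructed;
# after 1000 steps more of the spectrum is inverted, but the smallest
# (noise-dominated) components are still strongly filtered.
print(np.round(phi_10, 3))
print(np.round(phi_1000, 3))
```

The semiconvergence seen in the experiments corresponds to the moment when the growing φ_i start inverting components where noise dominates the data.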

A new method for computing Moore–Penrose inverse matrices

Journal of Computational and Applied Mathematics, 2009

The Moore-Penrose inverse of an arbitrary matrix (including singular and rectangular) has many applications in statistics, prediction theory, control system analysis, curve fitting and numerical analysis. In this paper, an algorithm based on the conjugate Gram-Schmidt process and the Moore-Penrose inverse of partitioned matrices is proposed for computing the pseudoinverse of an m × n real matrix A with m ≥ n and rank r ≤ n. Numerical experiments show that the resulting pseudoinverse matrix is reasonably accurate and its computation time is significantly less than that of pseudoinverses obtained by the other methods for large sparse matrices.
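Whatever algorithm produces it, a candidate pseudoinverse can be validated against the four Penrose conditions, which characterize A⁺ uniquely. The sketch below uses NumPy's SVD-based `pinv` as the reference computation (not the conjugate Gram-Schmidt algorithm of the paper); the rank-deficient rectangular test matrix is an arbitrary illustrative choice.

```python
import numpy as np

# A^+ is the unique matrix satisfying the four Penrose conditions:
#   A A^+ A = A,  A^+ A A^+ = A^+,  (A A^+)^T = A A^+,  (A^+ A)^T = A^+ A.
rng = np.random.default_rng(1)
A = (rng.standard_normal((6, 4))
     @ np.diag([1.0, 1.0, 1e-3, 0.0])      # rank-deficient: rank 3
     @ rng.standard_normal((4, 4)))
Ap = np.linalg.pinv(A, rcond=1e-10)        # cut off the numerically zero singular value

penrose_ok = (np.allclose(A @ Ap @ A, A)
              and np.allclose(Ap @ A @ Ap, Ap)
              and np.allclose((A @ Ap).T, A @ Ap)
              and np.allclose((Ap @ A).T, Ap @ A))
print(penrose_ok)   # True
```

These conditions hold for any matrix, square or rectangular, full rank or not, which is what makes them a convenient correctness check for a pseudoinverse routine.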

Inverse Problems, Regularization and Applications

arXiv, 2019

Inverse problems arise in a wide spectrum of applications in fields ranging from engineering to scientific computation. Connected with the rise of interest in inverse problems is the development and analysis of regularization methods, such as truncated singular value decomposition (TSVD), Tikhonov regularization, or iterative regularization methods (like Landweber iteration), which are a necessity in most inverse problems due to their ill-posedness. In this thesis we propose a new iterative regularization technique to solve inverse problems, without any dependence on external parameters, thus avoiding all the difficulties associated with their involvement. To boost the convergence rate of the iterative method, different descent directions are provided depending on the source conditions, which are based on some specific a priori knowledge about the solution. We show that this method is very robust to the presence of (extreme) errors in the data. In addition, we also provide a very efficient ...
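For concreteness, the Landweber iteration mentioned above is an iterative regularization method with no explicit penalty parameter: the iteration count itself regularizes via early stopping. A minimal sketch, in which the test problem and step size are illustrative assumptions:

```python
import numpy as np

def landweber(A, b, step, n_iters):
    """Landweber iteration x_{k+1} = x_k + step * A^T (b - A x_k).
    Early stopping (the choice of n_iters) is the regularization;
    no explicit penalty parameter appears."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x += step * (A.T @ (b - A @ x))
    return x

# Mildly noisy ill-posed test problem (Hilbert matrix).
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(3)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

step = 1.0 / np.linalg.norm(A, 2) ** 2   # convergent: step < 2 / sigma_max^2
x100 = landweber(A, b, step, 100)
```

After a modest number of steps the dominant spectral components of the solution are recovered, while the noise-dominated ones have barely been touched.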

An iterative Lagrange method for the regularization of discrete ill-posed inverse problems

Computers & Mathematics with Applications, 2010

In this paper, an iterative method is presented for the computation of regularized solutions of discrete ill-posed problems. In the proposed method, the regularization problem is formulated as an equality constrained minimization problem and an iterative Lagrange method is used for its solution. The Lagrange iteration is terminated according to the discrepancy principle. The relationship between the proposed approach and classical Tikhonov regularization is discussed. Results of numerical experiments are presented to illustrate the effectiveness and usefulness of the proposed method.
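A simplified stand-in for the discrepancy-principle termination: with Tikhonov regularization, one can adjust λ until the residual matches the noise level, ‖Ax_λ − b‖ ≈ ηδ. The paper instead drives an iterative Lagrange method and stops it by this principle; the bisection below, the choice η = 1.1, and the test problem are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve the Tikhonov normal equations (A^T A + lam I) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def lam_by_discrepancy(A, b, delta, eta=1.1, lo=1e-14, hi=1e2, n_bisect=60):
    """Pick lam so that ||A x_lam - b|| ~ eta * delta (discrepancy principle).
    The residual increases monotonically with lam, so bisection on
    log(lam) converges."""
    for _ in range(n_bisect):
        mid = np.sqrt(lo * hi)
        res = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        if res > eta * delta:
            hi = mid            # residual too large: decrease lam
        else:
            lo = mid            # residual below the target: increase lam
    return np.sqrt(lo * hi)

# Noisy ill-posed test problem (Hilbert matrix), noise level delta known.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(4)
e = 1e-3 * rng.standard_normal(n)
b = A @ x_true + e
delta = np.linalg.norm(e)

lam = lam_by_discrepancy(A, b, delta)
x_reg = tikhonov(A, b, lam)
```

The principle deliberately stops fitting once the residual reaches the noise level: fitting any further would only fit the noise.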

Matrix Computations: A Comparison of Solution Accuracy Resulting from Factoring and Inverting Ill-Conditioned Matrices

0. ABSTRACT

The residual vector R] = [Z]A] − B], where [Z] is a coefficient matrix, A] is a vector of unknowns and B] is a right-hand-side vector, is often used as a measure of solution error when solving linear systems of the kind that arise in computational electromagnetics. Residual errors are of particular interest in iterative solutions, where they are instrumental in determining the next trial answer in a sequence of iterates. As demonstrated here, when a matrix is ill-conditioned, the residual may imply the solution is more accurate than is actually the case.

1. MATRIX CONDITION NUMBER AND SOLUTION ACCURACY

In previous related work [Miller (1995)] a study was described that investigated the behavior of ill-conditioned matrices, with the goal of numerically characterizing their information content. One numerical result from that study was that the solution accuracy (SA) is related to the coefficient accuracy (CA) and condition number (CN), all expressed in digits, approximately as SA ≤ CA − CN. This conclusion was based on using, as one measure of SA, a comparison of [Z][Y] with [I], where [Z] is a matrix under study, [Y] is its computed inverse and [I] is the identity matrix. CNs can generally be expected to grow with increasing matrix size, even for a matrix as benign as one whose coefficients are all random numbers. For some matrices, the Hilbert matrix for example, one of those studied, the CN can grow much faster, being of order 10^(1.5N) for a matrix of size N×N. A large matrix CN was encountered in later work that involved model-based parameter estimation (MBPE) for adaptive sampling and estimation of a transfer function [Miller (1996)] using rational functions as fitting models (FM). For example, when using simple LU decomposition to solve even a low-order system, say one having fewer than 20 coefficients, the CN might exceed 10^6. (Note that this problem can be circumvented by using a more robust solution, such as singular-value decomposition, but that is also left for a later discussion.) An interesting aspect of these large CNs was that the match of the FM with the original data, when computed using coefficients obtained from [Y]B], with B] the right-hand-side vector, could be much less accurate than when using coefficients obtained from back substitution.
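The effect described above is easy to reproduce: for a Hilbert matrix [Z] of modest size, the solution computed by LU factorization has a residual near machine precision while its true error is many orders of magnitude larger. A small sketch (the size and the relative norms used are illustrative choices):

```python
import numpy as np

# Residual R] = [Z]A] - B] versus true error for an ill-conditioned [Z]:
# a tiny residual can coexist with a large solution error.
n = 12
Z = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert
a_true = np.ones(n)
B = Z @ a_true
a = np.linalg.solve(Z, B)   # LU-based solve

rel_residual = np.linalg.norm(Z @ a - B) / np.linalg.norm(B)
rel_error = np.linalg.norm(a - a_true) / np.linalg.norm(a_true)
# rel_residual sits near machine precision, while rel_error loses roughly
# cond(Z) ~ 1e16 worth of digits -- consistent with SA <= CA - CN.
print(rel_residual, rel_error)
```

LU solves are backward stable, so the residual is always small for a well-scaled system; it is the forward error that absorbs the condition number.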

A proposal for regularized inversion for an ill-conditioned deconvolution operator

CT&F - Ciencia, Tecnología y Futuro, 2013

From the standpoint of inverse problem theory, deconvolution can be understood as the linear inversion of an ill-posed and ill-conditioned problem. The ill-conditioning of the deconvolution operator makes the solution of the inverse problem sensitive to errors in the data. Tikhonov regularization is the most commonly used method for ensuring stability and uniqueness of the solution. However, results from the Tikhonov method do not provide sufficient quality when the noise in the data is strong. This work uses the conjugate gradient method applied to the Tikhonov deconvolution scheme, including a regularization parameter calculated iteratively via the Morozov discrepancy principle applied to the objective function. Using synthetic seismic data and real stacked seismic data, we carried out a deconvolution process with and without regularization based on a conjugate gradient algorithm. A comparison of results is also presented. Applying regularized deconvolution to synthetic data shows improved stability of the solution. Additionally, real post-stack seismic data shows a direct application for increasing the vertical resolution even with noisy data.
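The core of such a scheme can be sketched as conjugate gradient applied matrix-free to the Tikhonov normal equations (AᵀA + λI)x = Aᵀb. This is a simplified sketch with a fixed λ, whereas the paper updates λ iteratively via the Morozov discrepancy principle; the test data are illustrative assumptions.

```python
import numpy as np

def cg_tikhonov(A, b, lam, maxiter=500, rtol=1e-10):
    """Conjugate gradient on the Tikhonov normal equations
    (A^T A + lam I) x = A^T b, applying the operator without ever
    forming A^T A explicitly."""
    def H(v):                        # apply A^T A + lam I matrix-free
        return A.T @ (A @ v) + lam * v
    rhs = A.T @ b
    x = np.zeros(A.shape[1])
    r = rhs - H(x)
    p = r.copy()
    rr = r @ r
    for _ in range(maxiter):
        Hp = H(p)
        alpha = rr / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rr_next = r @ r
        if np.sqrt(rr_next) <= rtol * np.linalg.norm(rhs):
            break
        p = r + (rr_next / rr) * p
        rr = rr_next
    return x

# Example on a random overdetermined system (illustrative data);
# in deconvolution, A would be the (Toeplitz) convolution matrix.
rng = np.random.default_rng(5)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
x_cg = cg_tikhonov(A, b, lam=0.1)
```

With λ > 0 the normal-equation operator is well conditioned, so CG converges in few iterations; the regularization parameter trades stability against resolution, which is exactly the balance the discrepancy principle tunes.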