Regularization of ill-posed linear equations by the non-stationary augmented Lagrangian method

Tikhonov and iterative regularization methods for embedded inverse problems

In this paper we suggest two novel classes of regularization techniques for systems of nonlinear ill-posed tomographic problems. We analyze a variational regularization method as well as iterative regularization techniques analytically. The latter turn out to be of Landweber-Kaczmarz type. We discuss new stopping criteria for such iterative methods and present a subtle convergence analysis. The stopping criterion compares favourably (both analytically and practically) with existing stopping strategies.
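An iteration of Landweber type with a discrepancy-principle stopping rule, as discussed above, can be sketched as follows. This is a minimal single-equation sketch, not the paper's Kaczmarz-type cyclic method; the test problem, the step size derived from the spectral norm, and the value tau = 1.1 are illustrative assumptions:

```python
import numpy as np

def landweber(A, b, delta, tau=1.1, max_iter=500):
    """Landweber iteration x_{k+1} = x_k + w * A^T (b - A x_k),
    stopped by the discrepancy principle ||A x_k - b|| <= tau * delta."""
    w = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:   # discrepancy principle
            break
        x = x + w * (A.T @ r)
    return x

# illustrative ill-conditioned test problem with noisy data
rng = np.random.default_rng(0)
n = 50
A = np.vander(np.linspace(0, 1, n), n, increasing=True)   # ill-conditioned
x_true = np.sin(np.linspace(0, np.pi, n))
noise = 1e-3 * rng.standard_normal(n)
b = A @ x_true + noise
x_rec = landweber(A, b, delta=np.linalg.norm(noise))
```

Stopping by the discrepancy principle rather than iterating to convergence is what gives such methods their regularizing effect.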

On the regularizing behavior of recent gradient methods in the solution of linear ill-posed problems

We analyze the regularization properties of two recently proposed gradient methods applied to discrete linear inverse problems. By studying their filter factors, we show that the tendency of these methods to first eliminate the eigencomponents of the gradient corresponding to large singular values makes it possible to reconstruct the most significant part of the solution, thus yielding a useful filtering effect. This behavior is confirmed by numerical experiments performed on some image restoration problems. Furthermore, the experiments show that, for severely ill-conditioned problems and high noise levels, the two methods can be competitive with the Conjugate Gradient (CG) method: they are only slightly slower than CG but exhibit better semiconvergence behavior.
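The filtering effect described above can be made concrete for the simplest gradient method (Landweber). In the SVD basis, k iterations produce filter factors f_i = 1 - (1 - w * sigma_i^2)^k; the singular-value decay below is an assumed example, not data from the paper:

```python
import numpy as np

# Filter factors of k gradient (Landweber) steps on 0.5*||Ax - b||^2:
# in the SVD basis, x_k = sum_i f_i * (u_i^T b / sigma_i) * v_i with
# f_i = 1 - (1 - w * sigma_i^2)^k.
sigma = np.logspace(0, -8, 20)        # assumed singular-value decay
w = 1.0 / sigma[0] ** 2               # step size 1 / sigma_max^2
k = 100                               # number of iterations
f = 1.0 - (1.0 - w * sigma ** 2) ** k
# components with large sigma are reconstructed (f close to 1),
# components with small sigma are damped (f close to 0): the filtering effect
```

The iteration count k plays the role of the regularization parameter: more iterations move the cutoff toward smaller singular values, which is exactly the semiconvergence phenomenon mentioned in the abstract.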

Augmented Lagrangian Without Alternating Directions: Practical Algorithms for Inverse Problems in Imaging

Several problems in signal processing and machine learning can be cast as optimization problems. In many cases, they are large-scale and nonlinear, have constraints, and may be nonsmooth in the unknown parameters. There exists a plethora of fast algorithms for smooth convex optimization, but these algorithms are not readily applicable to nonsmooth problems, which has led to a considerable amount of research in this direction. In this paper, we propose a general algorithm for nonsmooth bound-constrained convex optimization problems. Our algorithm is an instance of the so-called augmented Lagrangian, for which theoretical convergence is well established for convex problems. The proposed algorithm is a blend of a superlinearly convergent limited-memory quasi-Newton method and a proximal projection operator. The initial promising numerical results for total-variation based image deblurring show that it is as fast as the best existing algorithms in the same class, but with fewer and less sen...
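The classical augmented Lagrangian (method of multipliers) underlying the above can be sketched on a toy equality-constrained problem; this is a minimal illustration of the multiplier update, not the paper's quasi-Newton/proximal algorithm, and the problem data are assumptions:

```python
import numpy as np

def method_of_multipliers(c, a, b, mu=10.0, iters=50):
    """Classical augmented Lagrangian (method of multipliers) for
    min ||x - c||^2  subject to  a @ x = b."""
    n = len(c)
    lam = 0.0                                     # Lagrange multiplier estimate
    x = c.copy()
    for _ in range(iters):
        # the inner subproblem is quadratic, so it is solved exactly:
        # minimize ||x - c||^2 + lam*(a@x - b) + (mu/2)*(a@x - b)^2
        H = 2.0 * np.eye(n) + mu * np.outer(a, a)
        g = 2.0 * c - lam * a + mu * b * a
        x = np.linalg.solve(H, g)
        lam += mu * (a @ x - b)                   # multiplier update
    return x

c = np.array([1.0, 2.0, 3.0])
a = np.array([1.0, 1.0, 1.0])
x = method_of_multipliers(c, a, b=3.0)   # exact answer: the projection [0, 1, 2]
```

Unlike a pure penalty method, mu need not grow without bound: the multiplier update drives the constraint violation to zero at a fixed mu.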

Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces

Inverse Problems, 2012

Nonstationary iterated Tikhonov regularization is an efficient method for solving ill-posed problems in Hilbert spaces. However, this method may not produce good results in some situations, since it tends to oversmooth solutions and hence destroy special features such as sparsity and discontinuity. By making use of duality mappings and the Bregman distance, we propose an extension of this method to the Banach space setting and establish its convergence. We also present numerical simulations which indicate that the method in the Banach space setting can produce better results.
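For reference, the Hilbert-space version of nonstationary iterated Tikhonov that the paper extends can be sketched as follows; the geometric decay alpha_k = q^k and the small test matrix are illustrative assumptions:

```python
import numpy as np

def nonstationary_iterated_tikhonov(A, b, alphas):
    """Hilbert-space nonstationary iterated Tikhonov:
    x_{k+1} = argmin_x ||A x - b||^2 + alpha_k ||x - x_k||^2."""
    n = A.shape[1]
    x = np.zeros(n)
    AtA = A.T @ A
    for alpha in alphas:
        # closed-form update of the quadratic subproblem
        x = x + np.linalg.solve(AtA + alpha * np.eye(n), A.T @ (b - A @ x))
    return x

# the usual nonstationary choice: geometrically decaying alpha_k
alphas = 0.5 ** np.arange(10)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x_true = np.array([1.0, -1.0])
x_rec = nonstationary_iterated_tikhonov(A, A @ x_true, alphas)
```

The Banach-space method of the paper replaces the quadratic penalty ||x - x_k||^2 by a Bregman distance, precisely to avoid the oversmoothing this Hilbert-space update exhibits.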

An iterative Lagrange method for the regularization of discrete ill-posed inverse problems

Computers & Mathematics with Applications, 2010

In this paper, an iterative method is presented for the computation of regularized solutions of discrete ill-posed problems. In the proposed method, the regularization problem is formulated as an equality constrained minimization problem and an iterative Lagrange method is used for its solution. The Lagrange iteration is terminated according to the discrepancy principle. The relationship between the proposed approach and classical Tikhonov regularization is discussed. Results of numerical experiments are presented to illustrate the effectiveness and usefulness of the proposed method.
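The link between the discrepancy principle and Tikhonov regularization mentioned above can be illustrated by selecting the Tikhonov parameter so that the residual matches the noise level. This sketch uses log-scale bisection rather than the paper's Lagrange iteration, and the test problem is an assumption:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Classical Tikhonov solution of (A^T A + alpha I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def discrepancy_alpha(A, b, delta, lo=1e-12, hi=1e4, iters=60):
    """Log-scale bisection for the alpha with ||A x_alpha - b|| = delta;
    the residual norm is monotonically increasing in alpha, so bisection applies."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(A @ tikhonov(A, b, mid) - b) > delta:
            hi = mid
        else:
            lo = mid
    return np.sqrt(lo * hi)

# assumed overdetermined test problem with known noise level
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
noise = 0.05 * rng.standard_normal(20)
b = A @ x_true + noise
delta = np.linalg.norm(noise)
x_rec = tikhonov(A, b, discrepancy_alpha(A, b, delta))
```

Matching the residual to delta avoids both overfitting the noise (residual too small) and oversmoothing (residual too large).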

Analysis of Some Optimization Techniques for Regularization of Inverse Problems

2016

The main objective in inverse problems is to approximate some unknown parameters or attributes of interest, given some measurements that are only indirectly related to these parameters. This type of problem appears in many areas of science, engineering and industry. Examples can be found in medical computerized tomography, groundwater flow modeling, etc. In the process of solving these problems, an instability phenomenon known as ill-posedness often appears, which requires regularization. Ill-posedness is related to the fact that the presence of even a small amount of noise in the data can lead to enormous errors in the approximated solution. Different regularization techniques have been proposed in the literature. In this thesis our focus is on Total Variation regularization. We study total variation regularization for both image denoising and image deblurring problems. Three algorithms for total variation regularization will be analysed, namely the split Bregman algorithms, ...
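A minimal 1D instance of the split Bregman algorithm named above, applied to total-variation denoising, can be sketched as follows; the signal, noise level, and parameters lam and mu are illustrative assumptions, and a practical 2D implementation would avoid the dense matrices used here:

```python
import numpy as np

def tv_denoise_1d(y, lam, mu=1.0, iters=200):
    """Split Bregman for 1D total-variation denoising:
    min_x 0.5*||x - y||^2 + lam * ||D x||_1, D = forward difference."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)      # (n-1) x n forward-difference matrix
    H = np.eye(n) + mu * D.T @ D        # system matrix for the x-update
    d = np.zeros(n - 1)                 # splitting variable d ~ D x
    bb = np.zeros(n - 1)                # Bregman variable
    x = y.copy()
    for _ in range(iters):
        # quadratic x-update, then soft-thresholding (shrinkage) d-update
        x = np.linalg.solve(H, y + mu * D.T @ (d - bb))
        w = D @ x + bb
        d = np.sign(w) * np.maximum(np.abs(w) - lam / mu, 0.0)
        bb = bb + D @ x - d             # Bregman update
    return x

# assumed piecewise-constant test signal with additive noise
rng = np.random.default_rng(0)
x_true = np.repeat([0.0, 1.0], 10)
y = x_true + 0.1 * rng.standard_normal(20)
x_den = tv_denoise_1d(y, lam=0.2)
```

The splitting d = Dx decouples the nonsmooth L1 term (handled by cheap shrinkage) from the quadratic data term (a linear solve), which is what makes the method fast.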

Mixed gradient-Tikhonov methods for solving nonlinear ill-posed problems in Banach spaces

Inverse Problems, 2016

Tikhonov regularization is a very useful and widely used method for finding stable solutions of ill-posed problems. A good choice of the penalization functional, as well as a careful selection of the topologies of the involved spaces, is fundamental to the quality of the reconstructions. These choices can be combined with some a priori information about the solution in order to preserve desired characteristics, such as sparsity constraints. To prove convergence and stability properties of this method, one usually has to assume that a minimizer of the Tikhonov functional is known. In practical situations, however, the exact computation of a minimizer is very difficult, and even finding an approximation can be a very challenging and expensive task if the involved spaces have poor convexity or smoothness properties. In this paper we propose a method to close this gap between theory and practice, applying a gradient-like method to a Tikhonov functional in order to approximate a minimizer. Using only available information, we explicitly calculate a maximal step size which ensures a monotonically decreasing error. The resulting algorithm performs only finitely many steps and terminates using the discrepancy principle. In particular, the knowledge of a minimizer, or even its existence, does not need to be assumed. Under standard assumptions, we prove convergence and stability results in relatively general Banach spaces, and subsequently test the method's performance numerically, reconstructing conductivities with sparsely located inclusions and different kinds of noise in 2D Electrical Impedance Tomography.
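A Hilbert-space caricature of this idea — gradient descent on the Tikhonov functional with an explicitly computable step size and discrepancy-principle termination — can be sketched as follows. The step 1/L with L = ||A||^2 + alpha, the parameters, and the test problem are illustrative assumptions, not the paper's Banach-space step-size rule:

```python
import numpy as np

def gradient_tikhonov(A, b, alpha, delta, tau=1.5, max_iter=2000):
    """Gradient descent on T(x) = 0.5*||Ax - b||^2 + 0.5*alpha*||x||^2
    with explicit step size 1/L, L = ||A||^2 + alpha (the Lipschitz
    constant of the gradient), which guarantees monotone decrease of T;
    terminated by the discrepancy principle."""
    L = np.linalg.norm(A, 2) ** 2 + alpha
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = A @ x - b
        if np.linalg.norm(r) <= tau * delta:   # discrepancy principle
            break
        x -= (A.T @ r + alpha * x) / L         # gradient step
    return x

# assumed small overdetermined test problem with known noise
A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
x_true = np.array([1.0, 2.0])
noise = np.array([0.01, -0.01, 0.005])
b = A @ x_true + noise
delta = np.linalg.norm(noise)
x_rec = gradient_tikhonov(A, b, alpha=1e-3, delta=delta)
```

Note that the stopping rule needs only the noise level delta and the computable residual, mirroring the paper's goal of using only available information.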

Enhanced convergence rates for Tikhonov regularization revisited: improved results

In this paper, we improve the enhanced convergence rates for Tikhonov regularization of nonlinear ill-posed problems in Banach spaces presented by Neubauer in [14]. The new message is that the rates are shown to be independent of the residual norm exponents 1 < p < ∞ in the functional to be minimized for obtaining regularized solutions. However, on the one hand the smoothness of the image space influences the rates, and on the other hand the best possible rates require specific choices of the regularization parameters α > 0. In the limiting case p = 1, the α-values must not tend to zero as the noise level decreases, but have to converge to a fixed positive value characterized by properties of the solution.

Convergence rates for inverse problems in Hilbert spaces: a comparative study

2018

In this paper, we apply a new kind of smoothness concept, namely Hölder stability estimates, for the determination of convergence rates of Tikhonov regularization for linear and non-linear inverse problems in Hilbert spaces. For linear inverse problems, we obtain the convergence rates without incorporating the classical concept of spectral theory, and for non-linear inverse problems, we obtain the convergence rates without incorporating any additional non-linearity estimate. Further, we employ the smoothness concept of inhomogeneous variational inequalities to deduce the convergence rates for non-linear inverse problems. In addition to Tikhonov regularization, we also consider Lavrentiev's regularization method for non-linear inverse problems and determine its convergence rates by incorporating the Hölder stability estimates as well as inhomogeneous variational inequalities. Finally, we discuss the co-action between the variational inequalities and the Hölde...