Regularized Total Least Squares: Computational Aspects and Error Bounds
Related papers
Regularized total least squares: computational aspects and error bounds
2012
Abstract. For solving linear ill-posed problems, regularization methods are required when both the right-hand side and the operator are contaminated by noise. In the present paper, regularized approximations are obtained by regularized total least squares (RTLS) and dual regularized total least squares (dual RTLS). We discuss computational aspects and provide order-optimal error bounds that characterize the accuracy of the regularized approximations. The results extend earlier results where the operator is given exactly. We also present some numerical experiments, which shed light on the relationship between RTLS, dual RTLS, and standard Tikhonov regularization.
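As a point of reference for the comparison with standard Tikhonov regularization mentioned above, the following is a minimal sketch of the Tikhonov baseline for a noisy system Ax ≈ b; the regularization operator L, the parameter lam, and the toy data are illustrative choices, not the paper's setup.

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Standard Tikhonov regularization: min ||A x - b||^2 + lam ||L x||^2,
    solved via the regularized normal equations."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Illustrative use on a small ill-conditioned system with a noisy right-hand side.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 20), 8)            # ill-conditioned test matrix
b = A @ np.ones(8) + 1e-3 * rng.standard_normal(20)    # noisy data
x_reg = tikhonov(A, b, lam=1e-6)
```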
A Regularized Total Least Squares Algorithm
Total Least Squares and Errors-in-Variables Modeling, 2002
Error-contaminated systems Ax ≈ b, for which A is ill-conditioned, are considered. Such systems may be solved using Tikhonov-like regularized total least squares (R-TLS) methods. Golub et al. (1999) presented a direct algorithm for the solution of the Lagrange multiplier formulation of the R-TLS problem. Here we present a parameter-independent algorithm for the approximate R-TLS solution. The algorithm, which utilizes the shifted inverse power method, relies only on a prescribed estimate for the regularization constraint condition and does not require the specification of other regularization parameters. An extension of the algorithm for nonsmooth solutions is also presented.
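The shifted inverse power method that drives this algorithm is itself only a few lines. The sketch below is the generic iteration for a symmetric matrix M and shift sigma; it is not the authors' R-TLS-specific formulation, which applies the iteration to a particular augmented matrix.

```python
import numpy as np

def shifted_inverse_power(M, sigma, tol=1e-10, max_iter=200):
    """Generic shifted inverse power iteration: converges to the eigenpair of
    the symmetric matrix M whose eigenvalue lies closest to the shift sigma."""
    n = M.shape[0]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(max_iter):
        w = np.linalg.solve(M - sigma * np.eye(n), v)  # one shifted solve per step
        w /= np.linalg.norm(w)
        if w @ v < 0:
            w = -w                                     # fix the sign for the convergence test
        converged = np.linalg.norm(w - v) < tol
        v = w
        if converged:
            break
    return v @ M @ v, v                                # Rayleigh quotient and eigenvector
```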
On the Solution of the Tikhonov Regularization of the Total Least Squares Problem
SIAM Journal on Optimization, 2006
Total least squares (TLS) is a method for treating an overdetermined system of linear equations Ax ≈ b, where both the matrix A and the vector b are contaminated by noise. Tikhonov regularization of the TLS problem (TRTLS) leads to an optimization problem of minimizing the sum of a fractional quadratic function and a quadratic function. As such, the problem is nonconvex. We show how to reduce the problem to the minimization of a single-variable function G over a closed interval. Computing a value and a derivative of G consists of solving a single trust region subproblem. For the special case of regularization with a squared Euclidean norm we show that G is unimodal and provide an alternative algorithm, which requires only one spectral decomposition. A numerical example is given to illustrate the effectiveness of our method.
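The shape of such a reduction can be seen in a simplified setting. For regularization with a squared Euclidean norm, parametrize by t = ||x||^2; the inner problem of minimizing the residual over a ball of radius sqrt(t) is a trust-region subproblem, and what remains is a one-variable minimization over an interval. The sketch below follows this simplified route and is not the authors' G or their algorithm.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def trust_region_ls(A, b, delta):
    """Inner subproblem: min ||A x - b||^2 subject to ||x|| <= delta,
    solved via the SVD and the secular equation for the multiplier mu."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    c = U.T @ b
    x_ls = Vt.T @ (c / s)                        # unconstrained least-squares solution
    if np.linalg.norm(x_ls) <= delta:
        return x_ls                              # constraint inactive
    gap = lambda mu: np.linalg.norm(s * c / (s**2 + mu)) - delta
    mu = brentq(gap, 0.0, 1e12)                  # ||x(mu)|| decreases monotonically in mu
    return Vt.T @ (s * c / (s**2 + mu))

def G(t, A, b, rho):
    """One-variable outer function: TRTLS objective after eliminating x for fixed t."""
    x = trust_region_ls(A, b, np.sqrt(t))
    return np.linalg.norm(A @ x - b)**2 / (1.0 + t) + rho * t

# Outer step: single-variable minimization of G over a closed interval.
rng = np.random.default_rng(1)
A, b, rho = rng.standard_normal((30, 8)), rng.standard_normal(30), 0.1
res = minimize_scalar(G, bounds=(1e-8, 10.0), args=(A, b, rho), method="bounded")
```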
A Fast Algorithm for Solving Regularized Total Least Squares Problems
Electronic Transactions on Numerical Analysis (ETNA)
The total least squares (TLS) method is a successful approach for linear problems if both the system matrix and the right-hand side are contaminated by some noise. For ill-posed TLS problems, Renaut and Guo [SIAM J. Matrix Anal. Appl., 26 (2005), pp. 457-476] suggested an iterative method which is based on a sequence of linear eigenvalue problems. Here we analyze this method carefully, and we accelerate it substantially by solving the linear eigenproblems by the Nonlinear Arnoldi method (which reuses information from the previous iteration steps considerably) and by a modified root-finding method based on rational interpolation.
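The fixed-point character of such an iteration, and the most basic form of information reuse between successive eigenproblems, can be sketched as follows. The matrix family B(theta) = M + theta*N and the update rule for theta are illustrative placeholders rather than the Renaut-Guo matrices, and warm-starting the eigensolver with the previous eigenvector is only a crude stand-in for the subspace recycling that the Nonlinear Arnoldi method performs.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Schematic fixed-point loop: each sweep solves a *linear* eigenproblem
# B(theta) v = lambda v for the smallest eigenpair, then updates theta from v.
rng = np.random.default_rng(2)
M = rng.standard_normal((200, 200)); M = M @ M.T       # symmetric test matrix
N = np.diag(np.linspace(0.0, 1.0, 200))                # placeholder coupling term
theta, v0 = 1.0, None
for sweep in range(50):
    lam, V = eigsh(M + theta * N, k=1, which="SA", v0=v0)
    v = V[:, 0]
    v0 = v                                             # reuse the eigenvector as a warm start
    theta_new = 1.0 + float(v @ N @ v)                 # toy update, not the RTLS formula
    if abs(theta_new - theta) < 1e-10:
        break
    theta = theta_new
```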
Computational Experiments on the Tikhonov Regularization of the Total Least Squares Problem
In this paper we consider finding meaningful solutions of ill-conditioned overdetermined linear systems Ax ≈ b, where A and b are both contaminated by noise. Problems of this kind frequently arise in the discretization of certain integral equations. One of the most popular approaches to finding meaningful solutions of such systems is the so-called total least squares problem.
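The classical (unregularized) total least squares solution that this approach builds on has a closed form via the SVD of the augmented matrix [A b]; a minimal sketch, assuming the generic case where the last component of the relevant singular vector is nonzero.

```python
import numpy as np

def tls(A, b):
    """Classical total least squares solution of A x ~ b, read off from the
    right singular vector of [A b] belonging to the smallest singular value."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                   # singular vector for the smallest singular value
    return -v[:n] / v[n]         # assumes v[n] != 0 (the generic case)
```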
Mathematical Modelling and Analysis, 2008
The total least squares (TLS) method is a successful approach for linear problems if both the matrix and the right-hand side are contaminated by some noise. In a recent paper, Sima, Van Huffel, and Golub suggested an iterative method for solving regularized TLS problems, where in each iteration step a quadratic eigenproblem has to be solved. In this paper we prove its global convergence, and we present an efficient implementation using an iterative projection method with thick updates.
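Each iteration step of the Sima-Van Huffel-Golub scheme solves a quadratic eigenproblem of the form (lam^2 M + lam C + K) x = 0. The standard direct way to solve one is to linearize it into a generalized eigenproblem of twice the size; the companion linearization below is the generic textbook construction, not the RTLSQEP-specific solver analyzed in the paper.

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eig(M, C, K):
    """Solve (lam^2 M + lam C + K) x = 0 via the first companion linearization:
    [[0, I], [-K, -C]] z = lam [[I, 0], [0, M]] z with z = [x; lam x]."""
    n = K.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A_lin = np.block([[Z, I], [-K, -C]])
    B_lin = np.block([[I, Z], [Z, M]])
    lam, Zvec = eig(A_lin, B_lin)
    return lam, Zvec[:n, :]      # eigenvalues and the x-parts of the eigenvectors
```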
Optimization and Regularization of Nonlinear Least Squares Problems
An important branch of scientific computing is parameter estimation. Given a mathematical model and observation data, parameters are sought to explain physical properties as well as possible. In order to find these parameters an optimization problem is often formed, frequently a nonlinear least squares problem. This thesis mainly contributes to the development of tools, techniques, and theories for nonlinear least squares problems that lack a well-defined solution. Specifically, the intention is to generalize regularization methods for linear inverse problems so that they also handle nonlinear inverse problems. The investigation started by considering an exactly rank-deficient problem, i.e., a problem with a dependency among the parameters. It turns out that such a problem can be formulated as a nonlinear minimum-norm problem. To solve this optimization problem two regularization methods are proposed: a Tikhonov-regularized Gauss-Newton method and a minimum-norm Gauss-Newton method. It is shown...
Regularization Methods For Nonlinear Least Squares Problems. Part Ii: Almost Rank-Deficiency
A nonlinear least squares problem is almost rank-deficient at a local minimum if there is a large gap in the singular values of the Jacobian and at least one singular value is small. We analyze the almost rank-deficient problem, giving the relevant KKT conditions, and propose two methods based on truncation and Tikhonov regularization. Our approach is based on techniques from linear algebra and nonlinear optimization. This enables us to develop a local and asymptotic convergence theory, based on second-order information, for Gauss-Newton-like methods applied to the nonlinear truncated and Tikhonov-regularized problems with known regularization parameter. Finally, we test the methods on artificial problems where we are able to choose the singular values and the nonlinearity of the problem, making it possible to show the different features of the problem and the methods. The method based on Tikhonov regularization is more generally applicable to ill-posed problems having no gap in the singular values.
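The basic building block that both of these abstracts refine is a Gauss-Newton step damped by Tikhonov regularization: each step solves (J^T J + mu I) dx = -J^T r instead of the plain normal equations, so that directions in the near-null space of the Jacobian are controlled. A minimal sketch with a fixed regularization parameter mu follows; the residual function is a toy rank-deficient example, and the thesis's methods add truncation and second-order analysis on top of this basic step.

```python
import numpy as np

def gauss_newton_tikhonov(r, J, x0, mu, iters=50, tol=1e-10):
    """Gauss-Newton with Tikhonov regularization: each step solves
    (J^T J + mu I) dx = -J^T r, damping near-null-space directions of J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        dx = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ rx)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy rank-deficient residual: only the sum x[0] + x[1] is determined by the data.
r = lambda x: np.array([x[0] + x[1] - 1.0, 2.0 * (x[0] + x[1]) - 2.0])
J = lambda x: np.array([[1.0, 1.0], [2.0, 2.0]])
x_hat = gauss_newton_tikhonov(r, J, x0=[5.0, -3.0], mu=1e-6)
```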
Performance of first- and second-order methods for ℓ1-regularized least squares problems
2016
We study the performance of first- and second-order optimization methods for ℓ1-regularized sparse least-squares problems as the conditioning of the problem changes and the dimensions of the problem increase up to one trillion. A rigorously defined generator is presented which allows control of the dimensions, the conditioning, and the sparsity of the problem. The generator has very low memory requirements and scales well with the dimensions of the problem. Keywords: ℓ1-regularized least squares · first-order methods · second-order methods · sparse least squares instance generator · ill-conditioned problems
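A representative first-order method for this problem class is proximal gradient descent (ISTA): a gradient step on the smooth least-squares term followed by soft-thresholding. The minimal dense sketch below fixes the step size from the spectral norm of A; the trillion-variable experiments in the paper naturally require far more scalable machinery.

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """ISTA for min 0.5*||A x - b||^2 + lam*||x||_1:
    gradient step on the quadratic term, then soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of lam*||.||_1
    return x
```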