
MATRIX COMPUTATIONS: ASSESSING THE INFLUENCE OF COEFFICIENT ACCURACY, ETC. ON MATRIX-SOLUTION ACCURACY.pdf

Matrices occupy a central role in most physical modeling. As the size of matrices being solved increases, it is becoming more important to quantify what factors influence the accuracy of the final result. The condition number (CN) of a matrix is an important controlling factor in limiting the solution accuracy, the effect of which can be circumvented by increasing the precision of the computations. This typically involves going from single to double precision, e.g., going from 64-bit to 128-bit word size. But the number of unknowns, the accuracy to which the original matrix coefficients are obtained, and the accuracy to which the right-hand-side is known also affect the final result.
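The limiting effect of the condition number described above is easy to reproduce. The following sketch (my own illustration in NumPy, not code from the paper) solves an ill-conditioned Hilbert system in 64-bit precision and counts how many digits survive:

```python
import numpy as np

def hilbert(n):
    # Hilbert matrix H[i, j] = 1 / (i + j + 1): a classic ill-conditioned test case
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

n = 10
H = hilbert(n)
x_true = np.ones(n)              # pick the exact solution in advance
b = H @ x_true                   # build a consistent right-hand side
x = np.linalg.solve(H, b)        # solve in double (64-bit) precision

cn_digits = np.log10(np.linalg.cond(H))                      # CN, in digits
sa_digits = -np.log10(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
print(f"CN ~ {cn_digits:.1f} digits, SA ~ {sa_digits:.1f} digits")
```

For n = 10 the condition number is worth about 13 digits, so with roughly 16 digits of working precision only a few correct digits remain in the computed solution.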

SVDdiagnostic, a program to diagnose numerical conditioning of Rietveld refinements

Journal of Applied Crystallography, 2006

Singular value decomposition (SVD) of the matrix of normal equations is used here both passively, to assess numerical stability, and actively, to troubleshoot problem refinements, singular or not. Such systems can then either be cured by rank reduction or solved with arbitrary-precision arithmetic carrying a number of digits known to be sufficient. SVD analysis provides objective information about the required rank reduction or number of digits. Pre-conditioning of the normal matrix is seen to decrease its condition number by many orders of magnitude in actual cases, illustrating its great practical usefulness. The methods and tools developed here have general applicability for diagnosing problems with least squares, in particular ill-conditioned Rietveld refinements. Crystal-chemical and standard refinements described in the work by Mercier et al. [J.
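The passive/active use of SVD the abstract describes can be illustrated with a minimal NumPy sketch of my own (not code from SVDdiagnostic): the singular-value spectrum of the normal matrix exposes the conditioning, and truncating the small singular values performs the rank reduction.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fabricate a normal matrix with one nearly dependent column,
# mimicking a refinement with a redundant parameter.
A = rng.standard_normal((8, 8))
A[:, 7] = A[:, 0] + 1e-12 * rng.standard_normal(8)
N = A.T @ A                       # normal-equations matrix
b = N @ np.ones(8)

# Passive diagnosis: the singular-value spectrum reveals the conditioning.
U, s, Vt = np.linalg.svd(N)
print("condition number:", s[0] / s[-1])

# Active cure by rank reduction: drop singular values below a tolerance,
# then back-substitute in the retained subspace only.
keep = s > 1e-10 * s[0]
x = Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
residual = np.linalg.norm(N @ x - b)
```

Dropping the near-zero singular values before back-substituting is the rank-reduction cure: the redundant direction no longer amplifies rounding noise, yet the residual stays small.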

Numerical aspects of computing the Moore-Penrose inverse of full column rank matrices

BIT Numerical Mathematics, 2012

This paper presents a comparison of certain direct algorithms for computing the Moore-Penrose inverse, for matrices of full column rank, from the point of view of numerical stability. It is proved that the algorithm using Householder QR decomposition, implemented in floating point arithmetic, is forward stable but only conditionally mixed forward-backward stable. A similar result holds also for the Classical Gram-Schmidt algorithm with reorthogonalization (CGS2). This algorithm was developed and analyzed by Abdelmalek (BIT, 11(4):354-367, 1971) and its detailed error analysis was given in Giraud et al. (Numer. Math. 101(1):87-100, 2005).
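The QR route the paper analyzes can be sketched as follows (my own NumPy illustration): for a full-column-rank matrix with A = QR, the Moore-Penrose inverse is A+ = R^{-1} Q^T.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))   # tall matrix, full column rank (almost surely)

# Householder QR: A = Q R with Q (6x3) having orthonormal columns
# and R (3x3) upper triangular.
Q, R = np.linalg.qr(A)

# For full column rank, A+ = R^{-1} Q^T; solve the triangular system
# rather than forming R^{-1} explicitly.
A_pinv = np.linalg.solve(R, Q.T)
```

NumPy's `qr` wraps LAPACK's Householder-based factorization, so this mirrors the algorithm whose stability the paper analyzes; the result can be checked against the SVD-based `np.linalg.pinv`.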

Matrix Computation-A COMPUTATIONAL STUDY OF THE EFFECT OF MATRIX SIZE AND TYPE, CONDITION NUMBER, COEFFICIENT ACCURACY AND COMPUTATION PRECISION ON MATRIX-SOLUTION ACCURACY

Matrices occupy a central role in most physical modeling. As the size of matrices being solved increases, it is becoming more important to quantify what factors influence the accuracy of the final result. The condition number (CN) of a matrix is an important controlling factor in limiting the solution accuracy, the effect of which can be circumvented by increasing the precision of the computations. This typically involves going from single to double precision, e.g., going from 64-bit to 128-bit word size. But the number of unknowns, the accuracy to which the original matrix coefficients are obtained, and the accuracy to which the right-hand-side is known also affect the final result. This discussion reports results from some ongoing computer experiments conducted with the goal of acquiring insight into such questions using a compiled BASIC language (Future Basic) that permits varying the computation precision up to 240 digits (or more) through a simple configuration command. By also varying the matrix size, the accuracy to which the original matrix coefficients are computed, and the matrix type and condition number, some quantitative guidelines might be developed concerning the influence of these factors on solution accuracy. Consistent with previous findings is the result that, with the solution accuracy SA, the computation precision P, and CN all expressed in digits, their relationship can be expressed as P − CN ≤ SA, where CN is one of the estimates commonly used, such as the ratio of maximum-to-minimum singular values, and CA is the coefficient accuracy. Thus, even for a condition number of 10^100, a solution accurate to X digits can be obtained if P and CA are increased to ~100 + X digits. When the coefficient accuracy of the original matrix is less than P, the above result becomes, for the matrices studied, CA − CN ≤ SA ≤ CA; i.e., coefficient inaccuracy can counter any benefit otherwise derived from increasing the computation precision, and vice versa if CA is not commensurately increased.
The results obtained thus far will be reviewed and their implications for computational electromagnetics (CEM) will be discussed.
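The claim that raising the computation precision recovers accuracy lost to the condition number can be checked with unbounded-precision rational arithmetic (a stdlib-only sketch of my own, standing in for the paper's variable-precision BASIC experiments; `solve_exact` is my own helper):

```python
from fractions import Fraction
import numpy as np

def solve_exact(A, b):
    """Gaussian elimination with exact rational arithmetic (unbounded precision)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        # partial pivoting
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

n = 11
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
b = [sum(row) for row in H]              # exact RHS for x_true = all ones
x_exact = solve_exact(H, b)              # unlimited precision: exact answer
x_float = np.linalg.solve(np.array(H, dtype=float), np.array(b, dtype=float))
```

Exact rational arithmetic plays the role of "P large enough": with unlimited precision and exactly known coefficients, the condition number costs nothing, while the double-precision solve of the same system loses roughly CN digits.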

Matrix Computation-ASSESSING MATRIX-SOLUTION ACCURACY.pdf

Matrices occupy a central role in most physical modeling. As the size of matrices being solved increases, it is becoming more important to quantify what factors influence the accuracy of the final result. The condition number (CN) of a matrix is an important controlling factor in limiting the solution accuracy, the effect of which can be circumvented by increasing the precision of the computations. This typically involves going from single to double precision, e.g., going from 64-bit to 128-bit word size. But the number of unknowns, the accuracy to which the original matrix coefficients are obtained, and the accuracy to which the right-hand-side is known also affect the final result.

Matrix Computations-A COMPARISON OF SOLUTION ACCURACY RESULTING FROM FACTORING AND INVERTING ILL-CONDITIONED MATRICES

0. ABSTRACT
The residual vector R] = [Z]A] − B], where [Z] is a coefficient matrix, A] is a vector of unknowns and B] is a right-hand-side vector, is often used as a measure of solution error when solving linear systems of the kind that arise in computational electromagnetics. Residual errors are of particular interest in iterative solutions, where they are instrumental in determining the next trial answer in a sequence of iterates. As demonstrated here, when a matrix is ill-conditioned, the residual may imply the solution is more accurate than is actually the case.

1. MATRIX CONDITION NUMBER AND SOLUTION ACCURACY
In previous related work [Miller (1995)] a study was described that investigated the behavior of ill-conditioned matrices with the goal of numerically characterizing their information content. One numerical result from that study was that the solution accuracy (SA) is related to the coefficient accuracy (CA) and condition number (CN), all expressed in digits, approximately as SA ≤ CA − CN. This conclusion was based on using, as one measure of SA, a comparison of [Z][Y] with [I], where [Z] is a matrix under study, [Y] is its computed inverse and [I] is the identity matrix. CNs can generally be expected to grow with increasing matrix size, even for one as benign as having all coefficients be random numbers. For some matrices the CN can grow much faster; for the Hilbert matrix, one of those studied, it is of order 10^(1.5N) for a matrix of size N×N. A large matrix CN was encountered in later work that involved model-based parameter estimation (MBPE) for adaptive sampling and estimation of a transfer function [Miller (1996)] using rational functions as fitting models (FM). For example, when using simple LU decomposition to solve even a low-order system, say one having fewer than 20 coefficients, the CN might exceed 10^6. (Note that this problem can be circumvented by using a more robust solution, such as singular-value decomposition, but that is also left for a later discussion.) An interesting aspect of these large CNs was that the match of the FM with the original data, when computed using coefficients obtained from [Y]xB], with B] the right-hand-side vector, could be much less accurate than when using coefficients instead obtained from back substitution.
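The point about misleading residuals is easy to demonstrate (my own NumPy sketch, using the Hilbert matrix mentioned above rather than an MBPE system):

```python
import numpy as np

n = 12
i, j = np.indices((n, n))
Z = 1.0 / (i + j + 1)                  # Hilbert matrix: CN near 10^16 at this size
a_true = np.ones(n)
b = Z @ a_true

a = np.linalg.solve(Z, b)
residual = np.linalg.norm(Z @ a - b)   # tiny: suggests an excellent solution
error = np.linalg.norm(a - a_true)     # orders of magnitude larger
print(f"residual = {residual:.1e}, actual error = {error:.1e}")
```

LU solvers are backward stable, so the residual sits near machine precision regardless of conditioning; the forward error, by contrast, scales with the condition number, which is exactly the mismatch the abstract warns about.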

S-APS 1995-A COMPUTATIONAL STUDY OF THE EFFECT OF MATRIX SIZE AND TYPE, ETC. ON MATRIX-SOLUTION ACCURACY.pdf

Matrices occupy a central role in most physical modeling. As the size of matrices being solved increases, it is becoming more important to quantify what factors influence the accuracy of the final result. The condition number (CN) of a matrix is an important controlling factor in limiting the solution accuracy, the effect of which can be circumvented by increasing the precision of the computations. This typically involves going from single to double precision, e.g., going from 64-bit to 128-bit word size. But the number of unknowns, the accuracy to which the original matrix coefficients are obtained, and the accuracy to which the right-hand-side is known also affect the final result.

A note on the stability of Toeplitz matrix inversion formulas

Applied Mathematics Letters, 2004

In this paper, we consider the stability of the algorithms emerging from Toeplitz matrix inversion formulas. We show that if the Toeplitz matrix is nonsingular and well-conditioned, then they are numerically forward stable.

On Frobenius normwise condition numbers for Moore–Penrose inverse and linear least-squares problems

Numerical Linear Algebra with Applications, 2007

Condition numbers play an important role in numerical analysis. Classical condition numbers are normwise: they measure the size of both input perturbations and output errors using norms. In this paper, we give explicit, computable expressions depending on the data, for the normwise condition numbers for the computation of the Moore-Penrose inverse as well as for the solutions of linear least-squares problems with full-column rank.
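The 2-norm flavor of these quantities can be computed directly (a NumPy sketch of my own; the paper's expressions are Frobenius-normwise, but the spectral version below conveys the idea): for full column rank A, ||A+||_2 = 1/sigma_min, so kappa_2(A) = sigma_max / sigma_min.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5))      # full column rank (almost surely)

# Spectral condition number kappa_2(A) = ||A||_2 * ||A+||_2 = sigma_max / sigma_min
s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]
```

`np.linalg.cond` computes the same quantity from the SVD by default; the paper's contribution is explicit, computable normwise expressions that also cover perturbations of the least-squares right-hand side.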