Jesse Barlow - Profile on Academia.edu
Papers by Jesse Barlow
Modifying rank-revealing decompositions
Modified Gram-Schmidt-based downdating technique for ULV decompositions with applications to recursive TLS problems
Proceedings of SPIE, Nov 2, 1999
The ULV decomposition (ULVD) is an important member of a class of rank-revealing two-sided orthogonal decompositions used to approximate the singular value decomposition (SVD). The ULVD can be updated and downdated much faster than the SVD, hence its utility in the solution of recursive total least squares (TLS) problems. However, the robust implementation of the ULVD after the addition and deletion of rows (called updating and downdating, respectively) is not altogether straightforward. When updating or downdating the ULVD, the accurate computation of the subspaces necessary to solve the TLS problem is of great importance. In this paper, algorithms are given to compute simple parameters that can often show when good subspaces have been computed.
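As a concrete reference point (not the paper's algorithm), the sketch below builds a ULV-type factorization A = U L Vᵀ from the SVD with NumPy and shows how a gap in the diagonal of L reveals numerical rank; a practical ULVD would instead be maintained by the O(n²) updating and downdating procedures the abstract describes. Matrix sizes and scalings are assumptions for the demo.

```python
import numpy as np

def ulv_from_svd(A):
    """Reference construction of a ULV-type decomposition A = U @ L @ V.T
    from the SVD. Illustrative only: the point of the ULVD is that it can be
    maintained by O(n^2) rotation-based updates instead of recomputing this."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    L = np.diag(s)                 # diagonal, hence trivially lower triangular
    return U, L, Vt.T

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5)) @ np.diag([1.0, 1.0, 1.0, 1e-8, 1e-9])
U, L, V = ulv_from_svd(A)
print(np.allclose(U @ L @ V.T, A))   # exact factorization
print(np.diag(L))                    # large gap after the third value reveals rank 3
```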
On Roundoff Error Distributions in Floating Point and Logarithmic Arithmetic
Computing, Dec 1, 1985
Probabilistic models of floating point and logarithmic arithmetic are constructed using assumptions with both theoretical and empirical justification. The justification of these assumptions resolves open questions in Hamming (1970) and Bustoz et al. (1979). These models are applied to errors from sums and inner products. A comparison is made between the error analysis properties of floating point and logarithmic computers. We conclude that the logarithmic computer has smaller error confidence intervals for roundoff errors than a floating point computer with the same computer word size and approximately the same number range.
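The kind of empirical evidence behind such probabilistic roundoff models can be reproduced with a small Monte Carlo experiment. The sketch below is an assumption-laden illustration, not the paper's methodology: it accumulates sums in single precision, treats the double-precision sum as the reference, and reports an empirical confidence interval for the relative accumulated error.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1000, 2000
rel_err = np.empty(trials)
for t in range(trials):
    x = rng.uniform(0.0, 1.0, n)
    s = np.float32(0.0)
    for xi in x.astype(np.float32):     # recursive summation in single precision
        s = np.float32(s + xi)
    rel_err[t] = (float(s) - x.sum()) / x.sum()   # double-precision sum as reference

# Empirical 95% confidence interval for the accumulated relative roundoff error:
print(np.percentile(rel_err, [2.5, 97.5]))
```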
Image restoration and reconstruction: beyond least squares
Systems of linear equations with ill-conditioned coefficient matrices arise very often in signal and image processing. The most commonly used method for solving such systems is the Regularized Least Squares method, in which the unknown parameters are computed by minimizing a cost function that consists of quadratic data-fitting and regularization terms. We consider techniques other than Regularized Least Squares for solving such systems. Our focus is on image processing problems.

One implicit assumption behind the least squares solution is that exact information about the coefficient matrix is available. When error exists in both the right-hand side and the coefficient matrix, the Total Least Squares method gives better results than the ordinary Least Squares method. We present algorithms for the Regularized Total Least Squares (RTLS) problem.

In many image and signal processing problems, the coefficient matrices have useful structure. For example, in the problems of image restoration and high-resolution image reconstruction, the resulting blurring matrices have a Block Toeplitz Toeplitz Block (BTTB)-like structure. In the problem of color image restoration, the blurring matrix consists of BTTB blocks. However, traditional Total Least Squares methods do not preserve the structure of the coefficient matrix, so it is more appropriate to apply Structured Total Least Squares (STLS). This thesis presents Regularized Structured Total Least Squares (RSTLS) algorithms for the problems of high-resolution image reconstruction and color image restoration. The major cost at each iteration of our RSTLS algorithms is in solving large, sparse, structured linear least squares systems. We propose to use the preconditioned CGLS or LSQR method to solve these systems. We show that Discrete Cosine Transform and Fast Fourier Transform based preconditioners are very effective for these systems.

Other assumptions behind the regularized least squares solution include Gaussian prior distributions for the unknown parameters and the additive noise. When these assumptions are violated, we consider the Least Mixed Norm (LMN) or the Least Absolute Deviation (LAD) solution. For the LAD solution, both the data-fitting and the regularization terms are in the ℓ1 norm. For the LMN solution, the regularization term is in the ℓ1 norm but the data-fitting term is in the ℓ2 norm. Both solutions are formulated as solutions to convex programming problems and solved by interior point methods. At each iteration of the interior point method, a structured linear system must be solved. The preconditioned conjugate gradient method with factorized sparse inverse preconditioners is employed to solve these structured inner systems.
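As a minimal illustration of the inner solver choice mentioned above (Tikhonov regularization via LSQR, not the thesis's BTTB-structured preconditioners), the sketch below solves a regularized least squares problem with SciPy's lsqr and its damp parameter. Problem sizes and noise levels are made up for the demo.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
m, n = 200, 100
A = rng.standard_normal((m, n))
A[:, -1] = A[:, -2] + 1e-8 * rng.standard_normal(m)   # nearly dependent columns
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-3 * rng.standard_normal(m)

lam = 1e-2
# min ||A x - b||^2 + lam^2 ||x||^2, solved iteratively via LSQR's damping
x_reg = lsqr(A, b, damp=lam)[0]
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_reg - x_true), np.linalg.norm(x_ls - x_true))
```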
On the Distribution of Accumulated Roundoff Error in Floating Point Arithmetic
BIT Numerical Mathematics, Jun 1, 1980
Computing accurate eigensystems of scaled diagonally dominant matrices: LAPACK working note No. 7
When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic, we generally expect to compute them only with an error bound proportional to the product of the machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices, and all symmetric positive definite matrices which can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately than for general matrices, and may be computed more accurately as well. This work extends results of Kahan and Demmel for bidiagonal and tridiagonal matrices.
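The high-relative-accuracy claim for bidiagonal matrices can be checked numerically: small relative perturbations of the entries of a graded bidiagonal matrix move even its tiniest singular values only by comparably small relative amounts. The sketch below is a perturbation experiment under assumed sizes and grading, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
d = 10.0 ** -np.arange(n)            # graded diagonal: 1, 1e-1, ..., 1e-5
e = 10.0 ** -(np.arange(n - 1) + 1)  # graded superdiagonal
B = np.diag(d) + np.diag(e, 1)

eps = 1e-10                          # relative perturbation of each entry
Bp = np.diag(d * (1 + eps * rng.standard_normal(n))) + \
     np.diag(e * (1 + eps * rng.standard_normal(n - 1)), 1)

s  = np.linalg.svd(B,  compute_uv=False)
sp = np.linalg.svd(Bp, compute_uv=False)
print(np.max(np.abs(sp - s) / s))    # O(eps), even for the ~1e-5 singular value
```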
A stable algorithm for downdating the ULV decomposition
This chapter consists of three sections. The first focuses on a method for downdating the ULV decomposition, along with its main results. The second section presents an error analysis showing the favorable stability properties of this downdating algorithm and the procedures carried out to develop it. The third section describes the ULV decomposition algorithm in detail. The ULV method was described by Stewart, who also gives a method for updating it. It is a particular case of what Lawson and Hanson called HRK decompositions, but with the difference that blocks that are exactly zero are separated out. The downdating algorithm presented here, coupled with Stewart's updating algorithm, shows that the ULV decomposition can be updated and downdated in O(n²) flops in a manner that preserves its structure.
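For a readily runnable analogue of O(n²) downdating (for QR rather than ULV; SciPy does not ship a ULVD), the sketch below removes a row from an existing QR factorization with scipy.linalg.qr_delete instead of refactoring from scratch. The data is arbitrary.

```python
import numpy as np
from scipy.linalg import qr, qr_delete

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 4))
Q, R = qr(A)                        # full QR of A

# Downdate: remove row 3 in O(n^2) work instead of recomputing the factorization.
Q1, R1 = qr_delete(Q, R, 3, which='row')
A1 = np.delete(A, 3, axis=0)
print(np.allclose(Q1 @ R1, A1))     # True
```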
Probabilistic error analysis of floating point and CRD arithmetics
This paper discusses the longstanding, unsolved problem of the probability distribution of roundoff errors in computer arithmetics. Probabilistic models of floating point and logarithmic arithmetic are constructed using assumptions with both theoretical and empirical justification. To justify these assumptions we resolve open questions in Hamming (1970) and Bustoz et al. (1979). We also develop a probabilistic roundoff error model for fixed point arithmetic as discussed in Bareiss and Barlow (1980). The models for floating point and logarithmic arithmetic are applied to the error from sums, extended products, polynomials in one variable, inner products, matrix computations, linear equation solving procedures both direct and iterative, and linear multistep methods for the solution of ordinary differential equations. A comparison is made of the error analysis properties of floating point and logarithmic computers. We conclude that the logarithmic computer has smaller error confidence intervals for roundoff errors than a floating point computer with the same word size and approximately the same number range.
Regularization of Inverse Scattering Problem in Ultrasound Tomography
Modifying Two-Sided Orthogonal Decompositions: Algorithms, Implementation, and Applications
In this thesis we propose several algorithms for rank-one updates and downdates of two-sided orthogonal decompositions with strong stability properties and efficient implementations on high-performance computers. We seek algorithms which require only O(n²) operations per update or downdate, unlike recomputing the two-sided orthogonal decomposition (TSOD) in O(n³). We also desire highly regular data movement inherent in these algorithms in order to implement them efficiently on distributed-memory MIMD multiprocessors. The algorithms are based upon 'chasing' strategies for updating and downdating procedures for orthogonal decompositions.
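The rank-one update side of this 'chasing' theme has a stock analogue in SciPy's QR modification routines. The sketch below applies a rank-one update to a QR factorization in O(n²) with scipy.linalg.qr_update; this uses QR rather than a TSOD, as an available stand-in for the rotation-based updating idea.

```python
import numpy as np
from scipy.linalg import qr, qr_update

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 8))
Q, R = qr(A)

u, v = rng.standard_normal(8), rng.standard_normal(8)
Q1, R1 = qr_update(Q, R, u, v)      # O(n^2) rotation-based update for A + u v^T
print(np.allclose(Q1 @ R1, A + np.outer(u, v)))   # True
```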
Advanced Signal Processing Algorithms, Architectures, and Implementations X (Proceedings Volume)
Research is being conducted on high power microwave devices (e.g., gyrotrons) at the University of Michigan. Of utmost concern is the phenomenon of pulse shortening, that is, the duration of the microwave pulse is shorter than the duration of the cathode voltage. For years researchers have applied the Fourier transform to the heterodyned microwave signals. The problem with this technique is that a signal with multiple frequency components has the same spectrum as that of a signal with frequency components emitted at different times. Time-frequency ...
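The ambiguity described here is easy to reproduce: two signals, one with two tones present simultaneously and one with the same tones emitted back to back, have magnitude spectra peaking at the same frequencies, while a spectrogram separates them in time. The sketch below uses scipy.signal.spectrogram with made-up tone frequencies.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Two tones at once vs. the same two tones emitted one after the other:
x1 = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 250 * t)
x2 = np.where(t < 1.0, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 250 * t))

# Both Fourier magnitude spectra peak near 100 Hz and 250 Hz; timing is invisible.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for x in (x1, x2):
    S = np.abs(np.fft.rfft(x))
    print(freqs[S > 0.5 * S.max()])        # peaks near 100 and 250 Hz in both cases

# The spectrogram separates the cases: x2's dominant tone changes halfway through.
f, tt, Sxx = spectrogram(x2, fs)
print(f[Sxx[:, 0].argmax()], f[Sxx[:, -1].argmax()])   # ~100 Hz, then ~250 Hz
```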
We consider the problem of solving the homogeneous system of linear equations
The standard perturbation theory for linear equations states that nearly uncoupled Markov chains (NUMCs) are very sensitive to small changes in their elements. Indeed, some algorithms, such as standard Gaussian elimination, will obtain poor results for such problems. A structured perturbation theory is given that shows that NUMCs usually lead to well-conditioned problems. It is shown that, with appropriate stopping criteria, iterative aggregation/disaggregation algorithms will achieve these structured error bounds. A variant of Gaussian elimination due to Grassmann, Taksar, and Heyman was recently shown by O'Cinneide
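The Grassmann-Taksar-Heyman (GTH) variant of Gaussian elimination referenced above is short enough to sketch: it replaces each diagonal pivot 1 - p_kk by the row sum of the off-diagonal entries, so no subtraction (and hence no cancellation) ever occurs. Below is a standard formulation applied to a nearly uncoupled chain; the example chain is an assumption for the demo.

```python
import numpy as np

def gth_stationary(P):
    """Stationary distribution of an irreducible Markov chain via the
    Grassmann--Taksar--Heyman variant of Gaussian elimination. No
    subtractions occur, which is why the method can attain the entrywise
    (structured) accuracy discussed above."""
    P = np.array(P, dtype=float)
    N = P.shape[0]
    for k in range(N - 1, 0, -1):
        s = P[k, :k].sum()               # replaces 1 - P[k, k]: no cancellation
        P[:k, k] /= s
        P[:k, :k] += np.outer(P[:k, k], P[k, :k])   # censor out state k
    pi = np.zeros(N)
    pi[0] = 1.0
    for k in range(1, N):                # back-substitution
        pi[k] = pi[:k] @ P[:k, k]
    return pi / pi.sum()

# A nearly uncoupled chain: two strongly coupled pairs with 1e-10 coupling.
e = 1e-10
P = np.array([[0.5 - e, 0.5,     e,       0.0],
              [0.5,     0.5 - e, 0.0,     e],
              [e,       0.0,     0.3 - e, 0.7],
              [0.0,     e,       0.7,     0.3 - e]])
print(gth_stationary(P))   # doubly stochastic example, so the answer is uniform
```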
Accurate eigenvalue decomposition of arrowhead matrices, rank-one modifications of diagonal matrices and applications
We present a new algorithm for solving an eigenvalue problem for a real symmetric arrowhead matrix. The algorithm computes all eigenvalues and all components of the corresponding eigenvectors with high relative accuracy in O(n²) operations. The algorithm is based on a shift-and-invert approach. Double precision is eventually needed to compute only one element of the inverse of the shifted matrix. Each eigenvalue and the corresponding eigenvector can be computed separately, which makes the algorithm adaptable for parallel computing. Our results extend to Hermitian arrowhead matrices, real symmetric diagonal-plus-rank-one matrices and the singular value decomposition of real triangular arrowhead matrices.
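For context, a symmetric arrowhead matrix is a diagonal matrix D bordered by a vector z and a corner scalar alpha, and its eigenvalues interlace the entries of D. The sketch below just assembles such a matrix and verifies interlacing with NumPy's dense solver; it is not the paper's shift-and-invert algorithm.

```python
import numpy as np

D = np.array([1.0, 2.0, 3.0, 4.0])      # sorted diagonal of the 'shaft'
z = np.array([0.1, 0.2, 0.3, 0.4])      # border vector
alpha = 5.0                             # corner element
n = D.size + 1
A = np.zeros((n, n))
A[:-1, :-1] = np.diag(D)
A[:-1, -1] = z
A[-1, :-1] = z
A[-1, -1] = alpha

lam = np.linalg.eigvalsh(A)             # ascending eigenvalues
print(lam)
# Interlacing: lam[0] <= D[0] <= lam[1] <= ... <= D[n-2] <= lam[n-1]
print(np.all(lam[:-1] <= D) and np.all(D <= lam[1:]))
```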
Block Gram-Schmidt downdating
Electronic Transactions on Numerical Analysis, 2014
Deflation for the Symmetric Arrowhead and Diagonal-Plus-Rank-One Eigenvalue Problems
SIAM Journal on Matrix Analysis and Applications
Linear Algebra and its Applications, 2000
We present a Weyl-type relative bound for the eigenvalues of Hermitian perturbations A + E of (not necessarily definite) Hermitian matrices A. This bound, given as a function of the quantity η = ‖A^{-1/2} E A^{-1/2}‖₂, which was already known in the definite case, is shown to be valid in the indefinite case as well. We also extend to the indefinite case relative eigenvector bounds which depend on the same quantity η. As a consequence, new relative perturbation bounds for singular values and vectors are also obtained. Using matrix differential calculus techniques, we obtain for eigenvalues a sharper, first-order bound involving the matrix logarithm function, which is smaller than η not only for small E, as expected, but for any perturbation.
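In the definite case the bound can be checked directly in a few lines: for symmetric positive definite A and a symmetric perturbation E, every eigenvalue satisfies |λ̃ᵢ - λᵢ| ≤ η λᵢ with η = ‖A^{-1/2} E A^{-1/2}‖₂. The sketch below performs that check on random data; sizes and scales are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # symmetric positive definite
G = rng.standard_normal((n, n))
E = 1e-4 * (G + G.T) / 2                 # small symmetric perturbation

w, V = np.linalg.eigh(A)
A_inv_half = V @ np.diag(w ** -0.5) @ V.T
eta = np.linalg.norm(A_inv_half @ E @ A_inv_half, 2)   # spectral norm

lam, lamp = np.linalg.eigvalsh(A), np.linalg.eigvalsh(A + E)
print(eta, np.max(np.abs(lamp - lam) / lam))   # second value never exceeds the first
```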
Solving the ultrasound inverse scattering problem of inhomogeneous media using different approaches of total least squares algorithms
Medical Imaging 2018: Ultrasonic Imaging and Tomography
The distorted Born iterative method (DBI) is used to solve the inverse scattering problem in ultrasound tomography, with the objective of determining a scattering function that is related to the acoustical properties of the region of interest (ROI) from the disturbed waves measured by transducers outside the ROI. Since the method is iterative, we use the Born approximation for the first estimate of the scattering function. The main problem with the DBI is that the linear system of the inverse scattering equations is ill-posed. To deal with that, we use two different algorithms and compare the relative errors and execution times. The first is Truncated Total Least Squares (TTLS). The second is the Regularized Total Least Squares method (RTLS-Newton), where the regularization parameters are found by solving a nonlinear system with Newton's method. We simulated the data for the DBI method in a way that leads to an overdetermined system. The advantage of RTLS-Newton is that the computation of the singular value decomposition of a matrix is avoided, so it is faster than TTLS, while it still solves a similar minimization problem. For the exact scattering function we used the Modified Shepp-Logan phantom. For finding the Born approximation, RTLS-Newton is 10 times faster than TTLS. In addition, after 10 iterations of the DBI method the relative error in the L2 norm is smaller using RTLS-Newton than TTLS, and it takes less time.
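For reference, the classical (unregularized) TLS solution comes straight from the SVD of the augmented matrix [A b]; truncated TLS regularizes by discarding small singular values. The sketch below implements the plain SVD-based TLS solution on synthetic data (noise levels and sizes assumed), not the paper's TTLS or RTLS-Newton solvers.

```python
import numpy as np

def tls(A, b):
    """Classical total least squares via the SVD of the augmented matrix [A b]."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                     # right singular vector of the smallest singular value
    return -v[:n] / v[n]

rng = np.random.default_rng(7)
m, n = 50, 5
A0 = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b0 = A0 @ x_true
A = A0 + 1e-3 * rng.standard_normal((m, n))   # errors in the matrix too,
b = b0 + 1e-3 * rng.standard_normal(m)        # not only in the right-hand side
print(np.linalg.norm(tls(A, b) - x_true),
      np.linalg.norm(np.linalg.lstsq(A, b, rcond=None)[0] - x_true))
```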
A new reorthogonalized block classical Gram-Schmidt algorithm is proposed that factorizes a full column rank matrix A into A = QR, where Q is left orthogonal (has orthonormal columns) and R is upper triangular and nonsingular. With appropriate assumptions on the diagonal blocks of R, the algorithm, when implemented in floating point arithmetic with machine unit ε_M, produces Q and R such that ‖I − QᵀQ‖₂ = O(ε_M) and ‖A − QR‖₂ = O(ε_M ‖A‖₂). The resulting bounds also improve a previous bound by Giraud et al. [Numer. Math., 101(1):87-100, 2005] on the CGS2 algorithm originally developed by Abdelmalek [BIT, 11(4):354-367, 1971]. Keywords: block matrices, QR factorization, Gram-Schmidt process, condition numbers, rounding error analysis.
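A simplified sketch of the reorthogonalized block classical Gram-Schmidt idea (not the exact algorithm analyzed here): each block is orthogonalized against the accumulated Q twice, then factored internally by a Householder QR. The block size and test matrix are assumptions.

```python
import numpy as np

def bcgs2(A, block):
    """Reorthogonalized block classical Gram--Schmidt (a BCGS2-style sketch):
    orthogonalize each block against the previous Q twice ('twice is enough'),
    then factor the block internally with a QR."""
    m, n = A.shape
    Q = np.empty((m, 0))
    R = np.zeros((n, n))
    for j0 in range(0, n, block):
        j1 = min(j0 + block, n)
        W = A[:, j0:j1].copy()
        S1 = Q.T @ W; W -= Q @ S1        # first orthogonalization pass
        S2 = Q.T @ W; W -= Q @ S2        # second pass restores orthogonality
        Qj, Rjj = np.linalg.qr(W)        # intra-block QR
        R[:j0, j0:j1] = S1 + S2
        R[j0:j1, j0:j1] = Rjj
        Q = np.hstack([Q, Qj])
    return Q, R

rng = np.random.default_rng(8)
A = rng.standard_normal((100, 20)) @ np.diag(np.logspace(0, -10, 20))  # ill conditioned
Q, R = bcgs2(A, block=4)
print(np.linalg.norm(np.eye(20) - Q.T @ Q),                 # near machine precision
      np.linalg.norm(A - Q @ R) / np.linalg.norm(A))        # small backward error
```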
An error analysis result is given for classical Gram-Schmidt factorization of a full rank matrix A into A = QR, where Q is left orthogonal (has orthonormal columns) and R is upper triangular. The work presented here shows that the computed R satisfies RᵀR = AᵀA + E, where E is an appropriately small backward error, but only if the diagonals of R are computed in a manner similar to the Cholesky factorization of the normal equations matrix. A similar result is stated in [Giraud et al., Numer. Math., 101(1):87-100, 2005]. However, for that result to hold, the diagonals of R must be computed in the manner recommended in this work.
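A sketch of the recommended diagonal computation: instead of taking r_jj as the norm of the projected vector, compute it Cholesky-style as sqrt(‖a_j‖² − ‖s‖²), where s holds the projections onto the earlier columns. Variable names and the test matrix are illustrative, and the sketch ignores the possibility of a negative radicand for extremely ill-conditioned inputs.

```python
import numpy as np

def cgs_pythagorean(A):
    """Classical Gram--Schmidt with the diagonal of R computed
    'Cholesky-style', r_jj = sqrt(||a_j||^2 - ||s||^2), as the work above
    recommends for the backward-error result to hold."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        a = A[:, j]
        s = Q[:, :j].T @ a                # projections onto earlier q's
        w = a - Q[:, :j] @ s              # projected vector
        R[:j, j] = s
        R[j, j] = np.sqrt(a @ a - s @ s)  # Pythagorean form, not norm(w)
        Q[:, j] = w / R[j, j]
    return Q, R

rng = np.random.default_rng(9)
A = rng.standard_normal((30, 8))
Q, R = cgs_pythagorean(A)
print(np.linalg.norm(A - Q @ R) / np.linalg.norm(A))   # small backward error
```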