Jesse Barlow - Academia.edu

Papers by Jesse Barlow

Accurate eigenvalue decomposition of arrowhead matrices, rank-one modifications of diagonal matrices and applications

We present a new algorithm for solving an eigenvalue problem for a real symmetric arrowhead matrix. The algorithm computes all eigenvalues and all components of the corresponding eigenvectors with high relative accuracy in O(n^2) operations. The algorithm is based on a shift-and-invert approach. Double precision is eventually needed to compute only one element of the inverse of the shifted matrix. Each eigenvalue and the corresponding eigenvector can be computed separately, which makes the algorithm adaptable for parallel computing. Our results extend to Hermitian arrowhead matrices, real symmetric diagonal-plus-rank-one matrices, and the singular value decomposition of real triangular arrowhead matrices.
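
As a sketch of the structure involved (not the paper's O(n^2) shift-and-invert method), the following assembles a small symmetric arrowhead matrix, computes its eigendecomposition with a dense solver, and checks the Cauchy interlacing of the eigenvalues with the diagonal entries; all names and values here are illustrative:

```python
import numpy as np

# A real symmetric arrowhead matrix: diagonal D bordered by a vector z and a
# scalar alpha in the last row/column.  This dense O(n^3) reference
# computation only shows the structure the fast algorithm exploits.
def arrowhead(D, z, alpha):
    n = len(D)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = np.diag(D)
    A[:n, n] = z
    A[n, :n] = z
    A[n, n] = alpha
    return A

D = np.array([4.0, 3.0, 2.0, 1.0])
z = np.array([1.0, 0.5, 0.25, 0.125])
A = arrowhead(D, z, alpha=0.5)

lam, V = np.linalg.eigh(A)      # ascending eigenvalues, orthonormal vectors

# Cauchy interlacing: the sorted diagonal entries of D separate the
# eigenvalues of the bordered matrix.
d_sorted = np.sort(D)
interlaced = all(lam[i] <= d_sorted[i] <= lam[i + 1] for i in range(len(D)))
print(interlaced)  # True
```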

Block Gram–Schmidt downdating

Electronic Transactions on Numerical Analysis, 2014

Deflation for the Symmetric Arrowhead and Diagonal-Plus-Rank-One Eigenvalue Problems

SIAM Journal on Matrix Analysis and Applications

Weyl-type relative perturbation bounds for eigensystems of Hermitian matrices

Linear Algebra and its Applications, 2000

Solving the ultrasound inverse scattering problem of inhomogeneous media using different approaches of total least squares algorithms

Medical Imaging 2018: Ultrasonic Imaging and Tomography

The distorted Born iterative method (DBI) is used to solve the inverse scattering problem in ultrasound tomography, with the objective of determining a scattering function, related to the acoustical properties of the region of interest (ROI), from the disturbed waves measured by transducers outside the ROI. Since the method is iterative, we use the Born approximation for the first estimate of the scattering function. The main problem with the DBI is that the linear system of the inverse scattering equations is ill-posed. To deal with that, we use two different algorithms and compare the relative errors and execution times. The first is Truncated Total Least Squares (TTLS). The second is the Regularized Total Least Squares method (RTLS-Newton), where the regularization parameters are found by solving a nonlinear system with Newton's method. We simulated the data for the DBI method in a way that leads to an overdetermined system. The advantage of RTLS-Newton is that it avoids computing a singular value decomposition, so it is faster than TTLS while still solving a similar minimization problem. For the exact scattering function we used the Modified Shepp-Logan phantom. For finding the Born approximation, RTLS-Newton is 10 times faster than TTLS. In addition, after 10 iterations of the DBI method, the relative error in the L2-norm is smaller using RTLS-Newton than TTLS, and it takes less time.
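
A toy illustration of the total-least-squares machinery such solvers build on (not the paper's TTLS or RTLS-Newton implementations): for exact, consistent data, the classical TLS solution can be read off the right singular vector of the augmented matrix [A | b] belonging to its smallest singular value:

```python
import numpy as np

# Classical (untruncated) total least squares via the SVD of [A | b].
# TTLS and RTLS add truncation or Tikhonov-style regularization on top of
# this for ill-posed systems; the data here are synthetic.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                      # consistent data: TLS recovers x exactly

C = np.column_stack([A, b])
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                          # right singular vector, smallest sigma
x_tls = -v[:3] / v[3]               # null vector of [A | b] is [x; -1]

print(np.allclose(x_tls, x_true))   # True
```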

Reorthogonalized Block Classical Gram–Schmidt

A new reorthogonalized block classical Gram–Schmidt algorithm is proposed that factorizes a full column rank matrix A into A = QR, where Q is left orthogonal (has orthonormal columns) and R is upper triangular and nonsingular. With appropriate assumptions on the diagonal blocks of R, the algorithm, when implemented in floating point arithmetic with machine unit ε, produces Q and R such that ‖I − QᵀQ‖₂ = O(ε) and ‖A − QR‖₂ = O(ε‖A‖₂). The resulting bounds also improve a previous bound by Giraud et al. [Numer. Math., 101(1):87-100, 2005] on the CGS2 algorithm originally developed by Abdelmalek [BIT, 11(4):354-367, 1971]. Keywords: block matrices, QR factorization, Gram–Schmidt process, condition numbers, rounding error analysis.
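
The CGS2 idea the bounds apply to (classical Gram–Schmidt with one full reorthogonalization pass) can be sketched column by column as follows; the paper's algorithm is a block version that proceeds panel by panel, so this is only the scalar-column baseline:

```python
import numpy as np

# CGS2: classical Gram-Schmidt where each column is projected against the
# previous Q columns twice ("twice is enough"), restoring orthogonality
# to machine-precision level for full-rank inputs.
def cgs2(A):
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        q = A[:, j].copy()
        for _ in range(2):                 # two projection passes
            c = Q[:, :j].T @ q
            q -= Q[:, :j] @ c
            R[:j, j] += c                  # accumulate both passes into R
        R[j, j] = np.linalg.norm(q)
        Q[:, j] = q / R[j, j]
    return Q, R

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8))
Q, R = cgs2(A)
print(np.linalg.norm(np.eye(8) - Q.T @ Q))  # ~1e-15
```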

A note on the error analysis of classical Gram-Schmidt

An error analysis result is given for classical Gram–Schmidt factorization of a full rank matrix A into A = QR, where Q is left orthogonal (has orthonormal columns) and R is upper triangular. The work presented here shows that the computed factors satisfy QR = A + E, where E is an appropriately small backward error, but only if the diagonals of R are computed in a manner similar to Cholesky factorization of the normal equations matrix. A similar result is stated in [Giraud et al., Numer. Math., 101(1):87-100, 2005]. However, for that result to hold, the diagonals of R must be computed in the manner recommended in this work.
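
One illustrative reading of the Cholesky-style diagonal formula (computing r_jj from ‖a_j‖² minus the squared off-diagonal entries of that column, as in the normal-equations matrix) can be sketched as follows; this is an assumption-laden sketch, not the paper's exact algorithm:

```python
import numpy as np

# Classical Gram-Schmidt where the diagonal of R is computed "Cholesky
# style": r_jj = sqrt(||a_j||^2 - sum_{i<j} r_ij^2), rather than as the
# norm of the projected residual.  In exact arithmetic the two formulas
# agree; the point of the note is that they differ in floating point.
def cgs_cholesky_diag(A):
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        R[:j, j] = Q[:, :j].T @ A[:, j]
        q = A[:, j] - Q[:, :j] @ R[:j, j]
        # Pythagorean / Cholesky-like diagonal entry:
        R[j, j] = np.sqrt(np.linalg.norm(A[:, j]) ** 2
                          - np.linalg.norm(R[:j, j]) ** 2)
        Q[:, j] = q / R[j, j]
    return Q, R

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 5))
Q, R = cgs_cholesky_diag(A)
print(np.linalg.norm(A - Q @ R))  # small backward error
```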

Accurate eigenvalue decomposition

Efficient minimization methods of mixed ℓ2-ℓ1 and ℓ1-ℓ1 norms for image restoration

Image restoration problems are often solved by finding the minimizer of a suitable objective function. Usually this function consists of a data-fitting term and a regularization term. For the least squares solution, both the data-fitting and the regularization terms are in the ℓ2 norm. In this paper, we consider the least absolute deviation (LAD) solution and the least mixed norm (LMN) solution. For the LAD solution, both the data-fitting and the regularization terms are in the ℓ1 norm. For the LMN solution, the regularization term is in the ℓ1 norm but the data-fitting term is in the ℓ2 norm. Since images often have nonnegative intensity values, the proposed algorithms provide the option of taking the nonnegativity constraint into account. The LMN and LAD solutions are formulated as the solution to a linear or quadratic programming problem which is solved by interior point methods. At each iteration of the interior point method, a structured linear system must be solved. ...
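
The LAD formulation as a linear program can be sketched with slack variables t bounding the residual from both sides; this toy version uses scipy's generic LP solver and omits the regularization and nonnegativity options discussed above:

```python
import numpy as np
from scipy.optimize import linprog

# LAD fit min ||Ax - b||_1 posed as a linear program:
#   minimize sum(t)  subject to  -t <= Ax - b <= t,  t >= 0.
# Variables are stacked as [x (free), t (>= 0)].
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 3))
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true                      # consistent data: optimum has t = 0
m, n = A.shape

c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],   #  Ax - t <=  b
                 [-A, -np.eye(m)]]) # -Ax - t <= -b
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.success)  # True
```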

Updating approximate principal components with applications to template tracking

Numerical Linear Algebra with Applications

Adaptive principal component analysis is prohibitively expensive when a large-scale data matrix must be updated frequently. Therefore, we consider the truncated URV decomposition, which allows faster updates to its approximation to the singular value decomposition while still producing a good enough approximation to recover principal components. Specifically, we suggest an efficient algorithm for the truncated URV decomposition when the data matrix is modified by a rank-1 update. The truncated URV decomposition is then applied to the template tracking problem in a video sequence proposed by Matthews et al. [IEEE Trans. Pattern Anal. Mach. Intell., 26:810-815, 2004], which requires computation of the principal components of the augmented image matrix at every iteration. The template tracking experiments show that, in adaptive applications, the truncated URV decomposition maintains a good approximation to the principal component subspace more efficiently than other procedures.
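
The identity that makes low-rank updating of SVD-like decompositions feasible can be checked numerically; this is background for the update problem, not the paper's truncated URV algorithm:

```python
import numpy as np

# If A = U S V^T (full orthogonal U, V), then
#   A + u v^T = U (S + (U^T u)(V^T v)^T) V^T,
# so the updated singular values come from a (diagonal + rank-one) core
# matrix.  Update algorithms exploit this small structured problem instead
# of refactorizing from scratch; here we only verify the identity.
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 5))
u = rng.standard_normal(8)
v = rng.standard_normal(5)

U, s, Vt = np.linalg.svd(A)            # full matrices: U is 8x8, Vt is 5x5
S = np.zeros((8, 5))
S[:5, :5] = np.diag(s)

core = S + np.outer(U.T @ u, Vt @ v)   # diagonal-plus-rank-one core
s_updated = np.linalg.svd(core, compute_uv=False)
s_direct = np.linalg.svd(A + np.outer(u, v), compute_uv=False)
print(np.allclose(s_updated, s_direct))  # True
```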

Forward stable eigenvalue decomposition of rank-one modifications of diagonal matrices

We present a new algorithm for solving an eigenvalue problem for a real symmetric matrix which is a rank-one modification of a diagonal matrix. The algorithm computes each eigenvalue and all components of the corresponding eigenvector with high relative accuracy in O(n) operations. The algorithm is based on a shift-and-invert approach. Only a single element of the inverse of the shifted matrix eventually needs to be computed with double the working precision. Each eigenvalue and the corresponding eigenvector can be computed separately, which makes the algorithm adaptable for parallel computing. Our results extend to the complex Hermitian case. The algorithm is similar to the algorithm for solving the eigenvalue problem for real symmetric arrowhead matrices from: N. Jakovčević Stor, I. Slapničar and J. L. Barlow, Accurate eigenvalue decomposition of real symmetric arrowhead matrices and applications, Lin. Alg. Appl., 464 (2015).
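
As background (not the paper's forward stable O(n) method), the eigenvalues of such a diagonal-plus-rank-one matrix are the roots of the classical secular equation, which the output of a dense solver can be checked against:

```python
import numpy as np

# Eigenvalues of D + rho*z*z^T (D diagonal, all z_i nonzero) are the roots
# of the secular function  f(lam) = 1 + rho * sum_i z_i^2 / (d_i - lam).
# We verify that each eigenvalue from a dense solver nearly zeros f.
D = np.array([1.0, 2.0, 3.0, 4.0])
z = np.array([0.3, 0.5, 0.2, 0.4])
rho = 1.5
A = np.diag(D) + rho * np.outer(z, z)

lam = np.linalg.eigvalsh(A)
f = 1.0 + rho * np.sum(z**2 / (D[None, :] - lam[:, None]), axis=1)
print(np.max(np.abs(f)))  # ~0: each eigenvalue is a root of f
```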

Semi-supervised Clustering for High-dimensional and Sparse Features: A Dissertation in Information Sciences and Technology

Clustering is one of the most common data mining tasks, used frequently for data organization and analysis in various application domains. Traditional machine learning approaches to clustering are fully automated and unsupervised: class labels are unknown a priori. In real application domains, however, some "weak" form of side information about the domain or data sets can often be available or derivable. In particular, information in the form of instance-level pairwise constraints is general and relatively easy to derive. The problem with traditional clustering techniques is that they cannot benefit from side information even when it is available. I study the problem of semi-supervised clustering, which aims to partition a set of unlabeled data items into coherent groups given a collection of constraints. Because semi-supervised clustering promises higher quality with little extra human effort, it is of great interest both in theory and in practice. Semi-supervised clu...

For Golub–Kahan–Lanczos Bidiagonal Reduction: Part II – Singular Vectors

where U ∈ R is left orthogonal, V ∈ R is orthogonal, and B ∈ R is bidiagonal. When the Lanczos recurrence is implemented in finite precision arithmetic, the columns of U and V tend to lose orthogonality, making a reorthogonalization strategy necessary to preserve convergence of the singular values. A new strategy is proposed for recovering the left singular vectors. Using that strategy, it is shown that, in floating point arithmetic with machine unit εM, if orth(V) = ‖I − VᵀV‖₂,
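
A baseline Golub–Kahan–Lanczos recurrence with full reorthogonalization of both bases can be sketched as follows; the strategies analyzed in the paper are cheaper and more selective than this, so treat it only as a reference implementation of the recurrence itself:

```python
import numpy as np

# Golub-Kahan-Lanczos bidiagonalization: A V = U B with B upper bidiagonal
# (alphas on the diagonal, betas on the superdiagonal).  Both U and V are
# fully reorthogonalized at every step.
def gkl(A, k, seed=5):
    m, n = A.shape
    U = np.zeros((m, k))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    v = np.random.default_rng(seed).standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        u = A @ V[:, j] - (beta[j - 1] * U[:, j - 1] if j > 0 else 0)
        u -= U[:, :j] @ (U[:, :j].T @ u)        # full reorthogonalization
        alpha[j] = np.linalg.norm(u)
        U[:, j] = u / alpha[j]
        if j < k - 1:
            w = A.T @ U[:, j] - alpha[j] * V[:, j]
            w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    B = np.diag(alpha) + np.diag(beta, 1)       # upper bidiagonal
    return U, B, V

A = np.random.default_rng(6).standard_normal((40, 12))
U, B, V = gkl(A, 6)
print(np.linalg.norm(A @ V - U @ B))  # ~1e-15: the recurrence A V = U B
```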

Reconstruction of ultrasound tomography for cancer detection using total least squares and conjugate gradient method

Medical Imaging 2018: Ultrasonic Imaging and Tomography

The distorted Born iterative (DBI) method is a powerful approach for solving the inverse scattering problem for ultrasound tomographic imaging. This method iteratively solves the inverse problem for the scattering function and the forward problem for the inhomogeneous Green's function and the total field. Because the system arising from the inverse problem is ill-posed, regularization methods are needed to obtain a smooth solution. The three methods compared are truncated total least squares (TTLS), conjugate gradient for least squares (CGLS), and Tikhonov regularization. This paper uses numerical simulations to compare these three approaches to regularization in terms of both quality of image reconstruction and speed. Noise from both transmitters and receivers is very common in real applications and is considered in the simulations as well. The solutions are evaluated by the residual error of the scattering function of the region of interest (ROI), the convergence of the total field solutions over all iteration steps, and the accuracy of the estimated Green's functions. Comparing the reconstruction quality and the computational cost of the three methods at different ultrasound frequencies, we find that the TTLS method has the lowest error in solving strongly ill-posed problems. CGLS requires the shortest computational time; its error is higher than that of TTLS but lower than that of Tikhonov regularization.
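
Of the three regularizers, Tikhonov has the simplest self-contained sketch: the penalized problem is an ordinary least-squares problem on a stacked system (the standard formulation, not the paper's implementation):

```python
import numpy as np

# Tikhonov regularization  min ||Ax - b||^2 + mu^2 ||x||^2  solved as
# plain least squares on the stacked system [A; mu*I] x = [b; 0], and
# cross-checked against the normal-equations form
# (A^T A + mu^2 I) x = A^T b.
rng = np.random.default_rng(7)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
mu = 0.1

A_aug = np.vstack([A, mu * np.eye(10)])
b_aug = np.concatenate([b, np.zeros(10)])
x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

x_ne = np.linalg.solve(A.T @ A + mu**2 * np.eye(10), A.T @ b)
print(np.allclose(x, x_ne))  # True
```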

Fast algorithms for ℓ1 norm/mixed ℓ1 and ℓ2 norms for image restoration

Lecture Notes in Computer Science, 2005

Error Analysis of Update Methods for the Symmetric Eigenvalue Problem

SIAM Journal on Matrix Analysis and Applications, 1993

Cuppen's divide-and-conquer method for solving the symmetric tridiagonal eigenvalue problem has been shown to be very efficient on shared memory multiprocessor architectures. In this paper, some error analysis issues concerning this method are resolved. The method is shown to be stable, and a slightly different stopping criterion for finding the zeroes of the spectral function is suggested. These error analysis results extend to general update methods for the symmetric eigenvalue problem. That is, good backward error bounds are obtained for methods that find the eigenvalues and eigenvectors of A + ρwwᵀ, given those of A. These results can also be used to analyze a new fast method for finding the eigenvalues of banded, symmetric Toeplitz matrices.
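
The interlacing property underpinning such update methods can be checked directly (a standard background fact, not the paper's analysis): for ρ > 0, each eigenvalue of A + ρwwᵀ lies between consecutive eigenvalues of A.

```python
import numpy as np

# For a symmetric A and rho > 0, the eigenvalues mu of A + rho*w*w^T
# interlace the eigenvalues lam of A:
#   lam_i <= mu_i  and  mu_i <= lam_{i+1}  (i < n).
# Update methods locate each mu_i in its interlacing interval by solving
# a secular equation; here we only verify the interlacing numerically.
rng = np.random.default_rng(8)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                       # symmetrize
w = rng.standard_normal(6)
rho = 2.0

lam = np.linalg.eigvalsh(A)
mu = np.linalg.eigvalsh(A + rho * np.outer(w, w))
ok = bool(np.all(lam <= mu + 1e-12) and np.all(mu[:-1] <= lam[1:] + 1e-12))
print(ok)  # True
```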

Computing Accurate Eigensystems of Scaled Diagonally Dominant Matrices

SIAM Journal on Numerical Analysis, 1990

When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic, we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In ...

Computing Accurate Eigensystems of Scaled Diagonally Dominant Matrices (Appeared in

When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic, we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices, and all symmetric positive definite matrices which can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision, independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately t...

Block Modified Gram–Schmidt Algorithms and Their Analysis

SIAM Journal on Matrix Analysis and Applications

Regularization in ultrasound tomography using projection-based regularized total least squares

Inverse Problems in Science and Engineering
