Fei Xue - Academia.edu
Papers by Fei Xue
The Progressive GMRES algorithm, introduced by Beckermann and Reichel in 2008, is a residual-minimizing short-recurrence Krylov subspace method for solving a linear system in which the coefficient matrix has a low-rank skew-Hermitian part. We analyze this algorithm, observing a critical instability that makes the method unsuitable for some problems. To work around this issue we introduce a different short-term recurrence method based on Krylov subspaces for such matrices, which can be used as either a solver or a preconditioner. Numerical experiments compare this method to alternative algorithms. Key words: GMRES, MINRES, nearly Hermitian matrices, low-rank modifications. AMS subject classifications: 65F10, 65N12, 15B57, 45B05, 45A05.
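To make the targeted matrix structure concrete, here is a small, self-contained sketch (not the PGMRES algorithm itself) that builds a matrix whose skew-Hermitian part has rank 2 and solves a system with it using SciPy's standard long-recurrence GMRES as a baseline; the matrix, sizes, and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Illustrative setup only: a matrix A whose skew-Hermitian part (A - A^T)/2 has
# rank 2, the structure the paper exploits; Ax = b is solved with SciPy's
# standard (long-recurrence) GMRES purely as a reference point.
rng = np.random.default_rng(0)
n = 200
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                     # Hermitian (here: real symmetric) part
u = rng.standard_normal((n, 1))
w = rng.standard_normal((n, 1))
A = H + (u @ w.T - w @ u.T)           # add a rank-2 skew-symmetric part
b = rng.standard_normal(n)

x, info = gmres(A, b, restart=n, maxiter=n, atol=1e-12)
print("converged:", info == 0,
      " relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```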
Linear Algebra and its Applications, 2011
SIAM Journal on Scientific Computing, 2016
SIAM Journal on Matrix Analysis and Applications, 2012
SIAM Journal on Matrix Analysis and Applications, 2011
ETNA - Electronic Transactions on Numerical Analysis, 2022
The Lanczos method is one of the standard approaches for computing a few eigenpairs of a large, sparse, symmetric matrix. It is typically used with restarting to avoid unbounded growth of memory and computational requirements. Thick-restart Lanczos is a popular restarted variant because of its simplicity and numerical robustness. However, convergence can be slow for highly clustered eigenvalues, so more effective restarting techniques and preconditioning are needed. In this paper, we present a thick-restart preconditioned Lanczos method, TRPL+K, that combines the power of locally optimal restarting (+K) and preconditioning techniques with the efficiency of the thick-restart Lanczos method. TRPL+K employs an inner-outer scheme where the inner loop applies Lanczos on a preconditioned operator while the outer loop augments the resulting Lanczos subspace with certain vectors from the previous restart cycle to obtain eigenvector approximations with which it thick restarts the ...
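As a point of reference for the building block named above, the following is a minimal unrestarted Lanczos sketch in NumPy with full reorthogonalization; it is not TRPL+K, and the test matrix, subspace dimension, and breakdown tolerance are assumptions made purely for illustration.

```python
import numpy as np

def lanczos(A, v0, m):
    """Minimal unrestarted Lanczos sketch with full reorthogonalization:
    returns an orthonormal basis V of the Krylov subspace and the tridiagonal
    projection T = V^T A V.  A practical code adds restarting (e.g. thick
    restart) to bound memory; this sketch deliberately omits it."""
    n = v0.size
    V = np.zeros((n, m + 1))
    alpha = np.zeros(m)
    beta = np.zeros(m + 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization
        beta[j + 1] = np.linalg.norm(w)
        if beta[j + 1] < 1e-12:                    # invariant subspace found
            m = j + 1
            break
        V[:, j + 1] = w / beta[j + 1]
    T = np.diag(alpha[:m]) + np.diag(beta[1:m], 1) + np.diag(beta[1:m], -1)
    return V[:, :m], T

# Ritz values of the small tridiagonal T approximate extreme eigenvalues of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 300))
A = (A + A.T) / 2                                  # symmetric test matrix
V, T = lanczos(A, rng.standard_normal(300), 40)
print("largest Ritz value :", np.linalg.eigvalsh(T)[-1])
print("largest eigenvalue :", np.linalg.eigvalsh(A)[-1])
```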
We analyze several important properties of invariant pairs of nonlinear algebraic eigenvalue problems of the form T(λ)v = 0. Invariant pairs generalize invariant subspaces and the associated block Rayleigh quotients of square matrices to a nonlinear matrix-valued function T(·). They play an important role in the analysis of nonlinear eigenvalue problems and algorithms. In this paper, we first show that the algebraic, partial, and geometric multiplicities, together with the Jordan chains, corresponding to an eigenvalue of T(λ)v = 0 are completely represented by the Jordan canonical form of a simple invariant pair that captures this eigenvalue. We then investigate approximation errors and perturbations of a simple invariant pair. We also show that second-order accuracy in eigenvalue approximation can be achieved by the two-sided block Rayleigh functional for non-defective eigenvalues. Finally, we study the matrix representation of the Fréchet derivative of the eigenprob...
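A hedged numerical illustration of the invariant-pair notion for the polynomial case: for a quadratic T(λ) = λ²M + λC + K, a pair (V, S) is invariant when KV + CVS + MVS² = 0. The sketch below builds such a pair from two eigenpairs obtained through a companion linearization and checks the residual; all matrices and the selection of eigenpairs are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eig

# Build a quadratic eigenproblem T(lam) = lam^2*M + lam*C + K (illustrative data),
# extract two eigenpairs via a companion linearization, and verify that the pair
# (V, S) satisfies the invariant-pair condition K@V + C@V@S + M@V@S@S ~ 0.
rng = np.random.default_rng(2)
n = 8
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# Companion linearization: [[0, I], [-K, -C]] z = lam [[I, 0], [0, M]] z, z = [v; lam*v]
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
lam, Z = eig(A, B)

idx = np.argsort(np.abs(lam))[:2]          # pick two eigenvalues of smallest modulus
V = Z[:n, idx]                             # the top block carries the eigenvectors v
S = np.diag(lam[idx])

residual = K @ V + C @ V @ S + M @ V @ S @ S
print("invariant-pair residual:", np.linalg.norm(residual) / np.linalg.norm(V))
```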
We consider the solution of large-scale nonlinear algebraic Hermitian eigenproblems of the form T(λ)v = 0 that admit a variational characterization of eigenvalues. These problems arise in a variety of applications and are generalizations of linear Hermitian eigenproblems Av = λBv. In this paper, we propose a Preconditioned Locally Minimal Residual (PLMR) method for efficiently computing interior eigenvalues of problems of this type. We discuss the development of search subspaces, preconditioning, and the eigenpair extraction procedure based on the refined Rayleigh-Ritz projection. An extension to block methods is presented, and a moving-window style soft deflation is described. Numerical experiments demonstrate that PLMR methods provide rapid and robust convergence towards interior eigenvalues. The approach is also shown to be efficient and reliable for computing a large number of extreme eigenvalues, dramatically outperforming standard preconditioned conjugate gradient methods.
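For orientation only: in the linear Hermitian case, interior eigenvalues (the kind of target PLMR is designed to reach without requiring a factorization) are classically obtained by shift-invert Lanczos. A small SciPy sketch of that classical baseline, with an illustrative matrix and target value:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Shift-invert Lanczos baseline for interior eigenvalues of a linear Hermitian
# problem.  The 1D Laplacian and the target shift are illustrative assumptions.
n = 2000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
target = 1.0                               # a point inside the spectrum (0, 4)
vals, vecs = eigsh(A, k=4, sigma=target, which='LM')
print("eigenvalues closest to", target, ":", np.sort(vals))
```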
We study the local convergence rates of several single-vector Newton-like methods for computing a semi-simple or defective eigenvalue of nonlinear algebraic eigenvalue problems of the form T(λ)v = 0. This problem has not been fully understood, since the Jacobian associated with the single-vector Newton's method is singular at the desired eigenpair and the standard convergence theory is not applicable. In addition, Newton's method generally converges only linearly towards singular roots. In this paper, we show that faster convergence can be achieved for degenerate eigenvalues. For semi-simple eigenvalues, we show that the convergence of Newton's method, Rayleigh functional iteration, and the Jacobi-Davidson method is quadratic, and that the latter two converge cubically for locally symmetric problems. For defective eigenvalues, all these methods converge only linearly in general. We then study two accelerated methods for defective eigenvalues, which exhibit quadratic convergence a...
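For readers unfamiliar with the iteration family being analyzed, the following is a hedged sketch of the generic single-vector Newton method for T(λ)v = 0 with the normalization c^T v = 1, applied to a simple quadratic problem. The coefficient matrices, starting guess, and stopping tolerance are illustrative assumptions, not data from the paper, and convergence from an arbitrary start is not guaranteed (the final residual is printed for honesty).

```python
import numpy as np

# Generic single-vector Newton iteration on the augmented system
# F(v, lam) = [T(lam) v; c^T v - 1] = 0, with Jacobian [[T(lam), T'(lam) v], [c^T, 0]].
rng = np.random.default_rng(3)
n = 6
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
T  = lambda lam: lam**2 * M + lam * C + K      # quadratic T(lam), illustrative
dT = lambda lam: 2 * lam * M + C               # derivative T'(lam)

c = np.ones(n)
v = rng.standard_normal(n)
v = v / (c @ v)                                # enforce c^T v = 1
lam = 0.1 + 0.0j                               # complex start so complex roots are reachable

for it in range(50):
    F = np.concatenate([T(lam) @ v, [c @ v - 1]])
    J = np.block([[T(lam), (dT(lam) @ v)[:, None]],
                  [c[None, :], np.zeros((1, 1))]])
    delta = np.linalg.solve(J, F)
    v, lam = v - delta[:n], lam - delta[n]
    if np.linalg.norm(T(lam) @ v) < 1e-12:
        break

print("iterations:", it + 1, " lambda:", lam,
      " residual:", np.linalg.norm(T(lam) @ v))
```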
SIAM Journal on Scientific Computing
Mathematics of Computation, 2015
Efficient computation of extreme eigenvalues of large-scale linear Hermitian eigenproblems can be achieved by preconditioned conjugate gradient (PCG) methods. In this paper, we study PCG methods for computing extreme eigenvalues of nonlinear Hermitian eigenproblems of the form T(λ)v = 0 that admit a nonlinear variational principle. We investigate some theoretical properties of a basic CG method, including its global and asymptotic convergence. We propose several variants of single-vector and block PCG methods with deflation for computing multiple eigenvalues, and compare them in arithmetic and memory cost. Variable indefinite preconditioning is shown to be effective in accelerating convergence when some desired eigenvalues are not close to the lowest or highest eigenvalue. The efficiency of the PCG variants is illustrated by numerical experiments. Overall, the locally optimal block preconditioned conjugate gradient (LOBPCG) method is the most efficient, as in the lin...
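The linear case mentioned at the start has a direct SciPy counterpart, LOBPCG; the sketch below shows only that linear baseline, not the paper's nonlinear extension. The matrix, block size, and the (deliberately luxurious) exact-inverse preconditioner are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg, splu, LinearOperator

# Linear-case baseline: LOBPCG for the lowest eigenvalues of a Hermitian matrix.
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')

lu = splu(A)                                   # exact inverse used as preconditioner
M = LinearOperator((n, n), matvec=lu.solve)    # (for illustration only)

rng = np.random.default_rng(4)
X = rng.standard_normal((n, 4))                # block of 4 starting vectors
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
print("four smallest eigenvalues:", np.sort(vals))
```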
Numerische Mathematik, 2014
SIAM Journal on Matrix Analysis and Applications, 2012
Numerische Mathematik, 2013
IMA Journal of Numerical Analysis, 2013