E. Tyrtyshnikov - Academia.edu
Papers by E. Tyrtyshnikov
Journal of Inverse and Ill-posed Problems, Oct 1, 2020
subspace methods and minimal residuals
Linear Algebra and its Applications, 2006
The main result is the "black dot algorithm" and its fast version for the construction of a new circulant preconditioner for Toeplitz matrices. This new preconditioner C is sought directly as a solution to one possible setting of the approximation problem A ≈ C + R, where A is a given matrix and R should be a "low-rank" matrix. This very problem is key to the analysis of the superlinear convergence properties of already established circulant and other matrix-algebra preconditioners. In this regard, our new preconditioner is likely to be the best of all possible circulant preconditioners. Moreover, in contrast to several "function-based" circulant preconditioners used for "bad" symbols, it is constructed entirely from the entries of the given matrix and performs as well as the best of the known preconditioners, or better, for the same symbols.
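The "black dot algorithm" itself is not spelled out in this abstract. As background, the classical optimal circulant preconditioner of T. Chan, which minimizes ‖A − C‖_F over all circulants, averages each wrapped diagonal of A; a minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def optimal_circulant(A):
    """First column of the circulant C minimizing ||A - C||_F (T. Chan).
    Entry c_j is the average of the wrapped diagonal A[(k+j) mod n, k]."""
    n = A.shape[0]
    c = np.zeros(n, dtype=A.dtype)
    for j in range(n):
        rows = (np.arange(n) + j) % n
        c[j] = A[rows, np.arange(n)].mean()
    return c

def circulant_from_first_col(c):
    """Dense circulant matrix with first column c: C[i, j] = c[(i - j) mod n]."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
```

If A is already circulant, the construction reproduces it exactly; for a general A it gives the closest circulant in the Frobenius norm.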
Matrix Methods: Theory, Algorithms and Applications, 2010
Journal of Numerical Mathematics, 2005
The goal of this work is the presentation of some new formats which are useful for the approximation of (large and dense) matrices related to certain classes of functions and nonlocal (integral, integro-differential) operators, especially for high-dimensional problems. These new formats elaborate on a sum of a few terms of Kronecker products of smaller-sized matrices (cf. [37, 38]). In addition, we require that the Kronecker factors possess a certain data-sparse structure. Depending on the construction of the Kronecker factors, we are led to so-called "profile-low-rank matrices" or hierarchical matrices (cf. [18, 19]). We give a proof of the existence of such formats and expound a gainful combination of the Kronecker-tensor-product structure and the arithmetic for hierarchical matrices.
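A single Kronecker-product term of such an approximation can be computed by the rearrangement trick of Van Loan and Pitsianis: the best B ⊗ C in the Frobenius norm corresponds to the best rank-1 approximation of a rearranged matrix. This is standard background, not the paper's specific construction; a minimal sketch:

```python
import numpy as np

def nearest_kronecker(A, m1, n1, m2, n2):
    """Best Frobenius-norm approximation A ≈ kron(B, C), with A of
    shape (m1*m2, n1*n2), via rank-1 SVD of the rearranged matrix R
    whose row (i1*n1 + j1) is the vectorized (i1, j1) block of A."""
    R = A.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    C = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return B, C
```

A sum of a few such terms corresponds to keeping a few leading singular triplets of R instead of one.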
Calcolo, 1996
If a matrix has a small rank, then it can be multiplied by a vector with large savings in memory and arithmetic. As was recently shown by the author, the same applies to matrices which might be of full classical rank but have a small mosaic rank. The mosaic-skeleton approximations seem to have imposing applications to the solution of large dense unstructured linear systems. In this paper, we propose a suitable modification of Brandt's definition of an asymptotically smooth function f(x, y). Then we consider n × n matrices A_n = [f(x_i^(n), y_j^(n))] for quasiuniform meshes {x_i^(n)} and {y_j^(n)} in some bounded domain in the m-dimensional space. For such matrices, we prove that the approximate mosaic ranks grow logarithmically in n. From a practical point of view, the results obtained lead immediately to O(n log n) matrix-vector multiplication algorithms.
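The savings for a rank-r matrix come from keeping it in factored form U·Vᵀ and never forming the dense product; a minimal sketch:

```python
import numpy as np

def lowrank_matvec(U, V, x):
    """Compute y = (U @ V.T) @ x using only the factors.
    For U, V of shape (n, r), this costs O(n*r) operations and
    O(n*r) storage, versus O(n^2) for the assembled dense matrix:
    first the small product V.T @ x, then U times the result."""
    return U @ (V.T @ x)
```

Mosaic-skeleton approximations apply this idea blockwise: the matrix is partitioned into blocks, each stored in low-rank (skeleton) form, which yields the O(n log n) matrix-vector multiplication mentioned above.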
Journal of Inverse and Ill-Posed Problems, Nov 1, 2020
Numerical Linear Algebra with Applications, 2011
…Kong. It was a sequel to similar conferences held in Hong Kong in 2002 and 2006. The conference had two major objectives: (i) to improve the dialogue and collaboration between matrix/tensor theoreticians and computational scientists, and (ii) to reduce the gap between researchers working on the fundamentals and those working on real-life applications, with the aim in mind that emerging applications will stimulate new theoretical research and that better theoretical tools in turn can be exported back to various fields of application. The special issue contains nine papers from invited speakers of the conference. The contributions cover different aspects of research on structured matrices and tensors with applications. The first five papers [1-3, 5, 6] are devoted to matrix computation, and the last four papers [7-10] are devoted to tensor computation. Noschese and Reichel [1] study generalized circulant Strang-type preconditioners for structured matrices. Chun and Park [2] design two decoupling techniques to speed up the LLL-aided OSIC algorithm for solving clustered integer least squares problems related to structured matrices in communication. Cavoretto et al. [3] study spectral analysis and preconditioning techniques for radial basis function collocation matrices, which have an almost multilevel Toeplitz structure. Stoll and Wathen [4] propose preconditioners for the saddle point problems that arise when a primal-dual active set method is used for solving optimal control problems with partial differential equations. Yin et al. [5] develop a fast adaptively accelerated Arnoldi method for computing PageRank, where the weights are calculated on the basis of the current residual of the approximate PageRank vector. Lee et al. [6] develop a fast exponential time integration scheme based on Toeplitz structure for option pricing with jumps; the shift-and-invert Arnoldi method is employed to produce fast approximations. Savostyanov et al. [7] propose a fast algorithm, based on the cross approximation of Gram matrices, for mode rank truncation of the result of a bilinear operation on three tensors given in the Tucker or canonical form. Ling et al. [8] present bounds on the optimal value of a quadratically constrained multivariate biquadratic polynomial optimization problem via approximately solving the related bilinear semidefinite programming relaxation. Hackbusch et al. [9] study eigenvalue problems for elliptic operators in higher dimensions, admit low-rank tensor-product approximations for their discretizations, and derive conditions providing an exponential decrease of the error with respect to the rank. Li et al. [10] develop image segmentation methods for hyperspectral space object material identification and study the segmentation with a hyperspectral image data denoising/deblurring model.
Recent experiments have shown that circulant preconditioners are efficient in many cases (especially if the system matrix A is Toeplitz). There are two main types of circulant preconditioners: optimal, minimizing the functional ‖A − C‖_F, and super-optimal, minimizing the functional ‖I − C⁻¹A‖_F. All circulant matrices C are diagonalized by the discrete Fourier transform: C = F*ΛF, where F is the Fourier matrix and Λ is a diagonal matrix. Thus, it is natural to generalize the idea of circulant preconditioners to a wider class of matrices such that every element C of this class can be represented in the form C = F*ΛF, where F is a given orthogonal matrix and Λ is diagonal. In this paper we study spectral properties of the generalized optimal and super-optimal preconditioners and prove that if A is symmetric and positive definite, then both generalized preconditioners are symmetric and positive definite as well. Some algebraic and geometric properties of the operator c(A), which establishes the correspondence between matrices A and their generalized optimal preconditioners, are studied; c(A) turns out to be an orthoprojector from the space of Hermitian matrices onto the subspace of pseudo-circulants. Recently, much attention has been paid to the exploration of iterative methods with preconditioning. Generally, a preconditioner should possess two main features. First, it should reflect the structure of the system matrix A; this can be achieved, for example, by minimizing the functional ‖A − C‖_E or the functional ‖I − C⁻¹A‖_E over a class of matrices C. Second, it should be easily invertible in the case of explicit preconditioning and easily multiplicable by a vector in the case of implicit preconditioning.
Such are the circulant matrices C, which can be expressed in the form C = F*ΛF, where Λ is a diagonal matrix, F = [f_km]_{k,m=0}^{n−1} is the Fourier matrix (f_km = exp(i·2πkm/n)), n is the order of the matrix, and i is the imaginary unit. It is known that the inverse of a circulant matrix is still a circulant. So, when using circulants as preconditioners, there is no difference between explicit and implicit preconditioning. The multiplication of a circulant matrix by a vector can be performed in O(n log n) arithmetic operations. A review of the literature devoted to circulant preconditioning is given …
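The O(n log n) circulant matrix-vector product follows directly from the diagonalization by the Fourier matrix; a minimal sketch using NumPy's FFT:

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by the vector x.
    Since C = F* diag(F c) F, the product C @ x is a circular convolution
    of c and x, computed with two FFTs and one inverse FFT: O(n log n)."""
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))
```

For real inputs the result is real up to rounding; in practice one takes the real part (or uses the real-input transforms `np.fft.rfft`/`np.fft.irfft`).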
Linear Algebra and its Applications, 2013
SIAM Journal on Matrix Analysis and Applications, 2000
Linear Algebra and its Applications, 2012
Numerical Linear Algebra with Applications, 2011
In the general case of multilevel Toeplitz matrices, we recently proved that no multilevel circulant preconditioner is superlinear (a cluster it may provide cannot be proper). The proof was based on the concept of quasi-equimodular matrices, although this concept does not apply, for example, to the sine-transform matrices. In this paper, with a new concept of partially equimodular matrices, we cover all trigonometric matrix algebras widely used in the literature. We propose a technique for proving the non-superlinearity of certain frequently used preconditioners for some representative sample multilevel matrices. At the same time, we show that these preconditioners are, in a certain sense, the best among the sublinear preconditioners (with only a general cluster) for multilevel Toeplitz matrices.
Most successful numerical algorithms for multi-dimensional problems usually involve multi-index arrays, also called tensors, and capitalize on those tensor decompositions that reduce, one way or another, to low-rank matrices associated with the given tensors. It can be argued that most of the recent progress is due to the TT and HT decompositions [1]. The differences between the two decompositions may look rather subtle, because both are based on the same dimensionality reduction tree and exploit seemingly the same idea. In this talk, we analyze the differences between the two decompositions and present them in a clear and simple way. Besides that, we demonstrate some new applications of tensor approximations in numerical analysis [2].
Computational Methods in Applied Mathematics
We show that the recent tensor-train (TT) decompositions of matrices arise from their recursive Kronecker-product representations with a systematic use of common bases. The names TTM and QTT used in this case stress the relation with multilevel matrices or with quantization, which artificially increases the number of levels. Then we investigate how the tensor-train ranks of a matrix can be related to those of its inverse. In the case of a banded Toeplitz matrix, we prove that the tensor-train ranks of its inverse are bounded above by 1 + (l + u)^2, where l and u are the bandwidths in the lower and upper parts of the matrix, excluding the main diagonal.
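For context, a tensor-train representation can be computed by sequential SVDs of unfoldings, the standard TT-SVD procedure; the sketch below is this generic algorithm, not the paper's matrix-specific construction:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Compute TT cores of a d-dimensional array by sequential SVDs.
    Core k has shape (r_{k-1}, n_k, r_k); singular values below
    eps * (largest) are truncated, which determines the TT ranks."""
    d = tensor.ndim
    shape = tensor.shape
    cores = []
    r = 1
    M = tensor.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int((s > eps * s[0]).sum()))
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        r = rank
        # carry the remainder to the next unfolding
        M = (s[:rank, None] * Vt[:rank]).reshape(r * shape[k + 1], -1)
    cores.append(M.reshape(r, shape[-1], 1))
    return cores
```

Contracting the cores in order reproduces the tensor; the intermediate ranks are exactly the TT ranks discussed above (for matrices, applied to the quantized/multilevel reshaping).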
SAR and QSAR in Environmental Research
Linear Algebra and its Applications
The mathematical modeling of real-world problems often leads to problems in linear algebra involving structured matrices, whose entries are defined by a few parameters according to a compact formula. Matrix patterns and structural properties provide a uniform means for describing different features of the problems that they model. The analysis of theoretical and computational properties of these structures is a fundamental step in the design of efficient solution algorithms. Certain structures are encountered very frequently and reflect specific features that are common to different problems arising in diverse fields of theoretical and applied mathematics and engineering. In particular, the property of shift invariance, shared by many mathematical entities like point-spread functions, integral kernels, probability distributions, and convolutions, is the common feature which gives rise to Toeplitz matrices. In fact, Toeplitz matrices, characterized by having constant entries along their diagonals, are encountered in fields like image processing, signal processing, digital filtering, queueing theory, computer algebra, linear prediction and the numerical solution of certain difference and differential equations, to mention just a few. The interest in this class of matrices is not motivated only by the applications; in fact, Toeplitz matrices are endowed with a very rich set of mathematical properties, and there exists a very wide literature, dating back to the first half of the last century, on their analytic, algebraic, spectral and computational properties. Other classes of structured matrices are less pervasive in terms of applications, but they are not less important.
Frobenius matrices, Hankel matrices, Sylvester matrices and Bezoutians, encountered in control theory, in stability issues, and in polynomial computations, have a rich variety of theoretical properties and have been the object of many studies. Vandermonde matrices, Cauchy matrices, Loewner matrices and Pick matrices are more frequently encountered in the framework of interpolation problems. Tridiagonal and more general banded matrices and their inverses, which are semiseparable matrices, are very familiar in numerical analysis. Their extension to more general classes and the design of efficient algorithms for them have recently received much attention. Multi-dimensional problems lead to matrices which can be represented as structured block matrices with a structure within the blocks themselves. Kronecker product …