Low rank differential equations for Hamiltonian matrix nearness problems

Solving matrix nearness problems via Hamiltonian systems, matrix factorization, and optimization

arXiv (Cornell University), 2022

These notes were written for the summer school on "Recent stability issues for linear dynamical systems: Matrix nearness problems and eigenvalue optimization" organized by Nicola Guglielmi and Christian Lubich at the Centro Internazionale Matematico Estivo (CIME) in September 2021; see http://php.math.unifi.it/users/cime/Courses/2021/course.php?codice=20216. The aim of these notes is to summarize our recent contributions to computing nearest stable systems from unstable ones. Slides: the slides presented during the summer school are available from https://www.dropbox.com/s/b33wd0j9pyiflar/CIME_Gillis_slides.pdf?dl=0. We thank our collaborators Volker Mehrmann, Michael Karow, and Neelam Choudhary for the fruitful and enjoyable moments spent working on these problems.

Finding the Nearest Passive or Nonpassive System via Hamiltonian Eigenvalue Optimization

SIAM Journal on Matrix Analysis and Applications, 2021

We propose and study an algorithm for computing a nearest passive system to a given non-passive linear time-invariant system (with much freedom in the choice of the metric defining 'nearest', which may be restricted to structured perturbations), and also a closely related algorithm for computing the structured distance of a given passive system to non-passivity. Both problems are addressed by solving eigenvalue optimization problems for Hamiltonian matrices that are constructed from perturbed system matrices. The proposed algorithms are two-level methods that optimize the Hamiltonian eigenvalue of smallest positive real part over perturbations of a fixed size in the inner iteration, using a constrained gradient flow. They optimize over the perturbation size in the outer iteration, which is shown to converge quadratically in the typical case of a defective coalescence of simple eigenvalues approaching the imaginary axis. For large systems, we propose a variant of the algorithm that takes advantage of the inherent low-rank structure of the problem. Numerical experiments illustrate the behavior of the proposed algorithms.
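The inner iteration of the two-level method needs the eigenvalue of smallest positive real part of a Hamiltonian matrix built from the (perturbed) system matrices. As a rough illustration of the objects involved, here is the standard positive-real Hamiltonian for a state-space system $(A, B, C, D)$ with $D + D^T$ nonsingular; this is a generic textbook construction and a sketch only, not necessarily the exact matrix used by the authors.

```python
import numpy as np

def passivity_hamiltonian(A, B, C, D):
    """Standard positive-real Hamiltonian for a system (A, B, C, D).

    Purely imaginary eigenvalues of H signal frequencies where the
    Popov function loses positive semidefiniteness.
    Assumes S = D + D^T is nonsingular.
    """
    S = D + D.T
    Sinv = np.linalg.inv(S)
    F = A - B @ Sinv @ C
    return np.block([
        [F,               -B @ Sinv @ B.T],
        [C.T @ Sinv @ C,  -F.T],
    ])

def smallest_positive_real_part(H):
    """Eigenvalue of H with smallest positive real part (the quantity
    optimized in the inner iteration). Assumes such an eigenvalue exists."""
    ev = np.linalg.eigvals(H)
    ev = ev[ev.real > 0]
    return ev[np.argmin(ev.real)]
```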

Perturbation Theory for Hamiltonian Matrices and the Distance to Bounded-Realness

SIAM Journal on Matrix Analysis and Applications, 2011

Motivated by the analysis of passive control systems, we undertake a detailed perturbation analysis of Hamiltonian matrices that have eigenvalues on the imaginary axis. We construct minimal Hamiltonian perturbations that move and coalesce eigenvalues of opposite sign characteristic to form multiple eigenvalues with mixed sign characteristics, which are then moved from the imaginary axis to specific locations in the complex plane by small Hamiltonian perturbations. We also present a numerical method to compute upper bounds for the minimal perturbations that move all eigenvalues of a given Hamiltonian matrix outside a vertical strip along the imaginary axis.
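For context, a real matrix $H$ is Hamiltonian when $JH$ is symmetric for $J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}$, and its eigenvalues come in symmetric sets $\{\lambda, -\lambda, \bar\lambda, -\bar\lambda\}$; this symmetry is why eigenvalues can leave the imaginary axis only after coalescing there, which is the mechanism the paper's minimal perturbations exploit. A toy NumPy check of these two facts (not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# A real Hamiltonian matrix is H = J @ S with S symmetric: (J H)^T = J H.
S = rng.standard_normal((2 * n, 2 * n))
S = (S + S.T) / 2
H = J @ S
assert np.allclose((J @ H).T, J @ H)

# Eigenvalues are symmetric w.r.t. both the real and the imaginary axis.
ev = np.sort_complex(np.linalg.eigvals(H))
print(ev)
```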

On computing the distance to stability for matrices using linear dissipative Hamiltonian systems

Automatica, 2017

In this paper, we consider the problem of computing the nearest stable matrix to an unstable one. We propose new algorithms to solve this problem based on a reformulation using linear dissipative Hamiltonian systems: we show that a matrix $A$ is stable if and only if it can be written as $A = (J - R)Q$, where $J = -J^T$, $R \succeq 0$ and $Q \succ 0$ (that is, $R$ is positive semidefinite and $Q$ is positive definite). This reformulation results in an equivalent optimization problem with a simple convex feasible set. We propose three strategies to solve the problem in variables $(J, R, Q)$: (i) a block coordinate descent method, (ii) a projected gradient descent method, and (iii) a fast gradient method inspired by smooth convex optimization. These methods require $O(n^3)$ operations per iteration, where $n$ is the size of $A$. We show the effectiveness of the fast gradient method compared to the other approaches and to several state-of-the-art algorithms.
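A minimal sketch of strategy (ii), assuming the Frobenius-norm objective $\|A - (J - R)Q\|_F^2$; the step size, iteration count, and the eigenvalue floor `delta` keeping $Q$ positive definite are ad hoc illustrative choices, not the paper's tuned settings.

```python
import numpy as np

def psd_projection(M, floor=0.0):
    """Project a symmetric matrix onto {X : X >= floor * I}."""
    M = (M + M.T) / 2
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.maximum(w, floor)) @ V.T

def nearest_stable_pgd(A, steps=2000, lr=1e-3, delta=1e-6):
    """Projected gradient descent on ||A - (J - R) Q||_F^2 with
    J skew-symmetric, R PSD, Q PD (a sketch of strategy (ii))."""
    n = A.shape[0]
    J = (A - A.T) / 2                    # skew part of A as initial J
    R = psd_projection(-(A + A.T) / 2)   # so that (J - R) I is close to A
    Q = np.eye(n)
    for _ in range(steps):
        E = (J - R) @ Q - A              # residual
        gJ = E @ Q.T                     # gradients, up to a factor of 2
        gR = -E @ Q.T
        gQ = (J - R).T @ E
        Jt = J - lr * gJ
        J = (Jt - Jt.T) / 2              # project onto skew-symmetric matrices
        R = psd_projection(R - lr * gR)  # project onto the PSD cone
        Q = psd_projection(Q - lr * gQ, floor=delta)  # keep Q >= delta * I
    return (J - R) @ Q
```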

On approximating the nearest Ω‐stable matrix

Numerical Linear Algebra with Applications, 2020

In this paper, we consider the problem of approximating a given matrix by a matrix whose eigenvalues lie in some specific region Ω of the complex plane. More precisely, we consider three types of regions and their intersections: conic sectors, vertical strips, and disks. We refer to this problem as the nearest Ω-stable matrix problem. It includes as special cases the stable matrices of continuous- and discrete-time linear time-invariant systems. To achieve this goal, we parametrize the problem using dissipative Hamiltonian matrices and linear matrix inequalities, which leads to a reformulation with a convex feasible set. By applying a block coordinate descent method to this reformulation, we are able to compute solutions to the approximation problem, as illustrated on several examples.
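For concreteness, a small helper that tests whether a spectrum lies in each of the three region types. The exact parametrization of the conic sector (here: a sector of half-angle `theta` around the negative real axis) and the parameter names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def in_omega(ev, region, **p):
    """Check whether all eigenvalues ev lie in the given region type."""
    if region == "disk":    # |z - center| <= radius
        return np.all(np.abs(ev - p["center"]) <= p["radius"])
    if region == "strip":   # a <= Re(z) <= b
        return np.all((ev.real >= p["a"]) & (ev.real <= p["b"]))
    if region == "cone":    # sector of half-angle theta around the negative real axis
        return np.all(np.abs(ev.imag) <= -ev.real * np.tan(p["theta"]))
    raise ValueError(region)

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
ev = np.linalg.eigvals(A)
print(in_omega(ev, "strip", a=-4.0, b=-0.5))   # True: both eigenvalues in [-4, -0.5]
```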

Nearest matrix with prescribed eigenvalues

This paper concerns the spectral norm distance from $A \in \mathbb{C}^{n \times n}$ to matrices whose spectrum includes the set $\Lambda$ consisting of $k \le n$ prescribed complex numbers. We obtain lower bounds for this distance. Also, under two mild assumptions, a perturbation matrix $\Delta$ is constructed such that $A+\Delta$ has $\Lambda$ in its spectrum and $\Delta$ is optimal, in the sense that it has minimum spectral norm.
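One elementary lower bound (well known, and not necessarily among the sharper bounds derived in the paper): if $A + \Delta$ has $\lambda$ as an eigenvalue, then $A + \Delta - \lambda I$ is singular, so $\|\Delta\|_2 \ge \sigma_{\min}(A - \lambda I)$; maximizing over $\lambda \in \Lambda$ bounds the distance from below.

```python
import numpy as np

def prescribed_eigs_lower_bound(A, Lambda):
    """Elementary spectral-norm lower bound: every lambda in Lambda must be
    an eigenvalue of A + Delta, so ||Delta||_2 >= sigma_min(A - lambda * I)
    for each lambda, hence >= the maximum over Lambda."""
    n = A.shape[0]
    return max(np.linalg.svd(A - lam * np.eye(n), compute_uv=False)[-1]
               for lam in Lambda)

A = np.diag([1.0, 2.0, 3.0])
print(prescribed_eigs_lower_bound(A, [0.0, 2.5]))  # max(1.0, 0.5) = 1.0
```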

(Anti-)Hermitian Generalized (Anti-)Hamiltonian Solution to a System of Matrix Equations

Mathematical Problems in Engineering, 2014

We mainly solve three problems. First, using the decomposition of (anti-)Hermitian generalized (anti-)Hamiltonian matrices, we derive necessary and sufficient conditions for the existence of (anti-)Hermitian generalized (anti-)Hamiltonian solutions to a system of two matrix equations, together with an expression for these solutions. Second, the optimal approximation solution $\min_{X \in S} \|\hat{X} - X\|$ is obtained, where $S$ is the (anti-)Hermitian generalized (anti-)Hamiltonian solution set of the system and $\hat{X}$ is a given matrix. Third, the least squares (anti-)Hermitian generalized (anti-)Hamiltonian solutions are considered. In addition, an algorithm for computing the least squares (anti-)Hermitian generalized (anti-)Hamiltonian solution and corresponding numerical examples are presented.

Relative perturbation theory for quadratic Hermitian eigenvalue problems

arXiv: Numerical Analysis, 2016

In this paper, we derive new relative perturbation bounds for eigenvectors and eigenvalues for regular quadratic eigenvalue problems of the form lambda2Mx+lambdaCx+Kx=0\lambda^2 M x + \lambda C x + K x = 0lambda2Mx+lambdaCx+Kx=0, where MMM and KKK are nonsingular Hermitian matrices and CCC is a general Hermitian matrix. We base our findings on new results for an equivalent regular Hermitian matrix pair A−lambdaBA-\lambda BAlambdaB. The new bounds can be applied to many interesting quadratic eigenvalue problems appearing in applications, such as mechanical models with indefinite damping. The quality of our bounds is demonstrated by several numerical experiments.
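The paper's analysis runs through an equivalent regular Hermitian pencil $A - \lambda B$. As a generic baseline (a standard companion linearization, not the structured Hermitian pair used in the paper), the quadratic problem can be solved with scipy.linalg.eig:

```python
import numpy as np
from scipy.linalg import eig

def quadeig(M, C, K):
    """Solve det(lam^2 M + lam C + K) = 0 via a companion linearization.
    With v = [x; lam*x], the pencil A - lam*B below is equivalent to the QEP."""
    n = M.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    lam, V = eig(A, B)          # generalized eigenproblem A v = lam B v
    return lam, V[:n, :]        # eigenvalues and eigenvector top blocks

M = np.eye(2)
C = np.array([[0.0, 1.0], [1.0, 0.0]])
K = np.diag([2.0, 5.0])
lam, X = quadeig(M, C, K)
print(np.linalg.norm((lam[0]**2 * M + lam[0] * C + K) @ X[:, 0]))  # ~0
```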

A structure preserving approximation method for Hamiltonian exponential matrices

Applied Numerical Mathematics, 2012

The approximation of $\exp(A)V$, where $A$ is a real matrix and $V$ a rectangular matrix, is a key ingredient of many exponential integrators for solving systems of ordinary differential equations. In this paper we give a structure-preserving approximation method for $\exp(A)V$ when $A$ is a Hamiltonian or skew-Hamiltonian $2n \times 2n$ real matrix. Our approach is based on Krylov subspace methods that preserve Hamiltonian or skew-Hamiltonian structure; in particular, we use a symplectic Lanczos algorithm to compute the desired approximation.
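For comparison, here is the standard (unstructured) Arnoldi approximation $\exp(A)v \approx \|v\|\, V_m \exp(H_m) e_1$ for a single vector; the paper replaces this recurrence with a symplectic Lanczos process so that the Krylov basis, and hence the approximation, respects the (skew-)Hamiltonian structure. A plain NumPy/SciPy sketch:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm_action(A, v, m=20):
    """Standard Arnoldi approximation exp(A) v ~ ||v|| V_m exp(H_m) e_1.
    Unstructured baseline: the symplectic Lanczos variant would preserve
    Hamiltonian or skew-Hamiltonian structure of A in the small problem."""
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: exact subspace found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]
```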

A Riemannian Optimization Approach for Computing Low-Rank Solutions of Lyapunov Equations

SIAM Journal on Matrix Analysis and Applications, 2010

We propose a new framework based on optimization on manifolds to approximate the solution of a Lyapunov matrix equation by a low-rank matrix. The method minimizes the error on the Riemannian manifold of symmetric positive semidefinite matrices of fixed rank. We detail how objects from differential geometry, such as the Riemannian gradient and Hessian, can be computed efficiently for this manifold. As the minimization algorithm we use the Riemannian trust-region method of [Found. Comput. Math., 7 (2007), pp. 303-330], based on a second-order model of the objective function on the manifold. Together with an efficient preconditioner, this method can find low-rank solutions with very little memory. We illustrate our results with numerical examples.
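As a much-simplified stand-in for the Riemannian machinery, one can already see the fixed-rank idea by parametrizing $X = YY^T$ with $Y \in \mathbb{R}^{n \times k}$ and running plain Euclidean gradient descent on the residual norm. Everything here (objective, step size, initialization) is an illustrative assumption rather than the paper's method, which instead works intrinsically on the manifold with a trust-region model and preconditioning.

```python
import numpy as np

def lowrank_lyapunov(A, C, k, steps=5000, lr=1e-3):
    """Fit X = Y Y^T of rank k to A X + X A^T + C = 0 by gradient descent
    on f(Y) = ||A Y Y^T + Y Y^T A^T + C||_F^2 (Euclidean stand-in for the
    Riemannian trust-region method; assumes C symmetric)."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    Y = 0.1 * rng.standard_normal((n, k))
    for _ in range(steps):
        X = Y @ Y.T
        R = A @ X + X @ A.T + C               # symmetric residual
        Y -= lr * 4 * (A.T @ R + R @ A) @ Y   # gradient of f w.r.t. Y
    return Y

A = np.array([[-2.0, 1.0, 0.0],
              [0.0, -3.0, 1.0],
              [0.0, 0.0, -4.0]])
C = np.eye(3)
Y = lowrank_lyapunov(A, C, k=2)
X = Y @ Y.T
print(np.linalg.norm(A @ X + X @ A.T + C))  # residual of the rank-2 fit
```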