Solving matrix nearness problems via Hamiltonian systems, matrix factorization, and optimization

Low rank differential equations for Hamiltonian matrix nearness problems

Numerische Mathematik, 2014

For a Hamiltonian matrix with purely imaginary eigenvalues, we aim to determine the nearest Hamiltonian matrix such that some or all eigenvalues leave the imaginary axis. Conversely, for a Hamiltonian matrix with all eigenvalues lying off the imaginary axis, we look for a nearest Hamiltonian matrix that has a pair of imaginary eigenvalues. The Hamiltonian matrices can be allowed to be complex or restricted to be real. Such Hamiltonian matrix nearness problems are motivated by applications such as the analysis of passive control systems. They are closely related to the problem of determining extremal points of Hamiltonian pseudospectra. We obtain a characterization of optimal perturbations, which turn out to be of low rank and are attractive stationary points of low-rank differential equations that we derive. We use a two-level approach: on the inner level we determine extremal points of the Hamiltonian ε-pseudospectrum for a given ε by following the low-rank differential equations into a stationary point, and on the outer level we optimize over ε. This permits us to give fast algorithms, exhibiting quadratic convergence, for solving the considered Hamiltonian matrix nearness problems.
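The central structural object here is a Hamiltonian matrix, i.e., a matrix H such that JH is Hermitian for J = [[0, I], [−I, 0]]. As a hedged illustration (background machinery, not the paper's low-rank ODE algorithm), the sketch below verifies this structure and reports which eigenvalues sit off the imaginary axis; the example matrix and tolerances are our own choices.

```python
# Minimal sketch: verify Hamiltonian structure (J*H Hermitian) and list
# eigenvalues with (numerically) nonzero real part. Tolerances are ad hoc.
import numpy as np

def is_hamiltonian(H, tol=1e-12):
    n = H.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n),       np.zeros((n, n))]])
    S = J @ H
    return np.linalg.norm(S - S.conj().T) < tol

def off_axis_eigenvalues(H, tol=1e-10):
    """Eigenvalues of H that have left the imaginary axis."""
    ev = np.linalg.eigvals(H)
    return ev[np.abs(ev.real) > tol]

# Example: a 4x4 real Hamiltonian matrix whose spectrum lies on the axis.
n = 2
E = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = np.eye(n)    # F, G symmetric
G = -np.eye(n)
H = np.block([[E, F], [G, -E.T]])
assert is_hamiltonian(H)
print(off_axis_eigenvalues(H))   # empty: all eigenvalues are imaginary
```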

On approximating the nearest Ω‐stable matrix

Numerical Linear Algebra with Applications, 2020

In this paper, we consider the problem of approximating a given matrix with a matrix whose eigenvalues lie in some specific region Ω of the complex plane. More precisely, we consider three types of regions and their intersections: conic sectors, vertical strips and disks. We refer to this problem as the nearest Ω-stable matrix problem. This includes as special cases the stable matrices for continuous and discrete time linear time-invariant systems. In order to achieve this goal, we parametrize this problem using dissipative Hamiltonian matrices and linear matrix inequalities. This leads to a reformulation of the problem with a convex feasible set. By applying a block coordinate descent method on this reformulation, we are able to compute solutions to the approximation problem, which is illustrated on some examples.
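A minimal sketch of the Ω-membership test implied by the three region types above. The region parameters (sector angle, strip endpoints, disk center and radius) are illustrative choices, not values from the paper, and this only checks eigenvalue locations; it does not run the paper's block coordinate descent reformulation.

```python
# Hedged sketch: eigenvalue test for an Omega region built as the
# intersection of a conic sector, a vertical strip, and a disk.
import numpy as np

def in_omega(z, theta=np.pi / 3, strip=(-10.0, -0.1), disk=(0.0, 5.0)):
    """Sector |arg(-z)| <= theta, Re(z) in strip, |z - c| <= r."""
    sector = np.abs(np.angle(-z)) <= theta
    vstrip = (strip[0] <= z.real) & (z.real <= strip[1])
    c, r = disk
    indisk = np.abs(z - c) <= r
    return sector & vstrip & indisk

def is_omega_stable(A, **region):
    return bool(np.all(in_omega(np.linalg.eigvals(A), **region)))

A = np.diag([-1.0, -2.0, -0.5])
print(is_omega_stable(A))   # True for the default illustrative region
```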

On computing the distance to stability for matrices using linear dissipative Hamiltonian systems

Automatica, 2017

In this paper, we consider the problem of computing the nearest stable matrix to an unstable one. We propose new algorithms to solve this problem based on a reformulation using linear dissipative Hamiltonian systems: we show that a matrix A is stable if and only if it can be written as A = (J − R)Q, where J = −J^T, R ⪰ 0, and Q ≻ 0 (that is, R is positive semidefinite and Q is positive definite). This reformulation results in an equivalent optimization problem with a simple convex feasible set. We propose three strategies to solve the problem in variables (J, R, Q): (i) a block coordinate descent method, (ii) a projected gradient descent method, and (iii) a fast gradient method inspired by smooth convex optimization. These methods require O(n^3) operations per iteration, where n is the size of A. We show the effectiveness of the fast gradient method compared to the other approaches and to several state-of-the-art algorithms.
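As a rough illustration of the (J, R, Q) parametrization, here is a minimal projected-gradient sketch for minimizing ‖A − (J − R)Q‖_F with the three structural projections (skew-symmetric part, positive semidefinite cone, positive definite floor). Step size, initialization, and iteration count are ad hoc; this is not the paper's tuned implementation.

```python
# Hedged sketch of projected gradient on (J, R, Q):
# minimize ||A - (J - R)Q||_F^2 with J skew, R >= 0, Q > 0.
import numpy as np

def proj_skew(M):
    return (M - M.T) / 2

def proj_psd(M, delta=0.0):
    S = (M + M.T) / 2
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.maximum(w, delta)) @ V.T

def nearest_stable_pgd(A, iters=2000, lr=1e-3, delta=1e-8):
    n = A.shape[0]
    J, R, Q = proj_skew(A), proj_psd(-(A + A.T) / 2), np.eye(n)
    for _ in range(iters):
        E = (J - R) @ Q - A                        # residual
        J = proj_skew(J - lr * 2 * E @ Q.T)
        R = proj_psd(R + lr * 2 * E @ Q.T)         # gradient wrt R has opposite sign
        Q = proj_psd(Q - lr * 2 * (J - R).T @ E, delta)  # keep Q pos. def.
    return (J - R) @ Q

A = np.array([[0.6, 2.0], [-1.0, 0.9]])   # unstable example
X = nearest_stable_pgd(A)
print(np.linalg.eigvals(X).real.max())    # spectral abscissa (~0 at a boundary optimum)
```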

Approximating the nearest stable discrete-time system

Linear Algebra and its Applications, 2019

In this paper, we consider the problem of stabilizing discrete-time linear systems by computing a nearby stable matrix to an unstable one. To do so, we provide a new characterization for the set of stable matrices. We show that a matrix A is stable if and only if it can be written as A = S^{-1}UBS, where S is positive definite, U is orthogonal, and B is a positive semidefinite contraction (that is, the singular values of B are less than or equal to 1). This characterization results in an equivalent nonconvex optimization problem with a feasible set onto which it is easy to project. We propose a very efficient fast projected gradient method to tackle the problem in variables (S, U, B) and generate locally optimal solutions. We show the effectiveness of the proposed method compared to other approaches.
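What makes this feasible set easy to project onto are three standard projections: eigenvalue flooring for S positive definite, the polar factor (via SVD) for U orthogonal, and eigenvalue clipping to [0, 1] for the positive semidefinite contraction B. A hedged sketch of just those projections (the fast gradient iteration itself is omitted):

```python
# Projections onto the three factors of the S^{-1} U B S parametrization.
import numpy as np

def proj_pd(M, delta=1e-8):
    S = (M + M.T) / 2
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.maximum(w, delta)) @ V.T

def proj_orth(M):
    W, _, Vt = np.linalg.svd(M)
    return W @ Vt                     # nearest orthogonal matrix

def proj_psd_contraction(M):
    S = (M + M.T) / 2
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, 0.0, 1.0)) @ V.T

# Any (S, U, B) from these sets yields a (Schur) stable A = S^{-1} U B S,
# since A is similar to UB and ||UB||_2 <= 1.
S = proj_pd(np.eye(3))
U = proj_orth(np.random.randn(3, 3))
B = proj_psd_contraction(np.random.randn(3, 3))
A = np.linalg.solve(S, U @ B @ S)
print(np.abs(np.linalg.eigvals(A)).max())  # spectral radius <= 1
```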

Nearest matrix with prescribed eigenvalues

This paper concerns the spectral norm distance from $A \in \mathbb{C}^{n \times n}$ to matrices whose spectrum includes the set $\Lambda$ consisting of $k \le n$ prescribed complex numbers. We obtain some lower bounds for this distance in the spectral norm. Also, under two mild assumptions, a perturbation matrix $\Delta$ is constructed such that $A+\Delta$ has $\Lambda$ in its spectrum and $\Delta$ is the optimal perturbation of $A$, in the sense that the perturbation $\Delta$ has minimum spectral norm.
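For a single prescribed eigenvalue $z$, the spectral-norm distance is the classical $\sigma_{\min}(A - zI)$, which immediately gives a lower bound for the set case by maximizing over $\Lambda$. The sketch below computes that standard bound; the paper's own bounds may be sharper.

```python
# Hedged sketch of the classical building block behind such bounds.
import numpy as np

def dist_one_eig(A, z):
    """sigma_min(A - z I): spectral-norm distance to {M : z in spec(M)}."""
    n = A.shape[0]
    return np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]

def lower_bound(A, Lambda):
    """max over z in Lambda of sigma_min(A - z I)."""
    return max(dist_one_eig(A, z) for z in Lambda)

A = np.array([[1.0, 2.0], [0.0, 3.0]])
print(lower_bound(A, [2.0 + 1.0j, -1.0]))
```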

The matrix nearness problem for symmetric matrices associated with the matrix equation [A^T XA, B^T XB] = [C, D]

Linear Algebra and Its Applications, 2006

A direct method, based on the projection theorem in inner product spaces, the generalized singular value decomposition, and the canonical correlation decomposition, is presented for finding, in the set S_E, the optimal approximation to a given matrix, where S_E denotes the least-squares symmetric solution set of the matrix equation [A^T XA, B^T XB] = [C, D]. The analytical expression of this optimal approximate solution is obtained, and an algorithm for finding it is also suggested.
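The paper's route is analytical (GSVD plus canonical correlation decomposition). As a purely numerical cross-check for small dense problems, one can vectorize over a Frobenius-orthonormal basis of symmetric matrices, compute the least-squares solution set of the pair of equations, and project a given matrix onto it; the helper names below are ours, and this is only a sketch under those assumptions.

```python
# Hedged numerical sketch (small n only), not the paper's direct method.
import numpy as np

def sym_basis(n):
    """Frobenius-orthonormal basis of n x n symmetric matrices."""
    E = []
    for i in range(n):
        for j in range(i, n):
            M = np.zeros((n, n))
            if i == j:
                M[i, i] = 1.0
            else:
                M[i, j] = M[j, i] = 1.0 / np.sqrt(2.0)
            E.append(M)
    return E

def nearest_ls_symmetric(A, B, C, D, X0, tol=1e-10):
    basis = sym_basis(A.shape[0])
    M = np.column_stack([np.concatenate([(A.T @ E @ A).ravel(),
                                         (B.T @ E @ B).ravel()])
                         for E in basis])
    t = np.concatenate([C.ravel(), D.ravel()])
    x_p = np.linalg.lstsq(M, t, rcond=None)[0]       # one LS minimizer
    _, s, Vt = np.linalg.svd(M)
    rank = int((s > tol * s[0]).sum())
    N = Vt[rank:].T                                   # null-space basis
    x0 = np.array([np.sum(X0 * E) for E in basis])    # coords of symmetric X0
    x = x_p + N @ (N.T @ (x0 - x_p))                  # project onto solution set
    return sum(c * E for c, E in zip(x, basis))

n = 3
A, B = np.random.randn(n, n), np.random.randn(n, n)
Xs = np.random.randn(n, n); Xs = (Xs + Xs.T) / 2
C, D = A.T @ Xs @ A, B.T @ Xs @ B                    # consistent data
Xhat = nearest_ls_symmetric(A, B, C, D, np.eye(n))
print(np.linalg.norm(A.T @ Xhat @ A - C))            # ~0 for consistent data
```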

Finding the Nearest Passive or Nonpassive System via Hamiltonian Eigenvalue Optimization

SIAM Journal on Matrix Analysis and Applications, 2021

We propose and study an algorithm for computing a nearest passive system to a given non-passive linear time-invariant system (with much freedom in the choice of the metric defining 'nearest', which may be restricted to structured perturbations), and also a closely related algorithm for computing the structured distance of a given passive system to non-passivity. Both problems are addressed by solving eigenvalue optimization problems for Hamiltonian matrices that are constructed from perturbed system matrices. The proposed algorithms are two-level methods that optimize the Hamiltonian eigenvalue of smallest positive real part over perturbations of a fixed size in the inner iteration, using a constrained gradient flow. They optimize over the perturbation size in the outer iteration, which is shown to converge quadratically in the typical case of a defective coalescence of simple eigenvalues approaching the imaginary axis. For large systems, we propose a variant of the algorithm that takes advantage of the inherent low-rank structure of the problem. Numerical experiments illustrate the behavior of the proposed algorithms.
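The Hamiltonian matrices in question arise from the positive-real (passivity) test. Assuming R = D + D^T is nonsingular, a standard construction is sketched below: purely imaginary eigenvalues of this Hamiltonian flag frequencies where passivity fails or is marginal. This is background machinery in our own notation, not the paper's two-level optimization algorithm.

```python
# Hedged sketch: Hamiltonian passivity test for an LTI system (A, B, C, D),
# assuming R = D + D^T is nonsingular.
import numpy as np

def passivity_hamiltonian(A, B, C, D):
    R = D + D.T
    Ri = np.linalg.inv(R)
    F = A - B @ Ri @ C
    return np.block([[F,             -B @ Ri @ B.T],
                     [C.T @ Ri @ C,  -F.T          ]])

def has_imaginary_eigs(M, tol=1e-8):
    ev = np.linalg.eigvals(M)
    return bool(np.any(np.abs(ev.real) < tol))

# Strictly passive example: no purely imaginary Hamiltonian eigenvalues.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.eye(2); C = np.eye(2); D = 2 * np.eye(2)
print(has_imaginary_eigs(passivity_hamiltonian(A, B, C, D)))  # False
```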

A study of measuring the distance of a stable matrix to the unstable matrices

2018

This paper determines the 2-norm and Frobenius-norm distance from a given matrix A to the nearest matrix with an eigenvalue on the imaginary axis, and we explain a bisection method due to Byers for computing it. If A has all of its eigenvalues in the open left half-plane, this distance measures how "nearly unstable" A is. Each bisection step provides a rigorous upper bound or lower bound on the distance. We also use Byers' bisection method to estimate the distance to the nearest matrix with an eigenvalue on the unit circle.
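A minimal sketch of the bisection idea for the distance to instability β(A) of a stable A, using the standard Hamiltonian test: H(σ) = [[A, −σI], [σI, −A^H]] has a purely imaginary eigenvalue if and only if σ ≥ β(A). Here σ_min(A) serves as an initial upper bound, since a perturbation of that norm moves an eigenvalue to zero; tolerances are ad hoc.

```python
# Hedged sketch of Byers-style bisection for the distance to instability.
import numpy as np

def has_imag_eig(M, tol=1e-7):
    return bool(np.any(np.abs(np.linalg.eigvals(M).real) < tol))

def byers_bisection(A, tol=1e-6):
    n = A.shape[0]
    I = np.eye(n)
    lo = 0.0
    hi = np.linalg.svd(A, compute_uv=False)[-1]   # sigma_min(A): upper bound
    while hi - lo > tol * max(hi, 1.0):
        s = (lo + hi) / 2
        H = np.block([[A, -s * I], [s * I, -A.conj().T]])
        if has_imag_eig(H):
            hi = s     # s >= beta(A): shrink from above
        else:
            lo = s     # s < beta(A): raise the lower bound
    return (lo + hi) / 2

A = np.array([[-0.5, 1.0], [0.0, -0.5]])   # stable example
print(byers_bisection(A))                   # distance to the unstable matrices
```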

On matrix approximation

Proceedings of the American Mathematical Society, 1975

In this paper we give an algebraic characterization of the best approximants to a given matrix A from a real line spanned by a matrix B. The distance ‖A − aB‖ is taken to be the spectral norm of A − aB.
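Numerically, this is a one-dimensional convex problem, since a ↦ ‖A − aB‖₂ is the norm of an affine family, so a scalar solver recovers a best approximant for comparison with the paper's algebraic characterization. A sketch:

```python
# Hedged sketch: numerically minimize ||A - aB||_2 over real a.
import numpy as np
from scipy.optimize import minimize_scalar

def best_line_approx(A, B):
    f = lambda a: np.linalg.norm(A - a * B, 2)   # spectral norm, convex in a
    res = minimize_scalar(f)
    return res.x, res.fun

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.eye(2)
a_star, dist = best_line_approx(A, B)
print(a_star, dist)
```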

Nearest matrix with two prescribed eigenvalues

Linear Algebra and its Applications, 2005

Given a square complex matrix A and two complex numbers z1 and z2, we find the distance from A to the set of matrices that have z1 and z2 among their eigenvalues. The distance between two matrices is measured in the spectral norm.
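Results of Malyshev/Lippert type in this line of work express such distances as the maximum, over a real parameter, of the (2n−1)st singular value of a parametrized 2n × 2n block matrix. The sketch below illustrates the shape of that formula; treat it as an assumption-laden illustration, not a verified implementation of this paper's result.

```python
# Hedged sketch of a Malyshev/Lippert-type formula for two prescribed
# eigenvalues: maximize the (2n-1)st singular value over gamma >= 0.
import numpy as np
from scipy.optimize import minimize_scalar

def sigma_2n_minus_1(A, z1, z2, gamma):
    n = A.shape[0]
    I = np.eye(n)
    M = np.block([[A - z1 * I,        gamma * I ],
                  [np.zeros((n, n)),  A - z2 * I]])
    return np.linalg.svd(M, compute_uv=False)[2 * n - 2]  # (2n-1)st value

def distance_two_eigs(A, z1, z2):
    # maximize over gamma >= 0 (negate for the scalar minimizer);
    # the search interval is a heuristic choice
    res = minimize_scalar(lambda g: -sigma_2n_minus_1(A, z1, z2, g),
                          bounds=(0.0, 10.0 * np.linalg.norm(A, 2)),
                          method='bounded')
    return -res.fun

A = np.array([[1.0, 2.0], [0.0, 3.0]])
print(distance_two_eigs(A, 0.0, 0.5))
```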