A robust solution of the generalized polynomial Bezout identity
Related papers
On the Computation of Minimal Polynomial Bases
2004
The problem of determination of a minimal polynomial basis of a rational vector space is the starting point of many control analysis, synthesis and design techniques. In this paper, we propose a new algorithm for the computation of a minimal polynomial basis of the left kernel of a given polynomial matrix F(s). The proposed method exploits the structure of the left null space of generalized Wolovich or Sylvester resultants, in order to compute efficiently row polynomial vectors that form a minimal polynomial basis of the left kernel of the given polynomial matrix. One of the advantages of the algorithm is that it can be implemented using only orthogonal transformations of constant matrices and the result is a minimal basis with orthonormal coefficients.
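As a rough illustration of the resultant-based idea in this abstract, the sketch below (Python; the helper names and the degree bound k are our own assumptions, not the paper's) builds a generalized Sylvester resultant from the coefficient matrices of F(s) and reads degree-at-most-k left-kernel vectors off its left null space via an SVD, i.e. using only orthogonal transformations. The paper's full bookkeeping for assembling a minimal basis (choosing degrees, discarding dependent rows) is not reproduced.

import numpy as np

def sylvester_resultant(F_coeffs, k):
    """Stack shifted copies of [F_0 F_1 ... F_d] into a ((k+1)p x (k+d+1)m) resultant."""
    d = len(F_coeffs) - 1
    p, m = F_coeffs[0].shape
    S = np.zeros(((k + 1) * p, (k + d + 1) * m))
    for i in range(k + 1):
        for j, Fj in enumerate(F_coeffs):
            S[i * p:(i + 1) * p, (i + j) * m:(i + j + 1) * m] = Fj
    return S

def left_kernel_degree_k(F_coeffs, k, tol=1e-10):
    """Orthonormal coefficient rows [x_0 ... x_k] of x(s) with x(s) F(s) = 0, deg x <= k."""
    S = sylvester_resultant(F_coeffs, k)
    U, sing, _ = np.linalg.svd(S)
    rank = int(np.sum(sing > tol))
    return U[:, rank:].T          # each row stacks the coefficients x_0, ..., x_k

# Example: F(s) = [s, s**2]^T has left kernel spanned (up to scale) by [s, -1].
F0 = np.array([[0.0], [0.0]]); F1 = np.array([[1.0], [0.0]]); F2 = np.array([[0.0], [1.0]])
print(left_kernel_degree_k([F0, F1, F2], k=1))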
The null space of the Bezout matrix in any basis and gcd's
arXiv (Cornell University), 2014
This manuscript presents a generalization of the structure of the null space of the Bezout matrix in the monomial basis, see [15], to an arbitrary basis. In addition, two methods are given for computing the gcd of several polynomials, also using Bezout matrices, without having to convert them to the monomial basis. The main point is that the presented results are expressed with respect to an arbitrary polynomial basis. In recent years, many problems in polynomial systems, stability theory, CAGD, etc., have been solved using Bezout matrices in distinct specific bases. Therefore, it is very useful to have results and tools that can be applied to any basis.
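For context, here is a small Python sketch (monomial basis only, helper name ours) of the classical fact that this paper generalizes to arbitrary bases: the nullity of the Bezout matrix of two scalar polynomials equals the degree of their gcd.

import numpy as np

def bezout_matrix(p, q):
    """Bezout matrix (monomial basis) of p(x) = sum p[i] x^i and q(x) = sum q[i] x^i."""
    n = max(len(p), len(q)) - 1
    p = np.pad(np.asarray(p, float), (0, n + 1 - len(p)))
    q = np.pad(np.asarray(q, float), (0, n + 1 - len(q)))
    B = np.zeros((n, n))
    # Expand (p(x)q(y) - p(y)q(x)) / (x - y) term by term.
    for i in range(n + 1):
        for j in range(i):
            c = p[i] * q[j] - p[j] * q[i]
            for t in range(i - j):
                B[i - 1 - t, j + t] += c
    return B

p = [-1.0, 0.0, 1.0]      # x^2 - 1 = (x - 1)(x + 1)
q = [2.0, -3.0, 1.0]      # x^2 - 3x + 2 = (x - 1)(x - 2)
B = bezout_matrix(p, q)
print(len(p) - 1 - np.linalg.matrix_rank(B))   # 1 = dim null(B) = deg gcd = deg(x - 1)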
A new method for computing a column reduced polynomial matrix
Systems & Control Letters, 1988
A new algorithm is presented for computing a column reduced form of a given full column rank polynomial matrix. The method is based on reformulating the problem as that of constructing a minimal basis for the right nullspace of a polynomial matrix closely related to the original one. The latter problem can easily be solved in a numerically reliable way. Two examples illustrating the method are included.
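The following is not the paper's algorithm, only a short Python sketch of the property it computes (coefficient storage and helper names are illustrative assumptions): P(s) is column reduced exactly when its highest-column-degree coefficient matrix has full column rank.

import numpy as np

def column_degrees(P_coeffs, tol=1e-12):
    """Degree of each column of P(s) = P_0 + P_1 s + ..., given as a list of arrays."""
    m = P_coeffs[0].shape[1]
    return [max((k for k, Pk in enumerate(P_coeffs)
                 if np.linalg.norm(Pk[:, j]) > tol), default=0) for j in range(m)]

def is_column_reduced(P_coeffs, tol=1e-12):
    degs = column_degrees(P_coeffs, tol)
    Phc = np.column_stack([P_coeffs[degs[j]][:, j] for j in range(len(degs))])
    return np.linalg.matrix_rank(Phc, tol) == len(degs)

# P(s) = [[s, s], [1, s]] is column reduced: its leading-coefficient matrix is [[1, 1], [0, 1]].
P0 = np.array([[0.0, 0.0], [1.0, 0.0]]); P1 = np.array([[1.0, 1.0], [0.0, 1.0]])
print(is_column_reduced([P0, P1]))   # True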
On the Computation of the Minimal Polynomial of a Polynomial Matrix
International Journal of Applied Mathematics and Computer Science, 2005
The main contribution of this work is to provide two algorithms for the computation of the minimal polynomial of univariate polynomial matrices. The first algorithm is based on the solution of linear matrix equations while the second one employs DFT techniques. The whole theory is illustrated with examples.
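A hedged symbolic sketch of the first idea (the linear-equation route; neither the authors' code nor the DFT variant): find the least k for which I, A(s), ..., A(s)^k are linearly dependent over the rational functions in s by solving the linear system formed from their vectorizations. Degree and conditioning issues handled in the paper are ignored here.

import sympy as sp

s, lam = sp.symbols('s lambda')

def matrix_minimal_polynomial(A):
    n = A.shape[0]
    powers = [sp.eye(n)]
    for k in range(1, n + 1):
        powers.append(sp.expand(powers[-1] * A))
        # Columns are vec(I), vec(A), ..., vec(A^k); a null vector gives the coefficients.
        M = sp.Matrix.hstack(*[P.reshape(n * n, 1) for P in powers])
        null = M.nullspace()
        if null:
            c = sp.simplify(null[0] / null[0][k])     # make the polynomial monic in lambda
            return sp.expand(sum(sp.together(c[i]) * lam**i for i in range(k + 1)))
    return None

A = sp.Matrix([[s, 1], [0, s]])
print(matrix_minimal_polynomial(A))   # lambda**2 - 2*lambda*s + s**2, i.e. (lambda - s)**2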
Numerical computation of minimal polynomial bases: A generalized resultant approach
We propose a new algorithm for the computation of a minimal polynomial basis of the left kernel of a given polynomial matrix F(s). The proposed method exploits the structure of the left null space of generalized Wolovich or Sylvester resultants to compute row polynomial vectors that form a minimal polynomial basis of the left kernel of the given polynomial matrix. The entire procedure can be implemented using only orthogonal transformations of constant matrices and results in a minimal basis with orthonormal coefficients.
On the computation of the minimal polynomial of a two-variable polynomial matrix
The Fourth International Workshop on Multidimensional Systems, 2005. NDS 2005., 2005
The main contribution of this work is to provide an algorithm for the computation of the minimal polynomial of a two-variable polynomial matrix, based on the solution of linear matrix equations. The theory is illustrated via an example.
Fast error-free algorithms for polynomial matrix computations
29th IEEE Conference on Decision and Control, 1990
Matrices of polynomials over rings and fields provide a unifying framework for many control system design problems. These include dynamic compensator design, infinite dimensional systems, controllers for nonlinear systems, and even controllers for discrete event systems. An important obstacle for utilizing these powerful mathematical tools in practical applications has been the non-availability of accurate and efficient algorithms to carry through the precise error-free computations required by these algebraic methods. In this paper we develop highly efficient, error-free algorithms for most of the important computations needed in linear systems over fields or rings. We show that the structure of the underlying rings and modules is critical in designing such algorithms.
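As a generic illustration of the error-free theme (not the paper's algorithms), the sketch below carries out one basic building block of polynomial matrix computation, polynomial division over the rationals, in exact arithmetic with Python's Fraction, so no rounding error can accumulate.

from fractions import Fraction

def polydiv_exact(num, den):
    """Quotient and remainder of num/den; coefficients low-to-high, exact over Q."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    q = [Fraction(0)] * max(len(num) - len(den) + 1, 1)
    r = num[:]
    for k in range(len(num) - len(den), -1, -1):
        q[k] = r[k + len(den) - 1] / den[-1]
        for j in range(len(den)):
            r[k + j] -= q[k] * den[j]
    while len(r) > 1 and r[-1] == 0:
        r.pop()
    return q, r

# (x^3 - 1) / (3x - 3) = (1/3)x^2 + (1/3)x + 1/3 exactly, remainder 0 -- no 0.333... round-off.
print(polydiv_exact([-1, 0, 0, 1], [-3, 3]))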
Algorithm to Compute Minimal Nullspace Basis of a Polynomial Matrix
2010
In this paper we propose a numerical algorithm to compute the minimal nullspace basis of a univariate polynomial matrix of arbitrary size. In order to do so a sequence of structured matrices is obtained from the given polynomial matrix. The nullspace of the polynomial matrix can be computed from the nullspaces of these structured matrices.
Extensions of Faddeev's algorithms to polynomial matrices
2009
Starting from algorithms introduced in [Ky M. Vu, An extension of the Faddeev's algorithms, in: Proceedings of the IEEE Multi-conference on Systems and Control, September 3-5, 2008, San Antonio, TX], which are applicable to one-variable regular polynomial matrices, we introduce two dual extensions of Faddeev's algorithm to one-variable rectangular or singular matrices. Corresponding algorithms for symbolically computing the Drazin and the Moore-Penrose inverse are introduced. These algorithms are alternatives to previous representations of the Moore-Penrose and Drazin inverses of one-variable polynomial matrices based on the Leverrier-Faddeev algorithm. Complexity analysis is performed. The algorithms are implemented in the symbolic computational package MATHEMATICA and illustrative test examples are presented.
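For reference, a short sketch of the classical Leverrier-Faddeev recursion these extensions start from (the rectangular/singular, Moore-Penrose and Drazin variants of the paper are not reproduced). In exact symbolic arithmetic it applies unchanged to one-variable regular polynomial matrices; we use sympy here rather than MATHEMATICA.

import sympy as sp

def faddeev(A):
    """Return ([c_0, ..., c_{n-1}, 1], B_{n-1}) with det(lambda*I - A) = sum_k c_k lambda^k."""
    n = A.shape[0]
    B = sp.eye(n)
    coeffs = [None] * n + [sp.Integer(1)]
    for k in range(1, n + 1):
        AB = sp.expand(A * B)
        coeffs[n - k] = sp.expand(-AB.trace() / k)
        B_prev = B
        B = AB + coeffs[n - k] * sp.eye(n)     # B ends up as the zero matrix
    return coeffs, B_prev

s = sp.symbols('s')
A = sp.Matrix([[s, 1], [0, s + 1]])
coeffs, B1 = faddeev(A)
print(coeffs)                          # [s**2 + s, -2*s - 1, 1]
print(sp.simplify(-B1 / coeffs[0]))    # inverse of the regular polynomial matrix A(s)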