Linear transformations that preserve the assignment

On some new matrix transformations

Journal of Inequalities and Applications, 2013

In this paper, we characterize some matrix classes (ω(p, s), V_σ^λ), (ω_p(s), V_σ^λ) and (ω_p(s), V_σ^λ)_reg under appropriate conditions.

On The Linear Transformation of Division Matrices

In this study, we deal with functions from square matrices to square matrices of the same order. Such a function will be called a linear transformation, defined as follows: let M_n(R) be the set of square matrices of order n, n ∈ N, and let A be a regular matrix in M_n(R). Then the special function T_A : M_n(R) → M_n(R), X ↦ A^T X A, is called a linear transformation from M_n(R) to M_n(R), and the following two properties hold for all X, Y ∈ M_n(R) and scalars α ∈ R: i. T_A(X + Y) = T_A(X) + T_A(Y) (we say that T_A preserves additivity); ii. T_A(αX) = α T_A(X) (we say that T_A preserves scalar multiplication). In this case the matrix A is called the standard matrix of the function T_A. Here, we transfer some well-known properties of linear transformations to the elements of the set {T_A : A regular in M_n(R)} [1].
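The two defining properties can be checked numerically. This is a quick sketch (not from the paper); the congruence form X ↦ A^T X A is an assumed reconstruction of the garbled formula in the abstract, but any map of this shape satisfies both properties:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))  # a generic A is regular (invertible) almost surely

def T_A(X):
    # Assumed congruence form X -> A^T X A (reconstruction, not confirmed by the source)
    return A.T @ X @ A

X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))
alpha = 2.5

# i. T_A preserves additivity
assert np.allclose(T_A(X + Y), T_A(X) + T_A(Y))
# ii. T_A preserves scalar multiplication
assert np.allclose(T_A(alpha * X), alpha * T_A(X))
```

Both properties follow directly from the bilinearity of matrix multiplication, so they hold regardless of which product with A defines T_A.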

Special Issue on “Structured Matrices: Analysis, Algorithms and Applications”

Linear Algebra and its Applications

The mathematical modeling of problems of the real world often leads to problems in linear algebra involving structured matrices, where the entries are defined by a few parameters according to a compact formula. Matrix patterns and structural properties provide a uniform means for describing different features of the problems that they model. The analysis of theoretical and computational properties of these structures is a fundamental step in the design of efficient solution algorithms.

Certain structures are encountered very frequently and reflect specific features that are common to different problems arising in diverse fields of theoretical and applied mathematics and engineering. In particular, properties of shift invariance, shared by many mathematical entities like point-spread functions, integral kernels, probability distributions, convolutions, etc., are the common feature which gives rise to Toeplitz matrices. In fact, Toeplitz matrices, characterized by having constant entries along their diagonals, are encountered in fields like image processing, signal processing, digital filtering, queueing theory, computer algebra, linear prediction and the numerical solution of certain difference and differential equations, just to mention a few. The interest in this class of matrices is not motivated only by the applications; in fact, Toeplitz matrices are endowed with a very rich set of mathematical properties, and there exists a very wide literature, dating back to the first half of the last century, on their analytic, algebraic, spectral and computational properties.

Other classes of structured matrices are less pervasive in terms of applications but are nevertheless no less important. Frobenius matrices, Hankel matrices, Sylvester matrices and Bezoutians, encountered in control theory, in stability issues, and in polynomial computations, have a rich variety of theoretical properties and have been the object of many studies. Vandermonde matrices, Cauchy matrices, Loewner matrices and Pick matrices are more frequently encountered in the framework of interpolation problems. Tridiagonal and more general banded matrices and their inverses, which are semiseparable matrices, are very familiar in numerical analysis. Their extension to more general classes and the design of efficient algorithms for them have recently received much attention. Multi-dimensional problems lead to matrices which can be represented as structured block matrices with a structure within the blocks themselves. Kronecker product
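The defining property of a Toeplitz matrix mentioned above (entry (i, j) depends only on i − j, so every diagonal is constant) takes only a few lines to realize; this sketch is illustrative and is not tied to any particular paper in this issue (`scipy.linalg.toeplitz` provides an equivalent ready-made constructor):

```python
import numpy as np

def toeplitz(c, r):
    """Build a Toeplitz matrix from its first column c and first row r.

    Entry (i, j) depends only on i - j, so each diagonal is constant.
    """
    c, r = np.asarray(c), np.asarray(r)
    n, m = len(c), len(r)
    T = np.empty((n, m), dtype=np.result_type(c, r))
    for i in range(n):
        for j in range(m):
            # below (or on) the main diagonal take c[i-j], above it take r[j-i]
            T[i, j] = c[i - j] if i >= j else r[j - i]
    return T

T = toeplitz([1, 2, 3], [1, 4, 5, 6])
# every diagonal of T is constant
for k in range(-2, 4):
    d = np.diagonal(T, offset=k)
    assert np.all(d == d[0])
```

Only n + m − 1 parameters determine all n·m entries, which is exactly the "few parameters, compact formula" compression that makes fast structured algorithms possible.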

Rhotrix Linear Transformation

Advances in Linear Algebra Matrix Theory, 2012

This paper considers the rank of a rhotrix and characterizes its properties, as an extension of ideas in the theory of rhotrices (rhomboidal arrays), introduced in 2003 as a new paradigm alongside the matrix theory of rectangular arrays. Furthermore, we present the necessary and sufficient condition under which a linear map can be represented over a rhotrix.

Applications of Linear Transformations to Matrix Equations

Linear Algebra and Its Applications, 1997

We consider the linear transformation T(X) = AX − CXB, where A, C ∈ M_n and B ∈ M_m. We show a new approach to obtaining conditions for the existence and uniqueness of the solution X of the matrix equation T(X) = R. As a consequence of our approach we present a simple characterization of a full-rank solution to the matrix equation. We apply the existence theorem to a general form of the observer matrix equation and characterize the existence of a full-rank solution. © 1997 Elsevier Science Inc.

NOTATION AND KEY WORDS. The following symbols and key words are used in this paper: M_{n,m}, the set of n-by-m complex matrices, with M_{n,n} = M_n; the column space of X ∈ M_{n,k}; N(T), the null space of a linear transformation T : M_{n,s} → M_{n,s}; A ⊗ B, the Kronecker product of the matrices A and B [10, Chapter 4]; A ⊕ B, the direct sum of the matrices A and B [9, Chapter 0]; E_i = [e_i 0 ⋯ 0] ∈ M_n, where e_i ∈ C^n is the ith standard basis vector.

Linear Algebra and Its Applications 267:221–240 (1997).
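The Kronecker product in the notation list hints at the standard route to solving T(X) = R: with the column-stacking vec operator, vec(AX) = (I ⊗ A) vec(X) and vec(CXB) = (Bᵀ ⊗ C) vec(X). This sketch uses that classical identity, not necessarily the paper's own approach:

```python
import numpy as np

def solve_T(A, C, B, R):
    """Solve T(X) = A X - C X B = R for X (n-by-m) via vectorization.

    Column-stacking vec turns T into the ordinary linear system
    (I_m kron A - B^T kron C) vec(X) = vec(R); a unique solution exists
    iff that Kronecker-structured matrix is nonsingular.
    """
    n, m = A.shape[0], B.shape[0]
    M = np.kron(np.eye(m), A) - np.kron(B.T, C)
    x = np.linalg.solve(M, R.flatten(order="F"))  # order="F" = column stacking
    return x.reshape((n, m), order="F")

rng = np.random.default_rng(1)
A, C = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))
X_true = rng.standard_normal((3, 2))
R = A @ X_true - C @ X_true @ B

X = solve_T(A, C, B, R)
assert np.allclose(X, X_true)
```

Forming the nm-by-nm Kronecker matrix explicitly is only practical for small sizes; the point of structure-exploiting results like those in the paper is precisely to avoid this dense reduction.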

Zero Structure Assignment Of Matrix Pencils: The Case Of Structured Additive Transformations

Proceedings of the 44th IEEE Conference on Decision and Control, 2005

Matrix pencil models are natural descriptions of linear networks and systems. Changing the values of elements of networks, that is, redesigning them, implies changes in the zero structure of the associated pencil by structured additive transformations. The paper examines the problem of zero assignment of regular matrix pencils by a special type of structured additive transformation. For a certain family of network redesign problems the additive perturbations may be described as diagonal perturbations, and such modifications are considered here. This problem has certain common features with the pole assignment of linear systems by structured static compensators, and thus the new powerful methodology of global linearisation [1, 2] can be used. For regular pencils with infinite zeros, families of structured degenerate additive transformations are defined and parameterised, and this leads to the derivation of conditions for zero structure assignment, as well as a methodology for computing such solutions. Finally, the case of regular pencils with no infinite zeros is considered and conditions for zero assignment are developed. The results here provide the means for studying certain problems of linear network redesign by modification of the non-dynamic elements.
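A minimal numerical illustration of the setting (generic values, not any network from the paper): the zeros of a regular pencil sF − G are its generalized eigenvalues, and a diagonal additive perturbation of G, the structured transformation discussed above, moves them.

```python
import numpy as np

def pencil_zeros(F, G):
    # Zeros of the regular pencil sF - G, i.e. the roots of det(sF - G);
    # for invertible F these are the eigenvalues of F^{-1} G.
    return np.sort(np.linalg.eigvals(np.linalg.solve(F, G)))

F = np.eye(2)
G = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # companion form: char. poly s^2 + 3s + 2

z0 = pencil_zeros(F, G)           # nominal zeros: -2 and -1
D = np.diag([0.5, -0.5])          # an illustrative diagonal perturbation
z1 = pencil_zeros(F, G + D)       # zeros after the structured modification
assert not np.allclose(z0, z1)
```

The assignment problem studied in the paper runs in the opposite direction: given target zeros, find a diagonal (structured) D placing them, which is what makes the global linearisation machinery necessary.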

Some properties of row-adjusted meet and join matrices

Linear and Multilinear Algebra, 2012

Let (P, ≤) be a lattice, S a finite subset of P and f_1, f_2, …, f_n complex-valued functions on P. We define row-adjusted meet and join matrices on S by (S)_{f_1,…,f_n} = (f_i(x_i ∧ x_j)) and [S]_{f_1,…,f_n} = (f_i(x_i ∨ x_j)). In this paper we determine the structure of the matrix (S)_{f_1,…,f_n} in the general case, and in the case when the set S is meet closed we give bounds for rank (S)_{f_1,…,f_n} and present expressions for det (S)_{f_1,…,f_n} and (S)^{-1}_{f_1,…,f_n}. The same is carried out dually for the row-adjusted join matrix of a join closed set S.
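A small concrete instance (the choice of the divisor lattice, where the meet x ∧ y is gcd(x, y), and of the functions f_i is mine for illustration, not taken from the paper):

```python
from math import gcd

# Row-adjusted meet matrix on a meet-closed subset of the divisor lattice.
# The (i, j) entry is f_i(x_i ∧ x_j) = f_i(gcd(x_i, x_j)): unlike a classical
# meet matrix, the function applied depends on the row index i.
S = [1, 2, 4, 6, 12]             # meet closed: every pairwise gcd is again in S
fs = [lambda t, k=k: t ** k for k in range(1, len(S) + 1)]  # f_i(t) = t^i

M = [[fs[i](gcd(S[i], S[j])) for j in range(len(S))] for i in range(len(S))]
```

For example, row 1 is all ones (f_1(gcd(1, x)) = 1), while entry (3, 4) is f_3(gcd(4, 6)) = 2³ = 8; the structural results of the paper describe such matrices for arbitrary lattices and function tuples.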

A Note on Special Matrices

The word "matrix" comes from the Latin word for "womb", because of the way that a matrix acts as a womb for the data that it holds. The first known example of the use of matrices was found in a Chinese text called Nine Chapters on the Mathematical Art, which is thought to have originated somewhere between 300 B.C. and 200 A.D. The modern method of matrix solution was developed by the German mathematician and scientist Carl Friedrich Gauss. There are many different types of matrices used in different modern career fields. We introduce and discuss the different types of matrices that play important roles in various fields.

Linear transformations on matrices: The invariance of the third elementary symmetric function

Canadian Journal of Mathematics, 1970

Let T be a linear transformation on M_n, the set of all n × n matrices over the field of complex numbers. Let A ∈ M_n have eigenvalues λ_1, …, λ_n and let E_r(A) denote the rth elementary symmetric function of the eigenvalues of A: E_r(A) = Σ_{1 ≤ i_1 < ⋯ < i_r ≤ n} λ_{i_1} ⋯ λ_{i_r}. Equivalently, E_r(A) is the sum of all the principal r × r subdeterminants of A. T is said to preserve E_r if E_r[T(A)] = E_r(A) for all A ∈ M_n. Marcus and Purves [3, Theorem 3.1] showed that for r ≥ 4, if T preserves E_r then T is essentially a similarity transformation; that is, either T : A ↦ UAV for all A ∈ M_n or T : A ↦ UA^tV for all A ∈ M_n, where UV = e^{iθ}I_n and rθ ≡ 0 (mod 2π).
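The equivalence between the two definitions of E_r, and its invariance under the similarity-type maps above, can be checked numerically. A sketch (taking V = U⁻¹, i.e. the θ = 0 case of UV = e^{iθ}I_n), not code from the paper:

```python
import numpy as np
from itertools import combinations

def E(r, A):
    """r-th elementary symmetric function of the eigenvalues of A,
    computed as the sum of all principal r-by-r subdeterminants."""
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(idx, idx)])
               for idx in combinations(range(n), r))

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
U = rng.standard_normal((4, 4))   # a generic U is invertible
V = np.linalg.inv(U)              # V = U^{-1}, so UV = I

# E_3 agrees with the elementary symmetric function of the eigenvalues ...
lam = np.linalg.eigvals(A)
e3 = sum(lam[i] * lam[j] * lam[k] for i, j, k in combinations(range(4), 3))
assert np.isclose(E(3, A), e3.real)

# ... and is preserved by the similarity transformation A -> U A V
assert np.isclose(E(3, A), E(3, U @ A @ V))
```

Invariance here is immediate, since UAU⁻¹ has the same eigenvalues as A; the content of the Marcus–Purves theorem is the converse, that for r ≥ 4 these are essentially the only E_r-preserving maps.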