Matrix Reference Manual: Special Matrices
Antisymmetric
see skew-symmetric.
Bidiagonal
A is upper bidiagonal if a(i,j)=0 unless i=j or i=j-1.
A is lower bidiagonal if a(i,j)=0 unless i=j or i=j+1.
A bidiagonal matrix is also tridiagonal, triangular and Hessenberg.
Bisymmetric
A[n#n] is bisymmetric if it is symmetric about both main diagonals, i.e. if A=AT=JAJ where J is the exchange matrix.
WARNING: The term persymmetric is sometimes used instead of bisymmetric. Also bisymmetric is sometimes used to mean centrosymmetric and sometimes to mean symmetric and perskewsymmetric.
- A bisymmetric matrix is symmetric, persymmetric and centrosymmetric. Any two of these four properties imply the other two.
- More generally, symmetry, persymmetry and centrosymmetry can each come in four flavours: symmetric, skew-symmetric, hermitian and skew-hermitian. Any pair of symmetries implies the third and the total number of skew and hermitian flavourings will be even. For example, if A is skew-hermitian and perskew-symmetric, then it will also be centrohermitian.
- If A[2m#2m] is bisymmetric
- A=[S PT; P JSJ] for some symmetric S[m#m] and persymmetric P[m#m].
- A is orthogonally similar to [S-JP 0; 0 S+JP]
- A has a set of 2m orthonormal eigenvectors consisting of m skew-symmetric vectors of the form [u; -Ju]/k and m symmetric vectors of the form [v; Jv]/k where u and v are eigenvectors of S-JP and S+JP respectively and k=sqrt(2).
- If A has distinct eigenvalues and rank(P)=1 then if the eigenvalues are arranged in descending order, the corresponding eigenvectors will be alternately symmetric and skew-symmetric with the first one being symmetric or skew-symmetric according to whether the non-zero eigenvalue of P is positive or negative.
- If A[2m+1#2m+1] is bisymmetric
- A=[S x PT; xT y xTJ; P Jx JSJ] for some symmetric S[m#m] and persymmetric P[m#m].
- A is orthogonally similar to [S-JP 0 0; 0 y kxT; 0 kx S+JP] where k=sqrt(2).
- A has a set of 2m+1 orthonormal eigenvectors consisting of m skew-symmetric vectors of the form [u; 0; -Ju]/k and m+1 symmetric vectors of the form [v; kw; Jv]/k where u and [v; w] are eigenvectors of S-JP and [S+JP kx; kxT y] respectively and k=sqrt(2).
- If A has distinct eigenvalues and P=0 then if the eigenvalues are arranged in descending order, the corresponding eigenvectors will be alternately symmetric and skew-symmetric with the first one being symmetric.
Block Diagonal
A is block diagonal if it has the form [A 0 ... 0; 0 B ... 0; ...; 0 0 ... Z] where A, B, ..., Z are matrices (not necessarily square).
- A matrix is block diagonal iff it is the direct sum of two or more smaller matrices.
Centrohermitian
A[m#n] is centrohermitian if it is rotationally hermitian symmetric about its centre, i.e. if AT=JAHJ where J is the exchange matrix.
- Centrohermitian matrices are closed under addition, multiplication and (if non-singular) inversion.
Centrosymmetric
A[m#n] is centrosymmetric (also called perplectic) if it is rotationally symmetric about its centre, i.e. if A=JAJ where J is the exchange matrix. It is centrohermitian if AT=JAHJ and centroskew-symmetric if A= -JAJ.
- Centrosymmetric matrices are closed under addition, multiplication and (if non-singular) inversion.
Circulant
A circulant matrix, A[n#n], is a Toeplitz matrix in which a(i,j) is a function of (i-j) modulo n. In other words each column of A is equal to the previous column rotated downwards by one element.
WARNING: The term circular is sometimes used instead of circulant.
- Circulant matrices are closed under addition, multiplication and (if non-singular) inversion.
- A circulant matrix, A[n#n], may be expressed uniquely as a polynomial in C, the cyclic permutation matrix, as A = Sum_i=0:n-1{a(i+1,1) Ci} = Sum_i=0:n-1{a(1,i+1) C-i}
- All circulant matrices have the same eigenvectors. If A[n#n] is a circulant matrix, the normalized eigenvectors of A are the columns of n-½F, the discrete Fourier transform matrix. The corresponding eigenvalues are the discrete Fourier transform of the first row of A given by FATe1 = (FACe1)C = nF-1Ae1 where e1 is the first column of I.
- F-1AF = n-1FHAF = DIAG(FATe1)
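These last two properties are easy to check numerically; a minimal sketch in Python with NumPy (the size and test matrix are arbitrary choices):
```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
c = rng.standard_normal(n)
# Each column is the previous column rotated down by one element:
A = np.stack([np.roll(c, k) for k in range(n)], axis=1)

# DFT matrix with f(p,q) = exp(-2j*pi*(p-1)(q-1)/n)
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)

lam = F @ A[0, :]                      # F A' e1: DFT of the first row of A
D = (F.conj().T / n) @ A @ F           # using F^-1 = n^-1 F^H
assert np.allclose(D, np.diag(lam))    # F^-1 A F = DIAG(F A' e1)
```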
Circular
A Circular matrix, A[n#n], is one for which AAC = I.
WARNING: The term circular is sometimes used for a circulant matrix.
- A matrix A is circular iff A=exp(jB) where j = sqrt(-1), B is real and exp() is the matrix exponential function.
- If A = B + jC where B and C are real and j = sqrt(-1), then A is circular iff BC=CB and also BB + CC = I.
Companion Matrix
If p(x) is a polynomial of the form a(0) + a(1)x + a(2)x2 + ... + a(n)xn then the polynomial's companion matrix is n#n and equals [0 I; -a(0:n-1)/a(n)] where I is n-1#n-1. For n=1, the companion matrix is [-a(0)/a(1)].
The rows and columns are sometimes given in reverse order [-a(n-1:0)/a(n); I 0].
- The characteristic and minimal polynomials of a companion matrix both equal p(x).
- The eigenvalues of a companion matrix equal the roots of p(x).
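A minimal sketch of the companion-matrix/roots relationship in Python with NumPy (the example polynomial is an arbitrary choice):
```python
import numpy as np

# p(x) = -6 + 11x - 6x^2 + x^3 = (x-1)(x-2)(x-3)
a = np.array([-6.0, 11.0, -6.0, 1.0])   # coefficients a(0)..a(n)
n = len(a) - 1
C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)              # the [0 I] block
C[-1, :] = -a[:-1] / a[-1]              # bottom row is -a(0:n-1)/a(n)

# The roots of p are real here, so drop the ~0 imaginary parts:
print(np.sort(np.linalg.eigvals(C).real))   # -> [1. 2. 3.]
```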
Complex
A matrix is complex if it has complex elements.
Complex to Real Isomorphism
We can associate a complex matrix C[m#n] with a corresponding real matrix R[2m#2n] by replacing each complex element, z, of C by a 2#2 real matrix [zR -zI; zI zR] = |z|×[cos(t) -sin(t); sin(t) cos(t)] where t=arg(z) and zR and zI denote the real and imaginary parts of z. We will write C <=> R for this mapping below.
- This mapping preserves the operations +,-,*,/ and, for square matrices, inversion. It does not however preserve • (Hadamard) or ⊗ (Kronecker) products.
- If C <=> R
- R = C ⊗ [()R -()I; ()I ()R] where the operators ()R and ()I take the real and imaginary parts respectively.
- C = (I[m#m] ⊗ [1 j]) R (I[n#n] ⊗ [1; 0]) = (I ⊗ [1 0]) R (I ⊗ [1; -j]) = ½(I ⊗ [1 j]) R (I ⊗ [1; -j]) where j=sqrt(-1).
- CR = (I[m#m] ⊗ [1 0]) R (I[n#n] ⊗ [1; 0]) = (I[m#m] ⊗ [0 1]) R (I[n#n] ⊗ [0; 1])
- det(R)=|det(C)|2
- tr(R)=2 tr(C)
- R is orthogonal iff C is unitary.
- R is symmetric iff C is hermitian.
* R is positive definite symmetric iff C is positive definite hermitian.
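A minimal numerical sketch of the mapping and three of the properties above, in Python with NumPy (the helper name c2r is mine, not the manual's):
```python
import numpy as np

def c2r(C):
    # Replace each element z of C by the 2#2 block [Re z, -Im z; Im z, Re z]
    return np.kron(C.real, np.eye(2)) + \
           np.kron(C.imag, np.array([[0.0, -1.0], [1.0, 0.0]]))

rng = np.random.default_rng(1)
C = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
D = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

assert np.allclose(c2r(C @ D), c2r(C) @ c2r(D))                     # preserves *
assert np.allclose(c2r(np.linalg.inv(C)), np.linalg.inv(c2r(C)))    # preserves inversion
assert np.isclose(np.linalg.det(c2r(C)), abs(np.linalg.det(C))**2)  # det(R) = |det(C)|^2
```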
Vector mapping: Under the isomorphism a complex vector maps to a real matrix: z[n] <=> Y[2n#2]. We can also define a simpler mapping, <->, from a vector to a vector as z[n] <-> x[2n] = z ⊗ [()R; ()I] = Y[1; 0]
In the results below, we assume z[n] <-> x[2n], w[n] <-> u[2n] and C <=> R:
- If wHCz is known to be real, then wHCz = uTRx
- If C is hermitian, then, zHCz = xTRx
- zHz = xTx
To relate the matrix and vector mappings, <-> and <=>, we define the following two block-diagonal matrices: E = I[n#n] ⊗ [0 1; 1 0] and N = I[n#n] ⊗ [1 0; 0 -1]. We now have the following properties (assuming z[n] <-> x[2n] and C <=> R):
- E2=N2=I
- ET=E, NT=N
- EN=-NE
- ENEN=NENE=-I
- xTENx=xTNEx=0
- ENRNE = NEREN = R
- RNE = NER
- REN = ENR
- ENREN = NERNE = -R
- CH <=> RT
- CT <=> NRTN
- CC <=> NRN
- jC <=> ENR = REN = -NER = -RNE
- z <-> x and z <=> [x ENx]
- zC <-> Nx and zC <=> [Nx Ex]
- zH <-> xT and zH <=> YT = [xT; xTNE]
- zT <-> xTN and zT <=> [xTN; xTE]
Convergent
A matrix A is convergent if Ak tends to 0 as k tends to infinity.
- A is convergent iff all its eigenvalues have modulus < 1.
- A is convergent iff there exists a positive definite X such that X-AHXA is positive definite (Stein's theorem)
- If Sk is defined as I+A+A2+ … +Ak, then A is convergent iff Sk converges as k tends to infinity. If it does converge, its limit is (I-A)-1.
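A minimal sketch of the last property in Python with NumPy (the test matrix and its rescaling to spectral radius 0.9 are arbitrary choices):
```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))   # force the spectral radius to 0.9 < 1

S = np.eye(4)                               # partial sums I + A + A^2 + ...
P = np.eye(4)
for _ in range(500):
    P = P @ A
    S += P

assert np.allclose(S, np.linalg.inv(np.eye(4) - A))   # the limit is (I-A)^-1
```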
See also: Stability
Cyclic Permutation Matrix
The n#n cyclic permutation matrix (or cyclic shift matrix), C, is equal to [0n-1T 1; In-1#n-1 0n-1]. Its elements are given by c(i,j) = δ(i, 1+(j mod n)) where δ(i,j) is the Kronecker delta.
- C is a toeplitz, circulant, permutation matrix.
- Cx is the same as x but with the last element moved to the top and all other elements shifted down by one position.
- C-1 = CT = Cn-1
- Cn = I
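A minimal sketch in Python with NumPy:
```python
import numpy as np

n = 5
C = np.zeros((n, n))
C[0, -1] = 1.0
C[1:, :-1] = np.eye(n - 1)     # C = [0' 1; I 0]

x = np.arange(1.0, n + 1)      # [1 2 3 4 5]
print(C @ x)                   # [5 1 2 3 4]: last element moved to the top
assert np.allclose(np.linalg.matrix_power(C, n), np.eye(n))   # C^n = I
assert np.allclose(np.linalg.inv(C), C.T)                     # C^-1 = C'
```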
Decomposable
A matrix, A, is fully decomposable (or reducible) if there exists a permutation matrix P such that PTAP is of the form [B C; 0 D] where B and D are square.
A matrix, A, is partly-decomposable if there exist permutation matrices P and Q such that PTAQ is of the form [B C; 0 D] where B and D are square.
A matrix that is not even partly-decomposable is fully-indecomposable.
Defective
A matrix, X:n#n, is defective if it does not have n linearly independent eigenvectors, otherwise it is simple.
- X is defective iff X is not diagonalizable.
- X is defective iff the geometric and algebraic multiplicity differ for at least one eigenvalue.
- X is non-defective iff its Jordan form is diagonal.
Derogatory
An n*n square matrix is derogatory if its minimal polynomial is of lower order than n.
Diagonal
A is diagonal if a(i,j)=0 unless i=j.
- Diagonal matrices are closed under addition, multiplication and (where possible) inversion.
- The determinant of a square diagonal matrix is the product of its diagonal elements.
- If D is diagonal, DA multiplies each row of A by a constant while BD multiplies each column of B by a constant.
- If D is diagonal then XDXT = sum_i(d(i) × xixiT) and XDXH = sum_i(d(i) × xixiH) [1.15]
- If D is diagonal then tr(XDXT) = sum_i(d(i) × xiTxi) and tr(XDXH) = sum_i(d(i) × xiHxi) = sum_i(d(i) × |xi|2) [1.16]
- If D is diagonal then AD = DA iff a(i,j)=0 whenever d(i,i) != d(j,j). [1.12]
- If D = DIAG(c1I1, c2I2, ..., cMIM) where the ck are distinct scalars and the Ik are identity matrices, then AD = DA iff A = DIAG(A1, A2, ..., AM) where each Ak is the same size as the corresponding Ik. [1.13]
The functions DIAG(x) and diag(X) respectively convert a vector into a diagonal matrix and the diagonal of a square matrix into a vector. The function sum(X) sums the rows of X to produce a vector. In the expression below, • denotes elementwise (a.k.a. Hadamard) multiplication.
- diag(DIAG(x)) = x
- xT(diag(Y)) = tr(DIAG(x)Y)
- DIAG(x) DIAG(y) =DIAG(x • y)
- diag(XY) = sum(X • YT)
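These identities are easy to verify numerically; a minimal sketch in Python with NumPy, where np.diag plays the role of both DIAG() and diag():
```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.standard_normal(4), rng.standard_normal(4)
X, Y = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

assert np.allclose(np.diag(np.diag(x)), x)                   # diag(DIAG(x)) = x
assert np.isclose(x @ np.diag(Y), np.trace(np.diag(x) @ Y))  # x'(diag(Y)) = tr(DIAG(x)Y)
assert np.allclose(np.diag(x) @ np.diag(y), np.diag(x * y))  # DIAG(x)DIAG(y) = DIAG(x.y)
assert np.allclose(np.diag(X @ Y), np.sum(X * Y.T, axis=1))  # diag(XY) = sum(X.Y')
```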
Diagonalizable or Diagonable or Simple or Non-Defective
A matrix, X, is diagonalizable (or, equivalently, simple or diagonable or non-defective) if it is similar to a diagonal matrix otherwise it is defective.
- If X is diagonalizable, it may be written X=EDE-1 where D is a diagonal matrix of eigenvalues and the columns of E are the corresponding eigenvectors.
- [X, Y diagonalizable]: The diagonalizable matrices, X and Y, commute, i.e. XY=YX, iff they can be decomposed as X=EDE-1 and Y=EGE-1 where D and G are diagonal and the columns of E form a common set of eigenvectors.
- The following are equivalent:
- X is diagonalizable
- The Jordan form of X is diagonal.
- For each eigenvalue of X, the geometric and algebraic multiplicities are equal.
- X has n linearly independent eigenvectors.
Diagonally Dominant
A square matrix An#n is diagonally dominant if the absolute value of each diagonal element is greater than the sum of absolute values of the non-diagonal elements in its row. That is, if for each i we have |a(i,i)| > sum_j!=i(|a(i,j)|) or equivalently abs(diag(A)) > ½ABS(A)1n#1.
- [Real]: If the diagonal elements of a square matrix A are all >0 and if A and AT are both diagonally dominant then A is positive definite.
- If A is diagonally dominant and irreducible then
- A is non-singular
- If diag(A) > 0 then all eigenvalues of A have strictly positive real parts.
Discrete Fourier Transform
The discrete Fourier transform matrix, F[n#n], has f(p,q) = exp(-2jπ(p-1)(q-1)/n).
- Fx is the discrete Fourier transform (DFT) of x.
- F is a symmetric, Vandermonde matrix.
- F-1 = n-1FH = n-1FC
- If y = Fx then yHy = n xHx. This is Parseval's theorem.
- |det(F)| = nn/2.
- tr(F) = √n, 0, -j√n or (1-j)√n according to whether n mod 4 equals 1, 2, 3 or 0 (a Gauss sum); in particular tr(F) = 0 only when n mod 4 = 2.
- FC = DIAG(Fe2)F where C is the cyclic permutation matrix and e2 is the second column of I.
- If A[n#n] is a circulant matrix, the normalized eigenvectors of A are the columns of n-½F. The corresponding eigenvalues are the discrete Fourier transform of the first row of A given by FATe1 = (FACe1)C = nF-1Ae1 where e1 is the first column of I.
- [n=2k]: F[n#n] = GP where:
- P is a symmetric permutation matrix with P = prod_r=1:k(Ek-r ⊗ [Er-1 ⊗ [1 0]; Er-1 ⊗ [0 1]]) where Es is a 2s#2s identity matrix and ⊗ denotes the Kronecker product. If x=0:n-1 then Px consists of the same numbers but arranged in bit-reversed order (e.g. for n=8, Px = [0; 4; 2; 6; 1; 5; 3; 7]).
- G = prod_r=1:k(Er-1 ⊗ [[1 1] ⊗ Ek-r; [1 -1] ⊗ Wk-r]T) where the diagonal "twiddle factor" matrix is Ws = DIAG(exp(-2-s jπ(0:2s-1))).
- Calculation of Fx as GPx is the "decimation-in-time" FFT (Fast Fourier Transform) while Fx = FTx = PGTx is the "decimation-in-frequency" FFT. In each case only O(n log2 n) non-trivial arithmetic operations are required because most of the non-zero elements of the factors of G are equal to ±1.
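A minimal sketch of the basic DFT-matrix identities in Python with NumPy (np.fft.fft uses the same sign convention as F above):
```python
import numpy as np

n = 8
p, q = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
F = np.exp(-2j * np.pi * p * q / n)

assert np.allclose(np.linalg.inv(F), F.conj().T / n)     # F^-1 = n^-1 F^H
x = np.random.default_rng(4).standard_normal(n)
y = F @ x
assert np.isclose(y.conj() @ y, n * (x @ x))             # Parseval: y^H y = n x^H x
assert np.allclose(y, np.fft.fft(x))                     # agrees with the FFT
assert np.isclose(np.trace(F), (1 - 1j) * np.sqrt(n))    # Gauss sum, n mod 4 = 0 case
```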
Doubly-Stochastic
A real non-negative square matrix A is doubly-stochastic if its rows and columns all sum to 1.
See under stochastic for properties.
Essential
An essential matrix, E, is the product E=US of a 3#3 orthogonal matrix, U, and a 3#3 skew-symmetric matrix, S = SKEW(s). In 3-D euclidean space, a translation+rotation transformation is associated with an essential matrix.
- If E=U SKEW(s) is an essential matrix then
- E=SKEW(Us) U
- ETE = (sTs)I - ssT
- EET = (sTs)I - UssTUT
- tr(ETE) = tr(EET) = 2sTs
- If E is an essential matrix then so are ET, kE and WEV where k is a non-zero scalar and W and V are orthogonal.
- E is an essential matrix iff rank(E)=2 and EETE = ½tr(EET)E. This defines a set of nine homogeneous cubic equations.
- E is an essential matrix iff its singular values are k, k and 0 for some k>0.
- If the singular value decomposition of E is E = Q DIAG([k; k; 0])RT, then we can write E = US where U = Q[0 1 0; -1 0 0; 0 0 1]RT and S = R[0 -k 0; k 0 0; 0 0 0]RT = SKEW(R[0; 0; k]).
- If E is an essential matrix then A = kE for some k iff Ex × Ax = 0 for all x where × denotes the vector cross product.
Exchange
The exchange matrix J[n#n] is equal to [en en-1 … e2 e1] where ei is the ith column of I. It is equal to I but with the columns in reverse order.
- J is Hankel, Orthogonal, Symmetric, Permutation, Doubly Stochastic.
- J2 = I
- JAT, JAJ and ATJ are versions of the matrix A that have been rotated anti-clockwise by 90, 180 and 270 degrees
- JA, JATJ, AJ and AT are versions of the matrix A that have been reflected in lines at 0, 45, 90 and 135 degrees to the horizontal measured anti-clockwise.
- det(J[n#n]) = (-1)n(n-1)/2 i.e. it equals +1 if n mod 4 equals 0 or 1 and -1 if n mod 4 equals 2 or 3
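A minimal sketch of the rotation and reflection properties in Python with NumPy:
```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
J = np.fliplr(np.eye(3))                       # exchange matrix

assert np.allclose(J @ A.T, np.rot90(A, 1))    # 90 degrees anti-clockwise
assert np.allclose(J @ A @ J, np.rot90(A, 2))  # 180 degrees
assert np.allclose(A.T @ J, np.rot90(A, 3))    # 270 degrees
assert np.allclose(J @ A, np.flipud(A))        # reflection in a horizontal line
assert np.isclose(np.linalg.det(J), -1)        # n=3: n mod 4 = 3 gives det = -1
```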
Givens Reflection
[Real]: A Givens Reflection is an n#n matrix of the form PT[Q 0; 0 I]P where P is any permutation matrix and Q is a matrix of the form [cos(x) sin(x); sin(x) -cos(x)].
- A Givens reflection is symmetric and orthogonal.
- The determinant of a Givens reflection = -1.
- [2*2]: A 2#2 matrix is a Givens reflection iff it is a Householder matrix.
Givens Rotation
[Real]: A Givens Rotation is an n#n matrix of the form PT[Q 0; 0 I]P where P is a permutation matrix and Q is a matrix of the form [cos(x) sin(x); -sin(x) cos(x)].
- A Givens rotation is orthogonal and a Rotation matrix.
- The determinant of a Givens rotation = +1.
Hadamard
An n*n Hadamard matrix has orthogonal columns whose elements are all equal to +1 or -1.
- Hadamard matrices exist only for n=1, n=2 or n a multiple of 4.
- If A is an n*n Hadamard matrix then ATA = nI. Thus A/sqrt(n) is orthogonal.
- If A is an n*n Hadamard matrix then |det(A)| = nn/2.
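A minimal sketch in Python with SciPy (scipy.linalg.hadamard implements the Sylvester construction, so n must be a power of 2):
```python
import numpy as np
from scipy.linalg import hadamard

n = 8
H = hadamard(n)                                         # entries are +1/-1
assert np.allclose(H.T @ H, n * np.eye(n))              # A'A = nI
assert np.isclose(abs(np.linalg.det(H)), n ** (n / 2))  # |det(A)| = n^(n/2)
```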
Hamiltonian
A real 2n#2n matrix, A, is Hamiltonian if KA is symmetric, where K = [0 I; -I 0].
See also: symplectic
Hankel
A Hankel matrix has constant anti-diagonals. In other words a(i,j) depends only on (i+j).
- A Hankel matrix is symmetric.
- [A: Hankel] If J is the exchange matrix, then JAJ is Hankel; JA and AJ are Toeplitz.
- [A, B: Hankel] A+B and A-B are Hankel.
Hermitian
A square matrix A is Hermitian if A = AH, that is a(i,j)=conj(a(j,i))
For real matrices, Hermitian and symmetric are equivalent. Except where stated, the following properties apply to real symmetric matrices as well.
- [Complex]: A is Hermitian iff xHAx is real for all (complex) x.
- The following are equivalent
- A is Hermitian and +ve semidefinite
- A=BHB for some B
- A=C2 for some Hermitian C.
- Any matrix A has a unique decomposition A = B + jC where B and C are Hermitian: B = (A+AH)/2 and C = (A-AH)/(2j)
- Hermitian matrices are closed under addition, multiplication by a scalar, raising to an integer power, and (if non-singular) inversion.
- Hermitian matrices are normal with real eigenvalues, that is A = UDUH for some unitary U and real diagonal D.
- A is Hermitian iff xHAy=xHAHy for all x and y.
- If A and B are hermitian then so are AB+BA and j(AB-BA) where j = sqrt(-1).
- For any complex a with |a|=1, there is a 1-to-1 correspondence between the unitary matrices, U, not having a as an eigenvalue and hermitian matrices, H, given by U=a(jH-I)(jH+I)-1 and H=j(U+aI)(U-aI)-1 where j = sqrt(-1). These are Cayley's formulae.
- Taking a=-1 gives U=(I-jH)(I+jH)-1=(I+jH)-1(I-jH) and H=j(U-I)(U+I)-1=j(U+I)-1(U-I).
See also: Definiteness, Loewner partial order
Hessenberg
A Hessenberg matrix is like a triangular matrix except that the elements adjacent to the main diagonal can be non-zero.
A is upper Hessenberg if A(i,j)=0 whenever i>j+1. It is like an upper triangular matrix except for the elements immediately below the main diagonal.
A is lower Hessenberg if a(i,j)=0 whenever i<j-1. It is like a lower triangular matrix except for the elements immediately above the main diagonal.
- A symmetric or Hermitian Hessenberg matrix is tridiagonal.
- If A is upper triangular and B is upper Hessenberg then AB is upper Hessenberg.
Hilbert
A Hilbert matrix is a square Hankel matrix with elements a(i,j)=1/(i+j-1).
- The inverse of a Hilbert matrix has integer elements.
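A minimal sketch in Python with SciPy:
```python
import numpy as np
from scipy.linalg import hilbert, invhilbert

H = hilbert(5)                               # H[i,j] = 1/(i+j+1) with 0-based indices
Hinv = invhilbert(5)
assert np.allclose(Hinv, np.round(Hinv))     # the inverse is an integer matrix
assert np.allclose(H @ Hinv, np.eye(5))
```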
Homogeneous
If we define an equivalence relation in which X ~ Y iff X = cY for some non-zero scalar c, then the equivalence classes are called homogeneous matrices and homogeneous vectors.
- Multiplication: If X ~ A and Y ~ B, then XY ~ AB
- Addition: If X ~ A and Y ~ B then it is not generally true that X+Y ~ A+B
- The projective space RPn consists of all non-zero homogeneous vectors from Rn+1.
Householder
A Householder matrix (also called Householder reflection or transformation) is a matrix of the form (I-2vvH) for some vector v with ||v||=1.
Multiplying a vector by a Householder transformation reflects it in the hyperplane that is orthogonal to v.
Householder matrices are important because they can be chosen to annihilate any contiguous block of elements in any chosen vector.
- A Householder matrix is symmetric and orthogonal.
- Given a vector x, we can choose a Householder matrix P such that Px=[-k 0 0 ... 0]H where k=sgn(x(1))×||x||. To do so, we choose v = (x + ke1)/||x + ke1|| where e1 is the first column of the identity matrix. The first row of P equals -k-1xT and the remaining rows form an orthonormal basis for the null space of xT.
- [2*2]: A 2*2 matrix is Householder iff it is a Givens Reflection.
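A minimal real-valued sketch of that construction in Python with NumPy (the function name is mine):
```python
import numpy as np

def householder(x):
    """P = I - 2vv' with v = (x + k e1)/||x + k e1|| and k = sgn(x(1))||x||,
    so that P @ x = [-k, 0, ..., 0]."""
    k = np.linalg.norm(x)
    if x[0] < 0:
        k = -k
    v = x.astype(float).copy()
    v[0] += k
    v /= np.linalg.norm(v)
    return np.eye(len(x)) - 2.0 * np.outer(v, v)

x = np.array([3.0, 1.0, 5.0, 1.0])            # ||x|| = 6
P = householder(x)
print(P @ x)                                  # [-6.  0.  0.  0.]
assert np.allclose(P, P.T) and np.allclose(P @ P, np.eye(4))  # symmetric, orthogonal
```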
Hypercompanion
The hypercompanion matrix of the polynomial p(x)=(x-a)n is an n#n upper bidiagonal matrix, H, that is zero except for the value a along the main diagonal and the value 1 on the diagonal immediately above it. That is, h(i,j) = a if j=i, 1 if j=i+1 and 0 otherwise.
If the real polynomial p(x)=(x2-ax-b)n with a2+4b<0 (i.e. the quadratic term has no real factors) then its real hypercompanion matrix is a 2n#2n tridiagonal matrix that is zero except for a at even positions along the main diagonal, b at odd positions along the sub-diagonal and 1 at all positions along the super-diagonal. Thus for odd i, h(i,j) = 1 if j=i+1 and 0 otherwise, while for even i, h(i,j) = 1 if j=i+1, a if j=i and b if j=i-1.
- The characteristic and minimal polynomials of the hypercompanion matrix equal p(x).
Idempotent
A matrix P is idempotent if P2 = P. An idempotent matrix that is also hermitian is called a projection matrix.
WARNING: Some people call any idempotent matrix a projection matrix and call it an orthogonal projection matrix if it is also hermitian.
- The following conditions are equivalent
- P is idempotent
- P is similar to a diagonal matrix each of whose diagonal elements equals 0 or 1.
- 2P-I is involutary.
- If P is idempotent, then:
- rank(P)=tr(P).
- The eigenvalues of P are all either 0 or 1. The geometric multiplicity of the eigenvalue 1 is rank(P).
- PH, I-P and I-PH are all idempotent.
- P(I-P) = (I-P)P = 0.
- Px=x iff x lies in the range of P.
- The null space of P equals the range of I-P. In other wordsPx=0 iff x lies in the range of I-P.
- P is its own generalized inverse, P#.
- [A: n#n, F,G: n#r] If A=FGH where F and G are of full rank, then A is idempotent iff GHF = I.
Identity
The identity matrix, I, has a(i,i)=1 for all i and a(i,j)=0 for all i != j
Impotent
A non-negative matrix T is impotent if min(diag(Tn)) = 0 for all integers n>0 [see potency].
Incidence
An incidence matrix is one whose elements all equal 1 or 0.
Integral
An Integral matrix is one whose elements are all integers.
Involutary (also written Involutory)
An Involutary matrix is one whose square equals the identity.
- A is involutary iff ½(A+I) is idempotent.
- A[2#2] is involutary iff A = +-I or else A = [a b; (1-a2)/b -a] for some real or complex a and b.
Irreducible
see under Reducible
Jacobi
see under Tridiagonal
Monotone
A matrix, A, is monotone iff A-1 is non-negative, i.e. all its entries are >=0.
In computer science a matrix is monotone if its entries are monotonically non-decreasing as you move away from the main diagonal along either a row or column.
Nilpotent
A matrix A is nilpotent to index k if Ak = 0 but Ak-1 != 0.
- The determinant of a nilpotent matrix is 0.
- The eigenvalues of a nilpotent matrix are all 0.
- If A is nilpotent to index k, its minimal polynomial is tk.
Non-negative
see under positive
Normal
A square matrix A is normal if AHA = AAH
- An#n is normal iff any of the following equivalent conditions is true
- A is unitarily similar to a diagonal matrix.
- A has an orthonormal set of n eigenvectors
- eig(A)Heig(A) = ||A||F2 where ||A||F is the Frobenius norm.
- The following types of matrix are normal: diagonal, hermitian, skew-hermitian and unitary.
- A normal matrix is hermitian iff its eigenvalues are all real.
- A normal matrix is skew-hermitian iff its eigenvalues all have zero real parts.
- A normal matrix is unitary iff its eigenvalues all have an absolute value of 1.
- For any X[m#n], XHX and XXH are normal.
- The singular values of a normal matrix are the absolute values of the eigenvalues.
- [A: normal] The eigenvalues of AH are the conjugates of the eigenvalues of A and have the same eigenvectors.
- Normal matrices are closed under raising to an integer power and (if non-singular) inversion.
- If A and B are normal and AB=BA then ABis normal.
Orthogonal
A real square matrix Q is orthogonal if QTQ = I. It is a proper orthogonal matrix if det(Q)=1 and an improper orthogonal matrix if det(Q)=-1.
For real matrices, orthogonal and unitary mean the same thing. Most properties are listed under unitary.
Geometrically: Orthogonal matrices in 2 and 3 dimensions correspond to rotations and reflections.
- The determinant of an orthogonal matrix equals +-1 according to whether it is proper or improper.
- Q is a proper orthogonal matrix iff Q = exp(K) or K=ln(Q) for some real skew-symmetric K.
- A 2#2 orthogonal matrix is either a Givens rotation or a Givens reflection according to whether it is proper or improper.
- A 3#3 orthogonal matrix is either a rotation matrix or else a rotation matrix plus a reflection in the plane of the rotation according to whether it is proper or improper.
- For a=+1 or a=-1, there is a 1-to-1 correspondence between real skew-symmetric matrices, K, and orthogonal matrices, Q, not having a as an eigenvalue given by Q=a(K-I)(K+I)-1 and K=(aI+Q)(aI-Q)-1. These are Cayley's formulae.
- For a=-1 this gives Q=(I-K)(I+K)-1 and K=(I-Q)(I+Q)-1. Note that (I+K) is always non-singular.
Permutation
A square matrix P is a permutation matrix if its columns are a permutation of the columns of I.
- A permutation matrix is orthogonal and doubly stochastic.
- The set of permutation matrices is closed under multiplication and inversion.
- If P is a permutation matrix:
- P-1 = PT
- P2 = I iff P is symmetric
- P is a permutation matrix iff each row and each column contains a single 1 with all other elements equal to 0.
Persymmetric
A matrix A[n#n] is persymmetric if it is symmetric about its anti-diagonal, i.e. if A=JATJ where J is the exchange matrix. It is perhermitian if A=JAHJ and perskewsymmetric if A= -JATJ.
WARNING: The term persymmetric is sometimes used for a bisymmetric matrix.
- If A is persymmetric then so is Ak for any positive or, providing A is non-singular, negative k.
- A Toeplitz matrix is persymmetric.
Polynomial Matrix
A polynomial matrix of order p is one whose elements are polynomials of a single variable x. Thus A=A(0)+A(1)x+...+A(p)xp where the A(i) are constant matrices and A(p) is not all zero.
See also regular.
Positive
A real matrix is positive if all its elements are strictly > 0.
A real matrix is non-negative if all its elements are >= 0.
- [Perron's theorem] If An#n is positive with spectral radius r, then the real positive value r is an eigenvalue with the following properties:
- the eigenvalue r is simple, i.e. its algebraic and geometric multiplicities equal 1.
- the eigenvector, x, satisfying Ax = rx can be chosen to have strictly positive real elements.
- the eigenvector, y, satisfying ATy = ry can be chosen to have strictly positive real elements.
- all other eigenvalues have magnitude strictly less than r and their corresponding eigenvectors cannot be chosen to have all elements strictly positive and real.
- The rank-1 idempotent matrix, T = xyT/xTy, is the projection onto the eigenspace spanned by x. The limit lim_m->inf(r-1A)m = T = xyT/xTy.
- [Perron-Frobenius theorem] If An#n is irreducible and non-negative with spectral radius r, then the real positive value r is an eigenvalue with the following properties:
- the eigenvalue r is simple, i.e. its algebraic and geometric multiplicities equal 1.
- the eigenvector, x, satisfying Ax = rx can be chosen to have strictly positive real elements.
- the eigenvector, y, satisfying ATy = ry can be chosen to have strictly positive real elements.
- the eigenvectors associated with any other eigenvalue cannot be chosen to have all elements strictly positive and real.
- If there are h eigenvalues of magnitude r, then these eigenvalues are simple and are given by r exp(2jπk/h) for k=0, 1, …, h-1, where h is the period.
Positive Definite
see under definiteness
Primitive
If k is the eigenvalue of a matrix An#n having the largest absolute value, then A is primitive if the absolute values of all other eigenvalues are < |k|.
- If An#n is non-negative then A is primitive iff Am is positive for some m>0.
- If An#n is non-negative and primitive then lim_m->inf(r-1A)m = xyT where r is the spectral radius of A and x and y are positive eigenvectors satisfying Ax = rx, ATy = ry and xTy = 1.
Projection
A projection matrix (or orthogonal projection matrix) is a square matrix that is hermitian and idempotent: i.e. PH=P2=P.
WARNING: Some people call any idempotent matrix a projection matrix and call it an orthogonal projection matrix if it is also hermitian.
- If P is a projection matrix then P is positive semi-definite.
- I-P is a projection matrix iff P is a projection matrix.
- X(XHX)#XH is a projection whose range is the subspace spanned by the columns of X.
- If X has full column rank, we can equivalently write X(XHX)-1XH
- xxH/xHx is a projection onto the 1-dimensional subspace spanned by x.
- If P and Q are projection matrices, then the following are equivalent:
- P-Q is a projection matrix
- P-Q is positive semidefinite
- ||Px|| >= ||Qx|| for all x.
- PQ=Q
- QP=Q
- [A: idempotent] A is a projection matrix iff ||Ax|| <= ||x|| for all x.
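A minimal sketch of the full-column-rank case in Python with NumPy:
```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((5, 2))                       # full column rank
P = X @ np.linalg.inv(X.T @ X) @ X.T                  # projection onto range(X)

assert np.allclose(P, P.T) and np.allclose(P @ P, P)  # hermitian and idempotent
assert np.isclose(np.trace(P), np.linalg.matrix_rank(P))
y = rng.standard_normal(5)
assert np.linalg.norm(P @ y) <= np.linalg.norm(y)     # ||Px|| <= ||x||
```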
Quaternion
Quaternions are a generalization of complex numbers. A quaternion consists of a real component and three independent imaginary components and is written as r+xi+yj+zk where i2=j2=k2=ijk=-1. It is approximately true that whereas the polar decomposition of a complex number has a magnitude and 2-dimensional rotation, that of a quaternion has a magnitude and a 3-dimensional rotation (see below). Quaternions form a division ring rather than a field because although every non-zero quaternion has a multiplicative inverse, multiplication is not in general commutative (e.g. ij=-ji=k). Quaternions are widely used to represent three-dimensional rotations in computer graphics and computer vision as an alternative to orthogonal matrices with the following advantages: (a) more compact, (b) possible to interpolate, (c) does not suffer from "gimbal lock", (d) easy to correct for drift due to rounding errors.
We can represent a quaternion either as a real 4-vector qR=[r x y z]T or a complex 2-vector qC=[r+jy x+jz]T. This gives r+xi+yj+zk = [1 i j k]qR = [1 i]qC. We can also represent it as a real 4#4 matrix QR=[r -x -y -z; x r -z y; y z r -x; z -y x r] or a complex 2#2 matrix QC=[r+jy -x+jz; x+jz r-jy]. Both the real and the complex quaternion matrices obey the same arithmetic rules as quaternions, i.e. the quaternion matrix representing the result of applying +, -, * and / operations to quaternions is the same as the result of applying the same operations to the corresponding quaternion matrices. Note that qR=QR[1 0 0 0]T and qC=QC[1 0]T; we can also define the inverse functions QR=QUATR(qR) and QC=QUATC(qC). Note that the real and complex representations given above are not the only possible choices.
In the following, PR=QUATR(pR), QR=QUATR(qR), K=DIAG([-1 1 1 1]) and qR=[r x y z]T=[r; w]. PC, pC, QC and qC are the corresponding complex quantities; the subscripts R and C are omitted below for results that apply to both real and complex representations.
- The magnitude of the quaternion is m=|q|=sqrt(r2+x2+y2+z2). A unit quaternion has m = 1.
- det(QR)=m4. det(QC)=qHq=m2
- Any quaternion may be written as m times a unit quaternion.
- Q-1=(qHq)-1QH is the reciprocal of the quaternion.
- QH is the conjugate of the quaternion; this corresponds to reversing the signs of x, y and z.
- PQ=QUAT(Pq) and P+Q=QUAT(p+q). This illustrates that we may often use the quaternion vectors rather than the matrices when performing arithmetic with a resultant saving in computation.
- PRqR=KQRTKpR. Note however that KQRTK is not a quaternion matrix unless Q is a multiple of I (i.e. the corresponding quaternion is purely real).
- (QRK)2=(KQR)2
- (PRQRK)2=(PRK)2(QRK)2
- QR=rI+[0 -wT; w SKEW(w)]
- [|q|=1]: (QRK)2=(KQR)2=[1 0; 0 S] where S is a 3#3 rotation matrix corresponding to an angle of 2cos-1(r) about an axis whose unit vector is w/sqrt(1-r2).
- Every 3#3 rotation matrix corresponds to a unit quaternion matrix that is unique except for its sign, i.e. +Q and -Q correspond to the same rotation matrix. Thus the decomposition of a quaternion into a magnitude and 3-dimensional rotation is only invertible to within a sign ambiguity.
- [|p|=|q|=1] If (PRK)2=[1 0; 0 R] and (QRK)2=[1 0; 0 S], then (PRK)2(QRK)2=(PRQRK)2=[1 0; 0 RS]. This shows that multiplying unit quaternions is equivalent to multiplying rotation matrices but may be more efficient computationally if it is possible to use quaternion vectors rather than matrices for intermediate results.
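A minimal sketch of the quaternion-to-rotation correspondence in Python with NumPy, using the standard component formula for the rotation matrix S (the helper name and test values are mine):
```python
import numpy as np

def quat_to_rot(q):
    # Unit quaternion [r, x, y, z] -> 3#3 rotation matrix:
    # rotation by 2*acos(r) about the axis [x, y, z]/sqrt(1 - r^2)
    r, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - r*z),     2*(x*z + r*y)],
        [2*(x*y + r*z),     1 - 2*(x*x + z*z), 2*(y*z - r*x)],
        [2*(x*z - r*y),     2*(y*z + r*x),     1 - 2*(x*x + y*y)],
    ])

theta, axis = np.pi / 3, np.array([0.0, 0.0, 1.0])
q = np.r_[np.cos(theta / 2), np.sin(theta / 2) * axis]   # unit quaternion
S = quat_to_rot(q)
assert np.allclose(S @ S.T, np.eye(3)) and np.isclose(np.linalg.det(S), 1)
assert np.allclose(quat_to_rot(-q), S)   # +q and -q give the same rotation
```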
Rank-one
A non-zero matrix A is a rank-one matrix iff it can be decomposed as A=xyT.
- If A=xyT is a rank-one matrix then
- If A=pqT then p=kx and q=y/k for some scalar k. That is, the decomposition is unique to within a scalar multiple.
- If A=xyT is a square rank-one matrix then
- A has a single non-zero eigenvalue equal to xTy=yTx. The associated right and left eigenvectors are respectively x and y.
- Frobenius Norm: ||A||F2=tr(AHA)=xHx×yHy
- Pseudoinverse: A+=AH/||A||F2=AH/tr(AHA)=AH/(xHx×yHy) where ||A||F is the Frobenius Norm.
Reducible
A matrix An#n is reducible (or fully decomposable) if there exists a permutation matrix P such that PTAP is of the form [B C; 0 D] where B and D are square. As a special case, 0[1#1] is regarded as reducible. A matrix that is not reducible is irreducible.
WARNING: The term reducible is sometimes used to mean one that has more than one block in its Jordan Normal Form.
- An irreducible matrix has at least one non-zero off-diagonal element in each row and column.
- An#n is irreducible iff (I +ABS(A))_n_-1 is positive.
Regular
A polynomial matrix, A, of order p is regular if det(A) is non-zero.
- An n#n square polynomial matrix, A(x), of order p is regular iff det(A) is a polynomial in x of degree n*p.
Rotation Matrix
[Real]: A Rotation matrix, R, is an n*n matrix of the form R=U[Q 0; 0 I]UT where U is any orthogonal matrix and Q is a matrix of the form [cos(x) -sin(x); sin(x) cos(x)]. Multiplying a vector by R rotates it by an angle x in the plane containing u and v, the first two columns of U. The direction of rotation is such that if x=90 degrees, u will be rotated to v.
- A Rotation matrix is orthogonal with a determinant of +1.
- All but two of the eigenvalues of R equal unity and the remaining two are exp(jx) and exp(-jx) where j is the square root of -1. The corresponding unit modulus eigenvectors are [u v][1 -j]T/sqrt(2) and [u v][1 +j]T/sqrt(2).
- R=I+(cos(x)-1)(uuT+vvT)+sin(x)(vuT-uvT) where u and v are the first two columns of U.
- If x=90 degrees then R=I-uuT-vvT+vuT-uvT.
- If x=180 degrees then R=I-2uuT-2vvT
- If x=270 degrees then R=I-uuT-vvT-vuT+uvT
- [3#3] R = wwT+cos(x)(I-wwT)+sin(x)SKEW(w) = I+sin(x)SKEW(w)+(1-cos(x))SKEW(w)2 where the unit vector w = u × v is the axis of rotation. [See skew-symmetric for the definition and properties of SKEW()].
- tr(R) = 2 cos(x) + 1
- Every 3#3 orthogonal matrix is either a rotation matrix or else a rotation matrix plus a reflection in the plane of the rotation according to whether its determinant is +1 or -1.
- The product of two 3#3 rotation matrices is a rotation matrix.
- A 3#3 rotation matrix may be expressed as the product of three rotations about the x, y and z axes respectively. The corresponding rotation angles are the Euler angles. The order in which the rotations are performed is significant and is not standardised. Using Euler angles is often a bad idea because their relation to the rotation axis direction is not continuous.
- R=(I-K)(I+K)-1 where K=-tan(x/2)×SKEW(w) except when x=180 degrees. This is the Cayley transform.
- If x=90 degrees then R=wwT+SKEW(w) = (I+SKEW(w))(I-SKEW(w))-1
- If x=180 degrees then R=2wwT-I
- If x=270 degrees then R=wwT-SKEW(w) = (I-SKEW(w))(I+SKEW(w))-1
- ADJ(R-I)=2(1-cos(x))wwT where ADJ() denotes the adjoint. All columns of this rank-1 matrix are multiples of w.
- Every 3#3 rotation matrix corresponds to a quaternion matrix that is unique except for its sign.
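A minimal sketch of the 3#3 Rodrigues form R = I + sin(x)SKEW(w) + (1-cos(x))SKEW(w)2 in Python with NumPy:
```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0.0]])

def rotation(w, x):
    # Rotation by angle x about the unit axis w
    return np.eye(3) + np.sin(x) * skew(w) + (1 - np.cos(x)) * skew(w) @ skew(w)

w, x = np.array([0.0, 0.0, 1.0]), np.pi / 2
R = rotation(w, x)
assert np.allclose(R @ [1, 0, 0], [0, 1, 0])          # 90 degrees about z: x -> y
assert np.isclose(np.trace(R), 2 * np.cos(x) + 1)     # tr(R) = 2cos(x) + 1
assert np.allclose(R, np.outer(w, w) + skew(w))       # x=90: R = ww' + SKEW(w)
```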
Shift Matrix
A shift matrix, or lower shift matrix, Z, is a matrix with ones below the main diagonal and zeros elsewhere.
ZT has ones above the main diagonal and zeros elsewhere and is an upper shift matrix.
- ZA, ZTA, AZ, AZT and ZAZT are equal to the matrix A shifted one position down, up, left, right, and down along the main diagonal respectively.
- Z[n#n] is nilpotent.
Signature
A signature matrix is a diagonal matrix whose diagonal entries are all +1 or -1.
- A signature matrix is involutary.
Simple
An n*n square matrix is simple (or, equivalently, diagonalizable or diagonable or non-defective) if all its eigenvalues are regular, otherwise it is defective.
Singular
A matrix is singular if it has no inverse.
- A matrix A is singular iff det(A)=0.
Skew-Hermitian
A square matrix K is Skew-Hermitian (or antihermitian) if K = -KH, that is k(i,j)=-conj(k(j,i))
For real matrices, Skew-Hermitian and skew-symmetric are equivalent. The following properties apply also to real skew-symmetric matrices.
- S is Hermitian iff jS is skew-Hermitian where j = sqrt(-1)
- K is skew-Hermitian iff xHKy = -xHKHy for all x and y.
- Skew-Hermitian matrices are closed under addition, multiplication by a scalar, raising to an odd power and (if non-singular) inversion.
- Skew-Hermitian matrices are normal.
- If K is skew-hermitian, then K2 is hermitian.
- The eigenvalues of a skew-Hermitian matrix are either 0 or pure imaginary.
- Any matrix A has a unique decomposition A = S + K where S is Hermitian and K is skew-hermitian.
- K is skew-hermitian iff K=ln(U) or U=exp(K) for some unitary U.
- For any complex a with |a|=1, there is a 1-to-1 correspondence between the unitary matrices, U, not having a as an eigenvalue and skew-hermitian matrices, K, given by U=a(K-I)(I+K)-1 and K=(aI+U)(aI-U)-1. These are Cayley's formulae.
- Taking a=-1 gives U=(I-K)(I+K)-1 and K=(I-U)(I+U)-1.
Skew-Symmetric
A square matrix K is skew-symmetric (or antisymmetric) if K = -KT, that is k(i,j)=-k(j,i)
For real matrices, skew-symmetric and Skew-Hermitian are equivalent. Most properties are listed under skew-Hermitian .
- Skew-symmetry is preserved by congruence.
- The diagonal elements of a skew-symmetric matrix are all 0. [1.10]
- The rank of a real or complex skew-symmetric matrix is even. [1.11]
- [Real] The non-zero eigenvalues of a real skew-symmetric matrix are all purely imaginary and occur in complex conjugate pairs.
- If K is skew-symmetric, then I - K is non-singular
- [Real] If A is skew-symmetric, then xTAx = 0 for all real x.
- [Real] If a=+1 or a=-1, there is a 1-to-1 correspondence between real skew-symmetric matrices, K, and those orthogonal matrices, Q, not having a as an eigenvalue given by Q=a(K-I)(K+I)-1 and K=(aI+Q)(aI-Q)-1. These are Cayley's formulae.
- K is real skew-symmetric iff K=ln(Q) or Q = exp(K) for some real proper orthogonal matrix Q.
- [Real 3#3] All 3#3 skew-symmetric matrices have the form SKEW(a) = [0 -a3 a2; a3 0 -a1; -a2 a1 0] for some vector a.
- SKEW(ka) = k SKEW(a) for any scalar k
- The vector cross product is given by a × b = SKEW(a) b = -SKEW(b) a
- SKEW(a) b = 0 iff a = kb for some scalar k
- SKEW(a)2n = (-aTa)n-1aaT + (-aTa)nI = (-aTa)n-1(aaT - (aTa)I) for integer n>=1
* SKEW(a)2 = aaT - (aTa)I
- SKEW(a)2n+1 = (-aTa)nSKEW(a) for integer n>=0
* SKEW(a)3 = -(aTa)SKEW(a)
- The eigenvalues of SKEW(a) are 0 and +-sqrt(-aTa)
* The eigenvector associated with 0 is ka
* [Real a=[p; q; r]]: The eigenvalues are 0 and +-jm where j = sqrt(-1) and m=|a|. Unless q=r=0, a suitable pair of eigenvectors is [-q2-r2; pq+jmr; pr-jmq] and [-q2-r2; pq-jmr; pr+jmq].
- The singular values of SKEW(a) are |a|, |a| and 0.
* If z=a/|a| and w=[z2; z3], then a singular value decomposition is SKEW(a)=USVT where U=[zT; w I+(z1-1)-1wwT]J, S=DIAG(|a|, |a|, 0) and V=U[0 1 0; -1 0 0; 0 0 1] where J is the exchange matrix (i.e. I with the column order reversed). All other decompositions may be obtained by postmultiplying both U and V by DIAG(Q[2#2], 1) for some orthogonal Q and/or negating the final column of one or both of U and V.
- SKEW(a)T SKEW(a) = SKEW(a) SKEW(a)T = |a|2I - aaT
* tr(SKEW(a)T SKEW(a)) = 2aTa
- det([a b c]) = aT SKEW(b) c = bT SKEW(c) a = cT SKEW(a) b; this is the scalar triple product.
* aT SKEW(b) a = aT SKEW(a) b = bT SKEW(a) a = 0 for all a and b
- SKEW(a)SKEW(b) = baT - (bTa)I
- SKEW(a)SKEW(b) c = (aTc)b - (aTb)c; this is the vector triple product.
- For any a and B[3#3],
* BTSKEW(Ba)B = det(B) × SKEW(a)
* SKEW(Ba)B = ADJ(B)T SKEW(a) where ADJ(B) denotes the adjoint matrix.
* [det(B)!=0]: SKEW(Ba) = det(B) × B-TSKEW(a)B-1
- SKEW(SKEW(a)b) = baT - abT
- [U orthogonal] The product E = U SKEW(a) = SKEW(Ua) U is an essential matrix
* ETE = (aTa)I - aaT
* tr(ETE) = 2aTa.
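A minimal sketch of a few of these identities in Python with NumPy (the test vectors are arbitrary):
```python
import numpy as np

def skew(a):
    # SKEW(a) @ b equals the cross product a x b
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0.0]])

a, b = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))
assert np.allclose(skew(a) @ skew(b), np.outer(b, a) - (b @ a) * np.eye(3))
assert np.allclose(skew(np.cross(a, b)), np.outer(b, a) - np.outer(a, b))
s = np.linalg.svd(skew(a), compute_uv=False)
assert np.allclose(s, [np.linalg.norm(a), np.linalg.norm(a), 0])  # |a|, |a|, 0
```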
Sparse
A matrix is sparse if it has relatively few non-zero elements.
Stability
A Stability or Stable matrix is one whose eigenvalues all have strictly negative real parts.
A semi-stable matrix is one whose eigenvalues all have non-positive real parts.
See also: Convergent
Stochastic
A real non-negative square matrix A is stochastic if all its rows sum to 1. If all its columns also sum to 1 it is Doubly Stochastic.
- All eigenvalues of A have absolute value <= 1.
- 1 is an eigenvalue with eigenvector [1 1 ... 1]T
Sub-stochastic
A real non-negative square matrix A is sub-stochastic if all its rows sum to <=1.
Subunitary
A is subunitary if ||AAHx|| = ||AHx|| for all x. A subunitary matrix is also called a partial isometry.
The following are equivalent:
- A is subunitary
- AHA is a projection matrix
- AAHA = A
- A+ = AH
- A is subunitary iff AH is subunitary iff A+ is subunitary.
- If A is subunitary and non-singular then A is unitary.
Symmetric
A square matrix A is symmetric if A = AT, that is a(i,j) = a(j,i).
Most properties of real symmetric matrices are listed under Hermitian .
- [Real]: If A is real and symmetric, then A=0 iff xTAx = 0 for all real x.
- [Real]: A real symmetric matrix is orthogonally similar to a diagonal matrix.
- [Real, 2#2] A=[a b; b d]=RDRT where D is diagonal and R=[cos(t) -sin(t); sin(t) cos(t)] and t=½tan-1(2b/(a-d)).
- A is symmetric iff it is congruent to a diagonal matrix.
- Any square matrix may be uniquely decomposed as the sum of a symmetric matrix and a skew-symmetric matrix.
- Any symmetric matrix A can be expressed as A=UDUT where U is unitary and D is real, non-negative and diagonal with its diagonal elements arranged in non-increasing order (i.e. d(i,i) >= d(j,j) for i < j). This is the Takagi decomposition and is a special case of the singular value decomposition.
See also Hankel.
Symmetrizable
A real matrix, A, is symmetrizable if ATM = MA for some positive definite M.
Symplectic
A matrix, A[2n#2n], is symplectic if AHKA=K where K is the antisymmetric orthogonal matrix [0 I; -I 0].
- A is symplectic iff A-1=KTAHK
- If a symplectic matrix A=[P Q; R S] where P,Q,R,S are all n#n, then A-1=[SH -QH; -RH PH]
- The set of symplectic matrices of size 2n#2n is closed under multiplication and inversion and so forms a multiplicative group.
- A is symplectic iff it preserves the symplectic form xHKy, that is (Ax)HK(Ay) = xHKy for all x and y. This is analogous to the way that a unitary matrix, U, preserves the inner product: (Ux)H(Uy) = xHy.
See also: hamiltonian
Toeplitz
A toeplitz matrix, A, has constant diagonals. In other words a(i,j) depends only on i-j. We define A=TOE(b[m+n-1])[m#n] to be the m#n matrix with a(i,j) = b(i-j+n). Thus, b is the column vector formed by starting at the top right element of A, going backwards along the top row of A and then down the left column of A.
In the topics below, J is the exchange matrix.
- A toeplitz matrix is persymmetric and so, if it exists, is its inverse. A symmetric toeplitz matrix is bisymmetric.
- If A and B are toeplitz, then so are A+B and A-B. Note that AB and A-1 are not necessarily toeplitz.
- If A is toeplitz, then AT, AH and JAJ are Toeplitz while JA, ATJ, AJ and JAT are Hankel.
- If A[n#n] is toeplitz, then JATJ=(JAJ)T=A while JA=ATJ and AJ=JAT are Hankel.
- TOE(a+b) = TOE(a) +TOE(b)
- TOE(b[m+n-1])[m#n] = TOE(Jb)[n#m]T
- TOE(b[2n-1])[n#n] = TOE(Jb)[n#n]T
- If the lower triangular matrices A[n#n]=TOE([0[n-1]; p[n]]) and B[n#n]=TOE([0[n-1]; q[n]]) then:
- Aq = Bp = conv(p,q)1:n
- AB = BA = TOE([0[n-1]; Aq]) = TOE([0[n-1]; Bp]) = TOE([0[n-1]; conv(p,q)1:n])
- A-1 and B-1 are toeplitz lower triangular if they exist.
- If the upper triangular matrices A[n#n]=TOE([p[n]; 0[n-1]]) and B[n#n]=TOE([q[n]; 0[n-1]]) then:
- Aq = Bp = conv(p,q)n:2n-1
- AB = BA = TOE([Aq; 0[n-1]]) = TOE([Bp; 0[n-1]]) = TOE([conv(p,q)n:2n-1; 0[n-1]])
- A-1 and B-1 are toeplitz upper triangular if they exist.
- The product TOE(a)[m#r]TOE(b)[r#n] is toeplitz iff a(r+1:r+m-1)b(1:n-1)T = a(1:m-1)b(r+1:r+n-1)T [1.21]. This m-1#n-1 rank-one matrix identity is equivalent to requiring one of the following conditions:
- Both a(r+1:r+m-1)=k×a(1:m-1) and b(r+1:r+n-1)=k×b(1:n-1) for the same scalar k. Note that a(1:m-1) and a(r+1:r+m-1) will overlap if m>r+1 and similarly for b if n>r+1.
- For TOE(a) to be square and symmetric, a(1:m-1) must be either symmetric or antisymmetric with k=+1 or -1 respectively (a similar condition applies to TOE(b)).
- Either a(r+1:r+m-1) = 0 or b(1:n-1) = 0 and also either a(1:m-1) = 0 or b(r+1:r+n-1) = 0. If m=r=n then this condition is equivalent to requiring that A and B are either both upper triangular or both lower triangular or else one of them is diagonal.
Some special cases of this are:
- TOE(a)[m#r]TOE(b)[r#n] is toeplitz if a(r+1:r+m-1) = a(1:m-1) and b(r+1:r+n-1) = b(1:n-1). Note that this does not make the matrices symmetrical even for square matrices because a(1:m-1) goes backwards along the top row of the matrix.
- TOE([0[m-1]; a[r]])[m#r] TOE([0[n-1]; b[r]])[r#n] = TOE([0[n+m-r-1]; conv(a,b)1:r])
- TOE([a[r]; 0[m-1]])[m#r] TOE([b[r]; 0[n-1]])[r#n] = TOE([conv(a,b)r:2r-1; 0[n+m-r-1]])
- If A=TOE(b)[m#n] then JAJ=TOE(Jb)[m#n]
- TOE([0[n-p]; a[m]; 0[q-m]])[q-p+1#n] b[n] = TOE([0[m-p]; b[n]; 0[q-n]])[q-p+1#m] a[m] = conv(a,b)p:q provided that p<=m,n<=q and conv(a,b)i is taken to be 0 for i outside the range 1 to m+n-1.
- TOE(a[m])[m-n+1#n] b[n] = conv(a,b)n:m
- TOE([0[n-p]; a[n]])[n-p+1#n] b[n] = TOE([0[n-p]; b[n]])[n-p+1#n] a[n] = conv(a,b)p:n
* TOE([0[n-1]; a[n]])[n#n] b[n] = TOE([0[n-1]; b[n]])[n#n] a[n] = conv(a,b)1:n
- TOE([a[n]; 0[q-n]])[q-n+1#n] b[n] = TOE([b[n]; 0[q-n]])[q-n+1#n] a[n] = conv(a,b)n:q
* TOE([a[n]; 0[n-1]])[n#n] b[n] = TOE([b[n]; 0[n-1]])[n#n] a[n] = conv(a,b)n:2n-1
- TOE([0[n-1]; a[m]; 0[n-1]])[m+n-1#n] b[n] = TOE([0[m-1]; b[n]; 0[m-1]])[m+n-1#m] a[m] = conv(a,b)
- A symmetric toeplitz matrix is of the form S[n#n] = TOE([Ja[n]; 0[n-1]] + [0[n-1]; a[n]])
- JSJ = S
- Sb = (TOE([b[n]; 0[n-1]])[n#n]J + TOE([0[n-1]; b[n]])[n#n])a. The matrix on the right is the sum of a lower triangular toeplitz and an upper triangular hankel matrix.
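A minimal sketch of the triangular-toeplitz/convolution connection in Python with SciPy (scipy.linalg.toeplitz takes the first column and first row):
```python
import numpy as np
from scipy.linalg import toeplitz

p = np.array([1.0, 2.0, 3.0, 4.0])
q = np.array([2.0, 0.0, 1.0, 5.0])
n = len(p)

# Lower triangular toeplitz with first column p, i.e. TOE([0[n-1]; p[n]]):
A = toeplitz(p, np.r_[p[0], np.zeros(n - 1)])
B = toeplitz(q, np.r_[q[0], np.zeros(n - 1)])

assert np.allclose(A @ q, np.convolve(p, q)[:n])          # Aq = conv(p,q)1:n
assert np.allclose(A @ B, B @ A)                          # AB = BA
assert np.allclose((A @ B)[:, 0], np.convolve(p, q)[:n])  # first column of AB
```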
Triangular
A is upper triangular if a(i,j)=0 whenever i>j.
A is lower triangular if a(i,j)=0 whenever i<j.
A is triangular iff it is either upper or lower triangular.
A triangular matrix A is strictly triangular if its diagonal elements all equal 0.
A triangular matrix A is unit triangular if its diagonal elements all equal 1.
- [Real]: An orthogonal triangular matrix must be diagonal
- [n#n]: The determinant of a triangular matrix is the product of its diagonal elements.
- If A is unit triangular then inv(A) exists and is unit triangular.
- A strictly triangular matrix is nilpotent .
- The set of upper triangular matrices is closed under multiplication and addition and (where possible) inversion.
- The set of lower triangular matrices is closed under multiplication and addition and (where possible) inversion.
Tridiagonal or Jacobi
A is tridiagonal or Jacobi if A(i,j)=0 whenever |i-j|>1. In other words its non-zero elements lie either on or immediately adjacent to the main diagonal.
- A is tridiagonal iff it is both upper and lower Hessenberg.
Unitary
A complex square matrix A is unitary if AHA = I. A is also sometimes called an isometry.
A real unitary matrix is called orthogonal. The following properties apply to orthogonal matrices as well as to unitary matrices.
- Unitary matrices are closed under multiplication, raising to an integer power and inversion
- U is unitary iff UH is unitary.
- Unitary matrices are normal.
- U is unitary iff ||Ux|| = ||x|| for all x.
- The eigenvalues of a unitary matrix all have an absolute value of 1.
- The determinant of a unitary matrix has an absolute value of 1.
- A matrix is unitary iff its columns form an orthonormal basis.
- U is unitary iff U=exp(K) or K=ln(U) for some skew-hermitian K.
- For any complex a with |a|=1, there is a 1-to-1 correspondence between the unitary matrices, U, not having a as an eigenvalue and skew-hermitian matrices, K, given by U=a(K-I)(I+K)-1 and K=(aI+U)(aI-U)-1. These are Cayley's formulae.
- Taking a=-1 gives U=(I-K)(I+K)-1 and K=(I-U)(I+U)-1.
Vandermonde
A Vandermonde matrix, V[n#n], has the form [1 x x•2 … x•n-1] for some column vector x (where x•2 denotes elementwise squaring). A general element is given by v(i,j) = (xi)j-1. All elements of the first column of the matrix equal 1. Vandermonde matrices arise in connection with fitting polynomials to data.
WARNING: Some authors define a Vandermonde matrix to be either the transpose or the horizontally flipped version of the above definition.
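A minimal polynomial-fitting sketch in Python with NumPy (np.vander with increasing=True matches the column ordering used above):
```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
V = np.vander(x, increasing=True)       # columns: 1, x, x^2, x^3
y = np.array([1.0, 5.0, 2.0, 7.0])

c = np.linalg.solve(V, y)               # cubic through the four points
assert np.allclose(np.polyval(c[::-1], x), y)   # polyval wants highest power first
```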
Vectorized Transpose Matrix
The vectorized transpose matrix, TVEC(m,n), is the mn#mn permutation matrix whose i,jth element is 1 if j=1+m(i-1)-(mn-1)floor((i-1)/n) and 0 otherwise.
For clarity, we write Tm,n =TVEC(m,n) in this section.
- [A[m#n]]: (AT): = Tm,nA: [see vectorization, R.6]
- Tm,n is a permutation matrix and is therefore orthogonal.
- T1,n = Tn,1 = I
- Tn,m = Tm,nT = Tm,n-1
- [A[m#n], B[p#q]]: B ⊗ A = Tp,m(A ⊗ B)Tn,q
- [A[m#n], B[p#q]]: (A ⊗ B)Tn,q = Tm,p(B ⊗ A)
- [a[n], B[p#q]]: (a ⊗ B) = Tn,p(B ⊗ a)
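A minimal sketch constructing TVEC(m,n) and checking two of the identities above, in Python with NumPy (vec here is column-major vectorization, which is what the A: notation denotes):
```python
import numpy as np

def tvec(m, n):
    # TVEC(m,n) @ vec(A) = vec(A.T) for an m#n matrix A
    T = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            T[i * n + j, j * m + i] = 1.0
    return T

vec = lambda X: X.flatten(order='F')       # column-major vectorization

A = np.arange(12.0).reshape(3, 4)          # m#n = 3#4
B = np.arange(10.0).reshape(2, 5)          # p#q = 2#5
assert np.allclose(tvec(3, 4) @ vec(A), vec(A.T))
assert np.allclose(np.kron(B, A), tvec(2, 3) @ np.kron(A, B) @ tvec(4, 5))
```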
Zero
The zero matrix, 0, has a(i,j)=0 for all i,j
- [Complex]: A=0 iff xHAx = 0 for all x.
- [Real]: If A is symmetric, then A=0 iff xTAx = 0 for all x.
- [Real]: A=0 iff xTAy = 0 for all x and y.
- A=0 iff AHA = 0
This page is part of The Matrix Reference Manual. Copyright © 1998-2022 Mike Brookes, Imperial College, London, UK. See the file <gfl.html> for copying instructions. Please send any comments or suggestions to "mike.brookes" at "imperial.ac.uk".
Updated: $Id: special.html 11291 2021-01-05 18:26:10Z dmb $