Eigenvalues and Eigenvectors: Properties
Setup
This vignette uses an example of a \(3 \times 3\) matrix to illustrate some properties of eigenvalues and eigenvectors. We could consider this to be the variance-covariance matrix of three variables, but the main thing is that the matrix is square and symmetric, which guarantees that the eigenvalues, \(\lambda_i\), are real numbers. Covariance matrices are also positive semi-definite, meaning that their eigenvalues are non-negative, \(\lambda_i \ge 0\).
A <- matrix(c(13, -4, 2, -4, 11, -2, 2, -2, 8), 3, 3, byrow=TRUE)
A
## [,1] [,2] [,3]
## [1,] 13 -4 2
## [2,] -4 11 -2
## [3,] 2 -2 8
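As a quick sanity check of these assumptions (a minimal sketch using only base R), we can confirm that A is symmetric and, since this example turns out to be positive definite, that all of its eigenvalues are positive:
isSymmetric(A)           # TRUE: A equals its transpose
all(eigen(A)$values > 0) # TRUE: positive definite, so non-negative holds too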
Get the eigenvalues and eigenvectors using eigen(); this returns a named list, with eigenvalues named values and eigenvectors named vectors.
ev <- eigen(A)
# extract components
(values <- ev$values)
## [1] 17 8 7
(vectors <- ev$vectors)
## [,1] [,2] [,3]
## [1,] 0.7454 0.6667 0.0000
## [2,] -0.5963 0.6667 0.4472
## [3,] 0.2981 -0.3333 0.8944
The eigenvalues are always returned in decreasing order, and each column of vectors corresponds to the matching element of values.
Properties of eigenvalues and eigenvectors
The following steps illustrate the main properties of eigenvalues and eigenvectors. We use the notation \(A = V \Lambda V'\) to express the decomposition of the matrix \(A\), where \(V\) is the matrix of eigenvectors and \(\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_p)\) is the diagonal matrix composed of the ordered eigenvalues, \(\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p\).
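Before going through the properties, here is a minimal sketch (reusing the values and vectors objects from above) verifying this decomposition numerically:
V <- vectors
Lambda <- diag(values)
zapsmall(V %*% Lambda %*% t(V)) # reconstructs A, up to rounding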
- Orthogonality: The eigenvectors of a symmetric matrix are always orthogonal, \(V' V = I\). zapsmall() is handy for cleaning up tiny values.
crossprod(vectors)
## [,1] [,2] [,3]
## [1,] 1.000e+00 3.053e-16 5.551e-17
## [2,] 3.053e-16 1.000e+00 0.000e+00
## [3,] 5.551e-17 0.000e+00 1.000e+00
zapsmall(crossprod(vectors))
## [,1] [,2] [,3]
## [1,] 1 0 0
## [2,] 0 1 0
## [3,] 0 0 1
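Because \(V\) is square with orthonormal columns, its rows are orthonormal as well, so \(V V' = I\) too; a quick check:
zapsmall(tcrossprod(vectors)) # V V' = I for a square orthogonal matrix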
- trace(A) = sum of eigenvalues, \(\sum \lambda_i\).
library(matlib) # use the matlib package
tr(A)
## [1] 32
sum(values)
## [1] 32
- sum of squares of A (the sum of its squared elements) = sum of squares of eigenvalues, \(\sum \lambda_i^2\).
sum(A^2)
## [1] 402
sum(values^2)
## [1] 402
- determinant = product of eigenvalues, \(\det(A) = \prod \lambda_i\). This means that the determinant will be zero if any \(\lambda_i = 0\) (see the singular-matrix sketch after the rank property below).
det(A)
## [1] 952
prod(values)
## [1] 952
- rank = number of non-zero eigenvalues
R(A)
## [1] 3
sum(values != 0)
## [1] 3
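To see the determinant and rank properties together, consider a small sketch with a symmetric, rank-deficient matrix: the outer product of (1, 2, 3) with itself has rank 1, so two of its eigenvalues are zero.
B <- outer(1:3, 1:3)                # symmetric, rank 1
zapsmall(eigen(B)$values)           # 14 0 0: two zero eigenvalues
det(B)                              # 0 (up to rounding): product of eigenvalues
sum(zapsmall(eigen(B)$values) != 0) # 1: the rank of B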
- eigenvalues of the inverse, \(A^{-1}\), are 1/eigenvalues of A, i.e., \(1/\lambda_i\). The eigenvectors are the same, except for order, because eigenvalues are returned in decreasing order.
AI <- solve(A)
AI
## [,1] [,2] [,3]
## [1,] 0.08824 0.02941 -0.01471
## [2,] 0.02941 0.10504 0.01891
## [3,] -0.01471 0.01891 0.13340
eigen(AI)$values
## [1] 0.14286 0.12500 0.05882
eigen(AI)$vectors
## [,1] [,2] [,3]
## [1,] 0.0000 0.6667 0.7454
## [2,] 0.4472 0.6667 -0.5963
## [3,] 0.8944 -0.3333 0.2981
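As a one-line check (reusing the values vector from above), the reciprocals of the eigenvalues of A, sorted into decreasing order, should match the eigenvalues of \(A^{-1}\) shown above:
sort(1/values, decreasing = TRUE) # 0.14286 0.12500 0.05882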
- There are similar relations for other powers of a matrix, \(A^2, \dots, A^p\): values(mpower(A, p)) = values(A)^p, where mpower(A, 2) = A %*% A, etc.
eigen(mpower(A, 2))
## eigen() decomposition
## $values
## [1] 289 64 49
##
## $vectors
## [,1] [,2] [,3]
## [1,] 0.7454 0.6667 0.0000
## [2,] -0.5963 0.6667 0.4472
## [3,] 0.2981 -0.3333 0.8944
eigen(A %*% A %*% A)$values
## [1] 4913 512 343
eigen(mpower(A, 4))$values
## [1] 83521 4096 2401
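As a final check (a small sketch reusing the values from eigen(A)), raising the original eigenvalues to the corresponding power reproduces each of these results:
values^3 # 4913 512 343, matching eigen(A %*% A %*% A)$values
values^4 # 83521 4096 2401, matching eigen(mpower(A, 4))$values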