Einstein notation
In mathematics, especially in applications of linear algebra to physics, the Einstein notation or Einstein summation convention is a notational convention useful when dealing with coordinate equations or formulas.
According to this convention, when an index variable appears twice in a single term, it implies that we are summing over all of its possible values. In typical applications, these are 1,2,3 (for calculations in Euclidean space), or 0,1,2,3 or 1,2,3,4 (for calculations in Minkowski space), but they can have any range, even (in some applications) an infinite set. Furthermore, abstract index notation uses Einstein notation without requiring any range of values.
Sometimes, the index is required to appear once as a superscript and once as a subscript; in other applications, all indices are subscripts. See Dual vector space and Tensor product.
Definitions
In the traditional usage, one has in mind a vector space V with finite dimension n, and a specific basis of V. We can write the basis vectors as e_1, e_2, ..., e_n. Then if v is a vector in V, it has coordinates v_1, ..., v_n relative to this basis.
The basic rule is:
v = v_i e_i.
In this expression, it is assumed that the term on the right side is to be summed as i goes from 1 to n, because the index i appears twice in that term. In that case, the equation is indeed true.
The i is known as a dummy index since the result is not dependent on it; thus we could also write, for example:
v = v_j e_j.
An index that is not summed over is a free index and should be found in each term of the equation or formula.
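The summation and renaming rules above can be sketched numerically with NumPy's einsum, which implements exactly this convention (the string names the indices; a repeated index is summed). The vector and basis here are hypothetical examples, and einsum does not track upper versus lower index positions — that bookkeeping is left to the reader:

```python
import numpy as np

# Components v_i of a vector relative to a basis e_1, ..., e_n.
v_components = np.array([2.0, -1.0, 3.0])
# Basis vectors e_i stored as the rows of a matrix
# (here the standard basis of R^3).
basis = np.eye(3)

# v = v_i e_i: the repeated index i is summed over automatically.
v = np.einsum('i,ij->j', v_components, basis)

# The dummy index's name is irrelevant: 'j,jk->k' gives the same vector.
v_again = np.einsum('j,jk->k', v_components, basis)
assert np.allclose(v, v_again)
```

With the standard basis, the resulting vector's entries are just the components themselves.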
In contexts where the index must appear once as a subscript and once as a superscript, the basis vectors e_i retain subscripts but the coordinates become v^i with superscripts. Then the basic rule is:
v = v^i e_i.
The value of the Einstein convention is that it applies to other vector spaces built from V using the tensor product and duality. For example, V ⊗ V, the tensor product of V with itself, has a basis consisting of tensors of the form e_{ij} := e_i ⊗ e_j. Any tensor T in V ⊗ V can be written as:
T = T^{ij} e_{ij}.
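As a small numerical sketch (with hypothetical components), the basis tensors e_{ij} are outer products of basis vectors, and the expansion T = T^{ij} e_{ij} is a double implicit sum:

```python
import numpy as np

# Basis of R^2 and a tensor T in V ⊗ V with components T^{ij}.
e = np.eye(2)                         # e[i] is the basis vector e_i
T_components = np.array([[1.0, 2.0],
                         [3.0, 4.0]])  # T^{ij}

# e_{ij} := e_i ⊗ e_j is the outer product e[i,k] e[j,l];
# T = T^{ij} e_{ij} sums over both i and j.
T = np.einsum('ij,ik,jl->kl', T_components, e, e)

# With the standard basis, the tensor's entries are its components.
assert np.allclose(T, T_components)
```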
V*, the dual of V, has a basis e^1, e^2, ..., e^n, which obeys the rule:
e^i(e_j) = δ^i_j.
Here δ is the Kronecker delta, so δ^i_j is 1 if i = j and 0 otherwise.
We have also used a superscript for the dual basis, which fits in with a convention requiring summed indices to appear once as a subscript and once as a superscript. In this case, if L is an element in V*, then:
L = L_i e^i.
If instead every index is required to be a subscript, then a different letter must be used for the dual basis, say d_i := e^i.
The real purpose of the Einstein notation is for formulas and equations that make no mention of the chosen basis. For example, if L and v are as above, then
L(v) = L_i v^i,
and this is true for every basis. The next few sections contain further examples of such equations.
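The basis-free pairing L(v) = L_i v^i is a single implicit sum over i; a minimal sketch with hypothetical components:

```python
import numpy as np

# Components L_i of a covector in V* and v^i of a vector in V.
L = np.array([1.0, 0.0, -2.0])
v = np.array([3.0, 5.0, 1.0])

# L(v) = L_i v^i: one lower index paired with one upper index, summed.
Lv = np.einsum('i,i->', L, v)
```

The result is a plain number (a scalar), not a vector: the sum has no free index left over.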
Elementary vector algebra and matrix algebra
If V is Euclidean n-space R^n, then there is a standard basis for V, in which e_i is (0, ..., 0, 1, 0, ..., 0), with the 1 in the i-th position. Then n-by-n matrices can be thought of as elements of V* ⊗ V. We can also think of vectors in V as column vectors, or n-by-1 matrices; elements of V* are row vectors, or 1-by-n matrices.
In these examples, all indices will appear as subscripts. (Ultimately, this is because V has an inner product and the chosen basis is orthonormal, as explained in the next section.)
If H is a matrix and v is a column vector, then Hv is another column vector. To define w := Hv, we can write:
w_i := H_{ij} v_j.
Notice that the free index i appears once in every term, while the dummy index j appears twice in a single term.
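In index form, matrix-vector multiplication is just this one sum over the dummy index; a sketch with a hypothetical matrix and vector:

```python
import numpy as np

H = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([1.0, -1.0])

# w_i := H_{ij} v_j: j is the dummy (summed) index, i the free index.
w = np.einsum('ij,j->i', H, v)

# This agrees with ordinary matrix-vector multiplication.
assert np.allclose(w, H @ v)
```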
The distributive law, that H(u + v) = Hu + Hv, can be written:
H_{ij} (u_j + v_j) = H_{ij} u_j + H_{ij} v_j.
This example also indicates the proof of the distributive law, since the index equation makes direct reference only to certain real numbers, and its validity follows directly from the distributive law for real numbers.
The transpose of a column vector is a row vector with the same components, and the transpose of a matrix is another matrix whose components are given by swapping the indices. Suppose that we're interested in the product of v^T and H^T. If w (a row vector) is this product, then:
w_j = v_i H_{ji}.
Thus to say that taking the transpose of a product switches the order of multiplication, we can write:
H_{ji} v_i = v_i H_{ji}.
Again, this is obviously true, by the commutative law for real numbers.
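The transpose identity can be checked numerically (hypothetical matrix and vector): the components of Hv and of v^T H^T are computed by the same sum, written in either order:

```python
import numpy as np

H = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([5.0, 6.0])

# H_{ji} v_i = v_i H_{ji}: both sides are the same sum of real numbers.
lhs = np.einsum('ji,i->j', H, v)   # components of the column vector Hv
rhs = np.einsum('i,ji->j', v, H)   # components of the row vector v^T H^T
assert np.allclose(lhs, rhs)
assert np.allclose(lhs, H @ v)
```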
The dot product of two vectors u and v can be written:
u · v = u_i v_i.
If n = 3, then we can also write the cross product, using the Levi-Civita symbol. Specifically, if w is u × v, then:
w_i = ε_{ijk} u_j v_k.
Here, the Levi-Civita symbol ε satisfies: ε_{ijk} is 1 if (i,j,k) is an even permutation of (1,2,3), -1 if it is an odd permutation, and 0 if it is not a permutation of (1,2,3) at all.
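Both formulas can be sketched directly (hypothetical vectors; indices run 0..2 rather than 1..3, as is usual in code):

```python
import numpy as np

# Build the Levi-Civita symbol ε_{ijk} for n = 3.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations
    eps[i, k, j] = -1.0   # odd permutations (one transposition away)

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

dot = np.einsum('i,i->', u, v)            # u · v = u_i v_i
w = np.einsum('ijk,j,k->i', eps, u, v)    # w_i = ε_{ijk} u_j v_k

# These agree with NumPy's built-in dot and cross products.
assert np.isclose(dot, np.dot(u, v))
assert np.allclose(w, np.cross(u, v))
```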
You may have noticed in these examples that we often introduced a vector w that would normally not have to be given a specific name using coordinate-free notation. This vector wouldn't need to be given a specific name using only index notation either, but the translation between the notations is easier to describe by giving it a name.
With no implicit inner product
If you review the above examples, you'll find that all of them through the distributive law make sense if a summed index must appear once as a subscript and once as a superscript. But the examples from the transpose on don't make sense in that case. This is because they implicitly use the standard inner product on Euclidean space, while the earlier examples do not.
In some applications, there is no inner product on V. In these cases, requiring a summed index to appear once as a subscript and once as a superscript can help one avoid errors in calculation, in much the same way as dimensional analysis does. Perhaps more significantly, the inner product may be a primary object of study that shouldn't be suppressed in the notation; this is the case, for example, in general relativity. Then the difference between a subscript and a superscript can be quite significant.
When an inner product is explicitly referred to, its components are often written g_{ij}. Note that g_{ij} = g_{ji}. Then the formula for the dot product becomes:
u · v = g_{ij} u^i v^j.
We can also lower the index on u^i by defining:
u_i := g_{ij} u^j.
Then we have:
u · v = u_i v^i.
Note that we have implicitly used g_{ij} = g_{ji} here.
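A numerical sketch of lowering an index, with a hypothetical symmetric, positive-definite metric on R^2:

```python
import numpy as np

# A symmetric, positive-definite metric g_{ij} (hypothetical example).
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])
u_up = np.array([1.0, 2.0])   # components u^i
v_up = np.array([0.0, 1.0])   # components v^i

# u · v = g_{ij} u^i v^j.
dot1 = np.einsum('ij,i,j->', g, u_up, v_up)

# Lower the index: u_i := g_{ij} u^j, then u · v = u_i v^i.
u_down = np.einsum('ij,j->i', g, u_up)
dot2 = np.einsum('i,i->', u_down, v_up)

assert np.isclose(dot1, dot2)   # relies on g_{ij} = g_{ji}
```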
Similarly, we can raise an index using the corresponding inner product on V*. The coordinates of this inner product are g^{ij}, which is (as a matrix) the inverse of g_{ij}. If you raise an index and then lower it (or the other way around), then you get back where you started. If you raise the i in g_{ij}, then you get δ^i_j, and if you raise the j in δ^i_j, then you get g^{ij}.
If the chosen basis of V is orthonormal, then g_{ij} = δ_{ij} and u_i = u^i. In this case, the formula for the dot product from the previous section may be recovered. But if the basis is not orthonormal, then this will not be true; thus, if you're studying the inner product and can't know ahead of time whether a given basis is orthonormal, you'll need to refer to g_{ij} explicitly. Furthermore, if the inner product is not positive-definite (as is the case, for example, in special relativity), then g_{ij} = δ_{ij} will not be true even if the basis is chosen to be orthonormal, since you will sometimes have -1 instead of 1 when i = j. Thus, raising and lowering indices are important operations in these applications.
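The special-relativity case can be sketched with the Minkowski metric of signature (-,+,+,+) (one common sign convention; the component values below are a hypothetical example):

```python
import numpy as np

# Minkowski metric on R^4: not positive-definite, so g_{ij} ≠ δ_{ij}
# even in an "orthonormal" basis.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)                 # g^{ij}, the inverse matrix

u_up = np.array([1.0, 2.0, 0.0, 0.0])    # u^i
u_down = np.einsum('ij,j->i', g, u_up)    # u_i = g_{ij} u^j

# Lowering flips the sign of the time component, so u_i ≠ u^i here.
assert not np.allclose(u_down, u_up)

# Raising after lowering returns the original components.
u_back = np.einsum('ij,j->i', g_inv, u_down)
assert np.allclose(u_back, u_up)
```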
Extension to complex vector spaces and Hilbert spaces
Coming soon
See also Bra-ket notation.
Extension to spinors
Maybe not coming