Ricci calculus


In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields. It is also the modern name for what used to be called the absolute differential calculus, developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century.
A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays.
A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the degree of the tensor.
For compactness and convenience, the notational convention implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules.
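A minimal sketch in LaTeX of a tensor expanded in basis elements, with the summation convention applied (the symbol A and the bases e_α, ω^β are placeholders chosen here for illustration):

% A type (1,1) tensor expanded over vector basis elements e_\alpha and covector basis elements \omega^\beta;
% repeated upper/lower index pairs are implicitly summed over all dimensions of the vector space.
\[ \mathbf{A} = A^{\alpha}{}_{\beta}\; \mathbf{e}_{\alpha} \otimes \boldsymbol{\omega}^{\beta} \equiv \sum_{\alpha} \sum_{\beta} A^{\alpha}{}_{\beta}\; \mathbf{e}_{\alpha} \otimes \boldsymbol{\omega}^{\beta} \]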

Notation for indices

Basis-related distinctions

Space and time coordinates

Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows: lowercase Latin indices (such as i, j, k or a, b, c) are taken to run over the spatial values 1, 2, 3 only, while lowercase Greek indices (such as α, β, γ) run over all four spacetime values 0, 1, 2, 3, with the index value 0 labelling the time-like element.
Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space.

Coordinate and index notation

The author will usually make it clear whether a subscript is intended as an index or as a label.
For example, in 3-D Euclidean space and using Cartesian coordinates, the coordinate vector A = (A_1, A_2, A_3) = (A_x, A_y, A_z) shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In the expression A_i, the symbol i is interpreted as an index ranging over the values 1, 2, 3, while the x, y, z subscripts are not variable indices, more like "names" for the components. In the context of spacetime, the index value 0 conventionally corresponds to the label t.

Upper and lower indices

Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look like exponents to a reader familiar only with other parts of mathematics.
In special cases it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position – coordinate formulae in linear algebra such as that for the product of matrices can sometimes be understood as examples of this – but in general the notation requires that the distinction between upper and lower indices is observed and maintained.

Covariant tensor components">Covariance and contravariance of vectors">Covariant tensor components

A lower index (subscript) indicates covariance of the components with respect to that index.

Contravariant tensor components">Covariance and contravariance of vectors">Contravariant tensor components

An upper index (superscript) indicates contravariance of the components with respect to that index.

Mixed-variance tensor components">Mixed tensor">Mixed-variance tensor components

A tensor may have both upper and lower indices.
Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience.
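A minimal sketch of these three cases in LaTeX (the symbols A and T are placeholders chosen here for illustration):

\[ A_{\alpha\beta} \quad \text{(covariant components: both indices lower)} \]
\[ A^{\alpha\beta} \quad \text{(contravariant components: both indices upper)} \]
\[ T^{\alpha}{}_{\beta\gamma} \quad \text{(mixed-variance components: one upper and two lower indices)} \]
% Index ordering matters: T^{\alpha}{}_{\beta} and T_{\beta}{}^{\alpha} are in general distinct components.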

Tensor type and degree

The number of each upper and lower indices of a tensor gives its type: a tensor with m upper and n lower indices is said to be of type (m, n), or to be a type-(m, n) tensor.
The number of indices of a tensor, regardless of variance, is called the degree of the tensor. Thus, a tensor of type (m, n) has degree m + n.

Summation convention

The same index symbol occurring twice within a term, once as an upper index and once as a lower index, indicates a pair of indices that are implicitly summed over. The operation implied by such a summation is called tensor contraction. The summation may occur more than once within a term, provided a distinct symbol is used for each contracted pair of indices. Other combinations of repeated indices within a term, such as an index appearing more than twice, or a repeated index occurring twice in the same (upper or lower) position, are considered to be ill-formed; examples are sketched below.
The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis.
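A minimal sketch of these conventions in LaTeX, with A, B, C as placeholder tensors and a four-dimensional index range assumed for illustration:

\[ A_{\alpha} B^{\alpha} \equiv \sum_{\alpha=0}^{3} A_{\alpha} B^{\alpha} \]
\[ A_{\alpha\beta} B^{\beta} = C_{\alpha} \quad \text{(contraction over } \beta \text{)} \]
\[ A_{\alpha\beta}\, B^{\alpha} C^{\beta} \equiv \sum_{\alpha=0}^{3} \sum_{\beta=0}^{3} A_{\alpha\beta}\, B^{\alpha} C^{\beta} \quad \text{(two distinct contracted pairs)} \]
% Ill-formed in this convention (index repeated in the same position, or appearing three times in a term):
% A_{\alpha\alpha}, \qquad A_{\alpha\beta}\, B^{\alpha} C_{\alpha}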

Multi-index notation

If a tensor has a list of all upper or all lower indices, one shorthand is to use a capital letter for the list, so that, for instance, A_{i_1 ⋯ i_n} B^{i_1 ⋯ i_n j_1 ⋯ j_m} may be abbreviated as A_I B^{I J}, where I = i_1 i_2 ⋯ i_n and J = j_1 j_2 ⋯ j_m.

Sequential summation

A pair of vertical bars, | ⋅ |, around a set of all-upper indices or all-lower indices that are contracted with another set of indices means a restricted sum over the index values, where each index is constrained to be strictly less than the next. The vertical bars are placed around either the upper set or the lower set of contracted indices, not both sets. Normally when contracting indices, the sum is over all values; in this notation, the summations are restricted as a computational convenience. This is useful when the expression is completely antisymmetric in each of the two sets of indices, as might occur on the tensor product of a p-vector with a p-form. More than one group can be summed in this way, as sketched below.
When using multi-index notation, an under-arrow is placed underneath the block of indices to indicate the same restricted summation over that block.
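A minimal sketch of the restricted summation in LaTeX (the tensors A, B, C are placeholders, and complete antisymmetry in each contracted group is assumed):

% Single restricted group: the sum runs only over strictly increasing index values.
\[ A_{|\alpha\beta\gamma|}\, B^{\alpha\beta\gamma} \equiv \sum_{\alpha<\beta<\gamma} A_{\alpha\beta\gamma}\, B^{\alpha\beta\gamma} \]
% More than one group summed in this way:
\[ A_{|\alpha\beta|\,|\gamma\delta|}\, B^{\alpha\beta}\, C^{\gamma\delta} \equiv \sum_{\alpha<\beta} \; \sum_{\gamma<\delta} A_{\alpha\beta\gamma\delta}\, B^{\alpha\beta}\, C^{\gamma\delta} \]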

Raising and lowering indices

By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa, as sketched below.
The base symbol in many cases is retained, and when there is no ambiguity, repositioning an index may be taken to imply this operation.
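A minimal sketch in LaTeX, with g_{αβ} the metric tensor and g^{αβ} its inverse:

\[ B^{\gamma} = g^{\gamma\beta} A_{\beta} \quad \text{(raising an index)} \]
\[ B_{\gamma} = g_{\gamma\beta} A^{\beta} \quad \text{(lowering an index)} \]
% When the base symbol is retained, repositioning the index itself implies the operation:
\[ A^{\gamma} = g^{\gamma\beta} A_{\beta}, \qquad A_{\gamma} = g_{\gamma\beta} A^{\beta} \]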

Correlations between index positions and invariance

The following summarizes how the manipulation of covariant and contravariant indices fits in with invariance under a passive transformation between bases, with the components of each basis set expressed in terms of the other. The barred indices refer to the final coordinate system after the transformation.
The Kronecker delta is used; see also below.
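A minimal sketch of the standard transformation laws in LaTeX, writing L for the change-of-basis matrix and L^{-1} for its inverse (the symbol L is an assumption made here for illustration):

% New (barred) basis vectors in terms of the old ones:
\[ \mathbf{e}_{\bar\alpha} = \mathbf{e}_{\gamma}\, L^{\gamma}{}_{\bar\alpha} \]
% Contravariant components transform with the inverse matrix:
\[ A^{\bar\alpha} = (L^{-1})^{\bar\alpha}{}_{\gamma}\, A^{\gamma} \]
% Covariant components transform with the matrix itself:
\[ A_{\bar\alpha} = A_{\gamma}\, L^{\gamma}{}_{\bar\alpha} \]
% so that contractions of an upper with a lower index are invariant:
\[ A^{\bar\alpha} B_{\bar\alpha} = A^{\gamma} B_{\gamma} \]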

General outlines for index notation and operations

Tensors are equal if and only if every corresponding component is equal; e.g., tensor A equals tensor B if and only if A^α_{βγ} = B^α_{βγ}
for all α, β, γ. Consequently, there are facets of the notation that are useful in checking that an equation makes sense.

Free and dummy indices">Einstein notation#Introduction">Free and dummy indices

Indices not involved in contractions are called free indices. Indices used in contractions are termed dummy indices, or summation indices.

A tensor equation represents many ordinary (real-valued) equations

The components of tensors are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has p free indices, and if the dimensionality of the underlying vector space is n, the equality represents n^p equations: each index takes on every value of a specific set of values.
For instance, if a tensor equation with three free indices holds in four dimensions, then because there are three free indices, there are 4^3 = 64 equations; a few of these are written out in the sketch below.
This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation.
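As a minimal sketch (the equation A^α_{βγ} = B^α_{βγ} is a placeholder chosen here for illustration), three of the 64 component equations in four dimensions are:

\[ A^{0}{}_{00} = B^{0}{}_{00}, \qquad A^{1}{}_{02} = B^{1}{}_{02}, \qquad A^{3}{}_{21} = B^{3}{}_{21} \]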

Indices are replaceable labels

Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided the new symbol does not clash with other indices already in use). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol. A correct change replaces a symbol consistently throughout every term in which it appears, so the expression still has the same meaning; an erroneous change replaces the symbol in some occurrences but not in others, which is entirely inconsistent for reasons shown next. Both cases are sketched below.
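A minimal sketch in LaTeX (the tensors A, B, C are placeholders):

% Correct: the dummy index \alpha is renamed to \lambda everywhere it occurs.
\[ A_{\alpha} B^{\alpha} = C \quad\longrightarrow\quad A_{\lambda} B^{\lambda} = C \]
% Erroneous: \alpha is replaced in only one of its two occurrences,
% leaving a dangling free index and changing the meaning.
\[ A_{\alpha} B^{\alpha} = C \quad\longrightarrow\quad A_{\lambda} B^{\alpha} = C \]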

Indices are the same in every term

The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices, which are summed over within a single term, need not be the same from term to term.
In other words, non-repeated indices must be of the same type in every term of the equation: in a valid expression the free indices line up throughout, and each dummy index occurs exactly twice within one term due to a contraction. An expression in which an index occurs twice in one term and once in another term, or in which a free index sits in different positions in different terms, is inconsistent and invalid. Both cases are sketched below.
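A minimal sketch in LaTeX (A, B, C, D are placeholder tensors):

% Valid: the free index \alpha lines up in every term; \beta is a dummy index
% contracted within the middle term only.
\[ D_{\alpha} = A_{\alpha} + B_{\alpha\beta} C^{\beta} \]
% Invalid: \beta occurs twice (contracted) in one term and once (free) in another,
% so the terms do not share the same free indices.
\[ D_{\alpha} = B_{\alpha\beta} C^{\beta} + A_{\beta} \]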

Brackets and punctuation used once where implied

When applying a rule to a number of indices, the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply.
If the brackets enclose covariant indices, the rule applies only to all covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed intermediately between the brackets.
Similarly, if brackets enclose contravariant indices, the rule applies only to all enclosed contravariant indices, not to intermediately placed covariant indices.

Symmetric and antisymmetric parts

Symmetric">Symmetric tensor">Symmetric part of tensor

Parentheses, ( ), around multiple indices denote the symmetrized part of the tensor. When symmetrizing p indices using σ to range over permutations of the numbers 1 to p, one takes a sum over the permutations of those indices and then divides by the number of permutations, p!. For example, two symmetrizing indices mean there are two index orderings to permute and sum over, while for three symmetrizing indices there are six. The symmetrization is distributive over addition. Indices are not part of the symmetrization when they are not at the same level as the parenthesized indices, or when they are placed between vertical bars within the parentheses; such indices are left fixed under the permutations. These conventions are sketched below.
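A minimal sketch of the standard formulas in LaTeX (T, A, B, C are placeholder tensors):

% General definition: sum over all permutations \sigma of the p indices, divided by p!.
\[ T_{(\alpha_1\cdots\alpha_p)} = \frac{1}{p!} \sum_{\sigma} T_{\alpha_{\sigma(1)}\cdots\alpha_{\sigma(p)}} \]
% Two symmetrized indices:
\[ T_{(\alpha\beta)} = \tfrac{1}{2}\left( T_{\alpha\beta} + T_{\beta\alpha} \right) \]
% Three symmetrized indices:
\[ T_{(\alpha\beta\gamma)} = \tfrac{1}{3!}\left( T_{\alpha\beta\gamma} + T_{\alpha\gamma\beta} + T_{\beta\alpha\gamma} + T_{\beta\gamma\alpha} + T_{\gamma\alpha\beta} + T_{\gamma\beta\alpha} \right) \]
% Distributivity over addition:
\[ A_{(\alpha}\left( B_{\beta)} + C_{\beta)} \right) = A_{(\alpha} B_{\beta)} + A_{(\alpha} C_{\beta)} \]
% Exclusion with vertical bars: here \alpha and \gamma are symmetrized, \beta is not.
\[ T_{(\alpha|\beta|\gamma)} = \tfrac{1}{2}\left( T_{\alpha\beta\gamma} + T_{\gamma\beta\alpha} \right) \]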

Antisymmetric">Antisymmetric tensor">Antisymmetric or alternating part of tensor

Square brackets, [ ], around multiple indices denote the antisymmetrized part of the tensor. For p antisymmetrizing indices, the sum over the permutations of those indices, each multiplied by the signature of the permutation, is taken, and the result is then divided by the number of permutations, p!; equivalently, the antisymmetrized part can be written using the generalized Kronecker delta of degree 2p, with scaling as defined below.
For example, two antisymmetrizing indices give half the difference of the two index orderings, while three antisymmetrizing indices give a signed sum over all six orderings divided by 3!. As a more specific example, if F_{αβ} represents the electromagnetic tensor, then the equation ∂_{[γ} F_{αβ]} = 0 represents Gauss's law for magnetism and Faraday's law of induction.
As before, the antisymmetrization is distributive over addition. As with symmetrization, indices are not antisymmetrized when they are not at the same level as the bracketed indices, or when they are placed between vertical bars within the brackets; such indices are left fixed under the permutations. These conventions are sketched below.
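A minimal sketch of the standard formulas in LaTeX (T is a placeholder tensor; the generalized-delta form assumes the determinant-style scaling given later in this article, and conventions differing by a factor of p! exist):

% General definition: signed sum over all permutations \sigma, divided by p!.
\[ T_{[\alpha_1\cdots\alpha_p]} = \frac{1}{p!} \sum_{\sigma} \operatorname{sgn}(\sigma)\, T_{\alpha_{\sigma(1)}\cdots\alpha_{\sigma(p)}} = \frac{1}{p!}\, \delta^{\beta_1\cdots\beta_p}_{\alpha_1\cdots\alpha_p}\, T_{\beta_1\cdots\beta_p} \]
% Two antisymmetrized indices:
\[ T_{[\alpha\beta]} = \tfrac{1}{2}\left( T_{\alpha\beta} - T_{\beta\alpha} \right) \]
% Three antisymmetrized indices:
\[ T_{[\alpha\beta\gamma]} = \tfrac{1}{3!}\left( T_{\alpha\beta\gamma} - T_{\alpha\gamma\beta} - T_{\beta\alpha\gamma} + T_{\beta\gamma\alpha} + T_{\gamma\alpha\beta} - T_{\gamma\beta\alpha} \right) \]
% Electromagnetic example: Gauss's law for magnetism and Faraday's law of induction.
\[ \partial_{[\gamma} F_{\alpha\beta]} = 0 \]
% Exclusion with vertical bars: here \alpha and \gamma are antisymmetrized, \beta is not.
\[ T_{[\alpha|\beta|\gamma]} = \tfrac{1}{2}\left( T_{\alpha\beta\gamma} - T_{\gamma\beta\alpha} \right) \]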

Sum of symmetric and antisymmetric parts

Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices, as can be seen by adding the above expressions for the symmetrized and antisymmetrized parts. This does not hold for other than two indices. A sketch follows.
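A minimal sketch in LaTeX, for a placeholder tensor A with two indices shown:

\[ A_{\alpha\beta} = A_{(\alpha\beta)} + A_{[\alpha\beta]} = \tfrac{1}{2}\left( A_{\alpha\beta} + A_{\beta\alpha} \right) + \tfrac{1}{2}\left( A_{\alpha\beta} - A_{\beta\alpha} \right) \]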

Differentiation

For compactness, derivatives may be indicated by adding indices after a comma or semicolon.

Partial derivative

While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by x^μ, but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple of differences in coordinates, Δx^μ, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant and Lie derivatives below.
To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable, a comma is placed before an appended lower index of the coordinate variable.
This may be repeated to indicate higher partial derivatives. These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and by the fact that the partial derivatives of the coordinates themselves reduce to the Kronecker delta, as sketched below.
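A minimal sketch of the comma notation in LaTeX (A and the scalar functions f, g are placeholders):

% Partial derivative of a component with respect to the coordinate x^\gamma:
\[ A_{\alpha,\gamma} = \frac{\partial A_{\alpha}}{\partial x^{\gamma}} \]
% Repeated differentiation appends further indices after the comma:
\[ A_{\alpha,\gamma\delta} = \frac{\partial}{\partial x^{\delta}} \frac{\partial A_{\alpha}}{\partial x^{\gamma}} \]
% Product rule, and derivatives of the coordinates reducing to the Kronecker delta:
\[ (f g)_{,\gamma} = f_{,\gamma}\, g + f\, g_{,\gamma}, \qquad x^{\alpha}{}_{,\gamma} = \delta^{\alpha}_{\gamma} \]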

Covariant derivative

To indicate covariant differentiation of any tensor field, a semicolon is placed before an appended lower index. Less common alternatives to the semicolon include a forward slash or, in three-dimensional curved space, a single vertical bar.
For a contravariant vector field A^α, the covariant derivative A^α_{;γ} consists of the partial derivative plus a correction term involving Γ^α_{γβ}, a Christoffel symbol of the second kind; for a covariant vector field A_α, the correction term enters with the opposite sign. For an arbitrary tensor, one such Christoffel term appears for each index, added for each upper index and subtracted for each lower index.
The components of this derivative of a tensor field transform covariantly, and hence form another tensor field. This derivative is characterized by the product rule, and applied to the metric tensor it gives zero. The covariant formulation of the directional derivative of any tensor field along a vector v^γ may be expressed as its contraction with the covariant derivative.
One alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol ∇_β; for the case of a vector field this is written ∇_β A^α. These formulas are sketched below.
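A minimal sketch of the standard formulas in LaTeX (T and v are placeholder tensor and vector fields; Γ denotes the Christoffel symbols of the second kind):

% Contravariant vector field:
\[ A^{\alpha}{}_{;\gamma} = A^{\alpha}{}_{,\gamma} + \Gamma^{\alpha}{}_{\gamma\beta} A^{\beta} \]
% Covariant vector field:
\[ A_{\alpha;\gamma} = A_{\alpha,\gamma} - \Gamma^{\beta}{}_{\gamma\alpha} A_{\beta} \]
% Arbitrary mixed tensor: one Christoffel term per index.
\[ T^{\alpha}{}_{\beta;\gamma} = T^{\alpha}{}_{\beta,\gamma} + \Gamma^{\alpha}{}_{\gamma\delta} T^{\delta}{}_{\beta} - \Gamma^{\delta}{}_{\gamma\beta} T^{\alpha}{}_{\delta} \]
% Metric compatibility, and the directional derivative along v:
\[ g_{\alpha\beta;\gamma} = 0, \qquad \nabla_{v} T^{\alpha}{}_{\beta} = v^{\gamma}\, T^{\alpha}{}_{\beta;\gamma} \]
% Alternative nabla notation:
\[ \nabla_{\gamma} A^{\alpha} = A^{\alpha}{}_{;\gamma} \]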

Lie derivative

The Lie derivative is another derivative that is covariant, but which should not be confused with the covariant derivative. It is defined even in the absence of a metric tensor. The Lie derivative of a type (r, s) tensor field T along a contravariant vector field X^γ may be expressed using only partial derivatives of the components of T and of X.
This derivative is characterized by the product rule and by the fact that the Lie derivative of the given contravariant vector field along itself is zero.
The Lie derivative of a type (r, s) relative tensor field of weight w along a contravariant vector field X^γ acquires one additional term proportional to the weight. Both cases are sketched below.
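A minimal sketch in LaTeX for a type (1, 1) tensor field T (a representative case chosen here for brevity), along a contravariant vector field X:

% Lie derivative of a (1,1) tensor field; only partial derivatives appear,
% since the Christoffel-symbol terms cancel.
\[ (\mathcal{L}_{X} T)^{\alpha}{}_{\beta} = X^{\gamma}\, T^{\alpha}{}_{\beta,\gamma} - T^{\gamma}{}_{\beta}\, X^{\alpha}{}_{,\gamma} + T^{\alpha}{}_{\gamma}\, X^{\gamma}{}_{,\beta} \]
% The Lie derivative of the vector field along itself vanishes:
\[ (\mathcal{L}_{X} X)^{\alpha} = X^{\gamma} X^{\alpha}{}_{,\gamma} - X^{\gamma} X^{\alpha}{}_{,\gamma} = 0 \]
% For a relative tensor field of weight w, one additional term appears:
\[ (\mathcal{L}_{X} T)^{\alpha}{}_{\beta} = X^{\gamma}\, T^{\alpha}{}_{\beta,\gamma} - T^{\gamma}{}_{\beta}\, X^{\alpha}{}_{,\gamma} + T^{\alpha}{}_{\gamma}\, X^{\gamma}{}_{,\beta} + w\, T^{\alpha}{}_{\beta}\, X^{\gamma}{}_{,\gamma} \]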

Notable tensors

Kronecker delta

The Kronecker delta is like the identity matrix
when multiplied and contracted. Its components δ^α_β are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant.
Its trace is the dimensionality of the space; for example, in four-dimensional spacetime, δ^α_α = 4.
The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2p may be defined in terms of the Kronecker delta as an antisymmetrized product of p ordinary deltas, and it acts as an antisymmetrizer on p indices, as sketched below.
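A minimal sketch in LaTeX; the determinant-style scaling shown here is one common convention, and some authors define the generalized delta without the factor of p!:

% The Kronecker delta acts like the identity under contraction:
\[ \delta^{\alpha}_{\beta}\, A^{\beta} = A^{\alpha}, \qquad \delta^{\alpha}_{\beta}\, B_{\alpha} = B_{\beta}, \qquad \delta^{\alpha}_{\alpha} = n \;\;(= 4 \text{ in four-dimensional spacetime}) \]
% Generalized Kronecker delta of degree 2p (determinant-style scaling assumed):
\[ \delta^{\mu_1\cdots\mu_p}_{\nu_1\cdots\nu_p} = p!\, \delta^{\mu_1}_{[\nu_1} \cdots \delta^{\mu_p}_{\nu_p]} = \det\!\begin{bmatrix} \delta^{\mu_1}_{\nu_1} & \cdots & \delta^{\mu_1}_{\nu_p} \\ \vdots & \ddots & \vdots \\ \delta^{\mu_p}_{\nu_1} & \cdots & \delta^{\mu_p}_{\nu_p} \end{bmatrix} \]
% With this scaling it antisymmetrizes p indices up to a factor of p!:
\[ \delta^{\mu_1\cdots\mu_p}_{\nu_1\cdots\nu_p}\, A_{\mu_1\cdots\mu_p} = p!\, A_{[\nu_1\cdots\nu_p]} \]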

Metric tensor

The metric tensor g_{αβ} is used for lowering indices and gives the length of any space-like curve, and the duration of any time-like curve, as an integral along the curve, where in each case the curve is given by any smooth strictly monotone parameterization. See also line element.
The inverse matrix g^{αβ} of the metric tensor is another important tensor, used for raising indices. These formulas are sketched below.
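A minimal sketch in LaTeX, assuming a metric signature (−, +, +, +); with the opposite signature the signs under the square roots are exchanged:

% Length of a space-like curve x^\alpha(\lambda), with \lambda a smooth strictly monotone parameter:
\[ \text{length} = \int \sqrt{\, g_{\alpha\beta}\, \frac{dx^{\alpha}}{d\lambda} \frac{dx^{\beta}}{d\lambda} \,}\; d\lambda \]
% Duration of a time-like curve:
\[ \text{duration} = \int \frac{1}{c} \sqrt{\, -g_{\alpha\beta}\, \frac{dx^{\alpha}}{d\lambda} \frac{dx^{\beta}}{d\lambda} \,}\; d\lambda \]
% Lowering and raising indices, and the inverse metric:
\[ A_{\alpha} = g_{\alpha\beta} A^{\beta}, \qquad A^{\alpha} = g^{\alpha\beta} A_{\beta}, \qquad g^{\alpha\beta} g_{\beta\gamma} = \delta^{\alpha}_{\gamma} \]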

Riemann curvature tensor

If this tensor is defined as the combination of partial derivatives and products of Christoffel symbols sketched below, then it gives the commutator of the covariant derivative with itself, since the connection is torsionless, which means that the torsion tensor, formed from the antisymmetric part of the Christoffel symbols, vanishes.
This can be generalized to get the commutator of two covariant derivatives of an arbitrary tensor, giving relations which are often referred to as the Ricci identities.
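A minimal sketch of the standard formulas in LaTeX; the sign conventions and the ordering convention for repeated semicolon indices (inner derivative first is assumed here) vary between authors:

% Riemann curvature tensor in terms of Christoffel symbols:
\[ R^{\rho}{}_{\sigma\mu\nu} = \Gamma^{\rho}{}_{\nu\sigma,\mu} - \Gamma^{\rho}{}_{\mu\sigma,\nu} + \Gamma^{\rho}{}_{\mu\lambda}\Gamma^{\lambda}{}_{\nu\sigma} - \Gamma^{\rho}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\mu\sigma} \]
% Vanishing torsion tensor (symmetric Christoffel symbols):
\[ T^{\alpha}{}_{\beta\gamma} = \Gamma^{\alpha}{}_{\beta\gamma} - \Gamma^{\alpha}{}_{\gamma\beta} = 0 \]
% Commutator of covariant derivatives on covariant and contravariant vector fields:
\[ A_{\nu;\rho\sigma} - A_{\nu;\sigma\rho} = A_{\beta}\, R^{\beta}{}_{\nu\rho\sigma}, \qquad B^{\alpha}{}_{;\rho\sigma} - B^{\alpha}{}_{;\sigma\rho} = -B^{\beta}\, R^{\alpha}{}_{\beta\rho\sigma} \]
% Ricci identity for an arbitrary mixed tensor: one curvature term per index.
\[ T^{\alpha}{}_{\beta;\rho\sigma} - T^{\alpha}{}_{\beta;\sigma\rho} = -T^{\gamma}{}_{\beta}\, R^{\alpha}{}_{\gamma\rho\sigma} + T^{\alpha}{}_{\gamma}\, R^{\gamma}{}_{\beta\rho\sigma} \]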