Vandermonde matrix


In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row, i.e., an $m \times n$ matrix

$$V = \begin{bmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_m & x_m^2 & \cdots & x_m^{n-1}
\end{bmatrix},$$

or

$$V_{i,j} = x_i^{\,j-1}$$

for all indices $i$ and $j$. The same term Vandermonde matrix was used for the transpose of the above matrix by Macon and Spitzbart. The Vandermonde matrix used for the discrete Fourier transform (the DFT matrix) satisfies both definitions.
The determinant of a square Vandermonde matrix (where $m = n$) can be expressed as

$$\det(V) = \prod_{1 \le i < j \le n} (x_j - x_i).$$

This is called the Vandermonde determinant or Vandermonde polynomial. It is non-zero if and only if all the $x_i$ are distinct.
The Vandermonde determinant was sometimes called the discriminant, although, presently, the discriminant of a polynomial is the square of the Vandermonde determinant of the roots of the polynomial. The Vandermonde determinant is an alternating form in the $x_i$, meaning that exchanging two $x_i$ changes the sign, while permuting the $x_i$ by an even permutation does not change the value of the determinant. It thus depends on the choice of an order for the $x_i$, while its square, the discriminant, does not depend on any order, and this implies, by Galois theory, that the discriminant is a polynomial function of the coefficients of the polynomial that has the $x_i$ as roots.
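As a quick illustration of the definition and of the determinant formula, here is a minimal numerical sketch (not part of the original text; it assumes NumPy, and the sample nodes are arbitrary):

```python
import numpy as np

# Build a square Vandermonde matrix V[i, j] = x_i ** j (increasing powers)
# and compare its determinant with the product formula prod_{i<j} (x_j - x_i).
x = np.array([1.0, 2.0, 4.0, 8.0])          # arbitrary distinct sample nodes
V = np.vander(x, increasing=True)           # rows (1, x_i, x_i^2, x_i^3)

pairwise = [x[j] - x[i] for i in range(len(x)) for j in range(i + 1, len(x))]
assert np.isclose(np.linalg.det(V), np.prod(pairwise))
```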

Proofs

The main property of a square $n \times n$ Vandermonde matrix

$$V = \begin{bmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^{n-1}
\end{bmatrix}$$

is that its determinant has the simple form

$$\det(V) = \prod_{1 \le i < j \le n} (x_j - x_i).$$
Three proofs of this equality are given below. The first one uses polynomial properties, especially the unique factorization property of multivariate polynomials. Although conceptually simple, it involves non-elementary concepts of abstract algebra. The second proof does not require any explicit computation, but involves the concepts of the determinant of a linear map and of a change of basis. It also provides the structure of the LU decomposition of the Vandermonde matrix. The third one is more elementary but more complicated, as it uses only elementary row and column operations.

Using polynomial properties

By the Leibniz formula, $\det(V)$ is a polynomial in the $x_i$, with integer coefficients. All entries of the $i$-th column have total degree $i-1$. Thus, again by the Leibniz formula, all terms of the determinant have total degree

$$0 + 1 + 2 + \cdots + (n-1) = \frac{n(n-1)}{2}$$

(that is, the determinant is a homogeneous polynomial of this degree).

If, for $i \neq j$, one substitutes $x_i$ for $x_j$, one gets a matrix with two equal rows, which has thus a zero determinant. Thus, by the factor theorem, $x_j - x_i$ is a divisor of $\det(V)$. By the unique factorization property of multivariate polynomials, the product of all the $x_j - x_i$ (for $i < j$) divides $\det(V)$, that is

$$\det(V) = Q \prod_{1 \le i < j \le n} (x_j - x_i),$$

where $Q$ is a polynomial. As the product of all the $x_j - x_i$ and $\det(V)$ have the same degree $\tfrac{n(n-1)}{2}$, the polynomial $Q$ is, in fact, a constant. This constant is one, because the product of the diagonal entries of $V$ is

$$x_2\, x_3^2 \cdots x_n^{n-1},$$

which is also the monomial that is obtained by taking the first term of all factors in $\prod_{1 \le i < j \le n} (x_j - x_i).$ This proves that

$$\det(V) = \prod_{1 \le i < j \le n} (x_j - x_i).$$
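For a small fixed size, this identity can also be checked symbolically; the following sketch (not part of the original proof, and assuming SymPy) verifies the case $n = 4$:

```python
import sympy as sp

# Verify det(V) = prod_{i<j} (x_j - x_i) symbolically for n = 4.
n = 4
xs = sp.symbols(f"x1:{n + 1}")                      # the symbols x1, x2, x3, x4
V = sp.Matrix(n, n, lambda i, j: xs[i] ** j)        # V[i, j] = x_{i+1}^j
product = sp.prod(xs[j] - xs[i] for i in range(n) for j in range(i + 1, n))
assert sp.expand(V.det() - product) == 0
```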

Using linear maps

Let $F$ be a field containing all the $x_i$, and let $F_n[x]$ denote the $F$-vector space of the polynomials of degree less than $n$ with coefficients in $F$. Let

$$\varphi : F_n[x] \to F^n$$

be the linear map defined by

$$p(x) \mapsto \bigl(p(x_1), p(x_2), \ldots, p(x_n)\bigr).$$

The Vandermonde matrix is the matrix of $\varphi$ with respect to the canonical bases of $F_n[x]$ and $F^n$.

Changing the basis of $F_n[x]$ amounts to multiplying the Vandermonde matrix by a change-of-basis matrix $M$ (from the right). This does not change the determinant if the determinant of $M$ is $1$.

The polynomials $1,\; x - x_1,\; (x - x_1)(x - x_2),\; \ldots,\; (x - x_1)(x - x_2)\cdots(x - x_{n-1})$ are monic of respective degrees $0, 1, \ldots, n-1$. Their matrix on the monomial basis is an upper-triangular matrix $U$ (if the monomials are ordered by increasing degree), with all diagonal entries equal to one. This matrix is thus a change-of-basis matrix of determinant one. The matrix of $\varphi$ on this new basis is

$$\begin{bmatrix}
1 & 0 & 0 & \cdots & 0 \\
1 & x_2 - x_1 & 0 & \cdots & 0 \\
1 & x_3 - x_1 & (x_3 - x_1)(x_3 - x_2) & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_n - x_1 & (x_n - x_1)(x_n - x_2) & \cdots & (x_n - x_1)(x_n - x_2)\cdots(x_n - x_{n-1})
\end{bmatrix}.$$

Thus the Vandermonde determinant equals the determinant of this matrix, which is the product of its diagonal entries,

$$\prod_{i=2}^{n} \prod_{j=1}^{i-1} (x_i - x_j) = \prod_{1 \le i < j \le n} (x_j - x_i).$$

This proves the desired equality. Moreover, one gets the LU decomposition of $V$ as $V = L\,U^{-1}$, where $L$ is the lower-triangular matrix above.
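The change of basis in this proof can be made concrete numerically. The sketch below (illustrative only, assuming NumPy and arbitrary distinct nodes) builds the unit upper-triangular change-of-basis matrix $U$ and the lower-triangular matrix $L$ of $\varphi$ in the new basis, and checks that $VU = L$, hence $V = LU^{-1}$:

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 7.0])                 # arbitrary distinct nodes x_1, ..., x_n
n = len(x)

# Vandermonde matrix on the monomial basis: V[i, j] = x_i ** j.
V = np.vander(x, increasing=True)

# U: its j-th column holds the monomial coefficients of the monic degree-j polynomial
# p_j(t) = (t - x_1) ... (t - x_j); U is unit upper triangular.
U = np.zeros((n, n))
for j in range(n):
    coeffs = np.array([1.0])                       # coefficients, lowest degree first
    for k in range(j):
        coeffs = np.convolve(coeffs, [-x[k], 1.0])   # multiply by (t - x_{k+1})
    U[: j + 1, j] = coeffs

# L: matrix of the evaluation map in the new basis, L[i, j] = p_j(x_i); lower triangular.
L = np.array([[np.prod(x[i] - x[:j]) for j in range(n)] for i in range(n)])

assert np.allclose(V @ U, L)                       # V U = L, so V = L U^{-1}
assert np.isclose(np.linalg.det(V), np.prod(np.diag(L)))
```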

By row and column operations

This third proof is based on the fact that, if one adds to a row of a matrix the product by a scalar of another row, the determinant remains unchanged.

If one subtracts the first row of $V$ from all the other rows, the determinant is not changed, and the new matrix has the form

$$\begin{bmatrix} 1 & L_{1,n-1} \\ 0_{n-1,1} & B \end{bmatrix},$$

where $L_{1,n-1}$ is a row matrix, $0_{n-1,1}$ is a column of zeros, and $B$ is a square matrix of size $(n-1) \times (n-1)$, such that

$$\det(V) = \det(B).$$

The entry of the $(i-1)$-th row and the $(j-1)$-th column of $B$ (that is, the $i$-th row and the $j$-th column of the whole matrix) is

$$x_i^{\,j-1} - x_1^{\,j-1}.$$

Dividing out $x_i - x_1$ from the $(i-1)$-th row of $B$, for $i = 2, \ldots, n$, one gets a matrix $C$ such that

$$\det(V) = \prod_{i=2}^{n} (x_i - x_1)\, \det(C).$$

The coefficient of the $(i-1)$-th row and the $(j-1)$-th column of $C$ is

$$c_{i-1,j-1} = x_i^{\,j-2} + x_i^{\,j-3} x_1 + \cdots + x_1^{\,j-2},$$

for $i = 2, \ldots, n$ and $j = 3, \ldots, n$, and setting $c_{i-1,1} = 1$ for the first column.

Thus, subtracting, for $k$ running from $n-1$ down to $2$, the $(k-1)$-th column of $C$ multiplied by $x_1$ from the $k$-th column, one gets an $(n-1) \times (n-1)$ Vandermonde matrix in $x_2, \ldots, x_n$, which has the same determinant as $C$. Iterating this process on this smaller Vandermonde matrix, one eventually gets the desired expression of $\det(V)$ as the product of the $x_j - x_i$.
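The reduction above amounts to the recursion $\det V(x_1, \ldots, x_n) = \prod_{i=2}^{n}(x_i - x_1)\,\det V(x_2, \ldots, x_n)$, which can be checked numerically; a minimal sketch (not from the original text, assuming NumPy and arbitrary sample nodes):

```python
import numpy as np

def vandermonde_det(x):
    """Vandermonde determinant via the recursion obtained from the row/column operations."""
    x = list(x)
    if len(x) <= 1:
        return 1.0
    head, tail = x[0], x[1:]
    factor = np.prod([t - head for t in tail])     # product of (x_i - x_1) for i = 2, ..., n
    return factor * vandermonde_det(tail)

x = [2.0, 3.0, 5.0, 7.0]
assert np.isclose(vandermonde_det(x), np.linalg.det(np.vander(x, increasing=True)))
```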

Resulting properties

An $m \times n$ rectangular Vandermonde matrix such that $m \le n$ has maximum rank $m$ if and only if all the $x_i$ are distinct.
An $m \times n$ rectangular Vandermonde matrix such that $n \le m$ has maximum rank $n$ if and only if there are $n$ of the $x_i$ that are distinct.
A square Vandermonde matrix is invertible if and only if the $x_i$ are distinct. An explicit formula for the inverse is known.
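These rank statements are easy to observe numerically; the following sketch (illustrative only, assuming NumPy) uses a node list with one repeated value:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 3.0])                 # one repeated node

V_wide = np.vander(x, 6, increasing=True)          # 4 x 6 case (m <= n)
assert np.linalg.matrix_rank(V_wide) == 3          # not full rank 4: the x_i are not all distinct

V_tall = np.vander(x, 3, increasing=True)          # 4 x 3 case (n <= m)
assert np.linalg.matrix_rank(V_tall) == 3          # full rank 3: three of the x_i are distinct
```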

Applications

The Vandermonde matrix evaluates a polynomial at a set of points; formally, it is the matrix of the linear map that maps the vector of coefficients of a polynomial to the vector of the values of the polynomial at the points $x_1, \ldots, x_n$ appearing in the Vandermonde matrix. The non-vanishing of the Vandermonde determinant for distinct points shows that the map from coefficients to values at those points is a one-to-one correspondence, and thus that the polynomial interpolation problem is solvable with a unique solution; this result is called the unisolvence theorem, and is a special case of the Chinese remainder theorem for polynomials.
This may be useful in polynomial interpolation, since inverting the Vandermonde matrix allows expressing the coefficients of the polynomial in terms of the $x_i$ and the values of the polynomial at the $x_i$.
However, the interpolation polynomial is generally easier to compute with the Lagrange interpolation formula, which may be used for deriving a formula for the inverse of a Vandermonde matrix.
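As an illustration of this interpolation viewpoint, the following sketch (not from the original text; it assumes NumPy and arbitrary sample data) recovers the coefficients of the interpolating polynomial by solving the Vandermonde system $Va = y$:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])                 # distinct interpolation nodes
y = np.array([1.0, 3.0, 2.0, 5.0])                 # prescribed values at the nodes

V = np.vander(x, increasing=True)
coeffs = np.linalg.solve(V, y)                     # a_0, a_1, ..., a_{n-1}, lowest degree first

# The interpolant reproduces the data and agrees with numpy's exact degree-(n-1) fit.
assert np.allclose(np.polyval(coeffs[::-1], x), y)
assert np.allclose(coeffs[::-1], np.polyfit(x, y, len(x) - 1))
```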
The Vandermonde determinant is used in the representation theory of the symmetric group.
When the values $x_i$ belong to a finite field, the Vandermonde determinant is also called a Moore determinant and has specific properties that are used, for example, in the theory of BCH codes and Reed–Solomon error correction codes.
The discrete Fourier transform is defined by a specific Vandermonde matrix, the DFT matrix, where the numbers $x_i$ are chosen to be roots of unity.
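A minimal check of this statement (illustrative only, assuming NumPy and its sign convention for the DFT):

```python
import numpy as np

N = 8
w = np.exp(-2j * np.pi / N)                        # primitive N-th root of unity (numpy's convention)
V = np.vander(w ** np.arange(N), increasing=True)  # Vandermonde matrix on the nodes w^0, ..., w^{N-1}

F = np.fft.fft(np.eye(N))                          # DFT of each standard basis vector
assert np.allclose(V, F)                           # the DFT matrix is this Vandermonde matrix
```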
The Laughlin wavefunction with filling factor one, by the formula for the Vandermonde determinant, can be seen to be a Slater determinant. This is no longer true for filling factors different from one, i.e., in the fractional quantum Hall effect.

Confluent Vandermonde matrices

As described before, a Vandermonde matrix describes the linear-algebra interpolation problem of finding the coefficients of a polynomial $p(x)$ of degree $n - 1$ based on the values $p(x_1), \ldots, p(x_n)$, where $x_1, \ldots, x_n$ are distinct points. If the $x_i$ are not distinct, then this problem does not have a unique solution (and the corresponding Vandermonde matrix is singular). However, if we give the values of the derivatives at the repeated points, then the problem can have a unique solution. For example, the problem

$$\begin{cases} p(0) = y_1 \\ p'(0) = y_2 \\ p(1) = y_3 \end{cases}$$

where $p(x)$ is a polynomial of degree at most $2$, has a unique solution for all $y_1, y_2, y_3$. In general, suppose that $x_1, x_2, \ldots, x_n$ are numbers (not necessarily distinct), and suppose for ease of notation that equal values come in continuous sequences in the list. That is,

$$x_1 = x_2 = \cdots = x_{m_1},\quad x_{m_1+1} = \cdots = x_{m_2},\quad \ldots,\quad x_{m_{k-1}+1} = \cdots = x_{m_k},$$

where $m_1 < m_2 < \cdots < m_k = n$ and the values $x_{m_1}, x_{m_2}, \ldots, x_{m_k}$ are distinct. Then the corresponding interpolation problem is

$$p(x_{m_1}) = y_1,\quad p'(x_{m_1}) = y_2,\quad \ldots,\quad p^{(m_1-1)}(x_{m_1}) = y_{m_1},\quad p(x_{m_2}) = y_{m_1+1},\quad \ldots,\quad p^{(m_k - m_{k-1} - 1)}(x_{m_k}) = y_{m_k}.$$

And the corresponding matrix for this problem is called a confluent Vandermonde matrix. In our case the formula for it is given as follows: if $x_i = x_{i-1} = \cdots = x_{i-\ell+1} \neq x_{i-\ell}$, i.e., $x_i$ is the $\ell$-th occurrence of its value, then we have

$$V_{i,j} = \frac{d^{\,\ell-1}}{dx^{\,\ell-1}}\, x^{\,j-1}\Big|_{x = x_i} = \begin{cases} \dfrac{(j-1)!}{(j-\ell)!}\, x_i^{\,j-\ell} & \text{if } j \ge \ell, \\[1ex] 0 & \text{otherwise.} \end{cases}$$
This generalization of the Vandermonde matrix makes it non-singular (so that the interpolation problem has a unique solution) while retaining most properties of the Vandermonde matrix. Its rows are derivatives (of some order) of the original Vandermonde rows.
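The following sketch (illustrative only; it assumes NumPy and the derivative-row convention given above) builds a small confluent Vandermonde matrix and solves the corresponding Hermite-type interpolation problem $p(1) = 2$, $p'(1) = 0$, $p(3) = 4$:

```python
import numpy as np
from math import perm

def confluent_vandermonde(nodes, mults):
    """Confluent Vandermonde matrix: for each node, one row per derivative order 0..mult-1."""
    n = sum(mults)
    M = np.zeros((n, n))
    row = 0
    for x, m in zip(nodes, mults):
        for k in range(m):                              # k-th derivative row at node x
            for j in range(k, n):
                M[row, j] = perm(j, k) * x ** (j - k)   # d^k/dx^k of x^j, evaluated at x
            row += 1
    return M

M = confluent_vandermonde([1.0, 3.0], [2, 1])      # node 1 with multiplicity 2, node 3 simple
coeffs = np.linalg.solve(M, np.array([2.0, 0.0, 4.0]))
p = np.polynomial.Polynomial(coeffs)               # coefficients in increasing degree
assert np.allclose([p(1.0), p.deriv()(1.0), p(3.0)], [2.0, 0.0, 4.0])
```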
Another way to obtain this formula is to let some of the $x_i$'s go arbitrarily close to each other. For example, if $x_1 = x_2$, then letting $x_2 \to x_1$ in the original Vandermonde matrix, the difference between the second and first rows, divided by $x_2 - x_1$, yields the corresponding derivative row in the confluent Vandermonde matrix. This allows us to link the generalized interpolation problem to the original case where all points are distinct: being given $p(x_1)$ and $p'(x_1)$ is similar to being given $p(x_1)$ and $p(x_1 + \varepsilon)$, where $\varepsilon$ is very small.