Linear subspace


In mathematics, and more specifically in linear algebra, a linear subspace, also known as a vector subspace, is a vector space that is a subset of some larger vector space. A linear subspace is usually called simply a subspace when the context serves to distinguish it from other types of subspaces.

Definition

If V is a vector space over a field K and if W is a subset of V, then W is a subspace of V if, under the operations of V, W is itself a vector space over K. Equivalently, a nonempty subset W is a subspace of V if, whenever w1 and w2 are elements of W and a and b are elements of K, it follows that aw1 + bw2 is in W.

Examples

Example I

Let the field K be the set R of real numbers, and let the vector space V be the real coordinate space R3.
Take W to be the set of all vectors in V whose last component is 0.
Then W is a subspace of V.
Proof:
  1. Given u and v in W, they can be expressed as u = (u1, u2, 0) and v = (v1, v2, 0). Then u + v = (u1 + v1, u2 + v2, 0 + 0) = (u1 + v1, u2 + v2, 0). Thus, u + v is an element of W, too.
  2. Given u in W and a scalar c in R, if u = (u1, u2, 0) again, then cu = (cu1, cu2, c·0) = (cu1, cu2, 0). Thus, cu is an element of W too.

Example II

Let the field be R again, but now let the vector space V be the Cartesian plane R2.
Take W to be the set of points of R2 such that x = y.
Then W is a subspace of R2.
Proof:
  1. Let p = (p1, p2) and q = (q1, q2) be elements of W, that is, points in the plane such that p1 = p2 and q1 = q2. Then p + q = (p1 + q1, p2 + q2); since p1 = p2 and q1 = q2, then p1 + q1 = p2 + q2, so p + q is an element of W.
  2. Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, then cp1 = cp2, so cp is an element of W.
In general, any subset of the real coordinate space Rn that is defined by a system of homogeneous linear equations will yield a subspace.
Geometrically, these subspaces are points, lines, planes, and so on, that pass through the point 0.
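As a concrete illustration of the preceding remark, the following short Python sketch (using SymPy; the equation and the sample vectors are chosen here purely for illustration) checks that the plane x1 + x2 − x3 = 0 in R3 is closed under addition and scalar multiplication:

    from sympy import Matrix, Rational

    # Illustrative homogeneous equation: W = {x : x1 + x2 - x3 = 0}.
    A = Matrix([[1, 1, -1]])

    def in_W(x):
        return A * Matrix(x) == Matrix([0])

    u, v = [1, 0, 1], [0, 2, 2]                        # two elements of W
    c = Rational(3, 2)                                 # an arbitrary scalar
    print(in_W(u), in_W(v))                            # True True
    print(in_W([ui + vi for ui, vi in zip(u, v)]))     # True: closed under addition
    print(in_W([c * ui for ui in u]))                  # True: closed under scaling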

Example III

Again take the field to be R, but now let the vector space V be the set RR of all functions from R to R.
Let C be the subset consisting of continuous functions.
Then C is a subspace of RR.
Proof:
  1. We know from calculus that the constant zero function is continuous, so 0 is in C.
  2. We know from calculus that the sum of continuous functions is continuous.
  3. Again, we know from calculus that the product of a continuous function and a number is continuous.

Example IV

Keep the same field and vector space as before, but now consider the set Diff of all differentiable functions.
The same sort of argument as before shows that this is a subspace too.
Examples that extend these themes are common in functional analysis.

Properties of subspaces

From the definition of vector spaces, it follows that subspaces are nonempty and are closed under sums and under scalar multiples. Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W.
In fact, it suffices to require closure under linear combinations of just two elements at a time, as in the definition given above.
In a topological vector space X, a subspace W need not be topologically closed, but a finite-dimensional subspace is always closed. The same is true for subspaces of finite codimension, that is, subspaces determined by a finite number of continuous linear functionals.

Descriptions

Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically, a subspace is a flat in an n-space that passes through the origin.
A natural description of a 1-subspace is the set of all scalar multiples of a single non-zero vector v. Two 1-subspaces specified by non-zero vectors are equal if and only if one vector can be obtained from the other by scalar multiplication: v′ = cv for some non-zero scalar c.
This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple.
A dual description is provided with linear functionals. One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two non-zero linear functionals are equal if and only if one functional can be obtained from the other by scalar multiplication: F′ = cF for some non-zero scalar c.
It is generalized for higher codimensions with a system of equations. The following two subsections present this latter description in detail, and the remaining four subsections further describe the idea of linear span.
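For computations, one convenient criterion is that two finite sets of vectors span the same subspace exactly when stacking the two sets together does not increase the rank. A brief Python sketch using SymPy (the vectors are illustrative):

    from sympy import Matrix

    def same_span(A, B):
        # Rows of A and rows of B span the same subspace iff neither
        # stacking raises the rank above the individual ranks.
        r = A.rank()
        return r == B.rank() == A.col_join(B).rank()

    # Two descriptions of the same line x = y in R^2 (cf. Example II):
    print(same_span(Matrix([[1, 1]]), Matrix([[3, 3]])))   # True
    print(same_span(Matrix([[1, 1]]), Matrix([[1, 2]])))   # False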

Systems of linear equations

The solution set to any homogeneous system of linear equations with n variables is a subspace of the coordinate space Kn.
For example, the set of all vectors (x1, x2, x3) satisfying two independent homogeneous linear equations in three variables
is a one-dimensional subspace. More generally, given a set of n independent homogeneous linear equations in k variables, the solution set is a subspace of Kk whose dimension equals the dimension of the null space of A, the matrix assembled from the coefficients of the n equations (namely k − n).
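A brief Python sketch of this dimension count, using SymPy and an illustrative pair of independent equations (not a pair taken from the text): x1 + 3x2 + x3 = 0 and 2x1 − x2 + x3 = 0.

    from sympy import Matrix

    # Coefficient matrix of the two illustrative equations above.
    A = Matrix([[1,  3, 1],
                [2, -1, 1]])
    # Dimension of the solution subspace = number of unknowns - rank(A).
    print(A.shape[1] - A.rank())   # 1: a line through the origin in K^3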

Null space of a matrix

In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation: Ax = 0.
The set of solutions to this equation is known as the null space of the matrix. For example, the one-dimensional subspace described above is the null space of the matrix A whose rows are the coefficient vectors of the two equations.
Every subspace of Kn can be described as the null space of some matrix.
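Computer algebra systems can return a null-space basis directly. A brief Python sketch using SymPy, reusing the illustrative coefficient matrix from the sketch above:

    from sympy import Matrix

    A = Matrix([[1,  3, 1],
                [2, -1, 1]])
    # One basis vector for the null space (the line of solutions),
    # proportional to (-4/7, -1/7, 1):
    print(A.nullspace())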

Linear parametric equations

The subset of Kn described by a system of homogeneous linear parametric equations is a subspace.
For example, the set of all vectors (x1, x2, x3) whose coordinates are homogeneous linear expressions in two parameters t1 and t2
is a two-dimensional subspace of K3, provided the two corresponding vectors of coefficients are linearly independent.

Span of vectors

In linear algebra, the system of parametric equations can be written as a single vector equation: x = t1v1 + t2v2.
The expression on the right is called a linear combination of the vectors v1 and v2. These two vectors are said to span the resulting subspace.
In general, a linear combination of vectors v1, v2, ... , vk is any vector of the form t1v1 + t2v2 + ... + tkvk, where t1, ... , tk are scalars in K.
The set of all possible linear combinations is called the span: Span{v1, ... , vk} = {t1v1 + ... + tkvk : t1, ... , tk in K}.
If the vectors v1, ... , vk have n components, then their span is a subspace of Kn. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1, ... , vk.
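A membership test for a span can be sketched by checking whether appending the candidate vector raises the rank of the spanning set. A brief Python sketch using SymPy with illustrative vectors:

    from sympy import Matrix

    def in_span(vectors, v):
        # v lies in span(vectors) iff appending v as an extra row
        # does not increase the rank.
        A = Matrix([list(w) for w in vectors])
        return A.col_join(Matrix([list(v)])).rank() == A.rank()

    # The span of (1, 0, 1) and (0, 1, 1) is a plane in Q^3:
    print(in_span([(1, 0, 1), (0, 1, 1)], (2, 3, 5)))   # True: 2*(1,0,1) + 3*(0,1,1)
    print(in_span([(1, 0, 1), (0, 1, 1)], (1, 1, 1)))   # False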

Column space and row space

A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation: x = At, where A is the matrix whose columns are the spanning vectors and t is the column vector of parameters.
In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space of the matrix A. It is precisely the subspace of Kn spanned by the column vectors of A.
The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space.

Independence, basis, and dimension

In general, a subspace of Kn determined by k parameters has dimension k. However, there are exceptions to this rule. For example, the subspace of K3 spanned by three vectors that all lie in the xz-plane is just the xz-plane, with each point on the plane described by infinitely many different values of the parameters (t1, t2, t3).
In general, vectors v1, ... , vk are called linearly independent if
t1v1 + t2v2 + ... + tkvk ≠ u1v1 + u2v2 + ... + ukvk
whenever (t1, t2, ... , tk) ≠ (u1, u2, ... , uk).
If v1, ... , vk are linearly independent, then the coordinates t1, ... , tk for a vector in the span are uniquely determined.
A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors.
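The following Python sketch, using SymPy and illustratively chosen vectors, shows how the rank and the pivot columns detect a redundant vector in a spanning set:

    from sympy import Matrix

    # Three illustrative vectors with v3 = v1 + v2, so they are not independent.
    v1, v2, v3 = Matrix([1, 0, 2]), Matrix([0, 1, 1]), Matrix([1, 1, 3])
    A = v1.row_join(v2).row_join(v3)     # columns: v1, v2, v3
    _, pivots = A.rref()
    print(A.rank())    # 2: the three vectors span only a plane
    print(pivots)      # (0, 1): columns v1 and v2 form a basis; v3 is redundant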

Operations and relations on subspaces

Inclusion

The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces.
A subspace cannot lie in any subspace of lesser dimension. If dim U = k, a finite number, and U ⊆ W, then dim W = k if and only if U = W.

Intersection

Given subspaces U and W of a vector space V, then their intersection U ∩ W := {v in V : v is an element of both U and W} is also a subspace of V.
Proof:
  1. Let v and w be elements of U ∩ W. Then v and w belong to both U and W. Because U is a subspace, then v + w belongs to U. Similarly, since W is a subspace, then v + w belongs to W. Thus, v + w belongs to U ∩ W.
  2. Let v belong to U ∩ W, and let c be a scalar. Then v belongs to both U and W. Since U and W are subspaces, cv belongs to both U and W.
  3. Since U and W are vector spaces, then 0 belongs to both sets. Thus, 0 belongs to U ∩ W.
For every vector space V, the set {0} and V itself are subspaces of V.

Sum

If U and W are subspaces, their sum is the subspace U + W = {u + w : u in U, w in W}.
For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality max(dim U, dim W) ≤ dim(U + W) ≤ dim U + dim W.
Here the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimension of the intersection and the sum are related: dim(U + W) + dim(U ∩ W) = dim U + dim W.

Lattice of subspaces

The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the zero subspace {0}, the least element, is an identity element of the sum operation, and the entire space V, the greatest element, is an identity element of the intersection operation.

Orthogonal complements

If V is an inner product space and N is a subset of V, then the orthogonal complement N⊥ of N is again a subspace. If V is finite-dimensional and N is a subspace, then the dimensions of N and N⊥ satisfy the complementary relationship dim N + dim N⊥ = dim V. Moreover, no non-zero vector is orthogonal to itself, so N ∩ N⊥ = {0} and V is the direct sum of N and N⊥. Applying orthogonal complements twice returns the original subspace: (N⊥)⊥ = N for every subspace N.
This operation, understood as negation, makes the lattice of subspaces an orthocomplemented lattice.
In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N such that N ∩ N⊥ ≠ {0}. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra.
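Over the real numbers, the orthogonal complement of the span of a finite set of vectors can be computed as a null space. A brief Python sketch using SymPy with illustrative vectors:

    from sympy import Matrix

    # N = span of the rows of B; its orthogonal complement in R^3 is the
    # null space of B, i.e. the vectors orthogonal to both rows.
    B = Matrix([[1, 0, 1],
                [0, 1, 1]])
    perp = B.nullspace()                     # one basis vector, (-1, -1, 1)
    print(B.rank() + len(perp))              # 2 + 1 = 3 = dim(R^3)
    # Taking the complement once more recovers N:
    C = Matrix([list(v.T) for v in perp])    # rows span the complement
    print(C.nullspace())                     # spans the original plane N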

Algorithms

Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties:
  1. The reduced matrix has the same null space as the original.
  2. Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original.
  3. Row reduction does not affect the linear dependence of the column vectors.
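These properties can be observed with a computer algebra system. A brief Python sketch using SymPy with an illustrative matrix (not taken from the text):

    from sympy import Matrix

    A = Matrix([[1, 2, 0,  3],
                [2, 4, 1,  7],
                [3, 6, 1, 10]])
    R, pivots = A.rref()
    print(R)                                  # the reduced row echelon form of A
    print(pivots)                             # (0, 2): the pivot columns
    print(A.nullspace() == R.nullspace())     # True: the null space is unchanged
    print(A.rank(), R.rank())                 # 2 2: the nonzero rows of R span the row space of A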

Basis for a row space

To find a basis for the row space of a matrix A, reduce A to row echelon form; the nonzero rows of the echelon form then form a basis for the row space. See the article on row space for an example.
If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of Kn are equal.
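A brief Python sketch of this equality test, using SymPy and two illustrative matrices that happen to have the same row space:

    from sympy import Matrix

    A = Matrix([[1, 1,  0],
                [0, 1,  1]])
    B = Matrix([[1, 2,  1],
                [1, 0, -1]])
    RA, _ = A.rref()
    RB, _ = B.rref()
    # Comparing the reduced row echelon forms (more precisely, their
    # nonzero rows) decides whether the row spaces are equal.
    print(RA == RB)   # True: A and B span the same plane in Q^3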

Subspace membership

To test whether a vector v belongs to a subspace S with basis b1, ... , bk, form the matrix whose columns are b1, ... , bk and v, and compute its reduced row echelon form; the vector v lies in S if and only if there is no pivot in the final column (see "Coordinates for a vector" below).

Basis for a column space

To find a basis for the column space of a matrix A, reduce A to reduced row echelon form and determine which columns contain pivots; the corresponding columns of the original matrix A are a basis for the column space. See the article on column space for an example.
This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
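A brief Python sketch using SymPy and an illustrative matrix; note that the basis consists of columns of the original matrix, not of its echelon form:

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 7],
                [1, 2, 4]])
    _, pivots = A.rref()
    basis = [A.col(j) for j in pivots]   # columns of A itself
    print(pivots)                        # (0, 2): the second column is twice the first
    print(basis)                         # the columns (1, 2, 1) and (3, 7, 4)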

Coordinates for a vector

To find the coordinates t1, ... , tk of a vector v with respect to a basis b1, ... , bk of a subspace S, form the matrix whose columns are b1, ... , bk and v, and compute its reduced row echelon form; the first k entries of the final column then give the coordinates. If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S.
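A brief Python sketch using SymPy, with an illustrative basis b1, b2 of a plane S in Q^3 and a vector v assumed to lie in S:

    from sympy import Matrix

    b1, b2 = Matrix([1, 0, 1]), Matrix([0, 1, 1])
    v = Matrix([2, 3, 5])
    M = b1.row_join(b2).row_join(v)      # columns: b1, b2, v
    R, pivots = M.rref()
    if M.cols - 1 in pivots:
        print("v is not in S")           # a pivot in the final column means inconsistency
    else:
        print(R[:2, -1])                 # the coordinates: v = 2*b1 + 3*b2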

Basis for a null space

To find a basis for the null space of a matrix A, compute the reduced row echelon form of A and solve the resulting system for the pivot variables in terms of the free variables; setting each free variable to 1 and the others to 0 in turn yields a basis for the null space. See the article on null space for an example.

Basis for the sum and intersection of two subspaces

Given two subspaces U and W of V, a basis of the sum U + W and the intersection U ∩ W can be calculated using the Zassenhaus algorithm.
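A compact Python sketch of the Zassenhaus algorithm using SymPy (the input bases are illustrative); it also illustrates the dimension formula dim(U + W) + dim(U ∩ W) = dim U + dim W:

    from sympy import Matrix

    def zassenhaus(U_rows, W_rows):
        # U_rows and W_rows are spanning vectors of U and W in K^n.
        n = len(U_rows[0])
        block = [list(u) + list(u) for u in U_rows]      # rows [u | u]
        block += [list(w) + [0] * n for w in W_rows]     # rows [w | 0]
        R, _ = Matrix(block).rref()                      # any row echelon form works
        sum_basis, int_basis = [], []
        for i in range(R.rows):
            left, right = R[i, :n], R[i, n:]
            if any(left):
                sum_basis.append(list(left))             # left halves span U + W
            elif any(right):
                int_basis.append(list(right))            # right halves span U ∩ W
        return sum_basis, int_basis

    # Two planes in Q^3: U = span{(1,0,0), (0,1,0)}, W = span{(0,1,0), (0,0,1)}.
    s, i = zassenhaus([(1, 0, 0), (0, 1, 0)], [(0, 1, 0), (0, 0, 1)])
    print(s)   # a basis of U + W (all of Q^3)
    print(i)   # a basis of U ∩ W (the line spanned by (0, 1, 0))
    # dim(U + W) + dim(U ∩ W) = 3 + 1 = 2 + 2 = dim U + dim W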

Equations for a subspace

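One standard way to obtain defining equations for a subspace S spanned by given vectors (sketched below with SymPy and an illustrative plane; not necessarily the only procedure) is to take, as the coefficient rows of the system, a basis of the null space of the matrix whose rows are the spanning vectors:

    from sympy import Matrix

    # Illustrative plane S = span{(1, 0, 0), (0, 1, 1)} in Q^3.
    B = Matrix([[1, 0, 0],
                [0, 1, 1]])
    rows = B.nullspace()                    # vectors orthogonal to every spanning vector
    A = Matrix([list(r.T) for r in rows])   # coefficient matrix of the equations
    print(A)                                # Matrix([[0, -1, 1]]): S = {x : -x2 + x3 = 0}
    print(A.nullspace())                    # recovers a basis of S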
