Sparse matrix
In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero. By contrast, if most of the elements are nonzero, then the matrix is considered dense. The number of zero-valued elements divided by the total number of elements is called the sparsity of the matrix. Using those definitions, a matrix will be sparse when its sparsity is greater than 0.5.
Conceptually, sparsity corresponds to systems with few pairwise interactions. Consider a line of balls connected by springs from one to the next: this is a sparse system as only adjacent balls are coupled. By contrast, if the same line of balls had springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory, which have a low density of significant data or connections.
Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations.
When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices, as they are common in the machine learning field. Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices as processing and memory are wasted on the zeros. Sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.
Storing a sparse matrix
A matrix is typically stored as a two-dimensional array. Each entry in the array represents an element a_ij of the matrix and is accessed by the two indices i and j. Conventionally, i is the row index, numbered from top to bottom, and j is the column index, numbered from left to right. For an m × n matrix, the amount of memory required to store the matrix in this format is proportional to m × n.

In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously.
Formats can be divided into two groups:
- Those that support efficient modification, such as DOK, LIL, or COO. These are typically used to construct the matrices.
- Those that support efficient access and matrix operations, such as CSR or CSC.
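For example, a minimal sketch with SciPy (listed under Software below), building a matrix in the modification-friendly LIL format and converting it to CSR before computing; the values match the 4 × 4 example used in the CSR section:

```python
import numpy as np
from scipy.sparse import lil_matrix

# Build incrementally in LIL, a format that supports efficient modification.
A = lil_matrix((4, 4))
A[1, 0] = 5
A[1, 1] = 8
A[2, 2] = 3
A[3, 1] = 6

# Convert once to CSR, a format that supports efficient access and
# matrix operations, before doing any heavy arithmetic.
A = A.tocsr()

x = np.ones(4)
print(A @ x)  # fast sparse matrix-vector product: [ 0. 13.  3.  6.]
```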
Dictionary of keys (DOK)
DOK consists of a dictionary that maps (row, column) pairs to the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.
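A sketch of the idea in plain Python (the DOKMatrix class here is purely illustrative, not a standard API):

```python
class DOKMatrix:
    """Toy dictionary-of-keys sparse matrix (illustrative only)."""

    def __init__(self, shape):
        self.shape = shape
        self.data = {}  # maps (row, col) -> value

    def __setitem__(self, key, value):
        if value:
            self.data[key] = value
        else:
            self.data.pop(key, None)  # storing a zero removes the entry

    def __getitem__(self, key):
        return self.data.get(key, 0)  # absent entries are implicitly zero

M = DOKMatrix((4, 4))
M[1, 0] = 5
M[1, 1] = 8
print(M[1, 0], M[0, 3])  # 5 0
```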
List of lists (LIL)
LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format that is good for incremental matrix construction.
Coordinate list (COO)
COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. This is another format that is good for incremental matrix construction.
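A brief sketch with SciPy's coo_matrix, assuming the same 4 × 4 example matrix used in the CSR section below:

```python
from scipy.sparse import coo_matrix

# One (row, column, value) triplet per nonzero entry.
row  = [1, 1, 2, 3]
col  = [0, 1, 2, 1]
data = [5, 8, 3, 6]

A = coo_matrix((data, (row, col)), shape=(4, 4))
print(A.toarray())
```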
Compressed sparse row (CSR, CRS or Yale format)
The compressed sparse row (CSR) or compressed row storage (CRS) or Yale format represents a matrix M by three one-dimensional arrays, that respectively contain nonzero values, the extents of rows, and column indices. It is similar to COO, but compresses the row indices, hence the name. This format allows fast row access and matrix-vector multiplications. The CSR format has been in use since at least the mid-1960s, with the first complete description appearing in 1967.

The CSR format stores a sparse matrix M in row form using three arrays (V, COL_INDEX, ROW_INDEX). Let NNZ denote the number of nonzero entries in M.
- The arrays V and COL_INDEX are of length NNZ, and contain the non-zero values and the column indices of those values respectively.
- The array ROW_INDEX is of length m + 1 and encodes the index in V and COL_INDEX where the given row starts; its last element is NNZ.
For example, the matrix

( 0 0 0 0 )
( 5 8 0 0 )
( 0 0 3 0 )
( 0 6 0 0 )

is a 4 × 4 matrix with 4 nonzero elements, hence

V         = [ 5 8 3 6 ]
COL_INDEX = [ 0 1 2 1 ]
ROW_INDEX = [ 0 0 2 3 4 ]

assuming a zero-indexed language.
To extract a row, we first define:
row_start = ROW_INDEX[row]
row_end   = ROW_INDEX[row + 1]
Then we take slices from V and COL_INDEX starting at row_start and ending at row_end.
To extract the row 1 (the second row) of this matrix we set row_start = ROW_INDEX[1] = 0 and row_end = ROW_INDEX[2] = 2. Then we make the slices V[0:2] = [5 8] and COL_INDEX[0:2] = [0 1]. We now know that in row 1 we have two elements at columns 0 and 1 with values 5 and 8.

In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only when NNZ < (m(n - 1) - 1)/2.
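This row extraction can be written out directly; a sketch in plain Python using the arrays of this example (extract_row is an illustrative helper, not a standard function):

```python
V         = [5, 8, 3, 6]
COL_INDEX = [0, 1, 2, 1]
ROW_INDEX = [0, 0, 2, 3, 4]

def extract_row(row):
    # Slice bounds for this row come straight from ROW_INDEX.
    row_start = ROW_INDEX[row]
    row_end   = ROW_INDEX[row + 1]
    return list(zip(COL_INDEX[row_start:row_end], V[row_start:row_end]))

print(extract_row(1))  # [(0, 5), (1, 8)]: columns 0 and 1, values 5 and 8
```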
As another example, the matrix

( 10 20  0  0  0  0 )
(  0 30  0 40  0  0 )
(  0  0 50 60 70  0 )
(  0  0  0  0  0 80 )

is a 4 × 6 matrix (24 entries) with 8 nonzero elements, so

V         = [ 10 20 30 40 50 60 70 80 ]
COL_INDEX = [  0  1  1  3  2  3  4  5 ]
ROW_INDEX = [  0  2  4  7  8 ]

The whole is stored as 21 entries: 8 in V, 8 in COL_INDEX, and 5 in ROW_INDEX.
- ROW_INDEX splits the array V into rows: (10, 20) (30, 40) (50, 60, 70) (80);
- COL_INDEX aligns values in columns: (10, 20, ...) (0, 30, 0, 40, ...) (0, 0, 50, 60, 70, 0) (0, 0, 0, 0, 0, 80).
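Undoing the compression makes this concrete; a short sketch that rebuilds the dense 4 × 6 matrix from the three arrays:

```python
V         = [10, 20, 30, 40, 50, 60, 70, 80]
COL_INDEX = [0, 1, 1, 3, 2, 3, 4, 5]
ROW_INDEX = [0, 2, 4, 7, 8]

m, n = 4, 6
dense = [[0] * n for _ in range(m)]
for i in range(m):
    # ROW_INDEX[i]:ROW_INDEX[i+1] delimits the nonzeros of row i.
    for k in range(ROW_INDEX[i], ROW_INDEX[i + 1]):
        dense[i][COL_INDEX[k]] = V[k]

for row in dense:
    print(row)
```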
The Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format combines ROW_INDEX and COL_INDEX into a single array and handles the diagonal of the matrix separately.
For logical adjacency matrices, the data array can be omitted, as the existence of an entry in the row array is sufficient to model a binary adjacency relation.
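For example, a sketch of an unweighted directed graph stored with only the two index arrays (the neighbors helper is illustrative; array names follow the CSR example above):

```python
# Adjacency of a 4-vertex graph with edges 0->1, 0->2, 1->2, 3->0.
# No V array is needed: the presence of a column index encodes a 1.
COL_INDEX = [1, 2, 2, 0]
ROW_INDEX = [0, 2, 3, 3, 4]

def neighbors(v):
    return COL_INDEX[ROW_INDEX[v]:ROW_INDEX[v + 1]]

print(neighbors(0))  # [1, 2]
print(neighbors(2))  # [] (vertex 2 has no outgoing edges)
```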
It is likely known as the Yale format because it was proposed in the 1977 Yale Sparse Matrix Package report from the Department of Computer Science at Yale University.
Compressed sparse column (CSC or CCS)
CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. For example, CSC is (val, row_ind, col_ptr), where val is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind is the row indices corresponding to the values; and col_ptr is the list of val indexes where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, column slicing, and matrix-vector products. This is the traditional format for specifying a sparse matrix in MATLAB.
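A brief SciPy sketch of CSC in use, reusing the 4 × 6 example from the CSR section; the internal indices and indptr attributes correspond to row_ind and col_ptr above:

```python
import numpy as np
from scipy.sparse import csc_matrix

dense = np.array([[10, 20,  0,  0,  0,  0],
                  [ 0, 30,  0, 40,  0,  0],
                  [ 0,  0, 50, 60, 70,  0],
                  [ 0,  0,  0,  0,  0, 80]])
A = csc_matrix(dense)

print(A[:, 3].toarray().ravel())  # cheap column slice: [ 0 40 60  0]
print(A.indices)                  # row index of each stored value (row_ind)
print(A.indptr)                   # where each column starts (col_ptr)
```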
Special structure
Banded
An important special type of sparse matrices is the band matrix, defined as follows. The lower bandwidth of a matrix A is the smallest number p such that the entry a_ij vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest number p such that a_ij = 0 whenever i < j - p. For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3. Notice that zeros are represented with dots for clarity.

( X . . X . . . )
( . X . . X . . )
( . . X . . X . )
( X . . X . . X )
( . X . . X . . )
( . . X . . X . )
( . . . X . . X )

Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices.
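For instance, SciPy's solve_banded works directly on a compact, diagonal-ordered storage of the band; a sketch for a tridiagonal system:

```python
import numpy as np
from scipy.linalg import solve_banded

# Tridiagonal matrix (lower and upper bandwidth 1) stored compactly in
# diagonal-ordered form: row 0 = superdiagonal, row 1 = main diagonal,
# row 2 = subdiagonal. Only 3*n entries instead of n*n.
n = 5
ab = np.zeros((3, n))
ab[0, 1:]  = -1.0  # superdiagonal
ab[1, :]   =  2.0  # main diagonal
ab[2, :-1] = -1.0  # subdiagonal

b = np.ones(n)
x = solve_banded((1, 1), ab, b)  # (1, 1) = (lower, upper) bandwidth
print(x)
```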
By rearranging the rows and columns of a matrix it may be possible to obtain a matrix with a lower bandwidth. A number of algorithms are designed for bandwidth minimization.
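One such heuristic, the reverse Cuthill-McKee ordering, is available in SciPy; a sketch (assuming a symmetric sparsity pattern):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.csgraph import reverse_cuthill_mckee

# A random symmetric sparsity pattern on 50 vertices.
A = sparse_random(50, 50, density=0.05, format='csr', random_state=0)
A = (A + A.T).tocsr()

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_perm = A[perm, :][:, perm]  # permute rows and columns the same way

def bandwidth(M):
    i, j = M.nonzero()
    return int(np.max(np.abs(i - j)))

print(bandwidth(A), '->', bandwidth(A_perm))  # usually much smaller after RCM
```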
Diagonal
A very efficient structure for an extreme case of band matrices, the diagonal matrix, is to store just the entries in the main diagonal as a one-dimensional array, so a diagonal n × n matrix requires only n entries.
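A sketch of the resulting saving: the matrix-vector product of a diagonal matrix reduces to an elementwise product of two arrays:

```python
import numpy as np

d = np.array([2.0, 3.0, 4.0])  # the n diagonal entries are all we store
x = np.array([1.0, 1.0, 1.0])

y = d * x  # equivalent to diag(d) @ x, without forming the n-by-n matrix
print(y)   # [2. 3. 4.]
```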
Symmetric
A symmetric sparse matrix arises as the adjacency matrix of an undirected graph; it can be stored efficiently as an adjacency list.
Block diagonal
A block-diagonal matrix consists of sub-matrices along its diagonal blocks. A block-diagonal matrix A has the form

A = diag(A1, A2, ..., An),

where Ak is a square matrix for all k = 1, ..., n; all entries outside the diagonal blocks are zero.
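SciPy can assemble such a matrix directly in sparse form; a small sketch:

```python
import numpy as np
from scipy.sparse import block_diag

A1 = np.array([[1, 2], [3, 4]])
A2 = np.array([[5]])
A3 = np.array([[6, 7], [8, 9]])

# Only the diagonal blocks are stored; everything off the blocks is zero.
A = block_diag((A1, A2, A3), format='csr')
print(A.toarray())
```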
Reducing fill-in
The fill-in of a matrix are those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.

There are other methods than the Cholesky decomposition in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can be different for different methods, and symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst-case fill-in.
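The effect of ordering on fill-in can be illustrated with a small arrowhead matrix: factoring with the dense row and column first fills the Cholesky factor completely, while the reverse ordering produces no fill at all (a sketch using a dense NumPy Cholesky purely for illustration):

```python
import numpy as np

# Arrowhead matrix: dense first row and column, diagonal elsewhere.
n = 6
A = n * np.eye(n)
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = n  # keeps the matrix symmetric positive definite

L = np.linalg.cholesky(A)
nnz_bad = np.count_nonzero(np.abs(L) > 1e-12)  # the factor fills in completely

# Reorder so the dense row/column comes last: no fill-in occurs.
p = np.arange(n)[::-1]
L2 = np.linalg.cholesky(A[np.ix_(p, p)])
nnz_good = np.count_nonzero(np.abs(L2) > 1e-12)

print(nnz_bad, nnz_good)  # 21 vs 11 nonzeros in the triangular factor
```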
Solving sparse matrix equations
Both iterative and direct methods exist for sparse matrix solving. Iterative methods, such as the conjugate gradient method and GMRES, utilize fast computations of matrix-vector products Ax, where the matrix A is sparse. The use of preconditioners can significantly accelerate convergence of such iterative methods.
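A sketch with SciPy, solving a sparse tridiagonal system with GMRES and an incomplete-LU preconditioner:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# 1-D Poisson matrix: sparse and tridiagonal.
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner M ~ A^-1.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))  # info == 0 means convergence
```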
Software
Many software libraries support sparse matrices, and provide solvers for sparse matrix equations. The following are open-source:
- SuiteSparse, a suite of sparse matrix algorithms, geared toward the direct solution of sparse linear systems.
- PETSc, a large C library, containing many different matrix solvers for a variety of matrix storage formats.
- Trilinos, a large C++ library, with sub-libraries dedicated to the storage of dense and sparse matrices and solution of corresponding linear systems.
- Eigen3 is a C++ library that contains several sparse matrix solvers. However, none of them are parallelized.
- MUMPS (MUltifrontal Massively Parallel sparse direct Solver), written in Fortran90, is a frontal solver.
- deal.II, a finite element library that also has a sub-library for sparse linear systems and their solution.
- Armadillo provides a user-friendly C++ wrapper for BLAS and LAPACK.
- SciPy provides support for several sparse matrix formats, linear algebra, and solvers.
- SPArse Matrix (spam), an R package for sparse matrices.
- Handling sparse arrays with literally astronomical numbers of elements.
- ALGLIB is a C++ and C# library with sparse linear algebra support.
- ARPACK, a Fortran 77 library for sparse matrix diagonalization and manipulation, using the Arnoldi algorithm.
- Reference NIST package for sparse matrix diagonalization.
- SLEPc, a library for the solution of large-scale eigenvalue problems on sparse matrices, built on top of PETSc.
History