Configuration model


In network science, the configuration model is a method for generating random networks from a given degree sequence. It is widely used as a reference model for real-life social networks, because it allows the user to incorporate arbitrary degree distributions.

Rationale for the model

In the configuration model, the degree of each vertex is pre-defined, rather than having a probability distribution from which the given degree is chosen. As opposed to the Erdős–Rényi model, the degree sequence of the configuration model is not restricted to a Poisson distribution: the model allows the user to give the network any desired degree distribution.

Algorithm

The following algorithm describes the generation of the model:
  1. Take a degree sequence, i.e., assign a degree to each vertex. The degrees of the vertices are represented as half-links or stubs. The sum of the degrees (equivalently, the number of stubs) must be even in order to be able to construct a graph. The degree sequence can be drawn from a theoretical distribution or it can represent a real network.
  2. Choose two stubs uniformly at random and connect them to form an edge. Choose another pair from the remaining stubs and connect them. Continue until you run out of stubs. The result is a network with the pre-defined degree sequence. The realization of the network changes with the order in which the stubs are chosen; realizations may include cycles, self-loops or multi-links. Yet, the expected number of self-loops and multi-links goes to zero in the N → ∞ limit.
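The two steps above can be sketched in a few lines of Python (the function name and the example degree sequence are illustrative, not part of the original description):

```python
import random
from collections import Counter

def configuration_model(degrees, seed=None):
    """Match stubs uniformly at random; may create self-loops and multi-edges."""
    assert sum(degrees) % 2 == 0, "the sum of degrees must be even"
    rng = random.Random(seed)
    # one stub per unit of degree, labelled by the vertex it belongs to
    stubs = [v for v, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)                      # a uniform random matching of the stubs
    return list(zip(stubs[::2], stubs[1::2]))

edges = configuration_model([3, 2, 2, 2, 1], seed=1)
# every vertex keeps its prescribed degree (a self-loop counts twice)
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
print(sorted(deg.items()))
```

Whatever order the stubs are drawn in, each vertex ends up with exactly its prescribed degree; only the wiring between vertices changes from realization to realization.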

Self-loops, multi-edges and implications

The algorithm described above matches any two stubs with the same probability. The uniform distribution of the matching is an important property when calculating other features of the generated networks. The network generation process does not exclude the event of generating a self-loop or a multi-link. If the process were designed so that self-loops and multi-edges are not allowed, the matching of the stubs would not follow a uniform distribution. However, the average number of self-loops and multi-edges is a constant for large networks, so the density of self-loops and multi-links goes to zero as N → ∞.
A further consequence of self-loops and multi-edges is that not all possible networks are generated with the same probability. In general, all possible realizations can be generated by permuting the stubs of all vertices in every possible way. The number of permutations of the stubs of node i is k_i!, so the number of stub matchings behind a given degree sequence is ∏_i k_i!. This would mean that each realization occurs with the same probability. However, self-loops and multi-edges can change the number of distinct realizations, since permuting the stubs of a self-loop or swapping parallel edges can leave the realization unchanged. Given that the number of self-loops and multi-links vanishes as N → ∞, the variation in the probabilities of different realizations will be small but present.

Properties

Edge probability

A stub of node i can be connected to 2m − 1 other stubs, where m is the total number of edges. A vertex j has k_j stubs, to each of which node i can be connected with the same probability. The probability of a stub of node i being connected to one of these k_j stubs is k_j/(2m − 1). Since node i has k_i stubs, the probability of i being connected to j is

p_ij = k_i k_j / (2m − 1) ≈ k_i k_j / 2m.

The probability of self-edges cannot be described by this formula, but as the density of self-edges goes to zero as N → ∞, it usually gives a good estimate.
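The edge probability can be checked by Monte-Carlo simulation. For i ≠ j, the expected number of edges between i and j over uniform stub matchings equals k_i k_j/(2m − 1) exactly; the following sketch (function name and degree sequence are illustrative) estimates it empirically:

```python
import random

def expected_multiplicity(degrees, i, j, trials=20000, seed=7):
    """Monte-Carlo estimate of the expected number of edges between i and j."""
    rng = random.Random(seed)
    stubs = [v for v, k in enumerate(degrees) for _ in range(k)]
    total = 0
    for _ in range(trials):
        rng.shuffle(stubs)                      # one uniform stub matching
        pairs = zip(stubs[::2], stubs[1::2])
        total += sum(1 for u, v in pairs if {u, v} == {i, j})
    return total / trials

degrees = [3, 2, 2, 2, 1]
m = sum(degrees) // 2                           # m = 5 edges
est = expected_multiplicity(degrees, 0, 1)
theory = degrees[0] * degrees[1] / (2 * m - 1)  # k_i k_j / (2m - 1)
print(round(est, 3), round(theory, 3))
```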
Given a configuration model with a degree distribution p_k, the probability of a randomly chosen node i having degree k is p_k. But if we took one of the vertices to which we can arrive following one of the edges of i, the probability of that vertex having degree k is k p_k / ⟨k⟩, because a vertex of degree k is k times more likely to be reached along an edge than a vertex of degree 1. The mean degree of such a neighbor is ⟨k²⟩/⟨k⟩, which depends on the second moment ⟨k²⟩, as opposed to the degree of the typical node, whose mean is ⟨k⟩. Thus, a neighbor of a typical node is expected to have higher degree than the typical node itself. This feature of the configuration model describes well the phenomenon of "my friends having more friends than I do".
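This "friendship paradox" can be illustrated numerically. In the following sketch the two-point degree distribution is a made-up example; the neighbor distribution is obtained by reweighting p_k by k/⟨k⟩:

```python
# hypothetical degree distribution p_k: half the nodes have degree 1,
# half have degree 4
p = {1: 0.5, 4: 0.5}

mean_k = sum(k * pk for k, pk in p.items())        # <k>
mean_k2 = sum(k**2 * pk for k, pk in p.items())    # <k^2>

# degree distribution of a vertex reached by following a random edge
q = {k: k * pk / mean_k for k, pk in p.items()}
neighbor_mean = sum(k * qk for k, qk in q.items()) # equals <k^2>/<k>

print(mean_k, neighbor_mean)
```

Here the typical node has mean degree 2.5, while a typical neighbor has mean degree 8.5/2.5 = 3.4, so neighbors are indeed better connected on average.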

Clustering coefficient

The global clustering coefficient is computed as follows:

C = Σ_{q_i} Σ_{q_j} e(q_i) e(q_j) · q_i q_j / 2m,

where e(q_i) and e(q_j) denote the probability distributions of vertices i and j having q_i and q_j excess edges (edges other than the one along which they were reached), respectively, and q_i q_j / 2m is the probability that two such neighbors are connected.
After transforming the equation above, we get approximately

C ≈ c/n, with c = [⟨k²⟩ − ⟨k⟩]² / ⟨k⟩³,

where n is the number of vertices, and the size of the constant c depends on the first two moments of the degree distribution. Thus, the global clustering coefficient becomes small in the large-n limit.
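Under the standard configuration-model estimate C ≈ [⟨k²⟩ − ⟨k⟩]² / (n ⟨k⟩³), the 1/n decay is easy to verify numerically (the function name and the two-point degree distribution are illustrative):

```python
def clustering_estimate(p, n):
    """C ~ [<k^2> - <k>]^2 / (n <k>^3) for degree distribution p and n vertices."""
    k1 = sum(k * pk for k, pk in p.items())     # <k>
    k2 = sum(k**2 * pk for k, pk in p.items())  # <k^2>
    return (k2 - k1) ** 2 / (n * k1 ** 3)

p = {1: 0.5, 4: 0.5}   # hypothetical degree distribution
for n in (100, 1000, 10000):
    print(n, clustering_estimate(p, n))
```

Multiplying each estimate by n gives the same constant, confirming that clustering vanishes as 1/n for a fixed degree distribution.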

Giant component

In the configuration model, a giant component (GC) exists if

⟨k²⟩ − 2⟨k⟩ > 0,

where ⟨k⟩ and ⟨k²⟩ are the first and second moments of the degree distribution. That means that the critical threshold solely depends on quantities which are uniquely determined by the degree distribution.
The configuration model generates locally tree-like networks, meaning that any local neighborhood in such a network takes the form of a tree. More precisely, if you start at any node in the network and form the set of all nodes at distance d or less from that starting node, the set will, with probability tending to 1 as n → ∞, take the form of a tree. In tree-like structures, the number of second neighbors averaged over the whole network, c₂, is:

c₂ = ⟨k²⟩ − ⟨k⟩.

Then, in general, the average number of neighbors at distance d can be written as:

c_d = c₁ (c₂/c₁)^(d−1),

with c₁ = ⟨k⟩. This implies that if the ratio c₂/c₁ is larger than one, then the network can have a giant component. This is known as the Molloy–Reed criterion. The intuition behind this criterion is that if the giant component exists, then the average degree of a randomly chosen vertex in a connected component should be at least 2. The Molloy–Reed criterion can also be expressed as:

Σ_k k(k − 2) p_k > 0,

which implies that, although the size of the GC may depend on p₀ and p₂, the number of nodes of degree 0 and 2 makes no contribution to the existence of the giant component.
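The criterion Σ_k k(k − 2) p_k > 0 is easy to evaluate numerically. For a Poisson degree distribution with mean c the sum equals ⟨k²⟩ − 2⟨k⟩ = c² − c, so the threshold sits at c = 1, the classical Erdős–Rényi giant-component threshold. A sketch (the truncation at a finite kmax is an implementation detail):

```python
import math

def molloy_reed(p):
    """Sum_k k(k-2) p_k: positive => a giant component can exist."""
    return sum(k * (k - 2) * pk for k, pk in p.items())

def poisson(mean, kmax=80):
    """Truncated Poisson degree distribution with the given mean."""
    return {k: math.exp(-mean) * mean**k / math.factorial(k) for k in range(kmax)}

for c in (0.5, 1.0, 2.0):
    print(c, molloy_reed(poisson(c)))
```

The sum is negative below c = 1, zero at c = 1, and positive above it, exactly as c² − c predicts.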

Diameter

The configuration model can assume any degree distribution and shows the small-world effect, since to leading order the diameter of the configuration model grows only logarithmically with the network size, d ∝ ln n.

Components of finite size

As the total number of vertices tends to infinity, the probability of finding two giant components vanishes. This means that in the sparse regime, the model consists of one giant component and multiple connected components of finite size. The sizes of the connected components are characterized by their size distribution w(n), the probability that a randomly sampled vertex belongs to a connected component of size n. There is a correspondence between the degree distribution u(k) and the size distribution w(n). When the total number of vertices tends to infinity, N → ∞, the following relation takes place:

w(n) = (⟨k⟩ / (n − 1)) u₁*ⁿ(n − 2) for n > 1, and w(1) = u(0),

where u₁(k) = (k + 1) u(k + 1) / ⟨k⟩ is the excess-degree distribution and u₁*ⁿ denotes the n-fold convolution power of u₁. Moreover, explicit asymptotes for w(n) are known when n → ∞ and w(n) is close to zero. The analytical expressions for these asymptotes depend on the finiteness of the moments of u(k), the tail exponent of the degree distribution, and the sign of the Molloy–Reed criterion. The following table summarises these relationships.
[Table: asymptotic forms of w(n), classified by the moments of u(k) (finite or infinite), the tail of u(k) (light or heavy, with ranges of the tail exponent), and the sign (−1, 0 or 1) of the Molloy–Reed criterion.]
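The convolution relation between the degree distribution u(k) and the size distribution w(n), namely w(n) = (⟨k⟩/(n − 1)) u₁*ⁿ(n − 2) with u₁(k) = (k + 1)u(k + 1)/⟨k⟩, can be evaluated numerically by repeated convolution. A sketch (function name illustrative): for a network in which every vertex has degree 1, every vertex lies in a component of size 2, and the code recovers w(2) = 1.

```python
import numpy as np

def component_size_distribution(u, nmax):
    """w(n) = <k>/(n-1) * u1^{*n}(n-2) for n > 1, and w(1) = u(0)."""
    u = np.asarray(u, dtype=float)
    mean_k = np.arange(len(u)) @ u
    # excess-degree distribution u1(k) = (k+1) u(k+1) / <k>
    u1 = np.arange(1, len(u)) * u[1:] / mean_k
    w = {1: float(u[0])}
    power = u1.copy()                        # u1^{*1}
    for n in range(2, nmax + 1):
        power = np.convolve(power, u1)       # now u1^{*n}
        coeff = power[n - 2] if n - 2 < len(power) else 0.0
        w[n] = float(mean_k / (n - 1) * coeff)
    return w

# every vertex has degree 1: the network is a perfect matching
print(component_size_distribution([0.0, 1.0], 4))
# -> {1: 0.0, 2: 1.0, 3: 0.0, 4: 0.0}
```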

Modelling

Comparison to real-world networks

Three general properties of complex networks are heterogeneous degree distribution, short average path length and high clustering. Since the configuration model accepts any arbitrary degree sequence, the first condition can be satisfied by design, but, as shown above, the global clustering coefficient is inversely proportional to the network size, so clustering in large configuration networks tends to be small. This feature of the baseline model contradicts the known properties of empirical networks, but extensions of the model can solve this issue.

Application: modularity calculation

The configuration model is applied as a benchmark in the calculation of network modularity. Modularity measures the strength of division of a network into modules. It is computed as follows:

Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j),

in which the adjacency matrix A_ij of the network is compared to k_i k_j / 2m, the probability of having an edge between nodes i and j in the configuration model, and δ(c_i, c_j) equals 1 when i and j belong to the same module.
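For instance, for two triangles joined by a single edge, partitioning the network into the two triangles gives Q = 5/14 ≈ 0.357. A sketch (the toy graph and function name are illustrative):

```python
import numpy as np

def modularity(A, communities):
    """Q = (1/2m) sum_ij [A_ij - k_i k_j / 2m] delta(c_i, c_j)."""
    k = A.sum(axis=1)          # degree of each vertex
    two_m = k.sum()            # 2m = sum of degrees
    Q = 0.0
    for i in range(len(A)):
        for j in range(len(A)):
            if communities[i] == communities[j]:
                Q += A[i, j] - k[i] * k[j] / two_m
    return Q / two_m

# two triangles joined by a single edge
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for u, v in edges:
    A[u, v] = A[v, u] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))   # 5/14, about 0.357
```

Putting all six nodes in one module gives Q = 0, since the observed edges then exactly balance the configuration-model expectation.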

Directed configuration model

In the directed configuration model (DCM), each node is given two numbers of half-edges: out-stubs, called tails, and in-stubs, called heads; the total number of tails must equal the total number of heads. Tails and heads are then matched uniformly at random to form directed edges. The size of the giant component, the typical distance, and the diameter of the DCM have been studied mathematically. Some real-world complex networks have been modelled by the DCM, such as neural networks, financial networks and social networks.
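The directed matching can be sketched analogously to the undirected case (the function name and the example degree sequences are illustrative):

```python
import random
from collections import Counter

def directed_configuration_model(out_deg, in_deg, seed=None):
    """Match tails (out-stubs) to heads (in-stubs) uniformly at random."""
    assert sum(out_deg) == sum(in_deg), "total out-degree must equal total in-degree"
    rng = random.Random(seed)
    tails = [v for v, k in enumerate(out_deg) for _ in range(k)]
    heads = [v for v, k in enumerate(in_deg) for _ in range(k)]
    rng.shuffle(heads)                       # uniform matching of tails to heads
    return list(zip(tails, heads))           # directed edges (tail -> head)

edges = directed_configuration_model([2, 1, 0, 1], [1, 1, 2, 0], seed=3)
print(Counter(u for u, _ in edges), Counter(v for _, v in edges))
```

By construction every node retains its prescribed out- and in-degree; as in the undirected model, self-loops and multi-edges may occur but become rare in large networks.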