Continuum percolation theory


In mathematics and probability theory, continuum percolation theory is a branch of mathematics that extends discrete percolation theory to continuous space. More specifically, the underlying points of discrete percolation form types of lattices, whereas the underlying points of continuum percolation are often randomly positioned in some continuous space and form a type of point process. A random shape is frequently placed on each point, and the shapes overlap with each other to form clumps or components. As in discrete percolation, a common research focus of continuum percolation is studying the conditions under which infinite or giant components occur. Other concepts and analysis techniques are shared between these two types of percolation theory, as well as with the study of random graphs and random geometric graphs.
Continuum percolation arose from an early mathematical model for wireless networks which, with the rise of several wireless network technologies in recent years, has been generalized and studied in order to determine theoretical bounds on information capacity and performance in such networks. In addition to this setting, continuum percolation has gained application in other disciplines including biology, geology, and physics, such as the study of porous materials and semiconductors, while becoming a subject of mathematical interest in its own right.

Early history

In the early 1960s Edgar Gilbert proposed a mathematical model for wireless networks that gave rise to the field of continuum percolation theory, thus generalizing discrete percolation. The underlying points of this model, sometimes known as the Gilbert disk model, were scattered uniformly in the infinite plane according to a homogeneous Poisson process. Gilbert, who had noticed similarities between discrete and continuum percolation, then used concepts and techniques from the probability subject of branching processes to show that a threshold value existed for the infinite or "giant" component.

Definitions and terminology

The exact names, terminology, and definitions of these models may vary slightly depending on the source, which is also reflected in the use of point process notation.

Common models

A number of well-studied models exist in continuum percolation, which are often based on homogeneous Poisson point processes.

Disk model

Consider a collection of points in the plane that form a homogeneous Poisson process with constant density. For each point of the Poisson process, place a disk centered at that point. If each disk has a random radius, drawn from a common distribution, that is independent of all the other radii and all the underlying points, then the resulting mathematical structure is known as a random disk model.
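
This construction is straightforward to sample in a finite window. The sketch below assumes, purely for illustration, unit intensity, a 10 × 10 window, and radii drawn uniformly from [0, 0.5]; the model itself does not prescribe these choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_random_disk_model(lam=1.0, window=10.0, r_max=0.5):
    """Sample germs and radii of a random disk model in [0, window]^2."""
    # For a homogeneous process of intensity lam, the number of points
    # falling in the window is Poisson with mean lam * window^2.
    n = rng.poisson(lam * window * window)
    centers = rng.uniform(0.0, window, size=(n, 2))  # uniformly scattered points
    radii = rng.uniform(0.0, r_max, size=n)          # i.i.d. radii, independent of the points
    return centers, radii

centers, radii = sample_random_disk_model()
print(f"{len(centers)} disks sampled")
```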

Boolean model

Given a random disk model, if the set union of all the disks is taken, then the resulting structure is known as a Boolean–Poisson model, which is a commonly studied model in continuum percolation as well as stochastic geometry. If all the radii are set to some common positive constant, then the resulting model is sometimes known as the Gilbert disk model.
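
A minimal sketch of the Gilbert disk model viewed as a random set is given below: a point of the plane belongs to the Boolean model exactly when it is covered by at least one disk. The intensity, window size, and radius are illustrative assumptions, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Germs of a homogeneous Poisson process on a 10 x 10 window (illustrative
# intensity and window size), each carrying a disk of the same radius r,
# which gives a Gilbert disk model.
lam, window, r = 1.0, 10.0, 0.5
n = rng.poisson(lam * window * window)
centers = rng.uniform(0.0, window, size=(n, 2))

def in_boolean_model(point):
    """True if `point` lies in the union of the disks, i.e. in the Boolean model."""
    d2 = np.sum((centers - np.asarray(point)) ** 2, axis=1)
    return bool(np.any(d2 <= r * r))

print(in_boolean_model([5.0, 5.0]))
```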

Germ-grain model

The disk model can be generalized to more arbitrary shapes where, instead of a disk, a random compact shape is placed on each point. Again, each shape is drawn from a common distribution and is independent of all other shapes and of the underlying point process. This model is known as the germ–grain model, where the underlying points are the germs and the random compact shapes are the grains. The set union of all the shapes forms a Boolean germ–grain model. Typical choices for the grains include disks, random polygons and segments of random length.
Boolean models are also examples of stochastic processes known as coverage processes. The above models can be extended from the plane to general Euclidean space.
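
As an illustration of a germ–grain model with non-disk grains, the sketch below centres a segment of random length and uniform random orientation on each germ; the window size, intensity, and exponential length distribution are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Germs: points of a homogeneous Poisson process on a 10 x 10 window
# (intensity, window, and the length distribution are illustrative choices).
lam, window = 1.0, 10.0
n = rng.poisson(lam * window * window)
germs = rng.uniform(0.0, window, size=(n, 2))

# Grains: line segments of random length and uniform random orientation,
# centred on their germs and independent of everything else.
lengths = rng.exponential(scale=0.5, size=n)
angles = rng.uniform(0.0, np.pi, size=n)
offsets = 0.5 * lengths[:, None] * np.column_stack((np.cos(angles), np.sin(angles)))
segment_starts, segment_ends = germs - offsets, germs + offsets

print(segment_starts.shape, segment_ends.shape)
```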

Components and criticality

In the Boolean–Poisson model, there can be isolated groups or clumps of disks that do not contact any other clumps of disks. These clumps are known as components. If the area of a component is infinite, it is said to be an infinite or "giant" component. A major focus of percolation theory is establishing the conditions under which giant components exist in models, which has parallels with the study of random networks. If no giant component exists, the model is said to be subcritical. The conditions for giant component criticality naturally depend on parameters of the model such as the density of the underlying point process.
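
In a finite simulation window, the components of a Gilbert disk model can be extracted by merging every pair of overlapping disks, for example with a union–find structure as in the sketch below; the size of the largest component relative to the whole sample then gives a rough, purely illustrative indication of whether a chosen density is closer to the sub- or supercritical regime. All parameter values here are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gilbert disk model in a finite window (illustrative intensity, window
# size, and radius); two disks belong to the same component exactly when
# their centres are at most 2*r apart.
lam, window, r = 1.0, 20.0, 0.6
n = rng.poisson(lam * window * window)
centers = rng.uniform(0.0, window, size=(n, 2))

parent = list(range(n))                    # union-find over the disks

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]      # path compression
        i = parent[i]
    return i

def union(i, j):
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj

# Merge every pair of overlapping disks (O(n^2), adequate for a small window).
for i in range(n):
    d2 = np.sum((centers[i + 1:] - centers[i]) ** 2, axis=1)
    for j in np.flatnonzero(d2 <= (2 * r) ** 2):
        union(i, i + 1 + j)

component_sizes = np.bincount([find(i) for i in range(n)], minlength=max(n, 1))
print("largest component contains", component_sizes.max(), "of", n, "disks")
```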

Excluded area theory

The excluded area of a placed object is defined as the minimal area around the object into which an additional object cannot be placed without overlapping with the first object. For example, in a system of randomly oriented homogeneous rectangles of length $l$, width $w$ and aspect ratio $r = l/w$, the average excluded area is given by:

$$\langle A_{\mathrm{ex}} \rangle = 2lw + \frac{2(l+w)^2}{\pi} = 2l^2\left(\frac{1}{r} + \frac{(1+1/r)^2}{\pi}\right).$$

In a system of identical ellipses with semi-axes $a$ and $b$, aspect ratio $r = a/b$, and perimeter $C$, the average excluded area is given by:

$$\langle A_{\mathrm{ex}} \rangle = 2\pi ab + \frac{C^2}{2\pi}.$$

The excluded area theory states that the critical number density $N_c$ of a system is inversely proportional to the average excluded area $\langle A_{\mathrm{ex}} \rangle$:

$$N_c \propto \frac{1}{\langle A_{\mathrm{ex}} \rangle}.$$

It has been shown via Monte Carlo simulations that the percolation threshold in both homogeneous and heterogeneous systems of rectangles or ellipses is dominated by the average excluded area and can be approximated fairly well by the linear relation

$$N_c \approx \frac{c}{\langle A_{\mathrm{ex}} \rangle},$$

with a proportionality constant $c$ in the range 3.1–3.5.
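
These estimates can be evaluated numerically, as in the sketch below. The two excluded-area functions simply evaluate the orientation-averaged expressions given above; the constant c = 3.3 is just one value inside the quoted 3.1–3.5 range, and Ramanujan's approximation is used for the ellipse perimeter purely as a convenience of the example.

```python
import numpy as np

def excluded_area_rectangle(l, w):
    """Orientation-averaged excluded area of identical l x w rectangles."""
    return 2 * l * w + 2 * (l + w) ** 2 / np.pi

def excluded_area_ellipse(a, b):
    """Orientation-averaged excluded area of identical ellipses with semi-axes a, b."""
    # Ramanujan's approximation is used here for the perimeter C.
    C = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))
    return 2 * np.pi * a * b + C ** 2 / (2 * np.pi)

def critical_number_density(avg_excluded_area, c=3.3):
    """N_c ~ c / <A_ex>, with c taken from the quoted 3.1-3.5 range."""
    return c / avg_excluded_area

print(critical_number_density(excluded_area_rectangle(2.0, 0.5)))
print(critical_number_density(excluded_area_ellipse(1.0, 0.5)))
```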

Applications

The applications of percolation theory are varied and range from materials science to wireless communication systems. Often the work involves showing that a type of phase transition occurs in the system.

Wireless networks

Wireless networks are sometimes best represented with stochastic models owing to their complexity and unpredictability, hence continuum percolation has been used to develop stochastic geometry models of wireless networks. For example, the tools of continuum percolation theory and coverage processes have been used to study the coverage and connectivity of sensor networks. One of the main limitations of these networks is energy consumption, as each node usually has a battery and an embedded form of energy harvesting. To reduce energy consumption in sensor networks, various sleep schemes have been suggested that entail having a subcollection of nodes go into a low energy-consuming sleep mode. These sleep schemes affect the coverage and connectivity of sensor networks. Simple power-saving models have been proposed, such as the simple uncoordinated 'blinking' model where each node independently powers down with some fixed probability. Using the tools of percolation theory, a blinking Boolean Poisson model has been analyzed to study the latency and connectivity effects of such a simple power scheme.
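
A minimal sketch of such a blinking scheme is given below: each node independently sleeps with a fixed probability, the awake nodes form an independently thinned Poisson process, and the covered fraction of a finite window is re-estimated on a sampling grid. The intensity, window size, sensing radius, and sleep probability are all illustrative assumptions, not values from any particular study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sensor locations: a homogeneous Poisson process on a 10 x 10 window
# (intensity, window, sensing radius, and sleep probability p are all
# illustrative choices).
lam, window, r, p = 2.0, 10.0, 0.5, 0.3
n = rng.poisson(lam * window * window)
nodes = rng.uniform(0.0, window, size=(n, 2))

# Uncoordinated blinking: each node independently sleeps with probability p,
# so the awake nodes form an independently thinned Poisson process.
awake = nodes[rng.random(n) > p]

# Estimate the fraction of the window covered by at least one awake sensor.
xs = np.linspace(0.0, window, 50)
grid = np.array([(x, y) for x in xs for y in xs])
d2 = np.sum((grid[:, None, :] - awake[None, :, :]) ** 2, axis=2)
covered_fraction = np.mean(np.any(d2 <= r * r, axis=1))
print(f"estimated covered fraction with sleep probability {p}: {covered_fraction:.2f}")
```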