Gibbs sampling


In statistics, Gibbs sampling or a Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations which are approximated from a specified multivariate probability distribution, when direct sampling is difficult. This sequence can be used to approximate the joint distribution; to approximate the marginal distribution of one of the variables, or some subset of the variables; or to compute an integral. Typically, some of the variables correspond to observations whose values are known, and hence do not need to be sampled.
Gibbs sampling is commonly used as a means of statistical inference, especially Bayesian inference. It is a randomized algorithm, and is an alternative to deterministic algorithms for statistical inference such as the expectation-maximization algorithm.
As with other MCMC algorithms, Gibbs sampling generates a Markov chain of samples, each of which is correlated with nearby samples. As a result, care must be taken if independent samples are desired. Generally, samples from the beginning of the chain (the burn-in period) may not accurately represent the desired distribution and are usually discarded.

Introduction

Gibbs sampling is named after the physicist Josiah Willard Gibbs, in reference to an analogy between the sampling algorithm and statistical physics. The algorithm was described by brothers Stuart and Donald Geman in 1984, some eight decades after the death of Gibbs.
In its basic version, Gibbs sampling is a special case of the Metropolis–Hastings algorithm. However, in its extended versions, it can be considered a general framework for sampling from a large set of variables by sampling each variable in turn, and can incorporate the Metropolis–Hastings algorithm to implement one or more of the sampling steps.
Gibbs sampling is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy to sample from. The Gibbs sampling algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown that the sequence of samples constitutes a Markov chain, and the stationary distribution of that Markov chain is just the sought-after joint distribution.
Gibbs sampling is particularly well-adapted to sampling the posterior distribution of a Bayesian network, since Bayesian networks are typically specified as a collection of conditional distributions.

Implementation

Gibbs sampling, in its basic incarnation, is a special case of the Metropolis–Hastings algorithm. The point of Gibbs sampling is that given a multivariate distribution it is simpler to sample from a conditional distribution than to marginalize by integrating over a joint distribution. Suppose we want to obtain $k$ samples of $\mathbf{X} = (x_1, \dots, x_n)$ from a joint distribution $p(x_1, \dots, x_n)$. Denote the $i$th sample by $\mathbf{X}^{(i)} = (x_1^{(i)}, \dots, x_n^{(i)})$. We proceed as follows:
  1. We begin with some initial value $\mathbf{X}^{(0)}$.
  2. We want the next sample. Call this next sample $\mathbf{X}^{(i+1)}$. Since $\mathbf{X}^{(i+1)} = (x_1^{(i+1)}, x_2^{(i+1)}, \dots, x_n^{(i+1)})$ is a vector, we sample each component of the vector, $x_j^{(i+1)}$, from the distribution of that component conditioned on all other components sampled so far. But there is a catch: we condition on $\mathbf{X}^{(i+1)}$'s components up to $x_{j-1}^{(i+1)}$, and thereafter condition on $\mathbf{X}^{(i)}$'s components, starting from $x_{j+1}^{(i)}$ to $x_n^{(i)}$. To achieve this, we sample the components in order, starting from the first component. More formally, to sample $x_j^{(i+1)}$, we update it according to the distribution specified by $p\left(x_j^{(i+1)} \mid x_1^{(i+1)}, \dots, x_{j-1}^{(i+1)}, x_{j+1}^{(i)}, \dots, x_n^{(i)}\right)$. Note that we use the value that the $(j+1)$th component had in the $i$th sample, not the $(i+1)$th sample.
  3. Repeat the above step $k$ times.
If such sampling is performed, these important facts hold:
  - The samples approximate the joint distribution of all variables.
  - The marginal distribution of any subset of variables can be approximated by simply considering the samples for that subset of variables, ignoring the rest.
  - The expected value of any variable can be approximated by averaging over all the samples.
When performing the sampling:
  - The initial values of the variables can be determined randomly or by some other algorithm such as expectation–maximization.
  - It is common to ignore some number of samples at the beginning (the so-called burn-in period), and then consider only every $m$th sample when averaging values to compute an expectation, since successive samples are not independent of each other but form a Markov chain with some amount of correlation.
Furthermore, the conditional distribution of one variable given all others is proportional to the joint distribution:

$$p(x_j \mid x_1, \dots, x_{j-1}, x_{j+1}, \dots, x_n) = \frac{p(x_1, \dots, x_n)}{p(x_1, \dots, x_{j-1}, x_{j+1}, \dots, x_n)} \propto p(x_1, \dots, x_n)$$

"Proportional to" in this case means that the denominator is not a function of $x_j$ and thus is the same for all values of $x_j$; it forms part of the normalization constant for the distribution over $x_j$. In practice, to determine the nature of the conditional distribution of a factor $x_j$, it is easiest to factor the joint distribution according to the individual conditional distributions defined by the graphical model over the variables, ignore all factors that are not functions of $x_j$, and then reinstate the normalization constant at the end, as necessary. In practice, this means doing one of three things:
  1. If the distribution is discrete, the individual probabilities of all possible values of $x_j$ are computed, and then summed to find the normalization constant.
  2. If the distribution is continuous and of a known form, the normalization constant will also be known.
  3. In other cases, the normalization constant can usually be ignored, as most sampling methods do not require it.
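As a concrete illustration of this procedure, the following is a minimal sketch in Python (with assumed toy parameters) of a Gibbs sampler for a standard bivariate normal distribution with correlation $\rho$, where both full conditionals are known univariate normals and no normalization constant is ever needed:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=5000, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Both full conditionals are known in closed form:
        x | y ~ N(rho * y, 1 - rho**2)
        y | x ~ N(rho * x, 1 - rho**2)
    """
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0              # arbitrary initial value X^(0)
    sd = np.sqrt(1.0 - rho**2)   # conditional standard deviation
    samples = np.empty((n_samples, 2))
    for i in range(burn_in + n_samples):
        x = rng.normal(rho * y, sd)  # sample x given the current y
        y = rng.normal(rho * x, sd)  # sample y given the freshly updated x
        if i >= burn_in:
            samples[i - burn_in] = (x, y)
    return samples

samples = gibbs_bivariate_normal(rho=0.8)
print(samples.mean(axis=0))          # close to (0, 0)
print(np.corrcoef(samples.T)[0, 1])  # close to 0.8
```

Because successive draws are correlated, the sketch discards an initial burn-in, as described above.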

Inference

Gibbs sampling is commonly used for statistical inference. The idea is that observed data is incorporated into the sampling process by creating separate variables for each piece of observed data and fixing the variables in question to their observed values, rather than sampling from those variables. The distribution of the remaining variables is then effectively a posterior distribution conditioned on the observed data.
The most likely value of a desired parameter could then simply be selected by choosing the sample value that occurs most commonly; this is essentially equivalent to maximum a posteriori estimation of a parameter. More commonly, however, the expected value of the sampled values is chosen; this is a Bayes estimator that takes advantage of the additional data about the entire distribution that is available from Bayesian sampling, whereas a maximization algorithm such as expectation maximization is capable of only returning a single point from the distribution. For example, for a unimodal distribution the mean is usually similar to the mode, but if the distribution is skewed in one direction, the mean will be moved in that direction, which effectively accounts for the extra probability mass in that direction.
Although some of the variables typically correspond to parameters of interest, others are uninteresting variables introduced into the model to properly express the relationships among variables. Although the sampled values represent the joint distribution over all variables, the nuisance variables can simply be ignored when computing expected values or modes; this is equivalent to marginalizing over the nuisance variables. When a value for multiple variables is desired, the expected value is simply computed over each variable separately.
Supervised learning, unsupervised learning and semi-supervised learning can all be handled by simply fixing the values of all variables whose values are known, and sampling from the remainder.
For observed data, there will be one variable for each observation—rather than, for example, one variable corresponding to the sample mean or sample variance of a set of observations. In fact, there generally will be no variables at all corresponding to concepts such as "sample mean" or "sample variance". Instead, in such a case there will be variables representing the unknown true mean and true variance, and the determination of sample values for these variables results automatically from the operation of the Gibbs sampler.
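For instance, the following minimal sketch (toy data and weakly informative conjugate hyperparameters, all assumed for illustration) samples the unknown true mean and variance of Gaussian observations; the observed data values are held fixed and never resampled:

```python
import numpy as np

# Gibbs sampler for the unknown mean mu and variance sigma2 of Gaussian data.
# Priors (assumed): mu ~ N(mu0, 1/kappa0), sigma2 ~ Inverse-gamma(a0, b0).
rng = np.random.default_rng(1)
data = rng.normal(3.0, 2.0, size=50)   # observed data (held fixed)
n = len(data)

mu0, kappa0 = 0.0, 1e-2                # weak prior on the mean (precision kappa0)
a0, b0 = 1e-2, 1e-2                    # weak inverse-gamma prior on the variance

mu, sigma2 = 0.0, 1.0                  # initial values
draws = []
for _ in range(6000):
    # mu | sigma2, data is normal, by conjugacy
    prec = kappa0 + n / sigma2
    mean = (kappa0 * mu0 + data.sum() / sigma2) / prec
    mu = rng.normal(mean, np.sqrt(1.0 / prec))
    # sigma2 | mu, data is inverse-gamma, by conjugacy
    a = a0 + n / 2.0
    b = b0 + 0.5 * np.sum((data - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(a, 1.0 / b)  # draw gamma, invert for inv-gamma
    draws.append((mu, sigma2))

draws = np.array(draws)[1000:]         # discard burn-in
print(draws.mean(axis=0))              # posterior means, near (3, 4)
```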
Generalized linear models can sometimes be handled by Gibbs sampling as well. For example, probit regression for determining the probability of a given binary choice, with normally distributed priors placed over the regression coefficients, can be implemented with Gibbs sampling because it is possible to add additional variables and take advantage of conjugacy. However, logistic regression cannot be handled this way. One possibility is to approximate the logistic function with a mixture of normal distributions. More commonly, however, Metropolis–Hastings is used instead of Gibbs sampling.
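As a sketch of the probit case (an Albert–Chib-style data-augmentation sampler; the data, prior values, and dimensions below are assumptions for the demo), the added latent variables have truncated-normal conditionals and the coefficients have a conjugate normal conditional:

```python
import numpy as np
from scipy import stats

# Probit regression via Gibbs sampling with auxiliary latent variables:
# z_i ~ N(x_i' beta, 1), y_i = 1 iff z_i > 0, prior beta ~ N(0, 100 I).
rng = np.random.default_rng(0)
n, p = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
beta_true = np.array([-0.5, 1.5])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

V0inv = np.eye(p) / 100.0            # prior precision (assumed)
Vn = np.linalg.inv(X.T @ X + V0inv)  # posterior covariance (unit latent variance)
L = np.linalg.cholesky(Vn)

beta = np.zeros(p)
draws = []
for _ in range(3000):
    # z_i | beta, y_i: normal truncated to (0, inf) if y_i = 1, else (-inf, 0]
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)   # truncnorm bounds are standardized
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + stats.truncnorm.rvs(lo, hi, random_state=rng)
    # beta | z: multivariate normal (conjugate update)
    beta = Vn @ (X.T @ z) + L @ rng.normal(size=p)
    draws.append(beta)

print(np.mean(draws[500:], axis=0))  # near beta_true
```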

Mathematical background

Suppose that a sample $X$ is taken from a distribution depending on a parameter vector $\theta \in \Theta$ of length $d$, with prior distribution $g(\theta_1, \dots, \theta_d)$. It may be that $d$ is very large and that numerical integration to find the marginal densities of the $\theta_i$ would be computationally expensive. Then an alternative method of calculating the marginal densities is to create a Markov chain on the space $\Theta$ by repeating these two steps:
  1. Pick a random index $1 \le j \le d$.
  2. Pick a new value for $\theta_j$ according to $g(\theta_1, \dots, \theta_{j-1}, \cdot, \theta_{j+1}, \dots, \theta_d)$.
These steps define a reversible Markov chain with the desired invariant distribution $g$. This can be proved as follows. Define $x \sim_j y$ if $x_i = y_i$ for all $i \neq j$, and let $p_{xy}$ denote the probability of a jump from $x \in \Theta$ to $y \in \Theta$. Then, the transition probabilities are

$$p_{xy} = \begin{cases} \dfrac{1}{d} \dfrac{g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z)} & \text{if } x \sim_j y, \\ 0 & \text{otherwise.} \end{cases}$$

So

$$g(x)\, p_{xy} = \frac{1}{d} \frac{g(x)\, g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z)} = \frac{1}{d} \frac{g(y)\, g(x)}{\sum_{z \in \Theta: z \sim_j y} g(z)} = g(y)\, p_{yx},$$

since $x \sim_j y$ is an equivalence relation, so the two sums range over the same set. Thus the detailed balance equations are satisfied, implying the chain is reversible and it has invariant distribution $g$.
In practice, the index $j$ is not chosen at random, and the chain cycles through the indexes in order. In general this gives a non-stationary Markov process, but each individual step will still be reversible, and the overall process will still have the desired stationary distribution.
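A small numeric check of this argument, on an assumed 2x2 toy distribution, can be made by building the random-scan transition matrix explicitly and verifying detailed balance and invariance:

```python
import numpy as np

# Toy verification: random-scan Gibbs on a distribution g over two binary
# coordinates. All values here are assumed for illustration.
g = np.array([[0.1, 0.2],
              [0.3, 0.4]])            # g[a, b] = joint probability of (a, b)
states = [(a, b) for a in (0, 1) for b in (0, 1)]
d = 2
P = np.zeros((4, 4))
for i, x in enumerate(states):
    for k, y in enumerate(states):
        for j in range(d):            # index j is picked with probability 1/d
            if all(x[m] == y[m] for m in range(d) if m != j):
                # conditional probability of y[j] given x's other coordinate
                norm = sum(g[z] for z in states
                           if all(z[m] == x[m] for m in range(d) if m != j))
                P[i, k] += (1.0 / d) * g[y] / norm

gv = np.array([g[s] for s in states])
M = gv[:, None] * P                   # M[x, y] = g(x) p_xy
assert np.allclose(M, M.T)            # detailed balance: g(x) p_xy = g(y) p_yx
assert np.allclose(gv @ P, gv)        # g is the invariant distribution
print("detailed balance and invariance hold")
```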

Variations and extensions

Numerous variations of the basic Gibbs sampler exist. The goal of these variations is to reduce the autocorrelation between samples sufficiently to overcome any added computational costs.

Blocked Gibbs sampler

A blocked Gibbs sampler groups two or more variables together and samples from their joint distribution conditioned on all other variables, rather than sampling from each one individually. For example, in a hidden Markov model, a blocked Gibbs sampler might sample from all the latent variables making up the Markov chain in one go, using the forward-backward algorithm.

Collapsed Gibbs sampler

A collapsed Gibbs sampler integrates out (marginalizes over) one or more variables when sampling for some other variable. For example, imagine that a model consists of three variables A, B, and C. A simple Gibbs sampler would sample from p(A | B, C), then p(B | A, C), then p(C | A, B). A collapsed Gibbs sampler might replace the sampling step for A with a sample taken from the marginal distribution p(A | C), with variable B integrated out in this case. The distribution of a variable A that arises when collapsing a parent variable B is called a compound distribution; sampling from this distribution is generally tractable when B is the conjugate prior for A, particularly when A and B are members of the exponential family.

Collapsing Dirichlet distributions
In hierarchical Bayesian models with categorical variables, such as latent Dirichlet allocation and various other models used in natural language processing, it is quite common to collapse out the Dirichlet distributions that are typically used as prior distributions over the categorical variables. This collapsing introduces dependencies among all the categorical variables dependent on a given Dirichlet prior, and the joint distribution of these variables after collapsing is a Dirichlet-multinomial distribution. The conditional distribution of a given categorical variable in this distribution, conditioned on the others, assumes an extremely simple form that makes Gibbs sampling even easier than if the collapsing had not been done. The rules are as follows:
  1. Collapsing out a Dirichlet prior node affects only the parent and child nodes of the prior. Since the parent is often a constant, it is typically only the children that we need to worry about.
  2. Collapsing out a Dirichlet prior introduces dependencies among all the categorical children dependent on that prior, but no extra dependencies among any other categorical children.
  3. After collapsing, the conditional distribution of one dependent child given the others assumes a very simple form: the probability of seeing a given value is proportional to the sum of the corresponding hyperprior for this value and the count of all of the other dependent nodes assuming the same value (see the sketch after this list). Nodes not dependent on the same prior must not be counted. The same rule applies in other iterative inference methods, such as variational Bayes or expectation maximization; however, if the method involves keeping partial counts, then the partial counts for the value in question must be summed across all the other dependent nodes. Sometimes this summed-up partial count is termed the expected count or similar. The probability is proportional to the resulting value; the actual probability must be determined by normalizing across all the possible values that the categorical variable can take.
  4. If a given categorical node has dependent children, the value computed in the previous step must be multiplied by the actual conditional probabilities of all children given their parents. See the article on the Dirichlet-multinomial distribution for a detailed discussion.
  5. In the case where the group membership of the nodes dependent on a given Dirichlet prior may change dynamically depending on some other variable, the same expected counts are still computed, but need to be done carefully so that the correct set of variables is included. See the article on the Dirichlet-multinomial distribution for more discussion, including in the context of a topic model.
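The following is a minimal sketch of these rules for latent Dirichlet allocation (toy corpus, hyperparameters, and sizes all assumed); rule 3 appears as the unnormalized count-plus-hyperparameter product inside the loop:

```python
import numpy as np

# Collapsed Gibbs sampling for LDA: both Dirichlet priors are collapsed out,
# so the probability of assigning topic k to a word w in document d is
# proportional to (ndk[d,k] + alpha) * (nkw[k,w] + beta) / (nk[k] + V*beta),
# with the current assignment excluded from the counts.
rng = np.random.default_rng(0)

docs = [[0, 1, 2, 1], [2, 3, 3, 0], [1, 1, 0, 2]]  # word ids per document
V, K = 4, 2                  # vocabulary size, number of topics
alpha, beta = 0.5, 0.5       # symmetric Dirichlet hyperparameters

ndk = np.zeros((len(docs), K))   # topic counts per document
nkw = np.zeros((K, V))           # word counts per topic
nk = np.zeros(K)                 # total words per topic
z = [[rng.integers(K) for _ in doc] for doc in docs]  # random initial topics
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for _ in range(200):             # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]          # remove current assignment from the counts
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # full conditional: counts of the *other* assignments plus priors
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())   # normalize, then sample
            z[d][i] = k          # add the new assignment back
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

print(nkw)  # learned topic-word counts
```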
Collapsing other conjugate priors
In general, any conjugate prior can be collapsed out, if its only children have distributions conjugate to it. The relevant math is discussed in the article on compound distributions. If there is only one child node, the result will often assume a known distribution. For example, collapsing an inverse-gamma-distributed variance out of a network with a single Gaussian child will yield a Student's t-distribution.
If there are multiple child nodes, they will all become dependent, as in the Dirichlet-categorical case. The resulting joint distribution will have a closed form that resembles in some ways the compound distribution, although it will have a product of a number of factors, one for each child node, in it.
In addition, and most importantly, the resulting conditional distribution of one of the child nodes given the others will have the same density as the posterior predictive distribution of all the remaining child nodes. Furthermore, the posterior predictive distribution has the same density as the basic compound distribution of a single node, although with different parameters. The general formula is given in the article on compound distributions.
For example, given a Bayes network with a set of conditionally independent identically distributed Gaussian-distributed nodes with conjugate prior distributions placed on the mean and variance, the conditional distribution of one node given the others after compounding out both the mean and variance will be a Student's t-distribution. Similarly, the result of compounding out the gamma prior of a number of Poisson-distributed nodes causes the conditional distribution of one node given the others to assume a negative binomial distribution.
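The Poisson case can be checked numerically; the sketch below (assumed parameter values) compares a Gamma-Poisson compound sample against the corresponding negative binomial probability mass function:

```python
import numpy as np
from scipy import stats

# Drawing a rate from Gamma(shape=a, scale=s) and then a count from
# Poisson(rate) yields a negative binomial marginal; in scipy's
# parameterization this is nbinom(n=a, p=1/(1+s)).
rng = np.random.default_rng(0)
a, s = 3.0, 2.0
rates = rng.gamma(a, s, size=200_000)
counts = rng.poisson(rates)

nb = stats.nbinom(a, 1.0 / (1.0 + s))
for k in range(5):
    # empirical frequency vs. exact pmf: the two columns should agree
    print(k, (counts == k).mean().round(4), nb.pmf(k).round(4))
```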
In these cases where compounding produces a well-known distribution, efficient sampling procedures often exist, and using them will often be more efficient than not collapsing, and instead sampling both prior and child nodes separately. However, in the case where the compound distribution is not well-known, it may not be easy to sample from, since it generally will not belong to the exponential family and typically will not be log-concave.
In the case where the child nodes of the collapsed nodes themselves have children, the conditional distribution of one of these child nodes given all other nodes in the graph will have to take into account the distribution of these second-level children. In particular, the resulting conditional distribution will be proportional to a product of the compound distribution as defined above, and the conditional distributions of all of the child nodes given their parents. This follows from the fact that the full conditional distribution is proportional to the joint distribution. If the child nodes of the collapsed nodes are continuous, this distribution will generally not be of a known form, and may well be difficult to sample from despite the fact that a closed form can be written, for the same reasons as described above for non-well-known compound distributions. However, in the particular case that the child nodes are discrete, sampling is feasible, regardless of whether the children of these child nodes are continuous or discrete. In fact, the principle involved here is described in fair detail in the article on the Dirichlet-multinomial distribution.

Gibbs sampler with ordered overrelaxation

A Gibbs sampler with ordered overrelaxation samples a given odd number of candidate values for $x_j^{(i+1)}$ at any given step and sorts them, along with the single value for $x_j^{(i)}$, according to some well-defined ordering. If $x_j^{(i)}$ is the $s$th smallest in the sorted list, then $x_j^{(i+1)}$ is selected as the $s$th largest. Because the new value is still marginally a draw from the correct conditional distribution, the stationary distribution is preserved, while the induced negative correlation between successive samples suppresses random-walk behaviour.

Other extensions

It is also possible to extend Gibbs sampling in various ways. For example, in the case of variables whose conditional distribution is not easy to sample from, a single iteration of slice sampling or the Metropolis–Hastings algorithm can be used to sample from the variables in question. A more efficient alternative is to apply adaptive rejection sampling (ARS) methods to sample from the full-conditional densities. When the ARS techniques cannot be applied, adaptive rejection Metropolis sampling algorithms are often employed. Further alternatives can be found in the literature. The auxiliary samples generated by the internal sampler can be recycled within the final Gibbs estimators, improving their efficiency at no extra cost.
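A minimal sketch of such a hybrid ("Metropolis-within-Gibbs") scheme, on an assumed toy target whose conditional for one coordinate is non-standard while the other remains an exact Gibbs draw:

```python
import numpy as np

# Target (assumed for illustration): p(x, y) proportional to
# exp(-x^2 y^2 - x^4 / 10 - y^2). The conditional of x given y is
# non-Gaussian (quartic term), so we use one Metropolis-Hastings step for it;
# the conditional of y given x is exactly N(0, 1 / (2 (1 + x^2))).
rng = np.random.default_rng(0)

def log_target(x, y):
    return -x * x * y * y - x ** 4 / 10.0 - y * y

x, y = 0.0, 0.0
samples = []
for _ in range(20_000):
    # MH step for x | y: random-walk proposal, usual acceptance ratio
    xp = x + rng.normal(0.0, 0.5)
    if np.log(rng.random()) < log_target(xp, y) - log_target(x, y):
        x = xp
    # exact Gibbs step for y | x
    y = rng.normal(0.0, np.sqrt(0.5 / (1.0 + x * x)))
    samples.append((x, y))

print(np.mean(samples, axis=0))  # near (0, 0) by symmetry
```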
It is also possible to incorporate variables that are not random variables, but whose value is deterministically computed from other variables. Generalized linear models, e.g. logistic regression, can be incorporated in this fashion.

Failure modes

There are two ways that Gibbs sampling can fail. The first is when there are islands of high-probability states, with no paths between them. For example, consider a probability distribution over 2-bit vectors, where the vectors (0,0) and (1,1) each have probability ½, but the other two vectors (0,1) and (1,0) have probability zero. Gibbs sampling will become trapped in one of the two high-probability vectors, and will never reach the other one. More generally, for any distribution over high-dimensional, real-valued vectors, if two particular elements of the vector are perfectly correlated, those two elements will become stuck, and Gibbs sampling will never be able to change them.
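This trapping is easy to reproduce; the sketch below (setup assumed for illustration) runs single-site updates on the two-island distribution above and never leaves the starting island:

```python
import numpy as np

# The target puts probability 1/2 on (0,0) and (1,1) and zero on (0,1) and
# (1,0). Conditioned on the other bit, each bit must equal it:
# p(x_j = 1 | other = 1) = (1/2) / (1/2 + 0) = 1, and similarly for 0.
rng = np.random.default_rng(0)
x = [1, 1]                              # start in the (1,1) island
for _ in range(10_000):
    for j in (0, 1):
        p_one = 1.0 if x[1 - j] == 1 else 0.0  # degenerate full conditional
        x[j] = 1 if rng.random() < p_one else 0
print(x)  # still [1, 1]: the sampler never reaches (0, 0)
```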
The second problem can happen even when all states have nonzero probability and there is only a single island of high-probability states. For example, consider a probability distribution over 100-bit vectors, where the all-zeros vector occurs with probability ½, and all other vectors are equally probable, and so have a probability of $\frac{1}{2(2^{100}-1)}$ each. If you want to estimate the probability of the zero vector, it would be sufficient to take 100 or 1000 samples from the true distribution. That would very likely give an answer very close to ½. But you would probably have to take more than $2^{100}$ samples from Gibbs sampling to get the same result. No computer could do this in a lifetime.
This problem occurs no matter how long the burn-in period is. This is because in the true distribution, the zero vector occurs half the time, and those occurrences are randomly mixed in with the nonzero vectors. Even a small sample will see both zero and nonzero vectors. But Gibbs sampling will alternate between returning only the zero vector for long periods, then only nonzero vectors for long periods. Thus convergence to the true distribution is extremely slow, requiring much more than $2^{100}$ steps; taking this many steps is not computationally feasible in a reasonable time period. The slow convergence here can be seen as a consequence of the curse of dimensionality.
A problem like this can be solved by block sampling the entire 100-bit vector at once.

Software

The OpenBUGS software (Bayesian inference Using Gibbs Sampling) does a Bayesian analysis of complex statistical models using Markov chain Monte Carlo.
JAGS (Just Another Gibbs Sampler) is a GPL program for analysis of Bayesian hierarchical models using Markov chain Monte Carlo.
Church is free software for performing Gibbs inference over arbitrary distributions that are specified as probabilistic programs.
PyMC3 is an open-source Python library for Bayesian learning of general probabilistic graphical models, with advanced features and an easy-to-use interface.
IA2RMS is a Matlab implementation of the Independent Doubly Adaptive Rejection Metropolis Sampling method for drawing from the full-conditional densities.
A Julia package is also available that allows multiple sampler types to be run as components of Gibbs sampling.