Beta distribution


In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] or (0, 1), parameterized by two positive shape parameters, denoted by α and β, that appear as exponents of the random variable and control the shape of the distribution. The generalization to multiple variables is called a Dirichlet distribution.
The beta distribution has been applied to model the behavior of random variables limited to intervals of finite length in a wide variety of disciplines.
In Bayesian inference, the beta distribution is the conjugate prior probability distribution for the Bernoulli, binomial, negative binomial and geometric distributions. The beta distribution is a suitable model for the random behavior of percentages and proportions.
The formulation of the beta distribution discussed here is also known as the beta distribution of the first kind, whereas beta distribution of the second kind is an alternative name for the beta prime distribution.

Definitions

Probability density function

The probability density function of the beta distribution, for 0 ≤ x ≤ 1 and shape parameters α, β > 0, is a power function of the variable x and of its reflection (1 − x) as follows:
f(x; α, β) = x^(α−1) (1 − x)^(β−1) / B(α, β) = Γ(α + β) / (Γ(α) Γ(β)) · x^(α−1) (1 − x)^(β−1)
where Γ is the gamma function. The beta function, B(α, β), is a normalization constant that ensures the total probability is 1. In the above equations x is a realization (an observed value that actually occurred) of a random process X.
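As an illustrative sketch (Python, using only the standard library; this example is not part of the standard presentation of the distribution), the density can be evaluated directly from the gamma-function form above; scipy.stats.beta.pdf gives the same values:

```python
# Minimal sketch of the beta pdf, assuming 0 < x < 1 and alpha, beta > 0.
from math import lgamma, exp, log

def beta_pdf(x, a, b):
    """f(x; a, b) = Gamma(a+b)/(Gamma(a)*Gamma(b)) * x**(a-1) * (1-x)**(b-1)."""
    if not 0.0 < x < 1.0:
        raise ValueError("x must lie strictly between 0 and 1")
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)   # equals -ln B(a, b)
    return exp(log_norm + (a - 1.0) * log(x) + (b - 1.0) * log(1.0 - x))

print(beta_pdf(0.5, 2.0, 2.0))   # 1.5, the peak of the symmetric Beta(2, 2) density
```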
This definition includes both ends x = 0 and x = 1, which is consistent with definitions for other continuous distributions supported on a bounded interval that are special cases of the beta distribution, for example the arcsine distribution, and consistent with several authors, like N. L. Johnson and S. Kotz. However, the inclusion of x = 0 and x = 1 does not work for α, β < 1; accordingly, several other authors, including W. Feller, choose to exclude the ends x = 0 and x = 1, and consider 0 < x < 1 instead.
Several authors, including N. L. Johnson and S. Kotz, use the symbols p and q for the shape parameters of the beta distribution, reminiscent of the symbols traditionally used for the parameters of the Bernoulli distribution, because the beta distribution approaches the Bernoulli distribution in the limit when both shape parameters α and β approach the value of zero.
In the following, a random variable X beta-distributed with parameters α and β will be denoted by X ~ Beta(α, β).
Other notations for beta-distributed random variables are also used in the statistical literature.

Cumulative distribution function

The cumulative distribution function is
F(x; α, β) = B(x; α, β) / B(α, β) = I_x(α, β)
where B(x; α, β) is the incomplete beta function and I_x(α, β) is the regularized incomplete beta function.
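A brief sketch of this relationship (Python with SciPy; the function names are SciPy's, the example values are arbitrary):

```python
# The beta CDF equals the regularized incomplete beta function I_x(alpha, beta).
from scipy import special, stats

a, b, x = 2.0, 5.0, 0.3
print(special.betainc(a, b, x))   # regularized incomplete beta function I_x(a, b)
print(stats.beta.cdf(x, a, b))    # same value from the distribution object
```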

Alternative parametrizations

Two parameters

Mean and sample size
The beta distribution may also be reparameterized in terms of its mean μ and the sum of both shape parameters, ν = α + β, called the "sample size". Denoting by α_posterior and β_posterior the shape parameters of the posterior beta distribution resulting from applying Bayes theorem to a binomial likelihood function and a prior probability, the interpretation of the sum of both shape parameters as a sample size, ν = α_posterior + β_posterior, is only correct for the Haldane prior probability Beta(0, 0). Specifically, for the Bayes (uniform) prior Beta(1, 1) the correct interpretation would be sample size = α_posterior + β_posterior − 2, or ν = (sample size) + 2. For sample sizes much larger than 2, the difference between these two priors becomes negligible. In the rest of this article ν = α + β will be referred to as "sample size", but one should remember that it is, strictly speaking, the "sample size" of a binomial likelihood function only when using a Haldane prior Beta(0, 0) in Bayes theorem.
This parametrization may be useful in Bayesian parameter estimation. For example, one may administer a test to a number of individuals. If it is assumed that each person's score is drawn from a population-level Beta distribution, then an important statistic is the mean of this population-level distribution. The mean and sample size parameters are related to the shape parameters α and β via α = μν and β = (1 − μ)ν.
Under this parametrization, one may place an uninformative prior probability over the mean, and a vague prior probability over the positive reals for the sample size, if they are independent, and prior data and/or beliefs justify it.
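A minimal sketch of this reparametrization (Python; the helper names are illustrative, not from any library), assuming μ = α/(α + β) and ν = α + β as above:

```python
# Convert between (alpha, beta) and the (mean, "sample size") parametrization.
def shape_from_mean_samplesize(mu, nu):
    """Return (alpha, beta) from mean mu in (0, 1) and nu = alpha + beta > 0."""
    return mu * nu, (1.0 - mu) * nu

def mean_samplesize_from_shape(a, b):
    """Return (mu, nu) from shape parameters alpha, beta > 0."""
    return a / (a + b), a + b

print(shape_from_mean_samplesize(0.25, 8.0))   # (2.0, 6.0)
print(mean_samplesize_from_shape(2.0, 6.0))    # (0.25, 8.0)
```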
Mode and concentration
The mode and "concentration" can also be used to calculate the parameters for a beta distribution.
Mean (allele frequency) and (Wright's) genetic distance between two populations
The Balding–Nichols model is a two-parameter parametrization of the beta distribution used in population genetics. It is a statistical description of the allele frequencies in the components of a sub-divided population:
where ν = α + β = (1 − F)/F and μ = α/(α + β); here F is the (Wright's) genetic distance between two populations.
See the articles Balding–Nichols model, F-statistics, fixation index and coefficient of relationship, for further information.
Mean and variance
Solving the system of equations given in the above sections as the equations for the mean and the variance of the beta distribution in terms of the original parameters α and β, one can express the α and β parameters in terms of the mean μ and the variance var:
α = μ (μ(1 − μ)/var − 1), β = (1 − μ) (μ(1 − μ)/var − 1), valid when var < μ(1 − μ).
This parametrization of the beta distribution may lead to a more intuitive understanding than the one based on the original parameters α and β. For example, by expressing the mode, skewness, excess kurtosis and differential entropy in terms of the mean and the variance:
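A minimal sketch of the (mean, variance) reparametrization above (Python; the helper name is illustrative), valid only when var < μ(1 − μ):

```python
# Recover (alpha, beta) from the mean and the variance of a beta distribution.
def shape_from_mean_variance(mu, var):
    if not 0.0 < var < mu * (1.0 - mu):
        raise ValueError("need 0 < var < mu*(1 - mu)")
    common = mu * (1.0 - mu) / var - 1.0
    return mu * common, (1.0 - mu) * common

# Beta(2, 6) has mean 0.25 and variance 12/576; the inverse map recovers (2, 6).
print(shape_from_mean_variance(0.25, 12.0 / 576.0))
```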

Four parameters

A beta distribution with the two shape parameters α and β is supported on the range [0, 1] or (0, 1). It is possible to alter the location and scale of the distribution by introducing two further parameters representing the minimum, a, and maximum, c, values of the distribution, by a linear transformation substituting the non-dimensional variable x in terms of the new variable y and the parameters a and c:
x = (y − a)/(c − a), so that a ≤ y ≤ c.
The probability density function of the four parameter beta distribution is equal to the two parameter distribution, scaled by the range (c − a), and with the y variable shifted and scaled as follows:
f(y; α, β, a, c) = f((y − a)/(c − a); α, β) / (c − a) = (y − a)^(α−1) (c − y)^(β−1) / ((c − a)^(α+β−1) B(α, β))
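A short sketch of this change of variables (Python with SciPy; SciPy's loc/scale arguments implement the same shift and rescaling, with loc = a and scale = c − a):

```python
# Four-parameter beta density as the rescaled two-parameter density.
from scipy import stats

def beta4_pdf(y, alpha, beta, a, c):
    x = (y - a) / (c - a)                       # map [a, c] back to [0, 1]
    return stats.beta.pdf(x, alpha, beta) / (c - a)

print(beta4_pdf(3.0, 2.0, 5.0, 1.0, 9.0))
print(stats.beta.pdf(3.0, 2.0, 5.0, loc=1.0, scale=8.0))   # identical value
```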
That a random variable Y is Beta-distributed with four parameters α, β, a, and c will be denoted by Y ~ Beta(α, β, a, c).
The measures of central location are scaled and shifted, as follows:

The shape parameters of Y can be written in terms of its mean and variance as
The statistical dispersion measures are scaled by the range, linearly for the mean deviation and nonlinearly for the variance:
Since the skewness and excess kurtosis are non-dimensional quantities, they are independent of the parameters a and c, and therefore equal to the expressions given above in terms of X :

Properties

Measures of central tendency

Mode

The mode of a Beta distributed random variable X with α, β > 1 is the most likely value of the distribution, and is given by the following expression:
mode = (α − 1)/(α + β − 2)
When both parameters are less than one, this is the anti-mode: the lowest point of the probability density curve.
Letting α = β, the expression for the mode simplifies to 1/2, showing that for α = β > 1 the mode is at the center of the distribution: it is symmetric in those cases. See the Shapes section in this article for a full list of mode cases, for arbitrary values of α and β. For several of these cases, the maximum value of the density function occurs at one or both ends. In some cases the value of the density function occurring at the end is finite. For example, in the case of α = 2, β = 1, the density function becomes a right-triangle distribution which is finite at both ends. In several other cases there is a singularity at one end, where the value of the density function approaches infinity. For example, in the case α = β = 1/2, the Beta distribution simplifies to become the arcsine distribution. There is debate among mathematicians about some of these cases and whether the ends can be called modes or not.
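As a small sketch of the interior-mode formula for α, β > 1 (Python; the function name is illustrative):

```python
# Mode of Beta(alpha, beta) when both shape parameters exceed 1.
def beta_mode(a, b):
    if a > 1.0 and b > 1.0:
        return (a - 1.0) / (a + b - 2.0)   # interior maximum of the density
    raise ValueError("an interior mode requires alpha > 1 and beta > 1")

print(beta_mode(2.0, 8.0))   # 0.125
```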

Median

The median of the beta distribution is the unique real number x for which the regularized incomplete beta function satisfies I_x(α, β) = 1/2. There is no general closed-form expression for the median of the beta distribution for arbitrary values of α and β. Closed-form expressions for particular values of the parameters α and β follow:
The following are the limits with one parameter finite and the other approaching these limits:
A reasonable approximation of the value of the median of the beta distribution, for both α and β greater than or equal to one, is given by the formula
median ≈ (α − 1/3)/(α + β − 2/3)
When α, β ≥ 1, the relative error in this approximation is less than 4% and for both α ≥ 2 and β ≥ 2 it is less than 1%. The absolute error divided by the difference between the mean and the mode is similarly small:
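A brief numerical sketch of this approximation (Python with SciPy; the exact median is obtained from the inverse regularized incomplete beta function):

```python
# Compare the closed-form median approximation with the exact median.
from scipy import special

def median_approx(a, b):
    return (a - 1.0 / 3.0) / (a + b - 2.0 / 3.0)   # intended for a, b >= 1

def median_exact(a, b):
    return special.betaincinv(a, b, 0.5)            # solves I_x(a, b) = 1/2

for a, b in [(1.0, 1.0), (2.0, 3.0), (5.0, 1.5)]:
    approx, exact = median_approx(a, b), median_exact(a, b)
    print(a, b, approx, exact, abs(approx - exact) / exact)
```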
(Figures: absolute relative error of the median approximation, and error relative to the mean-mode distance, for beta distributions with 1 ≤ α ≤ 5 and 1 ≤ β ≤ 5.)

Mean

The expected value of a beta distributed random variable X with two parameters α and β is a function of only the ratio β/α of these parameters:
μ = E[X] = α/(α + β) = 1/(1 + β/α)
Letting α = β in the above expression one obtains μ = 1/2, showing that for α = β the mean is at the center of the distribution: it is symmetric. Also, the following limits can be obtained from the above expression:
Therefore, for β/α → 0, or for α/β → ∞, the mean is located at the right end, x = 1. For these limit ratios, the beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the right end, x = 1, with probability 1, and zero probability everywhere else. There is 100% probability concentrated at the right end, x = 1.
Similarly, for β/α → ∞, or for α/β → 0, the mean is located at the left end, x = 0. The beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the left end, x = 0, with probability 1, and zero probability everywhere else. There is 100% probability concentrated at the left end, x = 0. Following are the limits with one parameter finite and the other approaching these limits:
While for typical unimodal distributions it is known that the sample mean is not as robust as the sample median, the opposite is the case for uniform or "U-shaped" bimodal distributions, with the modes located at the ends of the distribution. As Mosteller and Tukey remark "the average of the two extreme observations uses all the sample information. This illustrates how, for short-tailed distributions, the extreme observations should get more weight." By contrast, it follows that the median of "U-shaped" bimodal distributions with modes at the edge of the distribution is not robust, as the sample median drops the extreme sample observations from consideration. A practical application of this occurs for example for random walks, since the probability for the time of the last visit to the origin in a random walk is distributed as the arcsine distribution Beta(1/2, 1/2): the mean of a number of realizations of a random walk is a much more robust estimator than the median.

Geometric mean

The logarithm of the geometric mean GX of a distribution with random variable X is the arithmetic mean of ln(X), or, equivalently, its expected value:
ln GX = E[ln X]
For a beta distribution, the expected value integral gives:
E[ln X] = ψ(α) − ψ(α + β)
where ψ is the digamma function.
Therefore, the geometric mean of a beta distribution with shape parameters α and β is the exponential of the digamma functions of α and β as follows:
GX = exp(E[ln X]) = exp(ψ(α) − ψ(α + β))
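A small sketch of this identity (Python with NumPy and SciPy), checking the digamma expression against a Monte Carlo estimate of exp(E[ln X]):

```python
# Geometric mean of Beta(alpha, beta): exp(psi(alpha) - psi(alpha + beta)).
import numpy as np
from scipy.special import digamma

def geometric_mean(a, b):
    return np.exp(digamma(a) - digamma(a + b))

a, b = 3.0, 3.0
print(geometric_mean(a, b))                          # about 0.457, below the mean 1/2
samples = np.random.default_rng(0).beta(a, b, size=200_000)
print(np.exp(np.log(samples).mean()))                # Monte Carlo check
```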
While for a beta distribution with equal shape parameters α = β, it follows that skewness = 0 and mode = mean = median = 1/2, the geometric mean is less than 1/2: 0 < GX < 1/2. The reason for this is that the logarithmic transformation strongly weights the values of X close to zero, as ln(X) strongly tends towards negative infinity as X approaches zero, while ln(X) flattens towards zero as X → 1.
Along the line α = β, the following limits apply:
Following are the limits with one parameter finite and the other approaching these limits:
The accompanying plot shows the difference between the mean and the geometric mean for shape parameters α and β from zero to 2. Besides the fact that the difference between them approaches zero as α and β approach infinity and that the difference becomes large for values of α and β approaching zero, one can observe an evident asymmetry of the geometric mean with respect to the shape parameters α and β. The difference between the geometric mean and the mean is larger for small values of α in relation to β than when exchanging the magnitudes of β and α.
N. L. Johnson and S. Kotz suggest the logarithmic approximation to the digamma function ψ(x) ≈ ln(x − 1/2), which results in the following approximation to the geometric mean:
GX ≈ (α − 1/2)/(α + β − 1) for α, β > 1
The relative error in this approximation is small and decreases as the shape parameters α and β increase.
Similarly, one can calculate the value of shape parameters required for the geometric mean to equal 1/2. Given the value of the parameter β, what would be the value of the other parameter, α, required for the geometric mean to equal 1/2? The answer is that the required α grows with β, and many different couples (α, β) share the same geometric mean of 1/2.
The fundamental property of the geometric mean, which can be proven to be false for any other mean, is that the geometric mean of a ratio equals the ratio of the geometric means: G(X/Y) = G(X)/G(Y).
This makes the geometric mean the only correct mean when averaging normalized results, that is, results that are presented as ratios to reference values. This is relevant because the beta distribution is a suitable model for the random behavior of percentages and it is particularly suitable to the statistical modelling of proportions. The geometric mean plays a central role in maximum likelihood estimation, see section "Parameter estimation, maximum likelihood." Actually, when performing maximum likelihood estimation, besides the geometric mean GX based on the random variable X, also another geometric mean appears naturally: the geometric mean based on the linear transformation (1 − X), the mirror-image of X, denoted by G1−X:
G1−X = exp(E[ln(1 − X)]) = exp(ψ(β) − ψ(α + β))
Along the line α = β, the following limits apply:
Following are the limits with one parameter finite and the other approaching these limits:
It has the following approximate value:
G1−X ≈ (β − 1/2)/(α + β − 1) for α, β > 1
Although both GX and G1−X are asymmetric, in the case that both shape parameters are equal, α = β, the geometric means are equal: GX = G1−X. This equality follows from the following symmetry displayed between both geometric means:

Harmonic mean

The inverse of the harmonic mean HX of a distribution with random variable X is the arithmetic mean of 1/X, or, equivalently, its expected value. Therefore, the harmonic mean of a beta distribution with shape parameters α and β is:
HX = (α − 1)/(α + β − 1) for α > 1
The harmonic mean of a beta distribution with α < 1 is undefined, because its defining expression is not bounded on [0, 1] for shape parameter α less than unity.
Letting α = β in the above expression one obtains
HX = (α − 1)/(2α − 1),
showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.
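A small sketch of the harmonic-mean formula (Python with NumPy), checked against a Monte Carlo estimate of 1/E[1/X]:

```python
# Harmonic mean of Beta(alpha, beta), defined only for alpha > 1.
import numpy as np

def harmonic_mean(a, b):
    if a <= 1.0:
        raise ValueError("harmonic mean undefined for alpha <= 1")
    return (a - 1.0) / (a + b - 1.0)

a, b = 3.0, 3.0
print(harmonic_mean(a, b))                           # 0.4
samples = np.random.default_rng(1).beta(a, b, size=200_000)
print(1.0 / np.mean(1.0 / samples))                  # Monte Carlo check
```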
Following are the limits with one parameter finite and the other approaching these limits:
The harmonic mean plays a role in maximum likelihood estimation for the four parameter case, in addition to the geometric mean. Actually, when performing maximum likelihood estimation for the four parameter case, besides the harmonic mean HX based on the random variable X, also another harmonic mean appears naturally: the harmonic mean based on the linear transformation (1 − X), the mirror-image of X, denoted by H1−X:
H1−X = (β − 1)/(α + β − 1) for β > 1
The harmonic mean of a beta distribution with β < 1 is undefined, because its defining expression is not bounded on [0, 1] for shape parameter β less than unity.
Letting α = β in the above expression one obtains
H1−X = (α − 1)/(2α − 1),
showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.
Following are the limits with one parameter finite and the other approaching these limits:
Although both HX and H1−X are asymmetric, in the case that both shape parameters are equal α = β, the harmonic means are equal: HX = H1−X. This equality follows from the following symmetry displayed between both harmonic means:

Measures of statistical dispersion

Variance

The variance of a beta distributed random variable X with parameters α and β is:
var(X) = E[(X − μ)²] = αβ / ((α + β)² (α + β + 1))
Letting α = β in the above expression one obtains
var(X) = 1 / (4(2α + 1)),
showing that for α = β the variance decreases monotonically as α = β increases. Setting α = β = 0 in this expression, one finds the maximum variance var(X) = 1/4, which only occurs approaching the limit, at α = β = 0.
The beta distribution may also be parametrized in terms of its mean μ (0 < μ < 1) and sample size ν = α + β (ν > 0):
α = μν, β = (1 − μ)ν
Using this parametrization, one can express the variance in terms of the mean μ and the sample size ν as follows:
var(X) = μ(1 − μ) / (1 + ν)
Since ν = α + β > 0, it must follow that var(X) < μ(1 − μ).
For a symmetric distribution, the mean is at the middle of the distribution, μ = 1/2, and therefore:
var(X) = 1 / (4(1 + ν)) for μ = 1/2
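A brief sketch of the two equivalent variance expressions (Python; the helper names are illustrative):

```python
# Variance of Beta(alpha, beta) in the original and the (mu, nu) parametrizations.
def beta_variance(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1.0))

def beta_variance_mu_nu(mu, nu):
    return mu * (1.0 - mu) / (1.0 + nu)

a, b = 2.0, 6.0
print(beta_variance(a, b))                       # 0.0208333...
print(beta_variance_mu_nu(a / (a + b), a + b))   # same value
```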
Also, the following limits can be obtained from the above expressions:

Geometric variance and covariance

The logarithm of the geometric variance, ln var_GX, of a distribution with random variable X is the second moment of the logarithm of X centered on the geometric mean of X, ln GX:
and therefore, the geometric variance is:
In the Fisher information matrix, and the curvature of the log likelihood function, the logarithm of the geometric variance of the reflected variable 1 − X and the logarithm of the geometric covariance between X and 1 − X appear:
For a beta distribution, higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two Gamma distributions and differentiating through the integral. They can be expressed in terms of higher order poly-gamma functions. See the section titled "Other moments, Moments of transformed random variables, Moments of logarithmically transformed random variables". The variance of the logarithmic variables and covariance of ln X and ln(1 − X) are:
where the trigamma function, denoted ψ1, is the second of the polygamma functions, and is defined as the derivative of the digamma function:
Therefore,
ln var_GX = ψ1(α) − ψ1(α + β), ln var_G(1−X) = ψ1(β) − ψ1(α + β), and ln cov_G(X, 1−X) = −ψ1(α + β).
The accompanying plots show the log geometric variances and log geometric covariance versus the shape parameters α and β. The plots show that the log geometric variances and log geometric covariance are close to zero for shape parameters α and β greater than 2, and that the log geometric variances rapidly rise in value for shape parameter values α and β less than unity. The log geometric variances are positive for all values of the shape parameters. The log geometric covariance is negative for all values of the shape parameters, and it reaches large negative values for α and β less than unity.
Following are the limits with one parameter finite and the other approaching these limits:
Limits with two parameters varying:
Although both ln var_GX and ln var_G(1−X) are asymmetric, when the shape parameters are equal, α = β, one has: ln var_GX = ln var_G(1−X). This equality follows from the following symmetry displayed between both log geometric variances:
The log geometric covariance is symmetric:

Mean absolute deviation around the mean

The mean absolute deviation around the mean for the beta distribution with shape parameters α and β is:
E[|X − E[X]|] = 2 α^α β^β / (B(α, β) (α + β)^(α+β+1))
The mean absolute deviation around the mean is a more robust estimator of statistical dispersion than the standard deviation for beta distributions with tails and inflection points at each side of the mode, that is, beta distributions with α, β > 2, as it depends on the linear deviations rather than the square deviations from the mean. Therefore, the effect of very large deviations from the mean is not as overly weighted.
Using Stirling's approximation to the Gamma function, N.L.Johnson and S.Kotz derived the following approximation for values of the shape parameters greater than unity :
At the limit α → ∞, β → ∞, the ratio of the mean absolute deviation to the standard deviation becomes equal to the ratio of the same measures for the normal distribution: √(2/π) ≈ 0.798. For α = β = 1 this ratio equals √3/2 ≈ 0.866, so that from α = β = 1 to α, β → ∞ the ratio decreases by 8.5%. For α = β = 0 the standard deviation is exactly equal to the mean absolute deviation around the mean. Therefore, this ratio decreases by 15% from α = β = 0 to α = β = 1, and by 25% from α = β = 0 to α, β → ∞. However, for skewed beta distributions such that α → 0 or β → 0, the ratio of the standard deviation to the mean absolute deviation approaches infinity because the mean absolute deviation approaches zero faster than the standard deviation.
Using the parametrization in terms of mean μ and sample size ν = α + β > 0:
one can express the mean absolute deviation around the mean in terms of the mean μ and the sample size ν as follows:
For a symmetric distribution, the mean is at the middle of the distribution, μ = 1/2, and therefore:
Also, the following limits can be obtained from the above expressions:

Mean absolute difference

The mean absolute difference for the Beta distribution is:
The Gini coefficient for the Beta distribution is half of the relative mean absolute difference:

Skewness

The skewness of the beta distribution is
γ1 = 2(β − α)√(α + β + 1) / ((α + β + 2)√(αβ))
Letting α = β in the above expression one obtains γ1 = 0, showing once again that for α = β the distribution is symmetric and hence the skewness is zero. Positive skew for α < β, negative skew for α > β.
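A brief sketch of the skewness formula (Python with SciPy, which reports the same quantity):

```python
# Skewness of Beta(alpha, beta).
from math import sqrt
from scipy import stats

def beta_skewness(a, b):
    return 2.0 * (b - a) * sqrt(a + b + 1.0) / ((a + b + 2.0) * sqrt(a * b))

a, b = 2.0, 5.0
print(beta_skewness(a, b))                     # positive, since alpha < beta
print(stats.beta.stats(a, b, moments="s"))     # same value from SciPy
```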
Using the parametrization in terms of mean μ and sample size ν = α + β:
one can express the skewness in terms of the mean μ and the sample size ν as follows:
The skewness can also be expressed just in terms of the variance var and the mean μ as follows:
The accompanying plot of skewness as a function of variance and mean shows that maximum variance is coupled with zero skewness and the symmetry condition, and that maximum skewness occurs when the mean is located at one end or the other, so that the "mass" of the probability distribution is concentrated at the ends.
The following expression for the square of the skewness, in terms of the sample size ν = α + β and the variance var, is useful for the method of moments estimation of four parameters:
This expression correctly gives a skewness of zero for α = β, since in that case var = 1/(4(1 + ν)).
For the symmetric case, skewness = 0 over the whole range, and the following limits apply:
For the asymmetric cases the following limits can be obtained from the above expressions:

Kurtosis

The beta distribution has been applied in acoustic analysis to assess damage to gears, as the kurtosis of the beta distribution has been reported to be a good indicator of the condition of a gear. Kurtosis has also been used to distinguish the seismic signal generated by a person's footsteps from other signals. As persons or other targets moving on the ground generate continuous signals in the form of seismic waves, one can separate different targets based on the seismic waves they generate. Kurtosis is sensitive to impulsive signals, so it's much more sensitive to the signal generated by human footsteps than other signals generated by vehicles, winds, noise, etc. Unfortunately, the notation for kurtosis has not been standardized. Kenney and Keeping use the symbol γ2 for the excess kurtosis, but Abramowitz and Stegun use different terminology. To prevent confusion between kurtosis and excess kurtosis, when using symbols, they will be spelled out as follows:
Letting α = β in the above expression one obtains
excess kurtosis = −6/(3 + 2α) for α = β.
Therefore, for symmetric beta distributions, the excess kurtosis is negative, increasing from a minimum value of −2 at the limit as α = β → 0, and approaching a maximum value of zero as α = β → ∞. The value of −2 is the minimum value of excess kurtosis that any distribution can ever achieve. This minimum value is reached when all the probability density is entirely concentrated at each end x = 0 and x = 1, with nothing in between: a 2-point Bernoulli distribution with equal probability 1/2 at each end. The description of kurtosis as a measure of the "potential outliers" of the probability distribution is correct for all distributions including the beta distribution. The more that rare, extreme values can occur in the beta distribution, the higher its kurtosis; otherwise, the kurtosis is lower. For α ≠ β, skewed beta distributions, the excess kurtosis can reach unlimited positive values because the side away from the mode will produce occasional extreme values. Minimum kurtosis takes place when the mass density is concentrated equally at each end, and there is no probability mass density in between the ends.
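A brief sketch of the excess-kurtosis formula (Python with SciPy; SciPy's moments="k" reports excess kurtosis):

```python
# Excess kurtosis of Beta(alpha, beta); for alpha = beta it reduces to -6/(3 + 2*alpha).
from scipy import stats

def beta_excess_kurtosis(a, b):
    num = 6.0 * ((a - b) ** 2 * (a + b + 1.0) - a * b * (a + b + 2.0))
    return num / (a * b * (a + b + 2.0) * (a + b + 3.0))

print(beta_excess_kurtosis(2.0, 2.0))            # -6/7, about -0.857
print(stats.beta.stats(2.0, 2.0, moments="k"))   # same value from SciPy
```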
Using the parametrization in terms of mean μ and sample size ν = α + β:
one can express the excess kurtosis in terms of the mean μ and the sample size ν as follows:
The excess kurtosis can also be expressed in terms of just the following two parameters: the variance var, and the sample size ν as follows:
and, in terms of the variance var and the mean μ as follows:
The plot of excess kurtosis as a function of the variance and the mean shows that the minimum value of the excess kurtosis is intimately coupled with the maximum value of variance and the symmetry condition: the mean occurring at the midpoint. This occurs for the symmetric case of α = β = 0, with zero skewness. At the limit, this is the 2 point Bernoulli distribution with equal probability 1/2 at each Dirac delta function end x = 0 and x = 1 and zero probability everywhere else. Variance is maximum because the distribution is bimodal with nothing in between the two modes at each end. Excess kurtosis is minimum: the probability density "mass" is zero at the mean and it is concentrated at the two peaks at each end. Excess kurtosis reaches the minimum possible value when the probability density function has two spikes at each end: it is bi-"peaky" with nothing in between them.
On the other hand, the plot shows that for extreme skewed cases, where the mean is located near one or the other end, the variance is close to zero, and the excess kurtosis rapidly approaches infinity when the mean of the distribution approaches either end.
Alternatively, the excess kurtosis can also be expressed in terms of just the following two parameters: the square of the skewness, and the sample size ν as follows:
From this last expression, one can obtain the same limits published practically a century ago by Karl Pearson in his paper, for the beta distribution. Setting α + β = ν = 0 in the above expression, one obtains Pearson's lower boundary. The limit of α + β = ν → ∞ determines Pearson's upper boundary.
Therefore:
(skewness)² − 2 < excess kurtosis < (3/2) (skewness)²
Values of ν = α + β such that ν ranges from zero to infinity, 0 < ν < ∞, span the whole region of the beta distribution in the plane of excess kurtosis versus squared skewness.
For the symmetric case, the following limits apply:
For the unsymmetric cases the following limits can be obtained from the above expressions:

Characteristic function

The characteristic function is the Fourier transform of the probability density function. The characteristic function of the beta distribution is Kummer's confluent hypergeometric function (of the first kind):
φ_X(t) = E[e^(itX)] = ₁F₁(α; α + β; it)
where
is the rising factorial, also called the "Pochhammer symbol". The value of the characteristic function for t = 0, is one:
Also, the real and imaginary parts of the characteristic function enjoy the following symmetries with respect to the origin of variable t:
The symmetric case α = β simplifies the characteristic function of the beta distribution to a Bessel function, since in the special case α + β = 2α the confluent hypergeometric function reduces to a Bessel function using Kummer's second transformation as follows:
In the accompanying plots, the real part of the characteristic function of the beta distribution is displayed for symmetric and skewed cases.

Other moments

Moment generating function

It also follows that the moment generating function is
M_X(t) = E[e^(tX)] = ₁F₁(α; α + β; t)
In particular M_X(0) = 1.

Higher moments

Using the moment generating function, the k-th raw moment is given by the factor
(α)_k / (α + β)_k
multiplying the term t^k/k! in the series of the moment generating function, where (x)_k is the Pochhammer symbol representing the rising factorial. It can also be written in a recursive form as
E[X^k] = ((α + k − 1)/(α + β + k − 1)) E[X^(k−1)], with E[X^0] = 1.
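A minimal sketch of these two equivalent computations of the raw moments (Python; function names are illustrative):

```python
# k-th raw moment of Beta(alpha, beta) via the rising-factorial ratio and via recursion.
def raw_moment(a, b, k):
    m = 1.0
    for r in range(k):
        m *= (a + r) / (a + b + r)
    return m

def raw_moment_recursive(a, b, k):
    if k == 0:
        return 1.0
    return (a + k - 1.0) / (a + b + k - 1.0) * raw_moment_recursive(a, b, k - 1)

print(raw_moment(2.0, 3.0, 2), raw_moment_recursive(2.0, 3.0, 2))   # both 0.2
```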
Since the moment generating function has a positive radius of convergence, the beta distribution is determined by its moments.

Moments of transformed random variables

Moments of linearly transformed, product and inverted random variables
One can also show the following expectations for a transformed random variable, where the random variable X is beta-distributed with parameters α and β: X ~ Beta(α, β). The expected value of the variable 1 − X is the mirror-symmetry of the expected value based on X:
E[1 − X] = β/(α + β)
Due to the mirror-symmetry of the probability density function of the beta distribution, the variances based on variables X and 1 − X are identical, and the covariance of X and 1 − X is the negative of the variance:
The following transformation by dividing the variable X by its mirror-image, X/(1 − X), results in the expected value of the "inverted beta distribution" or beta prime distribution:
E[X/(1 − X)] = α/(β − 1) for β > 1
Variances of these transformed variables can be obtained by integration, as the expected values of the second moments centered on the corresponding variables:
The following variance of the variable X divided by its mirror-image, X/(1 − X), results in the variance of the "inverted beta distribution" or beta prime distribution:
The covariances are:
These expectations and variances appear in the four-parameter Fisher information matrix
Moments of logarithmically transformed random variables
Expected values for logarithmic transformations are discussed in this section. The following logarithmic linear transformations are related to the geometric means GX and G1−X:
E[ln X] = ψ(α) − ψ(α + β), E[ln(1 − X)] = ψ(β) − ψ(α + β)
where the digamma function ψ is defined as the logarithmic derivative of the gamma function:
ψ(α) = d ln Γ(α)/dα
Logit transformations are interesting, as they usually transform various shapes into bell-shaped densities over the logit variable, and they may remove the end singularities over the original variable:
Johnson considered the distribution of the logit-transformed variable ln(X/(1 − X)), including its moment generating function and approximations for large values of the shape parameters. This transformation extends the finite support based on the original variable X to infinite support in both directions of the real line.
Higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two Gamma distributions and differentiating through the integral. They can be expressed in terms of higher order poly-gamma functions as follows:
therefore the variance of the logarithmic variables and covariance of ln(X) and ln(1 − X) are:
where the trigamma function, denoted ψ1, is the second of the polygamma functions, and is defined as the derivative of the digamma function:
The variances and covariance of the logarithmically transformed variables X and (1 − X) are different, in general, because the logarithmic transformation destroys the mirror-symmetry of the original variables X and (1 − X), as the logarithm approaches negative infinity for the variable approaching zero.
These logarithmic variances and covariance are the elements of the Fisher information matrix for the beta distribution. They are also a measure of the curvature of the log likelihood function.
The variances of the log inverse variables are identical to the variances of the log variables:
It also follows that the variances of the logit transformed variables are:

Quantities of information (entropy)

Given a beta distributed random variable, X ~ Beta(α, β), the differential entropy of X is the expected value of the negative of the logarithm of the probability density function:
h(X) = −E[ln f(x; α, β)] = ln B(α, β) − (α − 1)ψ(α) − (β − 1)ψ(β) + (α + β − 2)ψ(α + β)
where f is the probability density function of the beta distribution:
The digamma function ψ appears in the formula for the differential entropy as a consequence of Euler's integral formula for the harmonic numbers which follows from the integral:
The differential entropy of the beta distribution is negative for all values of α and β greater than zero, except at α = β = 1, where the differential entropy reaches its maximum value of zero. It is to be expected that the maximum entropy should take place when the beta distribution becomes equal to the uniform distribution, since uncertainty is maximal when all possible events are equiprobable.
For α or β approaching zero, the differential entropy approaches its minimum value of negative infinity. For α or β approaching zero, there is a maximum amount of order: all the probability density is concentrated at the ends, and there is zero probability density at points located between the ends. Similarly for α or β approaching infinity, the differential entropy approaches its minimum value of negative infinity, and a maximum amount of order. If either α or β approaches infinity all the probability density is concentrated at an end, and the probability density is zero everywhere else. If both shape parameters are equal, α = β, and they approach infinity simultaneously, the probability density becomes a spike concentrated at the middle x = 1/2, and hence there is 100% probability at the middle x = 1/2 and zero probability everywhere else.
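A short sketch of the differential entropy formula (Python with SciPy; SciPy's entropy method, in nats, gives the same values):

```python
# Differential entropy of Beta(alpha, beta):
# h = ln B(a, b) - (a - 1) psi(a) - (b - 1) psi(b) + (a + b - 2) psi(a + b).
from scipy.special import betaln, digamma
from scipy import stats

def beta_entropy(a, b):
    return (betaln(a, b) - (a - 1.0) * digamma(a) - (b - 1.0) * digamma(b)
            + (a + b - 2.0) * digamma(a + b))

print(beta_entropy(1.0, 1.0))             # 0.0, the maximum (uniform distribution)
print(beta_entropy(3.0, 3.0))             # negative
print(stats.beta(3.0, 3.0).entropy())     # same value from SciPy
```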
The differential entropy was introduced by Shannon in his original paper, as the concluding part of the same paper where he defined the discrete entropy. It is known since then that the differential entropy may differ from the infinitesimal limit of the discrete entropy by an infinite offset, therefore the differential entropy can be negative. What really matters is the relative value of entropy.
Given two beta distributed random variables, X1 ~ Beta(α, β) and X2 ~ Beta(α′, β′), the cross-entropy is
H(X1, X2) = ln B(α′, β′) − (α′ − 1)ψ(α) − (β′ − 1)ψ(β) + (α′ + β′ − 2)ψ(α + β)
The cross-entropy has been used as an error metric to measure the distance between two hypotheses. Its absolute value is minimum when the two distributions are identical. It is the information measure most closely related to the log maximum likelihood (see the section on parameter estimation).
The relative entropy, or Kullback-Leibler divergence DKL(X1 ‖ X2), is a measure of the inefficiency of assuming that the distribution is X2 ~ Beta(α′, β′) when the distribution is really X1 ~ Beta(α, β). It is defined as follows.
The relative entropy, or Kullback-Leibler divergence, is always non-negative. A few numerical examples follow:
The Kullback-Leibler divergence is not symmetric, DKL(X1 ‖ X2) ≠ DKL(X2 ‖ X1), even for the case in which the individual beta distributions are both symmetric but have different entropies, h(X1) ≠ h(X2). The value of the Kullback divergence depends on the direction traveled: whether going from a higher entropy to a lower entropy or the other way around. For example, the divergence measuring the inefficiency of assuming a lower-entropy symmetric beta distribution when the true distribution is the uniform Beta(1, 1), which has the maximum amount of disorder, differs from the divergence measured in the opposite direction, and the divergence is larger when measured in the direction of decreasing entropy. In this restricted sense, the Kullback divergence is consistent with the second law of thermodynamics.
The Kullback-Leibler divergence is symmetric, DKL(X1 ‖ X2) = DKL(X2 ‖ X1), for the skewed cases Beta(α, β) and Beta(β, α), which have equal differential entropy.
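A small sketch of evaluating this divergence (Python with SciPy; the closed form below follows from the cross-entropy expression given earlier, and the example values are arbitrary):

```python
# Kullback-Leibler divergence D_KL(Beta(a1, b1) || Beta(a2, b2)).
from scipy.special import betaln, digamma

def beta_kl(a1, b1, a2, b2):
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

print(beta_kl(1.0, 1.0, 3.0, 3.0))   # assuming Beta(3, 3) when the truth is uniform
print(beta_kl(3.0, 3.0, 1.0, 1.0))   # a different value: the divergence is asymmetric
print(beta_kl(2.0, 2.0, 2.0, 2.0))   # 0.0 for identical distributions
```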
The symmetry condition:
follows from the above definitions and the mirror-symmetry f(x; α, β) = f(1 − x; β, α) enjoyed by the beta distribution.

Relationships between statistical measures

Mean, mode and median relationship

If 1 < α < β then mode ≤ median ≤ mean. Expressing the mode and the mean in terms of α and β:
mode = (α − 1)/(α + β − 2), mean = α/(α + β)
If 1 < β < α then the order of the inequalities are reversed. For α, β > 1 the absolute distance between the mean and the median is less than 5% of the distance between the maximum and minimum values of x. On the other hand, the absolute distance between the mean and the mode can reach 50% of the distance between the maximum and minimum values of x, for the case of α = 1 and β = 1.
For example, for α = 1.0001 and β = 1.00000001:

Mean, geometric mean and harmonic mean relationship

It is known from the inequality of arithmetic and geometric means that the geometric mean is lower than the mean. Similarly, the harmonic mean is lower than the geometric mean. The accompanying plot shows that for α = β, both the mean and the median are exactly equal to 1/2, regardless of the value of α = β, and the mode is also equal to 1/2 for α = β > 1, however the geometric and harmonic means are lower than 1/2 and they only approach this value asymptotically as α = β → ∞.

Kurtosis bounded by the square of the skewness

As remarked by Feller, in the Pearson system the beta probability density appears as type I. Karl Pearson showed, in Plate 1 of his paper published in 1916, a graph with the kurtosis as the vertical axis and the square of the skewness as the horizontal axis, in which a number of distributions were displayed. The region occupied by the beta distribution is bounded by the following two lines in the (skewness², kurtosis) plane, or the (skewness², excess kurtosis) plane:
(skewness)² + 1 < kurtosis < 3 + (3/2)(skewness)²
or, equivalently,
(skewness)² − 2 < excess kurtosis < (3/2)(skewness)²
Karl Pearson accurately computed further boundaries, for example, separating the "U-shaped" from the "J-shaped" distributions. The lower boundary line is produced by skewed "U-shaped" beta distributions with both values of shape parameters α and β close to zero. The upper boundary line is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. Karl Pearson showed that this upper boundary line is also the intersection with Pearson's type III distribution, which has unlimited support in one direction, and can be bell-shaped or J-shaped. His son, Egon Pearson, showed that the region occupied by the beta distribution as it approaches this boundary is shared with the noncentral chi-squared distribution. Karl Pearson also showed that the gamma distribution is a Pearson type III distribution. Hence this boundary line for Pearson's type III distribution is known as the gamma line. Pearson later noted that the chi-squared distribution is a special case of Pearson's type III and also shares this boundary line. This is to be expected, since the chi-squared distribution X ~ χ²(k) is a special case of the gamma distribution, with parametrization X ~ Γ(k/2, 2), where k is a positive integer that specifies the "number of degrees of freedom" of the chi-squared distribution.
An example of a beta distribution near the upper boundary is given by α = 0.1, β = 1000, for which the ratio (excess kurtosis)/(skewness)² = 1.49835 approaches the upper limit of 1.5 from below. An example of a beta distribution near the lower boundary is given by α = 0.0001, β = 0.1, for which values the expression (excess kurtosis + 2)/(skewness)² = 1.01621 approaches the lower limit of 1 from above. In the infinitesimal limit for both α and β approaching zero symmetrically, the excess kurtosis reaches its minimum value of −2. This minimum value occurs at the point at which the lower boundary line intersects the vertical axis, at excess kurtosis = −2.
Values for the skewness and excess kurtosis below the lower boundary cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region". The boundary for this "impossible region" is determined by bimodal "U"-shaped distributions for which the parameters α and β approach zero and hence all the probability density is concentrated at the ends: x = 0, 1 with practically nothing in between them. Since for α ≈ β ≈ 0 the probability density is concentrated at the two ends x = 0 and x = 1, this "impossible boundary" is determined by a 2-point distribution: the probability can only take 2 values, one value with probability p and the other with probability q = 1 − p. For cases approaching this limit boundary with symmetry α = β, skewness ≈ 0, excess kurtosis ≈ −2, and the probabilities are p ≈ q ≈ 1/2. For cases approaching this limit boundary with skewness, excess kurtosis ≈ −2 + skewness², and the probability density is concentrated more at one end than the other end, with unequal probabilities p and q = 1 − p at the ends x = 0 and x = 1.

Symmetry

All statements are conditional on α, β > 0

Inflection points

For certain values of the shape parameters α and β, the probability density function has inflection points, at which the curvature changes sign. The position of these inflection points can be useful as a measure of the dispersion or spread of the distribution.
Defining the following quantity:
Points of inflection occur, depending on the value of the shape parameters α and β, as follows:
There are no inflection points in the remaining regions: U-shaped, upside-down-U-shaped, reverse-J-shaped, or J-shaped distributions.
The accompanying plots show the inflection point locations versus α and β. There are large cuts at surfaces intersecting the lines α = 1, β = 1, α = 2, and β = 2 because at these values the beta distribution changes from two modes, to one mode, to no mode.

Shapes

The beta density function can take a wide variety of different shapes depending on the values of the two parameters α and β. The ability of the beta distribution to take this great diversity of shapes is partly responsible for finding wide application for modeling actual measurements:
Symmetric (α = β)
Skewed (α ≠ β): the density function is skewed. An interchange of parameter values yields the mirror image of the initial curve. Some more specific cases:

Transformations

Parameter estimation

Method of moments

Two unknown parameters
Two unknown parameters can be estimated, using the method of moments, with the first two moments as follows. Let
x̄ = (1/N) Σ Xi
be the sample mean estimate and
v̄ = (1/(N − 1)) Σ (Xi − x̄)²
be the sample variance estimate. The method-of-moments estimates of the parameters are
α̂ = x̄ (x̄(1 − x̄)/v̄ − 1) and β̂ = (1 − x̄) (x̄(1 − x̄)/v̄ − 1), valid when v̄ < x̄(1 − x̄).
When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace x̄ with (ȳ − a)/(c − a) and v̄ with v̄_Y/(c − a)² in the above couple of equations for the shape parameters.
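A minimal sketch of this two-parameter method-of-moments fit (Python with NumPy; the function name is illustrative):

```python
# Method-of-moments estimates of (alpha, beta) from data on [0, 1].
import numpy as np

def beta_method_of_moments(x):
    xbar = np.mean(x)
    vbar = np.var(x, ddof=1)
    if vbar >= xbar * (1.0 - xbar):
        raise ValueError("sample variance too large for a beta fit")
    common = xbar * (1.0 - xbar) / vbar - 1.0
    return xbar * common, (1.0 - xbar) * common

samples = np.random.default_rng(2).beta(2.0, 5.0, size=10_000)
print(beta_method_of_moments(samples))   # estimates close to (2, 5)
```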
Four unknown parameters
All four parameters can be estimated, using the method of moments developed by Karl Pearson, by equating sample and population values of the first four central moments. The excess kurtosis was expressed in terms of the square of the skewness, and the sample size ν = α + β, as follows:
One can use this equation to solve for the sample size ν= α + β in terms of the square of the skewness and the excess kurtosis as follows:
This is the ratio between the previously derived limit boundaries for the beta distribution in a space defined with coordinates of the square of the skewness in one axis and the excess kurtosis in the other axis :
The case of zero skewness can be immediately solved because for zero skewness, α = β and hence ν = 2α = 2β, therefore α̂ = β̂ = ν̂/2.
For non-zero sample skewness one needs to solve a system of two coupled equations. Since the skewness and the excess kurtosis are independent of the location and scale parameters a and c, the shape parameters α and β can be uniquely determined from the sample skewness and the sample excess kurtosis, by solving the coupled equations with two known variables (sample skewness and sample excess kurtosis) and two unknowns (the shape parameters):
resulting in the following solution:
where one should take the solutions as follows: α̂ > β̂ for (negative) sample skewness < 0, and α̂ < β̂ for (positive) sample skewness > 0.
The accompanying plot shows these two solutions as surfaces in a space with horizontal axes of (sample excess kurtosis) and (sample squared skewness) and the shape parameters as the vertical axis. The surfaces are constrained by the condition that the sample excess kurtosis must be bounded by the sample squared skewness as stipulated in the above equation. The two surfaces meet at the right edge defined by zero skewness. Along this right edge, both parameters are equal and the distribution is symmetric: U-shaped for α = β < 1, uniform for α = β = 1, upside-down-U-shaped for 1 < α = β < 2 and bell-shaped for α = β > 2. The surfaces also meet at the front edge defined by the "impossible boundary" line. Along this front boundary both shape parameters approach zero, and the probability density is concentrated more at one end than the other end, with unequal probabilities at the left end x = 0 and at the right end x = 1. The two surfaces become further apart towards the rear edge. At this rear edge the shape parameters are quite different from each other. As remarked, for example, by Bowman and Shenton, sampling in the neighborhood of the line (sample excess kurtosis = (3/2) × sample squared skewness) "is dangerously near to chaos", because at that line the denominator of the expression above for the estimate ν = α + β becomes zero and hence ν approaches infinity as that line is approached. Bowman and Shenton write that "the higher moment parameters are extremely fragile. However the mean and standard deviation are fairly reliable." Therefore, the problem arises for the case of four parameter estimation for very skewed distributions such that the excess kurtosis approaches 3/2 times the square of the skewness. This boundary line is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. See the section titled "Kurtosis bounded by the square of the skewness" for a numerical example and further comments about this rear edge boundary line. As remarked by Karl Pearson himself, this issue may not be of much practical importance, as this trouble arises only for very skewed J-shaped distributions. The usual skewed-bell-shape distributions that occur in practice do not have this parameter estimation problem.
The remaining two parameters can be determined using the sample mean and the sample variance using a variety of equations. One alternative is to calculate the support interval range based on the sample variance and the sample kurtosis. For this purpose one can solve, in terms of the range, the equation expressing the excess kurtosis in terms of the sample variance, and the sample size ν :
to obtain:
Another alternative is to calculate the support interval range based on the sample variance and the sample skewness. For this purpose one can solve, in terms of the range, the equation expressing the squared skewness in terms of the sample variance, and the sample size ν :
to obtain:
The remaining parameter can be determined from the sample mean and the previously obtained parameters (ĉ − â), α̂, and β̂:
â = ȳ − (ĉ − â) α̂/(α̂ + β̂)
and finally, ĉ = â + (ĉ − â), where (ĉ − â) is the previously obtained estimate of the range.
In the above formulas one may take, for example, as estimates of the sample moments:
The estimators G1 for sample skewness and G2 for sample kurtosis are used by DAP/SAS, PSPP/SPSS, and Excel. However, they are not used by BMDP and they were not used by MINITAB in 1998. Actually, Joanes and Gill in their 1998 study concluded that the skewness and kurtosis estimators used in BMDP and in MINITAB had smaller variance and mean-squared error in normal samples, but the skewness and kurtosis estimators used in DAP/SAS, PSPP/SPSS, namely G1 and G2, had smaller mean-squared error in samples from a very skewed distribution. It is for this reason that we have spelled out "sample skewness", etc., in the above formulas, to make it explicit that the user should choose the best estimator according to the problem at hand, as the best estimator for skewness and kurtosis depends on the amount of skewness.

Maximum likelihood

Two unknown parameters
As is also the case for maximum likelihood estimates for the gamma distribution, the maximum likelihood estimates for the beta distribution do not have a general closed form solution for arbitrary values of the shape parameters. If X1,..., XN are independent random variables each having a beta distribution, the joint log likelihood function for N iid observations is:
Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero yielding the maximum likelihood estimator of the shape parameters:
where:
since the digamma function denoted ψ is defined as the logarithmic derivative of the gamma function:
To ensure that the values with zero tangent slope are indeed a maximum one has to also satisfy the condition that the curvature is negative. This amounts to satisfying that the second partial derivative with respect to the shape parameters is negative
using the previous equations, this is equivalent to:
where the trigamma function, denoted ψ1, is the second of the polygamma functions, and is defined as the derivative of the digamma function:
These conditions are equivalent to stating that the variances of the logarithmically transformed variables are positive, since:
Therefore, the condition of negative curvature at a maximum is equivalent to the statements:
Alternatively, the condition of negative curvature at a maximum is also equivalent to stating that the following logarithmic derivatives of the geometric means GX and G1−X are positive, since:
While these slopes are indeed positive, the other slopes are negative:
The slopes of the mean and the median with respect to α and β display similar sign behavior.
From the condition that at a maximum, the partial derivative with respect to the shape parameter equals zero, we obtain the following system of coupled maximum likelihood estimate equations that needs to be inverted to obtain the shape parameter estimates in terms of the average of logarithms of the samples X1,..., XN:
where we recognize (1/N) Σ ln Xi as the logarithm of the sample geometric mean and (1/N) Σ ln(1 − Xi) as the logarithm of the sample geometric mean based on (1 − X), the mirror-image of X. For α̂ = β̂, it follows that the two sample geometric means are equal.
These coupled equations containing digamma functions of the shape parameter estimates must be solved by numerical methods as done, for example, by Beckman et al. Gnanadesikan et al. give numerical solutions for a few cases. N. L. Johnson and S. Kotz suggest that for "not too small" shape parameter estimates, the logarithmic approximation to the digamma function ψ(x) ≈ ln(x − 1/2) may be used to obtain initial values for an iterative solution, since the equations resulting from this approximation can be solved exactly:
which leads to the following solution for the initial values for an iterative solution:
Alternatively, the estimates provided by the method of moments can instead be used as initial values for an iterative solution of the maximum likelihood coupled equations in terms of the digamma functions.
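A minimal sketch of that approach (Python with SciPy; a general-purpose root finder stands in for the specialized iterations cited above, and the function name is illustrative):

```python
# Two-parameter maximum likelihood: solve the coupled digamma equations numerically,
# starting from the method-of-moments estimates.
import numpy as np
from scipy.special import digamma
from scipy.optimize import fsolve

def beta_mle(x):
    ln_gx = np.mean(np.log(x))           # log of the sample geometric mean of X
    ln_g1x = np.mean(np.log(1.0 - x))    # log of the sample geometric mean of 1 - X

    def equations(params):
        a, b = params
        return (digamma(a) - digamma(a + b) - ln_gx,
                digamma(b) - digamma(a + b) - ln_g1x)

    m, v = np.mean(x), np.var(x, ddof=1)          # method-of-moments start values
    common = m * (1.0 - m) / v - 1.0
    return fsolve(equations, x0=(m * common, (1.0 - m) * common))

samples = np.random.default_rng(3).beta(2.0, 5.0, size=10_000)
print(beta_mle(samples))   # estimates close to (2, 5)
```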
When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace ln Xi in the first equation with
ln((Yi − a)/(c − a)),
and replace ln(1 − Xi) in the second equation with
ln((c − Yi)/(c − a)).
If one of the shape parameters is known, the problem is considerably simplified. The following logit transformation can be used to solve for the unknown shape parameter :
This logit transformation is the logarithm of the transformation that divides the variable X by its mirror-image resulting in the "inverted beta distribution" or beta prime distribution. As previously discussed in the section "Moments of logarithmically transformed random variables," the logit transformation, studied by Johnson, extends the finite support based on the original variable X to infinite support in both directions of the real line.
If, for example, one of the shape parameters is known, the other (unknown) parameter can be obtained in terms of the inverse digamma function of the right-hand side of this equation:
In particular, if one of the shape parameters has a value of unity, for example β = 1 (the power function distribution), then, using the identity ψ(x + 1) = ψ(x) + 1/x in the equation, the maximum likelihood estimator for the unknown parameter is, exactly:
α̂ = −N / Σ ln Xi
The beta distribution has support [0, 1], therefore ln Xi < 0, hence Σ ln Xi < 0, and therefore α̂ > 0.
In conclusion, the maximum likelihood estimates of the shape parameters of a beta distribution are a complicated function of the sample geometric mean, and of the sample geometric mean based on (1 − X), the mirror-image of X. One may ask, if the variance is necessary to estimate two shape parameters with the method of moments, why is the variance not necessary to estimate two shape parameters with the maximum likelihood method, for which only the geometric means suffice? The answer is that the mean does not provide as much information as the geometric mean. For a beta distribution with equal shape parameters α = β, the mean is exactly 1/2, regardless of the value of the shape parameters, and therefore regardless of the value of the statistical dispersion. On the other hand, the geometric mean of a beta distribution with equal shape parameters α = β depends on the value of the shape parameters, and therefore it contains more information. Also, the geometric mean of a beta distribution does not satisfy the symmetry conditions satisfied by the mean; therefore, by employing both the geometric mean based on X and the geometric mean based on (1 − X), the maximum likelihood method is able to provide best estimates for both parameters α = β, without need of employing the variance.
One can express the joint log likelihood per N iid observations in terms of the sufficient statistics as follows:
We can plot the joint log likelihood per N observations for fixed values of the sample geometric means to see the behavior of the likelihood function as a function of the shape parameters α and β. In such a plot, the shape parameter estimators correspond to the maxima of the likelihood function. See the accompanying graph that shows that all the likelihood functions intersect at α = β = 1, which corresponds to the values of the shape parameters that give the maximum entropy. It is evident from the plot that the likelihood function gives sharp peaks for values of the shape parameter estimators close to zero, but that for values of the shape parameters estimators greater than one, the likelihood function becomes quite flat, with less defined peaks. Obviously, the maximum likelihood parameter estimation method for the beta distribution becomes less acceptable for larger values of the shape parameter estimators, as the uncertainty in the peak definition increases with the value of the shape parameter estimators. One can arrive at the same conclusion by noticing that the expression for the curvature of the likelihood function is in terms of the geometric variances
These variances are much larger for small values of the shape parameter α and β. However, for shape parameter values α, β > 1, the variances flatten out. Equivalently, this result follows from the Cramér–Rao bound, since the Fisher information matrix components for the beta distribution are these logarithmic variances. The Cramér–Rao bound states that the variance of any unbiased estimator of α is bounded by the reciprocal of the Fisher information:
so the variance of the estimators increases with increasing α and β, as the logarithmic variances decrease.
Also one can express the joint log likelihood per N iid observations in terms of the digamma function expressions for the logarithms of the sample geometric means as follows:
this expression is identical to the negative of the cross-entropy. Therefore, finding the maximum of the joint log likelihood of the shape parameters, per N iid observations, is identical to finding the minimum of the cross-entropy for the beta distribution, as a function of the shape parameters.
with the cross-entropy defined as follows:
Four unknown parameters
The procedure is similar to the one followed in the two unknown parameter case. If Y1,..., YN are independent random variables each having a beta distribution with four parameters, the joint log likelihood function for N iid observations is:
Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero yielding the maximum likelihood estimator of the shape parameters:
these equations can be re-arranged as the following system of four coupled equations in terms of the maximum likelihood estimates for the four parameters :
with sample geometric means:
The parameters are embedded inside the geometric mean expressions in a nonlinear way. This precludes, in general, a closed form solution, even for an initial value approximation for iteration purposes. One alternative is to use as initial values for iteration the values obtained from the method of moments solution for the four parameter case. Furthermore, the expressions for the harmonic means are well-defined only for shape parameters greater than unity, which precludes a maximum likelihood solution for shape parameters less than unity in the four-parameter case. Fisher's information matrix for the four parameter case is positive-definite only for α, β > 2, that is, for bell-shaped beta distributions with inflection points located to either side of the mode. The following Fisher information components have singularities at the following values:
Thus, it is not possible to strictly carry on the maximum likelihood estimation for some well known distributions belonging to the four-parameter beta distribution family, like the uniform distribution and the arcsine distribution. N. L. Johnson and S. Kotz ignore the equations for the harmonic means and instead suggest "If a and c are unknown, and maximum likelihood estimators of a, c, α and β are required, the above procedure can be repeated using a succession of trial values of a and c, until the pair for which maximum likelihood is as great as possible, is attained".

Fisher information matrix

Let a random variable X have a probability density f. The partial derivative with respect to the parameter α of the log likelihood function is called the score. The second moment of the score is called the Fisher information:
The expectation of the score is zero, therefore the Fisher information is also the second moment centered on the mean of the score: the variance of the score.
If the log likelihood function is twice differentiable with respect to the parameter α, and under certain regularity conditions, then the Fisher information may also be written as follows:
Thus, the Fisher information is the negative of the expectation of the second derivative with respect to the parameter α of the log likelihood function. Therefore, Fisher information is a measure of the curvature of the log likelihood function of α. A flat log likelihood curve, with low curvature, carries low Fisher information, while a log likelihood curve with large curvature carries high Fisher information. When the Fisher information matrix is computed at the estimates of the parameters, it is equivalent to replacing the true log likelihood surface by a Taylor series approximation, taken as far as the quadratic terms. The word information, in the context of Fisher information, refers to information about the parameters, bearing on estimation, sufficiency and the properties of variances of estimators. The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any estimator of a parameter α:
The precision with which one can estimate a parameter α is limited by the Fisher information of the log likelihood function. The Fisher information is a measure of the minimum error involved in estimating a parameter of a distribution, and it can be viewed as a measure of the resolving power of an experiment needed to discriminate between two alternative hypotheses about a parameter.
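For reference, the Fisher information of a scalar parameter α and the Cramér–Rao bound can be written compactly as follows (standard forms):

```latex
% Score-based and curvature-based Fisher information, and the Cramér–Rao bound.
\[
\mathcal{I}(\alpha)
  = \operatorname{E}\!\left[\left(\frac{\partial}{\partial\alpha}\,\ln \mathcal{L}(\alpha\mid X)\right)^{\!2}\right]
  = -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\alpha^{2}}\,\ln \mathcal{L}(\alpha\mid X)\right],
\qquad
\operatorname{var}(\hat{\alpha}) \;\ge\; \frac{1}{\mathcal{I}(\alpha)} .
\]
```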
When there are N parameters
then the Fisher information takes the form of an N×N positive semidefinite symmetric matrix, the Fisher Information Matrix, with typical element:
Under certain regularity conditions, the Fisher Information Matrix may also be written in the following form, which is often more convenient for computation:
With X1,..., XN iid random variables, an N-dimensional "box" can be constructed with sides X1,..., XN. Costa and Cover show that the differential entropy h is related to the volume of the typical set, while the Fisher information is related to the surface of this typical set.
Two parameters
For X1,..., XN independent random variables each having a beta distribution parametrized with shape parameters α and β, the joint log likelihood function for N iid observations is:
therefore the joint log likelihood function per N iid observations is:
For the two-parameter case, the Fisher information has 4 components: 2 diagonal and 2 off-diagonal. Since the Fisher information matrix is symmetric, only one of the off-diagonal components is independent. Therefore, the Fisher information matrix has 3 independent components.

Aryal and Nadarajah calculated Fisher's information matrix for the four-parameter case, from which the two parameter case can be obtained as follows:
Since the Fisher information matrix is symmetric
The Fisher information components are equal to the log geometric variances and log geometric covariance. Therefore, they can be expressed as trigamma functions, denoted ψ1, the second of the polygamma functions, defined as the derivative of the digamma function:
These derivatives are also derived in the section titled "Parameter estimation", "Maximum likelihood", "Two unknown parameters", and plots of the log likelihood function are also shown in that section. The section titled "Geometric variance and covariance" contains plots and further discussion of the Fisher information matrix components: the log geometric variances and log geometric covariance as functions of the shape parameters α and β. The section titled "Other moments", "Moments of transformed random variables", "Moments of logarithmically transformed random variables" contains formulas for the moments of logarithmically transformed random variables. Images for the Fisher information components corresponding to the log geometric variances are shown in the section titled "Geometric variance".
The determinant of Fisher's information matrix is of interest. From the expressions for the individual components of the Fisher information matrix, it follows that the determinant of Fisher's information matrix for the beta distribution is:
From Sylvester's criterion, it follows that the Fisher information matrix for the two parameter case is positive-definite.
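A small numerical sketch of these components (assuming the trigamma expressions var[ln X] = ψ1(α) − ψ1(α + β), var[ln(1 − X)] = ψ1(β) − ψ1(α + β) and cov[ln X, ln(1 − X)] = −ψ1(α + β) per observation, as described above):

```python
# Sketch: per-observation Fisher information matrix of the two-parameter
# beta distribution in terms of trigamma functions, and its determinant.
import numpy as np
from scipy.special import polygamma

def trigamma(z):
    return polygamma(1, z)

def beta_fisher_info(a, b):
    i_aa = trigamma(a) - trigamma(a + b)   # var[ln X]
    i_bb = trigamma(b) - trigamma(a + b)   # var[ln(1 - X)]
    i_ab = -trigamma(a + b)                # cov[ln X, ln(1 - X)]
    return np.array([[i_aa, i_ab], [i_ab, i_bb]])

info = beta_fisher_info(2.0, 3.0)
# Determinant psi1(a)*psi1(b) - psi1(a+b)*(psi1(a) + psi1(b)) is positive,
# so by Sylvester's criterion the matrix is positive-definite.
print(info, np.linalg.det(info))
```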
Four parameters
If Y1, ..., YN are independent random variables each having a beta distribution with four parameters: the exponents α and β, and also a and c, with probability density function:
the joint log likelihood function per N iid observations is:
For the four-parameter case, the Fisher information has 4 × 4 = 16 components, 12 of which are off-diagonal. Since the Fisher information matrix is symmetric, half of these off-diagonal components are independent. Therefore, the Fisher information matrix has 6 independent off-diagonal + 4 diagonal = 10 independent components. Aryal and Nadarajah calculated Fisher's information matrix for the four-parameter case as follows:
In the above expressions, the use of X instead of Y in the expressions for the log geometric variances and log geometric covariance is not an error. These expressions occur as functions of the two-parameter X ~ Beta(α, β) parametrization because, when taking the partial derivatives with respect to the exponents in the four-parameter case, one obtains expressions identical to those for the two-parameter case: these terms of the four-parameter Fisher information matrix are independent of the minimum a and maximum c of the distribution's range. The only non-zero term upon double differentiation of the log likelihood function with respect to the exponents α and β is the second derivative of the logarithm of the beta function, ln B(α, β), and this term is independent of the minimum a and maximum c of the distribution's range. Double differentiation of this term results in trigamma functions. The sections titled "Maximum likelihood", "Two unknown parameters" and "Four unknown parameters" also show this fact.
The Fisher information for N i.i.d. samples is N times the individual Fisher information.
The lower two diagonal entries of the Fisher information matrix, with respect to the parameter "a" and with respect to the parameter "c", are only defined for exponents α > 2 and β > 2 respectively. The Fisher information matrix component for the minimum "a" approaches infinity as the exponent α approaches 2 from above, and the Fisher information matrix component for the maximum "c" approaches infinity as the exponent β approaches 2 from above.
The Fisher information matrix for the four-parameter case does not depend on the individual values of the minimum "a" and the maximum "c", but only on the total range. Moreover, the components of the Fisher information matrix that depend on the range do so only through its inverse, so that the Fisher information decreases with increasing range.
The accompanying images show the Fisher information components involving the range parameters; images for the components corresponding to the log geometric variances and log geometric covariance are shown in the section titled "Geometric variance". All these Fisher information components look like a basin, with the "walls" of the basin located at low values of the parameters.
The following four-parameter-beta-distribution Fisher information components can be expressed in terms of the two-parameter X ~ Beta(α, β) expectations of the transformed ratio (1 − X)/X and of its mirror image X/(1 − X), scaled by the range (c − a), which may be helpful for interpretation:
These are also the expected values of the "inverted beta distribution" or beta prime distribution and its mirror image, scaled by the range.
Also, the following Fisher information components can be expressed in terms of the harmonic variances or of variances based on the ratio transformed variables as follows:
See section "Moments of linearly transformed, product and inverted random variables" for these expectations.
The determinant of Fisher's information matrix is of interest. From the expressions for the individual components, it follows that the determinant of Fisher's information matrix for the beta distribution with four parameters is:
Using Sylvester's criterion, and since the diagonal components with respect to "a" and "c" have singularities at α = 2 and β = 2, it follows that the Fisher information matrix for the four-parameter case is positive-definite for α > 2 and β > 2. Since for α > 2 and β > 2 the beta distribution is bell-shaped, it follows that the Fisher information matrix is positive-definite only for bell-shaped beta distributions, with inflection points located to either side of the mode. Thus, important well-known distributions belonging to the four-parameter beta distribution family, like the parabolic distribution and the uniform distribution, have Fisher information components that blow up in the four-parameter case. The four-parameter Wigner semicircle distribution and arcsine distribution have negative Fisher information determinants for the four-parameter case.

Bayesian inference

Beta distributions are used in Bayesian inference because they provide a family of conjugate prior probability distributions for binomial and geometric distributions. The domain of the beta distribution can be viewed as a probability, and in fact the beta distribution is often used to describe the distribution of a probability value p:
Examples of beta distributions used as prior probabilities to represent ignorance of prior parameter values in Bayesian inference are Beta(1,1), Beta(0,0) and Beta(1/2,1/2).

Rule of succession

A classic application of the beta distribution is the rule of succession, introduced in the 18th century by Pierre-Simon Laplace in the course of treating the sunrise problem. It states that, given s successes in n conditionally independent Bernoulli trials with probability p, the estimate of the expected value in the next trial is (s + 1)/(n + 2). This estimate is the expected value of the posterior distribution over p, namely Beta(s + 1, n − s + 1), which is given by Bayes' rule if one assumes a uniform prior probability over p and then observes that p generated s successes in n trials. Laplace's rule of succession has been criticized by prominent scientists. R. T. Cox described Laplace's application of the rule of succession to the sunrise problem as "a travesty of the proper use of the principle." Keynes remarks "indeed this is so foolish a theorem that to entertain it is discreditable." Karl Pearson showed that the probability that the next (n + 1) trials will be successes, after n successes in n trials, is only 50%, which has been considered too low by scientists like Jeffreys and unacceptable as a representation of the scientific process of experimentation to test a proposed scientific law. As pointed out by Jeffreys, Laplace's rule of succession establishes a high probability of success ((n + 1)/(n + 2)) in the next trial, but only a moderate probability that a further sample comparable in size will be equally successful. As pointed out by Perks, "The rule of succession itself is hard to accept. It assigns a probability to the next trial which implies the assumption that the actual run observed is an average run and that we are always at the end of an average run. It would, one would think, be more reasonable to assume that we were in the middle of an average run. Clearly a higher value for both probabilities is necessary if they are to accord with reasonable belief." These problems with Laplace's rule of succession motivated Haldane, Perks, Jeffreys and others to search for other forms of prior probability. According to Jaynes, the main problem with the rule of succession is that it is not valid when s = 0 or s = n.

Bayesian prior probability (Beta(1,1))

The beta distribution achieves maximum differential entropy for Beta(1,1): the uniform probability density, for which all values in the domain of the distribution have equal density. This uniform distribution Beta(1,1) was suggested by Thomas Bayes as the prior probability distribution to express ignorance about the correct prior distribution. This prior distribution was adopted by Pierre-Simon Laplace, and hence it was also known as the "Bayes-Laplace rule" or the "Laplace rule" of "inverse probability" in publications of the first half of the 20th century. In the later part of the 19th century and early part of the 20th century, scientists realized that the assumption of uniform "equal" probability density depended on the actual functions and parametrizations used. In particular, the behavior near the ends of distributions with finite support required particular attention. Keynes criticized the use of Bayes's uniform prior probability, under which all values between zero and one are equiprobable, as follows: "Thus experience, if it shows anything, shows that there is a very marked clustering of statistical ratios in the neighborhoods of zero and unity, of those for positive theories and for correlations between positive qualities in the neighborhood of zero, and of those for negative theories and for correlations between negative qualities in the neighborhood of unity."

Haldane's prior probability (Beta(0,0))

The Beta(0,0) distribution was proposed by J. B. S. Haldane, who suggested that the prior probability representing complete uncertainty should be proportional to p^(−1)(1 − p)^(−1). The function p^(−1)(1 − p)^(−1) can be viewed as the limit of the numerator of the beta distribution as both shape parameters approach zero: α, β → 0. The Beta function approaches infinity as both parameters approach zero, α, β → 0. Therefore, p^(−1)(1 − p)^(−1) divided by the Beta function approaches a 2-point Bernoulli distribution with equal probability 1/2 at each end, at 0 and 1, and nothing in between, as α, β → 0: a coin toss, with one face of the coin at 0 and the other face at 1. The Haldane prior probability distribution Beta(0,0) is an "improper prior" because its integration fails to strictly converge to 1 due to the singularities at each end. However, this is not an issue for computing posterior probabilities unless the sample size is very small. Furthermore, Zellner points out that on the log-odds scale, ln(p/(1 − p)), the Haldane prior is the uniformly flat prior. The fact that a uniform prior probability on the logit-transformed variable ln(p/(1 − p)) is equivalent to the Haldane prior on the domain [0, 1] was pointed out by Harold Jeffreys in the first edition of his book Theory of Probability. Jeffreys writes "Certainly if we take the Bayes-Laplace rule right up to the extremes we are led to results that do not correspond to anybody's way of thinking. The rule dx/(x(1 − x)) goes too far the other way. It would lead to the conclusion that if a sample is of one type with respect to some property there is a probability 1 that the whole population is of that type." The fact that "uniform" depends on the parametrization led Jeffreys to seek a form of prior that would be invariant under different parametrizations.
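A small numerical illustration of this limiting behavior (an added sketch; the tolerance 1e-6 and the values of ε are arbitrary choices): for Beta(ε, ε) with ε small, essentially all of the probability mass sits within a tiny distance of the endpoints 0 and 1, half at each end.

```python
# Sketch: Beta(eps, eps) concentrates half its mass near 0 and half near 1
# as eps -> 0, approaching a two-point Bernoulli(1/2) distribution.
from scipy.stats import beta

for eps in (0.1, 0.01, 0.001):
    mass_near_0 = beta.cdf(1e-6, eps, eps)
    mass_near_1 = 1.0 - beta.cdf(1.0 - 1e-6, eps, eps)
    print(eps, round(mass_near_0, 4), round(mass_near_1, 4))
# As eps decreases, both printed masses approach 0.5.
```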

Jeffreys' prior probability (Beta(1/2,1/2) for a Bernoulli or for a binomial distribution)

Harold Jeffreys proposed to use an uninformative prior probability measure that should be invariant under reparameterization: proportional to the square root of the determinant of Fisher's information matrix. For the Bernoulli distribution, this can be shown as follows: for a coin that is "heads" with probability p ∈ [0, 1] and is "tails" with probability 1 − p, for a given (H, T) ∈ {(0, 1), (1, 0)} the probability is p^H (1 − p)^T. Since T = 1 − H, the Bernoulli distribution is p^H (1 − p)^(1 − H). Considering p as the only parameter, it follows that the log likelihood for the Bernoulli distribution is
The Fisher information matrix has only one component, therefore:
Similarly, for the Binomial distribution with n Bernoulli trials, it can be shown that
Thus, for the Bernoulli and binomial distributions, Jeffreys prior is proportional to 1/√(p(1 − p)), which happens to be proportional to a beta distribution with domain variable x = p and shape parameters α = β = 1/2, the arcsine distribution:
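The derivation just sketched can be summarized compactly as follows (a restatement of the standard steps):

```latex
% Jeffreys prior for the Bernoulli parameter p.
\begin{align*}
\ln \mathcal{L}(p\mid H) &= H\ln p + (1-H)\ln(1-p),\\
\mathcal{I}(p) &= -\operatorname{E}\!\left[\frac{\partial^{2}\ln\mathcal{L}}{\partial p^{2}}\right]
               = \frac{\operatorname{E}[H]}{p^{2}} + \frac{1-\operatorname{E}[H]}{(1-p)^{2}}
               = \frac{1}{p(1-p)},\\
\pi_{J}(p) &\propto \sqrt{\mathcal{I}(p)} = p^{-1/2}(1-p)^{-1/2}
            \;\propto\; \operatorname{Beta}\!\left(\tfrac12,\tfrac12\right).
\end{align*}
```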
It will be shown in the next section that the normalizing constant for Jeffreys prior is immaterial to the final result because the normalizing constant cancels out in Bayes theorem for the posterior probability. Hence Beta(1/2,1/2) is used as the Jeffreys prior for both Bernoulli and binomial distributions. As shown in the next section, when using this expression as a prior probability times the likelihood in Bayes theorem, the posterior probability turns out to be a beta distribution. It is important to realize, however, that Jeffreys prior is proportional to 1/√(p(1 − p)) for the Bernoulli and binomial distributions, but not for the beta distribution. Jeffreys prior for the beta distribution is given by the square root of the determinant of Fisher's information matrix for the beta distribution, which, as shown in the section titled "Fisher information matrix", is a function of the trigamma function ψ1 of the shape parameters α and β as follows:
As previously discussed, Jeffreys prior for the Bernoulli and binomial distributions is proportional to the arcsine distribution Beta(1/2,1/2), a one-dimensional curve that looks like a basin as a function of the parameter p of the Bernoulli and binomial distributions. The walls of the basin are formed by p approaching the singularities at the ends p → 0 and p → 1, where Beta(1/2,1/2) approaches infinity. Jeffreys prior for the beta distribution is a 2-dimensional surface that looks like a basin with only two of its walls meeting at the corner α = β = 0, as a function of the shape parameters α and β of the beta distribution. The two adjoining walls of this 2-dimensional surface are formed by the shape parameters α and β approaching the singularities at α, β → 0. It has no walls for α, β → ∞ because in this case the determinant of Fisher's information matrix for the beta distribution approaches zero.
It will be shown in the next section that Jeffreys prior probability results in posterior probabilities that are intermediate between the posterior probability results of the Haldane and Bayes prior probabilities.
Jeffreys prior may be difficult to obtain analytically, and for some cases it does not exist at all. Berger, Bernardo and Sun, in a 2009 paper, defined a reference prior probability distribution that exists for the asymmetric triangular distribution. They cannot obtain a closed-form expression for their reference prior, but numerical calculations show it to be nearly perfectly fitted by the prior Beta(1/2, 1/2)
where θ is the vertex variable for the asymmetric triangular distribution with support [0, 1]. Berger et al. also give a heuristic argument that Beta(1/2, 1/2) could indeed be the exact Berger–Bernardo–Sun reference prior for the asymmetric triangular distribution. Therefore, Beta(1/2, 1/2) not only is Jeffreys prior for the Bernoulli and binomial distributions, but also seems to be the Berger–Bernardo–Sun reference prior for the asymmetric triangular distribution, a distribution used in project management and PERT analysis to describe the cost and duration of project tasks.
Clarke and Barron prove that, among continuous positive priors, Jeffreys prior asymptotically maximizes Shannon's mutual information between a sample of size n and the parameter, and therefore Jeffreys prior is the most uninformative prior. The proof rests on an examination of the Kullback–Leibler divergence between probability density functions for iid random variables.

Effect of different prior probability choices on the posterior beta distribution

If samples are drawn from the population of a random variable X that result in s successes and f failures in n Bernoulli trials, n = s + f, then the likelihood function for parameters s and f given x = p is the following binomial distribution:
If beliefs about prior probability information are reasonably well approximated by a beta distribution with parameters αPrior and βPrior, then:
According to Bayes' theorem for a continuous event space, the posterior probability is given by the product of the prior probability and the likelihood function, normalized so that the area under the curve equals one, as follows:
The binomial coefficient
appears both in the numerator and the denominator of the posterior probability, and it does not depend on the integration variable x; hence it cancels out and is irrelevant to the final result. Similarly, the normalizing factor for the prior probability, the beta function B(αPrior, βPrior), cancels out and is immaterial to the final result. The same posterior probability result can be obtained if one uses an un-normalized prior
because the normalizing factors all cancel out. Several authors thus use an un-normalized prior formula, since the normalization constant cancels out. The numerator of the posterior probability ends up being just the product of the prior probability and the likelihood function, and the denominator is its integral from zero to one. The beta function in the denominator, B(αPrior + s, βPrior + f), appears as a normalization constant to ensure that the total posterior probability integrates to unity.
The ratio s/n of the number of successes to the total number of trials is a sufficient statistic in the binomial case, which is relevant for the following results.
For the Bayes prior probability Beta(1,1), the posterior probability is Beta(s + 1, n − s + 1), with mean (s + 1)/(n + 2);
for the Jeffreys prior probability Beta(1/2,1/2), the posterior probability is Beta(s + 1/2, n − s + 1/2), with mean (s + 1/2)/(n + 1);
and for the Haldane prior probability Beta(0,0), the posterior probability is Beta(s, n − s), with mean s/n.
From the above expressions it follows that for s/n = 1/2 all three priors result in the identical location for the posterior probability mean = mode = 1/2. For s/n < 1/2, the means of the posterior probabilities, using the above priors, are ordered such that: mean for Bayes prior > mean for Jeffreys prior > mean for Haldane prior. For s/n > 1/2 the order of these inequalities is reversed, so that the Haldane prior probability results in the largest posterior mean. The Haldane prior probability Beta(0,0) results in a posterior probability density with mean identical to the ratio s/n of the number of successes to the total number of trials. Therefore, the Haldane prior results in a posterior probability with expected value in the next trial equal to the maximum likelihood estimate s/n. The Bayes prior probability Beta(1,1) results in a posterior probability density with mode identical to the ratio s/n.
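A short numerical sketch of these posterior updates (the data values are illustrative; the conjugate update Beta(αPrior + s, βPrior + f) is the one described above):

```python
# Sketch: posterior means under the Bayes (uniform), Jeffreys and Haldane
# priors for s successes in n Bernoulli trials, via conjugate updating.
from scipy.stats import beta

s, n = 3, 10          # illustrative data: 3 successes in 10 trials
f = n - s
priors = {"Bayes Beta(1,1)": (1.0, 1.0),
          "Jeffreys Beta(1/2,1/2)": (0.5, 0.5),
          "Haldane Beta(0,0)": (0.0, 0.0)}

for name, (a0, b0) in priors.items():
    post = beta(a0 + s, b0 + f)
    print(name, "posterior mean =", round(post.mean(), 4),
          "posterior var =", round(post.var(), 6))
# With s/n < 1/2 the posterior means are ordered Bayes > Jeffreys > Haldane = s/n.
```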
In the case that 100% of the trials have been successful (s = n), the Bayes prior probability Beta(1,1) results in a posterior expected value equal to the rule of succession (n + 1)/(n + 2), while the Haldane prior Beta(0,0) results in a posterior expected value of 1. Jeffreys prior probability results in a posterior expected value equal to (n + 1/2)/(n + 1). Perks points out: "This provides a new rule of succession and expresses a 'reasonable' position to take up, namely, that after an unbroken run of n successes we assume a probability for the next trial equivalent to the assumption that we are about half-way through an average run, i.e. that we expect a failure once in 2n + 2 trials. The Bayes–Laplace rule implies that we are about at the end of an average run or that we expect a failure once in n + 2 trials. The comparison clearly favours the new result from the point of view of 'reasonableness'."
Conversely, in the case that 100% of the trials have resulted in failure (s = 0), the Bayes prior probability Beta(1,1) results in a posterior expected value for success in the next trial equal to 1/(n + 2), while the Haldane prior Beta(0,0) results in a posterior expected value of success in the next trial of 0. Jeffreys prior probability results in a posterior expected value for success in the next trial equal to 1/(2n + 2), which Perks points out "is a much more reasonably remote result than the Bayes-Laplace result 1/(n + 2)".
Jaynes questions the use of these formulas for the cases s = 0 or s = n, because the integrals do not converge. In practice, the conditions 0 < s < n are usually met. As remarked in the section on the rule of succession, K. Pearson showed that after n successes in n trials the posterior probability (based on the Bayes Beta(1,1) prior) that the next (n + 1) trials will all be successes is exactly 1/2, whatever the value of n. Based on the Haldane Beta(0,0) distribution as the prior probability, this posterior probability is 1. Perks shows that, for what is now known as the Jeffreys prior, this probability is ((n + 1/2)/(n + 1))((n + 3/2)/(n + 2))⋯((2n + 1/2)/(2n + 1)), which for n = 1, 2, 3 gives 15/24, 315/480, 9009/13440, rapidly approaching a limiting value of 1/√2 ≈ 0.7071 as n tends to infinity. Perks remarks that what is now known as the Jeffreys prior "is clearly more 'reasonable' than either the Bayes-Laplace result or the result on the alternative rule rejected by Jeffreys which gives certainty as the probability. It clearly provides a very much better correspondence with the process of induction. Whether it is 'absolutely' reasonable for the purpose, i.e. whether it is yet large enough, without the absurdity of reaching unity, is a matter for others to decide. But it must be realized that the result depends on the assumption of complete indifference and absence of knowledge prior to the sampling experiment."
Following are the variances of the posterior distribution obtained with these three prior probability distributions:
for the Bayes prior probability Beta(1,1), the posterior variance is (s + 1)(n − s + 1)/((n + 2)²(n + 3));
for the Jeffreys prior probability Beta(1/2,1/2), the posterior variance is (s + 1/2)(n − s + 1/2)/((n + 1)²(n + 2));
and for the Haldane prior probability Beta(0,0), the posterior variance is s(n − s)/(n²(n + 1)) = (s/n)(1 − s/n)/(n + 1).
So, as remarked by Silvey, for large n, the variance is small and hence the posterior distribution is highly concentrated, whereas the assumed prior distribution was very diffuse. This is in accord with what one would hope for, as vague prior knowledge is transformed into more precise posterior knowledge by an informative experiment. For small n the Haldane Beta(0,0) prior results in the largest posterior variance while the Bayes Beta(1,1) prior results in the most concentrated posterior. The Jeffreys prior Beta(1/2,1/2) results in a posterior variance in between the other two. As n increases, the variance rapidly decreases so that the posterior variance for all three priors converges to approximately the same value. Recalling the previous result that the Haldane prior probability Beta(0,0) results in a posterior probability density with mean identical to the ratio s/n of the number of successes to the total number of trials, it follows from the above expression that also the Haldane prior Beta(0,0) results in a posterior with variance identical to the variance expressed in terms of the maximum likelihood estimate s/n and sample size ν:
with the mean μ = s/n and the sample size ν = n.
In Bayesian inference, using a prior distribution Beta(αPrior, βPrior) prior to a binomial distribution is equivalent to adding (αPrior − 1) pseudo-observations of "success" and (βPrior − 1) pseudo-observations of "failure" to the actual number of successes and failures observed, then estimating the parameter p of the binomial distribution by the proportion of successes over both real and pseudo-observations. A uniform prior Beta(1,1) does not add (or subtract) any pseudo-observations, since for Beta(1,1) it follows that αPrior − 1 = 0 and βPrior − 1 = 0. The Haldane prior Beta(0,0) subtracts one pseudo-observation from each, and the Jeffreys prior Beta(1/2,1/2) subtracts 1/2 pseudo-observation of success and an equal number of failure. This subtraction has the effect of smoothing out the posterior distribution. If the proportion of successes is not 50%, values of αPrior and βPrior less than 1 favor sparsity, i.e. distributions where the parameter p is closer to either 0 or 1. In effect, values of αPrior and βPrior between 0 and 1, when operating together, function as a concentration parameter.
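A small sketch of this pseudo-observation view (the data values are illustrative), estimating p by the posterior mode, i.e. the proportion of successes among real plus pseudo-observations:

```python
# Sketch: the prior Beta(a0, b0) acts like (a0 - 1) pseudo-successes and
# (b0 - 1) pseudo-failures when p is estimated by the posterior mode.
def posterior_mode(s, f, a0, b0):
    # Mode of Beta(a0 + s, b0 + f), valid when both arguments exceed 1.
    return (a0 + s - 1) / (a0 + b0 + s + f - 2)

s, f = 3, 7  # illustrative data: 3 successes, 7 failures
print(posterior_mode(s, f, 1.0, 1.0))  # uniform prior: 3/10, no pseudo-counts
print(posterior_mode(s, f, 0.0, 0.0))  # Haldane prior: one subtracted from each, 2/8, pulled toward 0
print(posterior_mode(s, f, 2.0, 2.0))  # adds one pseudo-success and one pseudo-failure: 4/12
```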
The accompanying plots show the posterior probability density functions for various sample sizes, numbers of successes and choices of prior. The first plot shows the symmetric cases, with as many successes as failures and hence mean = mode = 1/2, and the second plot shows skewed cases. The images show that there is little difference between the priors for a posterior with a sample size of 50; significant differences appear for very small sample sizes. The skewed cases show a larger effect from the choice of prior, at small sample size, than the symmetric cases. For symmetric distributions, the Bayes prior Beta(1,1) results in the most "peaky" and highest posterior distributions and the Haldane prior Beta(0,0) results in the flattest and lowest-peaked distribution, with the Jeffreys prior Beta(1/2,1/2) lying in between them. For nearly symmetric, not too skewed distributions the effect of the priors is similar. For very small sample size and a skewed distribution, the Haldane prior can result in a reverse-J-shaped distribution with a singularity at the left end. However, this happens only in degenerate cases and is not an issue for reasonable sample sizes.
In Chapter 12 of his book, Jaynes asserts that the Haldane prior Beta(0,0) describes a prior state of knowledge of complete ignorance, where we are not even sure whether it is physically possible for an experiment to yield either a success or a failure, while the Bayes prior Beta(1,1) applies if one knows that both binary outcomes are possible. Jaynes states: "interpret the Bayes-Laplace prior as describing not a state of complete ignorance, but the state of knowledge in which we have observed one success and one failure...once we have seen at least one success and one failure, then we know that the experiment is a true binary one, in the sense of physical possibility." Jaynes does not specifically discuss the Jeffreys prior Beta(1/2,1/2). However, it follows from the above discussion that the Jeffreys prior Beta(1/2,1/2) represents a state of knowledge in between the Haldane Beta(0,0) and Bayes Beta(1,1) priors.
Similarly, Karl Pearson in his 1892 book The Grammar of Science maintained that the Bayes uniform prior was not a complete ignorance prior, and that it should be used only when prior information justified us to "distribute our ignorance equally". K. Pearson wrote: "Yet the only supposition that we appear to have made is this: that, knowing nothing of nature, routine and anomy are to be considered as equally likely to occur. Now we were not really justified in making even this assumption, for it involves a knowledge that we do not possess regarding nature. We use our experience of the constitution and action of coins in general to assert that heads and tails are equally probable, but we have no right to assert before experience that, as we know nothing of nature, routine and breach are equally probable. In our ignorance we ought to consider before experience that nature may consist of all routines, all anomies, or a mixture of the two in any proportion whatever, and that all such are equally probable. Which of these constitutions after experience is the most probable must clearly depend on what that experience has been like."
If there is sufficient sampling data, and the posterior probability mode is not located at one of the extremes of the domain, the three priors of Bayes, Jeffreys and Haldane should yield similar posterior probability densities. Otherwise, as Gelman et al. point out, "if so few data are available that the choice of noninformative prior distribution makes a difference, one should put relevant information into the prior distribution", or as Berger points out, "when different reasonable priors yield substantially different answers, can it be right to state that there is a single answer? Would it not be better to admit that there is scientific uncertainty, with the conclusion depending on prior beliefs?"

Occurrence and applications

Order statistics

The beta distribution has an important application in the theory of order statistics. A basic result is that the distribution of the kth smallest of a sample of size n from a continuous uniform distribution has a beta distribution. This result is summarized as: the kth smallest value U(k) satisfies U(k) ~ Beta(k, n + 1 − k).
From this, and application of the theory related to the probability integral transform, the distribution of any individual order statistic from any continuous distribution can be derived.
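A minimal simulation check of this order-statistic result (the sample sizes and the choice of k are illustrative):

```python
# Sketch: the k-th smallest of n Uniform(0,1) variates follows Beta(k, n+1-k).
import numpy as np
from scipy.stats import beta, kstest

rng = np.random.default_rng(2)
n, k = 10, 3
samples = np.sort(rng.uniform(size=(50_000, n)), axis=1)[:, k - 1]  # k-th smallest

print(samples.mean(), beta.mean(k, n + 1 - k))    # both near k/(n+1) = 3/11
print(kstest(samples, beta(k, n + 1 - k).cdf))     # large p-value expected
```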

Subjective logic

In standard logic, propositions are considered to be either true or false. In contradistinction, subjective logic assumes that humans cannot determine with absolute certainty whether a proposition about the real world is absolutely true or false. In subjective logic the a posteriori probability estimates of binary events can be represented by beta distributions.

Wavelet analysis

A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" that promptly decays. Wavelets can be used to extract information from many different kinds of data, including – but certainly not limited to – audio signals and images. Thus, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency. Therefore, standard Fourier Transforms are only applicable to stationary processes, while wavelets are applicable to non-stationary processes. Continuous wavelets can be constructed based on the beta distribution. Beta wavelets can be viewed as a soft variety of Haar wavelets whose shape is fine-tuned by two shape parameters α and β.

Project management: task cost and schedule modeling

The beta distribution can be used to model events which are constrained to take place within an interval defined by a minimum and maximum value. For this reason, the beta distribution — along with the triangular distribution — is used extensively in PERT, critical path method, Joint Cost Schedule Modeling and other project management/control systems to describe the time to completion and the cost of a task. In project management, shorthand computations are widely used to estimate the mean and standard deviation of the beta distribution:
where a is the minimum, c is the maximum, and b is the most likely value.
The above estimate for the mean, μ(X) = (a + 4b + c)/6, is known as the PERT three-point estimation and it is exact for either of the following values of β:
or
skewness =, and excess kurtosis =
The above estimate for the standard deviation, σ(X) = (c − a)/6, is exact for either of the following values of α and β:
Otherwise, these can be poor approximations for beta distributions with other values of α and β, exhibiting average errors of 40% in the mean and 549% in the variance.
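A small sketch of these shorthand computations (the mapping from (a, b, c) to the shape parameters used below, α = 1 + 4(b − a)/(c − a) and β = 1 + 4(c − b)/(c − a), is the common PERT convention and is an assumption relative to the text):

```python
# Sketch: PERT three-point estimates vs. the exact mean of the fitted beta.
a, b, c = 2.0, 5.0, 14.0            # minimum, most likely value (mode), maximum

pert_mean = (a + 4 * b + c) / 6      # shorthand estimate of the mean
pert_std = (c - a) / 6               # shorthand estimate of the standard deviation

# Common PERT convention for the shape parameters (an assumption here):
alpha = 1 + 4 * (b - a) / (c - a)
beta_ = 1 + 4 * (c - b) / (c - a)
exact_mean = a + (c - a) * alpha / (alpha + beta_)

print(pert_mean, pert_std, exact_mean)  # with this convention the two means agree
```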

Computational methods

Generating beta-distributed random variates

If X and Y are independent, with X ~ Gamma(α, θ) and Y ~ Gamma(β, θ), then X/(X + Y) ~ Beta(α, β).
So one algorithm for generating beta variates is to generate X/(X + Y), where X is a gamma variate with parameters (α, 1) and Y is an independent gamma variate with parameters (β, 1). In fact, here X/(X + Y) and X + Y are independent, and X + Y ~ Gamma(α + β, 1). If Z ~ Gamma(γ, 1) is independent of X and Y, then (X + Y)/(X + Y + Z) ~ Beta(α + β, γ) and (X + Y)/(X + Y + Z) is independent of X/(X + Y). This shows that the product of independent Beta(α, β) and Beta(α + β, γ) random variables is a Beta(α, β + γ) random variable.
Also, the kth order statistic of n uniformly distributed variates is Beta(k, n + 1 − k), so an alternative if α and β are small integers is to generate α + β − 1 uniform variates and choose the α-th smallest.
Another way to generate the beta distribution is by a Pólya urn model. According to this method, one starts with an "urn" containing α "black" balls and β "white" balls and draws uniformly with replacement. On every trial an additional ball is added whose color matches that of the last ball drawn. Asymptotically, the proportion of black and white balls will be distributed according to the Beta(α, β) distribution, where each repetition of the experiment will produce a different value.
It is also possible to use inverse transform sampling.
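A minimal sketch of the gamma-ratio method described above, with the inverse-transform variant via the beta quantile (ppf) function shown for comparison (the parameter values are illustrative):

```python
# Sketch: generating Beta(alpha, beta) variates from two independent gammas,
# and via inverse transform sampling of the quantile (ppf) function.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)
alpha, beta_shape, size = 2.0, 5.0, 100_000

# Gamma-ratio method: X ~ Gamma(alpha, 1), Y ~ Gamma(beta, 1), X/(X+Y) ~ Beta(alpha, beta).
x = rng.gamma(alpha, size=size)
y = rng.gamma(beta_shape, size=size)
gamma_ratio_samples = x / (x + y)

# Inverse transform sampling: apply the beta quantile function to uniform variates.
inverse_transform_samples = beta.ppf(rng.uniform(size=size), alpha, beta_shape)

print(gamma_ratio_samples.mean(), inverse_transform_samples.mean(),
      alpha / (alpha + beta_shape))  # all close to 2/7
```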

History

The first systematic modern discussion of the beta distribution is probably due to Karl Pearson FRS, an influential English mathematician who has been credited with establishing the discipline
of mathematical statistics. In Pearson's papers the beta distribution is couched as a solution of a differential equation: Pearson's Type I distribution, to which it is essentially identical except for arbitrary shifting and re-scaling. In fact, in several English books and journal articles in the few decades prior to World War II, it was common to refer to the beta distribution as Pearson's Type I distribution. William P. Elderton in his 1906 monograph "Frequency curves and correlation" further analyzes the beta distribution as Pearson's Type I distribution, including a full discussion of the method of moments for the four-parameter case, and diagrams of U-shaped, J-shaped, twisted J-shaped, "cocked-hat" shapes, horizontal and angled straight-line cases. Elderton wrote "I am chiefly indebted to Professor Pearson, but the indebtedness is of a kind for which it is impossible to offer formal thanks." Elderton in his 1906 monograph provides an impressive amount of information on the beta distribution, including equations for the origin of the distribution chosen to be the mode, as well as for other Pearson distributions: types I through VII. Elderton also included a number of appendixes, including one appendix on the beta and gamma functions. In later editions, Elderton added equations for the origin of the distribution chosen to be the mean, and analysis of Pearson distributions VIII through XII.
As remarked by Bowman and Shenton "Fisher and Pearson had a difference of opinion in the approach to estimation, in particular relating to moments and maximum likelihood in the case of the Beta distribution." Also according to Bowman and Shenton, "the case of a Type I model being the center of the controversy was pure serendipity. A more difficult model of 4 parameters would have been hard to find."
Ronald Fisher was one of the giants of statistics in the first half of the 20th century, and his long running public conflict with Karl Pearson can be followed in a number of articles in prestigious journals. For example, concerning the estimation of the four parameters for the beta distribution, and Fisher's criticism of Pearson's method of moments as being arbitrary, see Pearson's article "Method of moments and method of maximum likelihood" in which Pearson writes "I read which as far as I am aware is the only case at present published of the application of Professor Fisher's method. To my astonishment that method depends on first working out the constants of the frequency curve by the Method of Moments and then superposing on it, by what Fisher terms "the Method of Maximum Likelihood" a further approximation to obtain, what he holds, he will thus get, "more efficient values" of the curve constants."
David and Edwards's treatise on the history of statistics cites the first modern treatment of the beta distribution, in 1911, using the beta designation that has become standard, due to Corrado Gini, an Italian statistician, demographer, and sociologist, who developed the Gini coefficient. N. L. Johnson and S. Kotz, in their comprehensive and very informative monograph on leading historical personalities in statistical sciences, credit Corrado Gini as "an early Bayesian...who dealt with the problem of eliciting the parameters of an initial Beta distribution, by singling out techniques which anticipated the advent of the so-called empirical Bayes approach." Bayes, in a posthumous paper published in 1763 by Richard Price, obtained a beta distribution as the density of the probability of success in Bernoulli trials, but the paper does not analyze any of the moments of the beta distribution or discuss any of its properties.