Bessel's correction


In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation. However, the correction often increases the mean squared error in these estimations. This technique is named after Friedrich Bessel.
In estimating the population variance from a sample when the population mean is unknown, the uncorrected sample variance is the mean of the squares of deviations of sample values from the sample mean. In this case, the sample variance is a biased estimator of the population variance.
Multiplying the uncorrected sample variance by the factor
$$\frac{n}{n-1}$$
gives an unbiased estimator of the population variance. In some literature, the above factor is called Bessel's correction.
One can understand Bessel's correction as the degrees of freedom in the residuals vector
$$(x_1 - \bar{x},\; \ldots,\; x_n - \bar{x}),$$
where $\bar{x}$ is the sample mean. While there are n independent observations in the sample, there are only n − 1 independent residuals, as they sum to 0. For a more intuitive explanation of the need for Bessel's correction, see the Intuition section below.
More generally, Bessel's correction is an approach to reducing the bias due to finite sample size. Such finite-sample bias correction is also needed for other estimates, such as skewness and kurtosis, but in these cases the inaccuracies are often significantly larger. To fully remove such bias, a more complex multi-parameter estimation is needed. For instance, a correct correction for the standard deviation depends on the kurtosis, but this in turn has a finite-sample bias that depends on the standard deviation, i.e. both estimations have to be merged.
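To make the effect of the correction concrete, here is a minimal Monte Carlo sketch (not part of the original article; the distribution, sample size, and trial count are arbitrary illustrative choices, and NumPy is assumed to be available). Averaged over many small samples, dividing the sum of squared deviations by n underestimates the population variance, while dividing by n − 1 does not:

```python
# Minimal Monte Carlo sketch: average of the uncorrected (divide by n) and
# corrected (divide by n - 1) variance estimates over many small samples.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
sigma2 = 4.0                                    # true population variance
samples = rng.normal(loc=10.0, scale=2.0, size=(trials, n))

xbar = samples.mean(axis=1, keepdims=True)
ss = ((samples - xbar) ** 2).sum(axis=1)        # sums of squared deviations

print("true variance:          ", sigma2)
print("average of ss / n:      ", (ss / n).mean())        # ~ (n - 1)/n * sigma2 = 3.2
print("average of ss / (n - 1):", (ss / (n - 1)).mean())  # ~ sigma2 = 4.0
```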

Caveats

There are three caveats to consider regarding Bessel's correction:
  1. It does not yield an unbiased estimator of standard deviation.
  2. The corrected estimator often has a higher mean squared error than the uncorrected estimator. Furthermore, there is no population distribution for which it has the minimum MSE because a different scale factor can always be chosen to minimize MSE.
  3. It is only necessary when the population mean is unknown (and estimated by the sample mean). In practice, this is generally the case.
Firstly, while the sample variance is an unbiased estimator of the population variance, its square root, the sample standard deviation, is a biased estimate of the population standard deviation; because the square root is a concave function, the bias is downward, by Jensen's inequality. There is no general formula for an unbiased estimator of the population standard deviation, though there are correction factors for particular distributions, such as the normal; see unbiased estimation of standard deviation for details. An approximation for the exact correction factor for the normal distribution is given by using n − 1.5 in the formula: the bias decays quadratically.
Secondly, the unbiased estimator does not minimize mean squared error, and generally has worse MSE than the uncorrected estimator. MSE can be minimized by using a different factor. The optimal value depends on excess kurtosis, as discussed in mean squared error: variance; for the normal distribution this is optimized by dividing by n + 1.
Thirdly, Bessel's correction is only necessary when the population mean is unknown, and one is estimating both population mean and population variance from a given sample, using the sample mean to estimate the population mean. In that case there are n degrees of freedom in a sample of n points, and simultaneous estimation of mean and variance means one degree of freedom goes to the sample mean and the remaining n − 1 degrees of freedom go to the sample variance. However, if the population mean is known, then the deviations of the observations from the population mean have n degrees of freedom and Bessel's correction is not applicable.
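The following sketch illustrates the first two caveats for normally distributed data (an illustrative simulation only, not part of the original article; the sample size and divisors are arbitrary choices, and NumPy is assumed). It compares bias and mean squared error of the variance estimator for the divisors n − 1, n, and n + 1, and the bias of the standard deviation with n − 1 versus n − 1.5:

```python
# Illustrative sketch (normal data only): bias and MSE of the variance estimator
# with divisors n - 1, n, n + 1, and bias of the standard deviation with n - 1 vs n - 1.5.
import numpy as np

rng = np.random.default_rng(1)
n, trials, sigma2 = 10, 400_000, 1.0
x = rng.normal(0.0, 1.0, size=(trials, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for d in (n - 1, n, n + 1):
    est = ss / d
    print(f"variance, divisor {d:>2}: bias = {est.mean() - sigma2:+.4f}, "
          f"MSE = {((est - sigma2) ** 2).mean():.4f}")   # n + 1 gives the smallest MSE here

for d in (n - 1, n - 1.5):
    sd = np.sqrt(ss / d)
    print(f"std. dev., divisor {d}: bias = {sd.mean() - 1.0:+.5f}")  # n - 1.5 is nearly unbiased
```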

Terminology

This correction is so common that the terms "sample variance" and "sample standard deviation" are frequently used to mean the corrected estimators, using n − 1. However, caution is needed: some calculators and software packages may provide for both or only the more unusual formulation. This article uses the following symbols and definitions:

  $\mu$ is the population mean
  $\bar{x}$ is the sample mean
  $\sigma^2$ is the population variance
  $s_n^2$ is the biased sample variance (i.e. using the sample mean and dividing by n)
  $s^2$ is the unbiased sample variance (i.e. using the sample mean and dividing by n − 1)

The standard deviations will then be the square roots of the respective variances. Since the square root introduces bias, the terminology "uncorrected" and "corrected" is preferred for the standard deviation estimators:

  $s_n$ is the uncorrected sample standard deviation (i.e. without Bessel's correction)
  $s$ is the corrected sample standard deviation (i.e. with Bessel's correction), which is less biased, but still biased
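Software behaviour illustrates the point. For example, NumPy computes the uncorrected (divide-by-n) estimators by default and applies Bessel's correction only when the ddof ("delta degrees of freedom") argument is set to 1; the snippet below simply shows the distinction:

```python
# NumPy uses the uncorrected estimator by default (ddof=0); ddof=1 applies
# Bessel's correction.
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

print(np.var(x))          # uncorrected sample variance: divides by n       -> 4.0
print(np.var(x, ddof=1))  # corrected sample variance: divides by n - 1     -> ~4.571
print(np.std(x, ddof=1))  # "corrected" standard deviation (still slightly biased)
```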

Formula

The sample mean is given by
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$
The biased sample variance is then written:
$$s_n^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2 = \frac{\sum_{i=1}^{n} x_i^2}{n} - \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n^2}$$
and the unbiased sample variance is written:
$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2 = \frac{\sum_{i=1}^{n} x_i^2}{n-1} - \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{(n-1)\,n} = \left(\frac{n}{n-1}\right) s_n^2.$$
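A direct transcription of these three formulas in plain Python (a minimal sketch; the function and variable names are illustrative, not from the article) might look like:

```python
# Direct transcription of the sample mean, the biased (divide-by-n) variance,
# and the unbiased (divide-by-(n - 1)) variance.
def sample_mean(xs):
    return sum(xs) / len(xs)

def biased_variance(xs):
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def unbiased_variance(xs):
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sample_mean(data), biased_variance(data), unbiased_variance(data))
# unbiased_variance(data) equals biased_variance(data) * n / (n - 1)
```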

Proof of correctness – Alternative 1


As a background fact, we use the identity $E[x^2] = \mu^2 + \sigma^2$, which follows from the definition of the standard deviation and linearity of expectation.
A very helpful observation is that for any distribution, the variance equals half the expected value of $(x_1 - x_2)^2$ when $x_1, x_2$ are an independent sample from that distribution. To prove this observation we will use that $E[x_1 x_2] = E[x_1]\,E[x_2]$ (which follows from independence) as well as linearity of expectation:
$$E\left[(x_1 - x_2)^2\right] = E\left[x_1^2\right] - 2\,E[x_1 x_2] + E\left[x_2^2\right] = (\sigma^2 + \mu^2) - 2\mu^2 + (\sigma^2 + \mu^2) = 2\sigma^2.$$
Now that the observation is proven, it suffices to show that the expected squared difference of two observations from the sample population equals $(n-1)/n$ times the expected squared difference of two observations from the original distribution. To see this, note that when we pick $x_u$ and $x_v$ via u, v being integers selected independently and uniformly from 1 to n, a fraction $1/n$ of the time we will have u = v, and then the sampled squared difference is zero independent of the original distribution. The remaining $1 - 1/n$ of the time, the expected value of $(x_u - x_v)^2$ is the expected squared difference between two independent observations from the original distribution. Therefore, dividing the sample expected squared difference by $1 - 1/n$, or equivalently multiplying by $1/(1 - 1/n) = n/(n-1)$, gives an unbiased estimate of the original expected squared difference.
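The pairwise-difference argument can be checked numerically. In this sketch (the data are arbitrary; any sample works), half the average of $(x_u - x_v)^2$ over all n² ordered pairs, including the u = v pairs, equals the uncorrected sample variance, and rescaling by n/(n − 1) recovers the corrected estimator:

```python
# Check of the pairwise-difference identity on an arbitrary sample:
# half the mean of (x_u - x_v)^2 over all n^2 ordered pairs (u = v included)
# equals the biased sample variance; scaling by n/(n - 1) gives the unbiased one.
x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(x)
mean = sum(x) / n

half_pair_mean = sum((a - b) ** 2 for a in x for b in x) / (2 * n * n)
biased = sum((xi - mean) ** 2 for xi in x) / n
unbiased = sum((xi - mean) ** 2 for xi in x) / (n - 1)

print(half_pair_mean, biased)                    # identical
print(half_pair_mean * n / (n - 1), unbiased)    # identical
```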

Proof of correctness – Alternative 2


Recycling an identity for variance,
$$\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2 = \frac{1}{n}\sum_{i=1}^{n} x_i^2 - \bar{x}^2$$
so
$$E\left[\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2\right] = \frac{1}{n}\sum_{i=1}^{n} E\left[x_i^2\right] - E\left[\bar{x}^2\right]$$
and by definition,
$$E\left[X^2\right] = \operatorname{Var}(X) + \left(E[X]\right)^2 \quad\text{for any random variable } X.$$
Note that, since x1, x2, . . . , xn are a random sample from a distribution with variance σ2, it follows that for each i = 1, 2, . . . , n:
$$E\left[x_i^2\right] = \sigma^2 + \mu^2$$
and also
$$E\left[\bar{x}^2\right] = \operatorname{Var}(\bar{x}) + \left(E[\bar{x}]\right)^2 = \frac{\sigma^2}{n} + \mu^2.$$
This is a property of the variance of uncorrelated variables, arising from the Bienaymé formula. The required result is then obtained by substituting these two formulae:
$$E\left[\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2\right] = \left(\sigma^2 + \mu^2\right) - \left(\frac{\sigma^2}{n} + \mu^2\right) = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n}\,\sigma^2.$$
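The two expectations substituted in the last step can be checked by simulation (an illustrative sketch only; the distribution, parameters, and use of NumPy are assumptions, not part of the proof):

```python
# Monte Carlo check of the two expectations used in the substitution:
#   E[x_i^2]  = sigma^2 + mu^2
#   E[xbar^2] = sigma^2 / n + mu^2   (since Var(xbar) = sigma^2 / n, by the Bienaymé formula)
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, trials = 3.0, 2.0, 5, 200_000
x = rng.normal(mu, sigma, size=(trials, n))

print("E[x_i^2]  ~", (x[:, 0] ** 2).mean(),        "expected", sigma**2 + mu**2)
print("E[xbar^2] ~", (x.mean(axis=1) ** 2).mean(), "expected", sigma**2 / n + mu**2)
```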

Proof of correctness – Alternative 3


The expected discrepancy between the biased estimator and the true variance is
$$\begin{aligned}
E\left[\sigma^2 - s_n^2\right] &= E\left[\frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2 - \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\right] \\
&= E\left[\frac{1}{n}\sum_{i=1}^{n}\left(2 x_i(\bar{x} - \mu) + \mu^2 - \bar{x}^2\right)\right] \\
&= E\left[2\bar{x}(\bar{x} - \mu) + \mu^2 - \bar{x}^2\right] \\
&= E\left[(\bar{x} - \mu)^2\right] = \operatorname{Var}(\bar{x}) = \frac{\sigma^2}{n}.
\end{aligned}$$
So, the expected value of the biased estimator will be
$$E\left[s_n^2\right] = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n}\,\sigma^2.$$
So, an unbiased estimator should be given by
$$s^2 = \frac{n}{n-1}\,s_n^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2.$$
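The middle step above, that the gap between the two averages of squared deviations collapses to $(\bar{x} - \mu)^2$, is an algebraic identity that holds for every individual sample, not just in expectation. A short check (the sample and μ are arbitrary, chosen only for illustration):

```python
# Check, on one arbitrary sample and an arbitrary mu, that
#   (1/n) * sum((x_i - mu)^2) - (1/n) * sum((x_i - xbar)^2) == (xbar - mu)^2
x = [1.0, 2.0, 6.0, 7.0, 9.0]
mu = 4.2                         # any value works; the identity is purely algebraic
n = len(x)
xbar = sum(x) / n

lhs = sum((xi - mu) ** 2 for xi in x) / n - sum((xi - xbar) ** 2 for xi in x) / n
rhs = (xbar - mu) ** 2
print(lhs, rhs)                  # equal up to floating-point rounding
```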

Intuition

In the biased estimator, by using the sample mean instead of the true mean, you are underestimating each $x_i - \mu$ by $\bar{x} - \mu$. We know that the variance of a sum is the sum of the variances. So, to find the discrepancy between the biased estimator and the true variance, we just need to find the expected value of $(\bar{x} - \mu)^2$.
This is just the variance of the sample mean, which is $\sigma^2/n$. So, we expect that the biased estimator underestimates $\sigma^2$ by $\sigma^2/n$, and so the biased estimator = $(1 - 1/n)$ × the unbiased estimator = $(n-1)/n$ × the unbiased estimator.
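The size of that underestimate can be checked by simulating the sample mean (an illustrative sketch; the parameters and use of NumPy are assumptions): the variance of the sample mean comes out as $\sigma^2/n$, which is exactly the average shortfall of the biased estimator:

```python
# Monte Carlo check: Var(xbar) = sigma^2 / n, which is also the average amount
# by which the biased estimator falls short of sigma^2.
import numpy as np

rng = np.random.default_rng(3)
sigma2, n, trials = 4.0, 8, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))

xbar = x.mean(axis=1)
biased = x.var(axis=1, ddof=0)          # uncorrected estimator, divides by n

print("Var(xbar)           ~", xbar.var(), "expected", sigma2 / n)
print("sigma^2 - E[biased] ~", sigma2 - biased.mean(), "expected", sigma2 / n)
```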