Empirical distribution function


In statistics, an empirical distribution function is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by $1/n$ at each of the $n$ data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.
The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.

Definition

Let $(X_1, \dots, X_n)$ be independent, identically distributed real random variables with the common cumulative distribution function $F(t)$. Then the empirical distribution function is defined as

$$\widehat{F}_n(t) = \frac{\text{number of elements in the sample} \le t}{n} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{X_i \le t},$$

where $\mathbf{1}_{A}$ is the indicator of event $A$. For a fixed $t$, the indicator $\mathbf{1}_{X_i \le t}$ is a Bernoulli random variable with parameter $p = F(t)$; hence $n \widehat{F}_n(t)$ is a binomial random variable with mean $n F(t)$ and variance $n F(t)(1 - F(t))$. This implies that $\widehat{F}_n(t)$ is an unbiased estimator for $F(t)$.
However, in some textbooks, the definition is given as

$$\widehat{F}_n(t) = \frac{1}{n+1} \sum_{i=1}^{n} \mathbf{1}_{X_i \le t}.$$
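As a minimal sketch, both definitions can be evaluated directly with NumPy; the helper names ecdf and ecdf_alt below are our own, not part of any library.

import numpy as np

def ecdf(sample, t):
    # Fraction of observations less than or equal to t (the 1/n definition).
    sample = np.asarray(sample)
    return np.mean(sample <= t)

def ecdf_alt(sample, t):
    # Alternative textbook definition with an n + 1 denominator.
    sample = np.asarray(sample)
    return np.sum(sample <= t) / (len(sample) + 1)

sample = np.array([3.0, 1.0, 2.0, 5.0])
print(ecdf(sample, 2.5))      # 0.5  (two of four observations are <= 2.5)
print(ecdf_alt(sample, 2.5))  # 0.4  (2 / (4 + 1))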

Mean

The mean of the empirical distribution is an unbiased estimator of the mean of the population distribution:

$$E_n(X) = \frac{1}{n} \sum_{i=1}^{n} x_i,$$

which is more commonly denoted $\bar{x}$.

Variance

The variance of the empirical distribution times $\frac{n}{n-1}$ is an unbiased estimator of the variance of the population distribution:

$$\operatorname{Var}(X) = E\left[ (X - E[X])^2 \right] = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2.$$
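A short NumPy check of the last two sections, assuming nothing beyond NumPy itself: the variance of the empirical distribution rescaled by $n/(n-1)$ matches the usual unbiased sample variance (ddof=1).

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=1000)
n = len(x)

mean_empirical = x.mean()                        # unbiased for the population mean
var_plugin = np.mean((x - mean_empirical) ** 2)  # variance of the empirical distribution
var_unbiased = var_plugin * n / (n - 1)          # rescaled: unbiased for the population variance

print(np.isclose(var_unbiased, x.var(ddof=1)))   # True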

Mean squared error

The mean squared error for the empirical distribution is as follows:

$$\operatorname{MSE} = \frac{1}{n} \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2 = \operatorname{Var}_{\hat{\theta}}(\hat{\theta}) + \operatorname{Bias}(\hat{\theta}, \theta)^2,$$

where $\hat{\theta}$ is an estimator and $\theta$ an unknown parameter.
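A Monte Carlo sketch of the bias–variance identity above; the choice of estimator (the plug-in variance, which is biased) is ours, purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
theta = 4.0  # true population variance (sigma = 2)
n = 20

# Many replications of the plug-in variance estimator.
samples = rng.normal(0.0, 2.0, size=(100_000, n))
estimates = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)

mse = np.mean((estimates - theta) ** 2)
decomposed = estimates.var() + (estimates.mean() - theta) ** 2
print(np.isclose(mse, decomposed))  # True: MSE = variance + squared bias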

Quantiles

For any real number $a$, the notation $\lceil a \rceil$ denotes the least integer greater than or equal to $a$, and the notation $\lfloor a \rfloor$ denotes the greatest integer less than or equal to $a$.
If $nq$ is not an integer, then the $q$-th quantile is unique and is equal to $x_{(\lceil nq \rceil)}$.
If $nq$ is an integer, then the $q$-th quantile is not unique and is any real number $x$ such that

$$x_{(nq)} < x < x_{(nq+1)}.$$
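A direct translation of this rule into Python, using the sorted sample (order statistics); the function name empirical_quantile is ours, and in the non-unique case the midpoint is one arbitrary valid choice.

import math
import numpy as np

def empirical_quantile(sample, q):
    # q-th quantile of the empirical distribution; assumes 0 < q < 1.
    x = np.sort(sample)
    n = len(x)
    nq = n * q
    if nq != int(nq):
        return x[math.ceil(nq) - 1]       # unique: the ceil(nq)-th order statistic
    k = int(nq)
    return (x[k - 1] + x[k]) / 2          # not unique: any value in (x_(nq), x_(nq+1))

sample = [7, 1, 5, 3]
print(empirical_quantile(sample, 0.3))    # nq = 1.2 -> 2nd order statistic = 3
print(empirical_quantile(sample, 0.5))    # nq = 2   -> any value in (3, 5); returns 4.0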

Empirical median

If $n$ is odd, then the empirical median is the number

$$\tilde{x} = x_{((n+1)/2)}.$$

If $n$ is even, then the empirical median is the number

$$\tilde{x} = \frac{x_{(n/2)} + x_{(n/2+1)}}{2}.$$
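The two cases above in a few lines of Python; np.median applies the same rule.

import numpy as np

def empirical_median(sample):
    x = np.sort(sample)
    n = len(x)
    if n % 2 == 1:
        return x[(n + 1) // 2 - 1]            # odd: the middle order statistic
    return (x[n // 2 - 1] + x[n // 2]) / 2    # even: average of the two middle ones

print(empirical_median([5, 1, 3]))     # 3 (odd n)
print(empirical_median([5, 1, 3, 7]))  # 4.0 (even n)
print(np.median([5, 1, 3, 7]))         # 4.0, same rule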

Asymptotic properties

Since the ratio $\frac{n+1}{n}$ approaches 1 as $n$ goes to infinity, the asymptotic properties of the two definitions that are given above are the same.
By the strong law of large numbers, the estimator $\widehat{F}_n(t)$ converges to $F(t)$ as $n \to \infty$ almost surely, for every value of $t$:

$$\widehat{F}_n(t) \xrightarrow{\text{a.s.}} F(t);$$

thus the estimator $\widehat{F}_n(t)$ is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over $t$:

$$\| \widehat{F}_n - F \|_\infty \equiv \sup_{t \in \mathbb{R}} \left| \widehat{F}_n(t) - F(t) \right| \xrightarrow{\text{a.s.}} 0.$$
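A quick simulation sketch of the uniform convergence, assuming a standard normal population; the sup-norm is evaluated at the jump points, where it is attained.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def sup_distance(n):
    # sup_t |F_n(t) - F(t)| for a standard normal sample of size n.
    x = np.sort(rng.standard_normal(n))
    F = norm.cdf(x)
    Fn_right = np.arange(1, n + 1) / n   # value of F_n at each jump
    Fn_left = np.arange(0, n) / n        # value of F_n just before each jump
    return max(np.abs(Fn_right - F).max(), np.abs(Fn_left - F).max())

for n in [100, 10_000, 1_000_000]:
    print(n, sup_distance(n))   # the distance shrinks, roughly like 1/sqrt(n)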
The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness-of-fit between the empirical distribution and the assumed true cumulative distribution function. Other norms may reasonably be used here instead of the sup-norm. For example, the L2-norm gives rise to the Cramér–von Mises statistic.
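Both statistics are available in SciPy; a minimal goodness-of-fit sketch against a (here, correctly) hypothesized standard normal distribution.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.standard_normal(500)

ks = stats.kstest(x, stats.norm.cdf)            # Kolmogorov–Smirnov: sup-norm distance
cvm = stats.cramervonmises(x, stats.norm.cdf)   # Cramér–von Mises: L2-type distance

print(ks.statistic, ks.pvalue)
print(cvm.statistic, cvm.pvalue)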
The asymptotic distribution can be further characterized in several different ways. First, the central limit theorem states that pointwise, $\widehat{F}_n(t)$ has an asymptotically normal distribution with the standard $\sqrt{n}$ rate of convergence:

$$\sqrt{n} \left( \widehat{F}_n(t) - F(t) \right) \ \xrightarrow{d}\ \mathcal{N}\!\left( 0, F(t)\left(1 - F(t)\right) \right).$$
This result is extended by Donsker's theorem, which asserts that the empirical process $\sqrt{n}(\widehat{F}_n - F)$, viewed as a function indexed by $t \in \mathbb{R}$, converges in distribution in the Skorokhod space $D[-\infty, +\infty]$ to the mean-zero Gaussian process $G_F = B \circ F$, where $B$ is the standard Brownian bridge. The covariance structure of this Gaussian process is

$$E\left[ G_F(t_1) G_F(t_2) \right] = F(t_1 \wedge t_2) - F(t_1) F(t_2).$$
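A simulation sketch of this covariance formula, assuming a Uniform(0, 1) population so that $F(t) = t$ and the limit is the Brownian bridge itself.

import numpy as np

rng = np.random.default_rng(4)
n, reps = 1000, 20_000
t1, t2 = 0.3, 0.7

u = rng.uniform(size=(reps, n))
G1 = np.sqrt(n) * ((u <= t1).mean(axis=1) - t1)  # empirical process at t1
G2 = np.sqrt(n) * ((u <= t2).mean(axis=1) - t2)  # empirical process at t2

print(np.mean(G1 * G2))       # approximately min(t1, t2) - t1*t2 = 0.09
print(min(t1, t2) - t1 * t2)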
The uniform rate of convergence in Donsker's theorem can be quantified by the result known as the Hungarian embedding: there exist versions $G_{F,n}$ of the limiting Gaussian process, defined on the same probability space as the sample, such that

$$\limsup_{n \to \infty} \frac{\sqrt{n}}{\ln^2 n} \left\| \sqrt{n} \left( \widehat{F}_n - F \right) - G_{F,n} \right\|_\infty < \infty \quad \text{almost surely}.$$
Alternatively, the rate of convergence of $\sqrt{n}(\widehat{F}_n - F)$ can also be quantified in terms of the asymptotic behavior of the sup-norm of this expression. A number of results exist in this vein; for example, the Dvoretzky–Kiefer–Wolfowitz inequality provides a bound on the tail probabilities of $\sqrt{n} \| \widehat{F}_n - F \|_\infty$:

$$\Pr\left( \sqrt{n} \| \widehat{F}_n - F \|_\infty > z \right) \le 2 e^{-2 z^2}.$$
In fact, Kolmogorov has shown that if the cumulative distribution function $F$ is continuous, then the expression $\sqrt{n} \| \widehat{F}_n - F \|_\infty$ converges in distribution to $\| B \|_\infty$, which has the Kolmogorov distribution that does not depend on the form of $F$.
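A sketch checking both claims for a Uniform(0, 1) sample: the simulated tail probability sits below the DKW bound, and the distribution of $\sqrt{n} \| \widehat{F}_n - F \|_\infty$ is close to SciPy's kstwobign, which implements the Kolmogorov distribution.

import numpy as np
from scipy.stats import kstwobign

rng = np.random.default_rng(5)
n, reps, z = 200, 10_000, 1.0

u = np.sort(rng.uniform(size=(reps, n)), axis=1)
grid = np.arange(1, n + 1) / n
# Sup-norm distance for each replication, evaluated at the jump points.
d = np.maximum(np.abs(grid - u), np.abs(grid - 1.0 / n - u)).max(axis=1)

stat = np.sqrt(n) * d
print((stat > z).mean())          # simulated tail probability
print(2 * np.exp(-2 * z ** 2))    # DKW bound: about 0.271
print(kstwobign.sf(z))            # Kolmogorov limit: about 0.270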
Another result, which follows from the law of the iterated logarithm, is that

$$\limsup_{n \to \infty} \frac{\sqrt{n} \| \widehat{F}_n - F \|_\infty}{\sqrt{2 \ln \ln n}} \le \frac{1}{2} \quad \text{almost surely},$$

and

$$\liminf_{n \to \infty} \sqrt{2 n \ln \ln n}\; \| \widehat{F}_n - F \|_\infty = \frac{\pi}{2} \quad \text{almost surely}.$$

Confidence intervals

As per the Dvoretzky–Kiefer–Wolfowitz inequality, the interval that contains the true CDF, $F(x)$, with probability $1 - \alpha$ is specified as

$$F_n(x) - \varepsilon \le F(x) \le F_n(x) + \varepsilon \quad \text{where } \varepsilon = \sqrt{\frac{\ln \frac{2}{\alpha}}{2n}}.$$

As per the above bounds, we can plot the empirical CDF, the true CDF, and the confidence intervals for different distributions using any of the statistical implementations. The following code, which follows the statsmodels implementation, plots an empirical distribution together with its confidence band.

"""
Empirical CDF Functions
"""
import numpy as np
from scipy.interpolate import interp1d
def _conf_set:
nobs = len
epsilon = np.sqrt / )
lower = np.clip
upper = np.clip
return lower, upper
class StepFunction:
def __init__:
if side.lower not in :
msg = "side can take the values 'right' or 'left'"
raise ValueError
self.side = side
_x = np.asarray
_y = np.asarray
if _x.shape != _y.shape:
msg = "x and y do not have the same shape"
raise ValueError
if len != 1:
msg = "x and y must be 1-dimensional"
raise ValueError
self.x = np.r_
self.y = np.r_
if not sorted:
asort = np.argsort
self.x = np.take
self.y = np.take
self.n = self.x.shape
def __call__:
tind = np.searchsorted - 1
return self.y
class ECDF:
def __init__:
x = np.array
x.sort
nobs = len
y = np.linspace
super.__init__
def monotone_fn_inverter:
x = np.asarray
if vectorized:
y = fn
else:
y =
for _x in x:
y.append
y = np.array
a = np.argsort
return interp1d
if __name__ "__main__":
# TODO: Make sure everything is correctly aligned and make a plotting
# function
from urllib.request import urlopen
import matplotlib.pyplot as plt
nerve_data = urlopen
nerve_data = np.loadtxt
x = nerve_data / 50.0 # Was in 1/50 seconds
cdf = ECDF
x.sort
F = cdf
plt.step
lower, upper = _conf_set
plt.step
plt.step
plt.xlim
plt.ylim
plt.vlines
plt.show

Statistical implementation

A non-exhaustive list of software implementations of the empirical distribution function includes:
In R, the ecdf function computes an empirical cumulative distribution function, with several methods for plotting, printing and computing with such an "ecdf" object.
In MATLAB, the ecdf function computes and plots an empirical cumulative distribution function.
In Python, scipy.stats.ecdf and statsmodels.distributions.empirical_distribution.ECDF compute the empirical distribution function, while matplotlib.pyplot.ecdf and seaborn.ecdfplot plot it.
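For instance, a minimal usage sketch with SciPy (scipy.stats.ecdf is available from SciPy 1.11 onward):

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sample = rng.exponential(scale=1.0, size=100)

res = stats.ecdf(sample)
print(res.cdf.quantiles[:5])       # sorted sample values (jump points)
print(res.cdf.probabilities[:5])   # ECDF values at those points
print(res.cdf.evaluate(1.0))       # fraction of observations <= 1.0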